In this video, discover how delivering small batches of change quickly reduces risk, improves quality, and limits technical debt.
- You can't get around DevOps without lots of conversations about Continuous Integration and Continuous Delivery. In this session, we're going to cover the top five benefits. - In the old way of delivering software, for most of the time spent in development, the application goes without being run, at least not in whole; you only build and deploy the application and submit it for a test phase at the end, getting bug reports in large batches, late in the project. - In Continuous Delivery, the application is built automatically on every code commit. Unit tests are run and the application is deployed into a production-like environment. Automated acceptance tests are also run, and the change either passes or fails testing minutes after it's checked in. With Continuous Delivery, code is always in a working state. - Now some definitions. Continuous Integration is the practice of automatically building and unit testing the entire application frequently, ideally on every source code check-in. - Continuous Delivery is the additional practice of deploying every change to a production-like environment and performing automated integration and acceptance testing. - Continuous Deployment extends this further: every change goes through thorough enough automated testing that it's deployed automatically to production. Large-scale web properties like Facebook, Etsy, and Wealthfront use Continuous Deployment. - One of the most compelling reasons to use these techniques is a huge decrease in the time it takes to get a product to market. - The 2016 State of DevOps Report found that high-performing IT organizations could deploy on demand, as compared to their peers that took days, weeks, or months. High-performing IT organizations are able to quickly move from concept to cash, allowing for rapid experimentation and market validation of ideas. - We are now seeing organizations deploy a given application tens or even hundreds of times a day.
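The build, unit test, deploy, acceptance test flow described above can be sketched as a minimal pipeline runner. This is illustrative only; the `make` targets are hypothetical placeholders for whatever build and deploy tooling a real project uses.

```python
import subprocess

def run_pipeline(stages):
    """Run each (name, command) stage in order, stopping at the first
    failure so a broken change is flagged minutes after check-in."""
    for name, cmd in stages:
        if subprocess.run(cmd).returncode != 0:
            print(f"pipeline FAILED at stage: {name}")
            return False
        print(f"pipeline stage passed: {name}")
    return True

# Hypothetical stages; a real project would substitute its own build,
# test, and deploy commands (these Make targets are placeholders).
PIPELINE_STAGES = [
    ("build",             ["make", "build"]),
    ("unit tests",        ["make", "test"]),
    ("deploy to staging", ["make", "deploy-staging"]),
    ("acceptance tests",  ["make", "acceptance-test"]),
]
```

In practice a CI server (Jenkins, Travis CI, and similar tools) triggers something like this on every commit, which is what keeps the code in an always-working state.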
You might think that with rapid cycles and a high frequency of change, you'd see a decrease in quality, but in fact the reverse is true. - Yeah, the State of DevOps Report also found that these same high-performing organizations have a three times lower change failure rate than their peers. - This increase in quality happens because instead of doing inspection at the end of the development life cycle, we're integrating testing earlier in the delivery pipeline. And instead of one huge go-live consisting of hundreds of changes, we evaluate and deploy changes one by one, testing every commit and making sure the software is in a running state. - Lean taught us that a high level of work in progress, in other words the number of tasks that are all in flight at once, is really dangerous. One of the highly debated but important principles of Continuous Integration is the idea that developers must work off of master or trunk. - One interesting finding of the State of DevOps Survey is that having branches or forks with very short lifetimes, less than a day before being merged into trunk, and fewer than three active branches in total, all contribute to higher performance. - Yeah. In short, we're shrinking the amount of unintegrated change, which is the software equivalent of work in progress. For me, all this really clicked when I was reading Jez Humble's book on Continuous Delivery, which proposed a batch size of one. One of the phrases to think about from this book is: it's not how much you can deliver, but how little. - The State of DevOps Report also says high performers reported that the lead time required to deploy changes into production was less than one hour, whereas low performers required lead times between one and six months. - I mean, that's a huge competitive advantage. - How fast can you recover from a failure state? The interesting part about working in a Continuous Delivery environment is that there are two vectors making your mean time to recover shorter.
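The two survey metrics quoted above, lead time and change failure rate, are straightforward to compute if you keep a log of deploys. Here's a small sketch; the deploy records are invented purely for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical deploy records: (commit_time, deploy_time, caused_failure)
deploys = [
    (datetime(2016, 5, 1, 9, 0),  datetime(2016, 5, 1, 9, 40),  False),
    (datetime(2016, 5, 1, 11, 0), datetime(2016, 5, 1, 11, 35), True),
    (datetime(2016, 5, 2, 14, 0), datetime(2016, 5, 2, 14, 50), False),
    (datetime(2016, 5, 3, 10, 0), datetime(2016, 5, 3, 10, 30), False),
]

def lead_time(deploys):
    """Mean time from commit to running in production."""
    deltas = [deployed - committed for committed, deployed, _ in deploys]
    return sum(deltas, timedelta()) / len(deltas)

def change_failure_rate(deploys):
    """Fraction of deploys that caused a failure in production."""
    return sum(1 for *_, failed in deploys if failed) / len(deploys)
```

With these made-up records, the mean lead time is well under the one-hour bar the report associates with high performers, and one failing deploy out of four gives a change failure rate of 0.25.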
First, once you're in a failure state and you've come up with a remediation action, it can be treated just like any other change and rolled out quickly without breaking your usual process. - The second and less obvious benefit is using Continuous Delivery to find the cause of failures. For example, at my job we had a slow growth of database connections over a few days' time. It grew and grew until we reached our limit and everything broke. But by overlaying the connection growth graph with the deploys that happened in that same time window, I could quickly figure out exactly which commit introduced the error. In my previous job, that same exercise took weeks because the change happened in a quarterly release where hundreds of changes landed all at once. To me, this was proof that Continuous Delivery actually reduces MTTR and has huge operational implications. - In the next sections, we'll go over Continuous Integration and Continuous Delivery pipelines and the practices that should be present in each.
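The graph-overlay trick described above can also be done programmatically: find the interval where the metric's growth rate jumped, then pick the last deploy before it. This sketch uses invented sample data and commit ids to show the idea.

```python
from datetime import datetime

# Hypothetical samples of open DB connections over time, and the
# deploys (commit id, time) that happened in the same window.
samples = [
    (datetime(2016, 6, 1, 0, 0),  40),
    (datetime(2016, 6, 1, 12, 0), 41),
    (datetime(2016, 6, 2, 0, 0),  80),
    (datetime(2016, 6, 2, 12, 0), 120),
    (datetime(2016, 6, 3, 0, 0),  160),
]
deploys = [
    ("a1b2c3", datetime(2016, 6, 1, 6, 0)),
    ("d4e5f6", datetime(2016, 6, 1, 18, 0)),  # the leaky change
    ("g7h8i9", datetime(2016, 6, 2, 18, 0)),
]

def suspect_deploy(samples, deploys):
    """Return the last deploy before the sample interval where the
    metric grew the most -- the prime suspect for the regression."""
    growth = [(samples[i][0], samples[i][1] - samples[i - 1][1])
              for i in range(1, len(samples))]
    worst_time, _ = max(growth, key=lambda g: g[1])
    candidates = [d for d in deploys if d[1] <= worst_time]
    return candidates[-1][0] if candidates else None
```

With hundreds of changes in one quarterly release this narrows nothing down, but with one small deploy per change it points at a single commit, which is exactly why small batches shrink MTTR.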
In this course, well-known DevOps practitioners Ernest Mueller and James Wickett provide an overview of the DevOps movement, focusing on the core value of CAMS (culture, automation, measurement, and sharing). They cover the various methodologies and tools an organization can adopt to transition into DevOps, looking at both agile and lean project management principles and how old-school principles like ITIL, ITSM, and SDLC fit within DevOps.
The course concludes with a discussion of the three main tenets of DevOps—infrastructure automation, continuous delivery, and reliability engineering—as well as some additional resources and a brief look into what the future holds as organizations transition from the cloud to serverless architectures.
- What is DevOps?
- Understanding DevOps core values and principles
- Choosing DevOps tools
- Creating a positive DevOps culture
- Understanding agile and lean
- Building a continuous delivery pipeline
- Building reliable systems
- Looking into the future of DevOps