Join Ernest Mueller for an in-depth discussion in this video, The Continuous Delivery Pipeline, part of DevOps Foundations.
- In the last video, we talked about the first phase of a continuous delivery pipeline: continuous integration. In this video, we'll cover the rest of the continuous delivery flow.

- We're going to discuss what continuous delivery means, and five practices that we think are critical for getting it right.

- Continuous delivery is the practice of deploying every change to a production-like environment and performing automated integration and acceptance testing along the way.

- The definitive work on continuous delivery is the excellent book of the same name by Jez Humble and Dave Farley, and I highly recommend it.

- In the video on continuous integration, we discussed the artifacts that are created upon the successful completion of each build.

- These artifacts shouldn't be rebuilt for the staging, testing, and production environments.

- Yeah, they should be built once and then used in all the environments. This way you know that your testing steps are valid, since they all use the same artifact.

- Your artifacts also shouldn't be allowed to change along the way. They need to be stored, and have permissions set, in such a way that they're immutable.

- In the continuous delivery pipeline that I built at my job, I set the permissions so that the CI system can only write the artifact to the artifact repository, and the deployment system, which we call the deployer, only has read access to the artifact.

- We want artifacts to be built once and immutable for two reasons.

- First, it creates trust between the teams when they're debugging an issue. You want Dev, Ops, QA, all the teams, to have confidence that the underlying bits didn't change underneath them between the different stages.

- Yeah, and a quick checksum can prove that you're all looking at the exact same artifact version.

- Yeah. And the second reason is auditability.
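The checksum idea mentioned above can be sketched in a few lines of shell. The file contents and names here are illustrative stand-ins for a real build artifact, and `sha256sum` is assumed to be available (as on most Linux systems):

```shell
#!/bin/sh
# Sketch: prove two copies of an artifact are byte-identical via checksum.
# In a real pipeline, ARTIFACT would come from the artifact repository and
# COPY would be the bits sitting in a deployed environment.
ARTIFACT=$(mktemp)
printf 'release bits v1.0\n' > "$ARTIFACT"
COPY=$(mktemp)
cp "$ARTIFACT" "$COPY"

# Compute a SHA-256 digest of each copy (first field of sha256sum output).
SUM1=$(sha256sum "$ARTIFACT" | awk '{print $1}')
SUM2=$(sha256sum "$COPY" | awk '{print $1}')

if [ "$SUM1" = "$SUM2" ]; then
  echo "checksums match: same artifact"
else
  echo "MISMATCH: artifact changed between stages" >&2
  exit 1
fi
```

Publishing the digest alongside the artifact lets every team re-run this comparison independently, which is where the trust comes from.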
One of the great parts about building a continuous delivery pipeline is that you can trace a specific code version in source control to a successful build artifact to a running system.

- Rebuilding or changing an artifact along the way would break your auditability.

- Okay, before going much further, let's talk about how the artifacts flow through the system. Code is checked into the version control system; that commit triggers a build in your CI system. Once the build finishes, the resulting artifacts are published to a central repository.

- Next, we have a deployment workflow to deploy those artifacts to a live environment that's as much of a copy of production as possible. You may call this environment CI, staging, test, or pre-prod. At this point, smoke testing, integration testing, and acceptance testing all happen, and they should be automated as much as possible.

- Once it passes all those tests, the artifact is released, and you can deploy it to your production environment whenever you want.

- And finally, you want your preproduction environment to be as identical as possible to your production environment.

- In the cloud, that's really easy. In other situations it can be a bit more challenging. This environment needs to include all the load balancers, network settings, and security controls, along with data that matches production.

- One of the reasons we move code to this environment is to do the acceptance tests, smoke tests, and integration tests that are difficult to fully simulate on dev desktops or build servers.

- Yeah, this gives you confidence that both your code and your deployment process are going to work in production.

- This brings up another crucial point: your system needs to stop the pipeline if there's breakage at any point.

- Right, yeah. Humans should be able to lock the CD pipeline using an Andon cord, which we talked about earlier.
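A minimal sketch of that "stop the pipeline on breakage" behavior, with `true` standing in as a placeholder for real build and test commands (the stage names and function are hypothetical, not any particular CI tool's syntax):

```shell
#!/bin/sh
# Sketch of a pipeline driver that refuses to progress past a failed stage.
set -e   # abort the whole script if any stage command fails unexpectedly

run_stage() {
  echo "== $1 =="
  shift
  # Run the stage command; on failure, report and stop the pipeline.
  "$@" || { echo "stage failed; locking pipeline" >&2; exit 1; }
}

run_stage "build"             true   # placeholder for the real build
run_stage "smoke tests"       true   # placeholder for smoke tests
run_stage "integration tests" true   # placeholder for integration tests
echo "artifact released: ready for production deploy"
```

Because each stage only runs if everything before it succeeded, a later stage can also re-verify the earlier stage's result (for example, by re-checking the artifact checksum) before proceeding.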
- But even more importantly, the CD pipeline shouldn't allow progression from stage to stage without assurance that the last stage ran successfully.

- Yeah, for our CD pipeline we have two main checks implemented. First, if there's any failure encountered in the deployment system, it locks up and notifies the whole team in chat. Second, each stage of the deployment audits the previous stage, checking not only that no errors occurred, but also that the system is in the expected state.

- We talked a bit about this in our infrastructure as code chapter, but it's good to reiterate that idempotency is key for your deployments. In other words, redeploying should leave your system in the same state.

- Yeah, you can accomplish this by using an immutable packaging mechanism like Docker containers, or through a configuration management tool like Puppet or Chef. But this is another area where trust and confidence factor into your pipeline. I've found these five practices to be really important when building out a continuous delivery pipeline.

- In closing, once you've planned out your CD pipeline, trace a single code change through it and answer these two questions. Are you able to audit that single change and trace it through the whole system? And how fast can you move that single change into production? That's your overall cycle time.

- I encourage you to start recording metrics off your pipeline. Focus on cycle time, the measure of how long it takes a code check-in to pass through each of the steps in the process, all the way to production.

- You know, another thing I like to do is to understand team flow, and you can do that by keeping a pulse on the team through tracking the frequency of deploys as they happen.

- That's right. And one way to improve those metrics is in how you perform QA, which we're going to discuss in the next video.
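The idempotency point can be sketched as a deploy step that checks the current state before acting, so re-running it is a safe no-op. The version-file layout and names here are a hypothetical illustration, not any particular tool's convention:

```shell
#!/bin/sh
# Sketch of an idempotent deploy step: running it twice leaves the
# system in the same state as running it once.
VERSION="1.4.2"                      # desired version (placeholder)
TARGET="$(mktemp -d)/app-current"    # deploy location (placeholder)

deploy() {
  # Only act if the deployed version differs from the desired one.
  if [ "$(cat "$TARGET/VERSION" 2>/dev/null)" = "$VERSION" ]; then
    echo "already at $VERSION, nothing to do"
    return 0
  fi
  mkdir -p "$TARGET"
  echo "$VERSION" > "$TARGET/VERSION"
  echo "deployed $VERSION"
}

deploy   # first run: performs the deploy
deploy   # second run: detects the state and does nothing
```

Tools like Puppet and Chef build this check-then-converge pattern in; with immutable images like Docker containers you get a similar guarantee by always deploying the same tagged image.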
In this course, well-known DevOps practitioners Ernest Mueller and James Wickett provide an overview of the DevOps movement, focusing on the core value of CAMS (culture, automation, measurement, and sharing). They cover the various methodologies and tools an organization can adopt to transition into DevOps, looking at both agile and lean project management principles and how old-school principles like ITIL, ITSM, and SDLC fit within DevOps.
The course concludes with a discussion of the three main tenets of DevOps—infrastructure automation, continuous delivery, and reliability engineering—as well as some additional resources and a brief look into what the future holds as organizations transition from the cloud to serverless architectures.
- What is DevOps?
- Understanding DevOps core values and principles
- Choosing DevOps tools
- Creating a positive DevOps culture
- Understanding agile and lean
- Building a continuous delivery pipeline
- Building reliable systems
- Looking into the future of DevOps