Learn how Docker can greatly simplify continuous deployment by enabling safer, more reliable build, test, and deployment environments.
- [Instructor] As we get started, let's talk a little bit about where we're all coming from and what to expect. Everybody's job is a little bit different, and everybody's business is probably more different than it is the same. We're all striving toward a happier work environment, and we want our tools to help us build it. Docker gives you a tangible way to make sense of continuous integration and continuous deployment, and to work toward that goal. Very few of us are at a point in our careers where we have the time and energy to build out robust automation, starting from a clean slate with the ability to choose all of our tools to match what we're doing.
By the time you get to the point of building out automation for your tooling, you've probably already made a lot of decisions. Some of those decisions are codified in institutional knowledge and built into your tools. Docker helps you divide these evolved, complex flows into smaller, bite-size pieces that you can then compose back into a cohesive whole that works nicely together and that you can understand. It gives you the chance to have the exact same build system on your laptop that's in use on the official builders, and to keep it that way automatically.
Here's a little made-up example of an environment that has continuous integration, but without Docker. This is a mature company that has spent a lot of time and effort building up its infrastructure, and it's pretty happy with it. First, our intrepid programmer writes some code and compiles it on their laptop. Of course, they're going to need a compiler, they're going to need a bunch of other code from other parts of the company, and they're going to need several runtime tools for checking the code, enforcing the style guides, and many other things.
So they install all of those on their laptop. Okay, they're happy with their code, they're ready to go, and they're proud of what they built, so they push it to GitHub. Now Jenkins, the builder, is watching GitHub and says, ah, there's new code, let me build it. Jenkins grabs the new code, feeds it to its version of the compiler, which is pretty close to the one the person has on their laptop, combines it with the most recent versions of the code dependencies from the rest of the company, runs it through a slightly older version of the runtime tools, and produces a runnable, working binary.
Okay, this build result is then pushed over to the test server. The test server runs a robust set of tests and says, yep, this is good to go. So it pushes the build over to the production server, which has its own copy of the runtime tools and its own copy of all of the code dependencies, and the production server starts serving it out to customers, and everybody is happy. At least most of the time. Now, if we use Docker to carve up that previous example, it starts to look a little more manageable. Again, we'll begin with our intrepid programmer writing some code on their laptop.
Then their laptop connects to the Docker Registry and pulls down the latest official image, with the correct compiler, the correct code dependencies, and the correct runtime dependencies all bundled together. They run it, see that it's good, and push their code to GitHub. Once again, Jenkins is watching GitHub and says, ah, there's some new code for me to build. So Jenkins pulls the same image, with the same compiler, the same code dependencies, and the same runtime dependencies, builds an official result, and pushes it back to the Docker Registry.
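The laptop side of that flow might look roughly like the commands below. This is a sketch, not a prescription: the registry address, image name, and make targets are all hypothetical placeholders, since the transcript names only Docker, GitHub, and Jenkins.

```shell
# Hypothetical names throughout: registry.example.com and
# acme/build-env stand in for your own registry and image.

# Pull the one official image that bundles the compiler, the code
# dependencies, and the runtime tools.
docker pull registry.example.com/acme/build-env:latest

# Build and check the code inside that image, mounting the local
# source tree so the results land back on the laptop.
docker run --rm -v "$PWD":/src -w /src \
    registry.example.com/acme/build-env:latest \
    make build test

# Happy with the result? Push the code for Jenkins to pick up.
git push origin main
```

Because the build runs inside the image rather than against whatever happens to be installed on the laptop, "works on my machine" and "works on the builder" become the same claim.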
Then Jenkins signals the test server: hey, test server, you've got something new to check. The test server connects to the Registry, pulls down the thing that Jenkins built, and says, ah yes, this is good to go, it will work. It pushes the build over to the production servers, which start serving it to happy customers. So, are we there yet? Understand that this is an open-ended process. There's always going to be room for improvement, and that's a good thing. If we got to the point where there was no chance to make our system better, would it still be interesting to work on? Your system will grow with you as a team.
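Sketched as shell commands, the Jenkins-and-test-server side of the flow could look something like this. All image and variable names are made up (BUILD_NUMBER is the environment variable Jenkins provides to jobs), and a real setup would live in a Jenkins job definition rather than one flat script:

```shell
# Hypothetical names: registry.example.com, acme/build-env, acme/app.

# On Jenkins: build with the exact same image the developer used.
docker pull registry.example.com/acme/build-env:latest
docker run --rm -v "$PWD":/src -w /src \
    registry.example.com/acme/build-env:latest make build

# Package the official result and push it back to the Registry.
docker build -t "registry.example.com/acme/app:${BUILD_NUMBER}" .
docker push "registry.example.com/acme/app:${BUILD_NUMBER}"

# On the test server: pull exactly what Jenkins built and test it.
docker pull "registry.example.com/acme/app:${BUILD_NUMBER}"
docker run --rm "registry.example.com/acme/app:${BUILD_NUMBER}" make test

# Tests pass? On each production server, run that same image.
docker run -d --restart unless-stopped \
    "registry.example.com/acme/app:${BUILD_NUMBER}"
```

The key property is that every stage pulls the same tagged image from the Registry, so the thing the test server approves is byte-for-byte the thing production runs.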
You're always going to have some little piece of your system that doesn't quite fit the automated lifestyle. Most of the time, Docker can help you work around that, encapsulate the ugly bits, and present them as manageable pieces. You can work toward a repeatable, happy build.
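One common way to encapsulate the ugly bits is to bake an awkward tool into an image once, then drive it like any other command. Everything named here is hypothetical; legacy-checker stands in for whatever crusty in-house tool with fragile dependencies you're stuck with:

```shell
# Build an image for the awkward tool once, from a Dockerfile that
# captures all of its fragile dependencies.
docker build -t acme/legacy-checker:1 ./tools/legacy-checker

# Now everyone runs it the same way, on any machine, and the
# ugliness never leaks out of the container.
docker run --rm -v "$PWD":/src acme/legacy-checker:1 /src
```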