From the course: Docker: Continuous Delivery

Use Docker to build a composable architecture

- [Instructor] I mentioned earlier using Docker to partition your environment and split things up. I'm going to go into a little more detail about what goes into those partitions and which things can go where. Taking our classic environment, we have a developer's laptop on one end and the thing that customers actually see on the other. The pieces that make up this pipeline include all of the programs that need to be there to actually run the code; the code itself; all the external libraries fetched from out on the Internet, which need to be wrapped up in a coherent way and included in the final result; all of your internal libraries needed to interface with the rest of your organization; and, of course, the output of compilation itself. All of that is built on top of a particular version of the operating system. There are a lot of pieces here that get tied together by hand. There's really no chance that this is going to be the same on your laptop as it is in production, but we're professionals, we make it work.

One way to make these components tie together much more nicely is to wrap them up in Docker containers, which is, of course, what this course is all about. But to be a little more nuanced about it, we don't necessarily need to take one big bundle of all of these pieces and build the whole thing every time. Docker images are designed to be built on top of each other, so you can take the pieces that don't change, the pieces you don't want to change, and build them up into their own image with all of that fixed. Then, every time you have some new code, you take the base image that has all the fixed stuff, compile the code, layer it into that image, and you have an image that you can actually run. The base image comes out of the Docker registry, gets the code stirred into it, and then gets put back in the Docker registry to be passed on to the rest of the flow.

Your flow then ends up looking a little more like this: a builder takes the base image and produces a runnable product Docker image. That image gets tested. The identical image gets staged, and maybe reviewed by humans, and the identical image goes to production. It's the Docker registry that ties it all together. As you can see, the Docker registry is the anchor point of this environment, and you're almost certainly going to want your own private, secure registry for storing your images. You can run it locally on a machine you already have; in fact, if you're running other services like Nexus within your environment, you may already have one. Or it's easy to rent one from a variety of sources. In this course, I'm going to rely on Amazon's AWS-hosted registry, ECR, though the Docker-hosted registries work very well, as do offerings from many other companies, such as Google. It is also entirely possible to do without a private registry: you can move Docker images around by saving them locally, copying them over to another machine with scp, and loading them there. It works, it's just not as much fun.

Now, like all good things, Docker can create its own set of problems too. It's really easy to lose track of what went into an image. What were the build tools that built this image? What version of npm were we using last September?
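Before we get to that bookkeeping problem, here is a minimal sketch of the flow just described. The base image, application layout, account ID, and region are all hypothetical stand-ins; the point is simply a build that layers fresh code onto a fixed base image and pushes the result to a private registry such as ECR.

    # Dockerfile -- layer the application onto a prebuilt, rarely-changing base
    # image (myorg/node-base is a hypothetical image holding the OS, runtime,
    # and internal libraries)
    FROM myorg/node-base:2024-06-01
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci                      # install pinned dependencies
    COPY . .
    RUN npm run build               # compile the code into the image
    CMD ["node", "dist/server.js"]

    # Build the runnable product image on top of the base image
    docker build -t webapp:build-42 .

    # Authenticate to the private registry (a hypothetical ECR repository)
    aws ecr get-login-password --region us-east-1 | \
        docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

    # Tag and push, so test, staging, and production all pull the identical image
    docker tag webapp:build-42 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:build-42
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:build-42

    # Or, without a registry: save, copy, and load the image by hand
    docker save webapp:build-42 | gzip > webapp-build-42.tar.gz
    scp webapp-build-42.tar.gz deploy@prod-host:/tmp/
    ssh deploy@prod-host 'gunzip -c /tmp/webapp-build-42.tar.gz | docker load'

Pinning the base image and dependency versions in a Dockerfile like this is also the first line of defense against the "which version of npm were we using" question.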
So it's very helpful to save all of that, the build tools, the dependency versions, in some central location in your organization, so that if you ever have to, you can go back and rebuild an image, or build a new version of an image with an updated library. Storing all of your dependencies separately makes rebuilding old images possible when otherwise it might be a bit daunting.

Here are some best practices that really do help make working with images a little smoother. Start by giving every image a unique name that describes where the source code that built it came from. A convenient way to do this is to use the Git hash as the image tag. That way, if you go to production and see that you're running a particular image, you can go straight over to Git, look for the same hash, and know immediately which code was used to build that image. Also, generally prefer the full tag structure of organization/project:tag over simple names like project or latest that don't give you much of a clue where the code came from; some hosting providers only work with the full format anyway, so you'll probably end up adding it eventually. And most importantly, build all of your Docker images from Dockerfiles. It's a lot easier to go back and figure out what went into an image if you've got a Dockerfile than to go to the person who built it and say, hey, do you remember what you did when you built that image last Tuesday? I'd kind of like to change it. You can also put multiple tags on an image. Let's say you really do want a latest tag that always points at the newest thing: sure, go ahead and tag the image as latest, and also tag it with the Git hash. That way you know exactly what was built, and you can still easily grab the latest thing when you're not concerned about exactly which release you have. And of course, automate early, automate often. That's what this is all about.

Try to avoid things that tie a container to a particular host, like linking to shared libraries that live on the host instead of inside the container. This can really frustrate things when you try to move to a new hosting environment. Also, carefully avoid hand-building images. It's just too easy to forget how something was built, and then you get to the point where you're afraid to upgrade it, because you don't know how it was made, so you don't know whether you can change anything safely without breaking something in production. It's best to avoid that whole line of thinking and carefully build each image from a Dockerfile.

Keeping all of this advice in mind, accept that at some point you're going to have to build something that was just not made with containers in mind. Maybe it makes bi-directional network connections, or depends on some shared library that's really only reasonable to keep on a central host. Accept that exceptions will arise, things that are difficult to put into Docker containers. Don't let this derail your process. Keep working toward automating everything, and just accept that there will be a couple of things that don't get automated right away.
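Pulling a few of those tagging practices together, a small build script might look something like this sketch. The image name is made up; what matters is that the image is built from a Dockerfile, tagged with the Git hash so it traces back to an exact commit, and also given a latest alias for convenience.

    #!/bin/sh
    set -e

    # Hypothetical image name in the full organization/project form
    IMAGE=myorg/webapp
    GIT_HASH=$(git rev-parse --short HEAD)

    # Always build from the Dockerfile checked into the repository
    docker build -t "$IMAGE:$GIT_HASH" .

    # Add a convenience tag alongside the traceable Git-hash tag
    docker tag "$IMAGE:$GIT_HASH" "$IMAGE:latest"

    # Push both tags so a running image can be traced straight back to a commit
    docker push "$IMAGE:$GIT_HASH"
    docker push "$IMAGE:latest"

If production reports that it's running, say, myorg/webapp:3f2c1ab, that hash points you straight at the commit that built it.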
