With the rise of containers, different CM patterns are gaining currency.
- Welcome back. In our last video, we explained the uses of configuration management and orchestration tools. There are some changes going on in the CM landscape, however. While CM itself continues to evolve, there are also interesting developments at the provisioning level, where public and private cloud computing have led to a rise in model-driven automation, in which a declarative model of your base systems can be used to create those systems. Amazon has CloudFormation, Azure has Azure Resource Manager templates, and so on. So a quite reasonable question arises: why would I use one model for my systems, another for my OS configuration, and maybe another for my applications? By any measure, that's suboptimal. James and I were actually involved in writing a unified model tool at one company, and the ability to instantiate and control your systems and applications from the same model is very powerful.

The rise of containers has accelerated this question. In a container-based architecture, the server becomes less and less a part of the equation. Applications are packaged in a container with just enough OS and dependencies to support them, and are then swarmed across bare-bones physical infrastructure. Even very large cloud players like Netflix have started to return to the golden image model for efficiency reasons. If you have sufficient automated artifact management, then, just as we might package JARs up into WAR files and WAR files into a DEB file for distribution, the image is just another level of artifact. So Netflix bakes an entire Amazon Machine Image as its build artifact and does minimal configuration upon deployment, because running the same identical upgrade activities across 1,000 nodes is really just asking for one or more of them to fail, for the deployment process to be slower, and for the whole exercise to contribute to the heat death of the universe.
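To make the model-driven idea concrete, here is a minimal sketch of what such a declarative model looks like in CloudFormation's YAML form. The resource name, AMI ID, and tag values are placeholders, not from the course; the point is that the template *is* the system definition, and the tool converges real infrastructure to match it.

```yaml
# Hypothetical minimal CloudFormation template (illustrative placeholders).
# You declare what should exist; CloudFormation creates or updates
# resources until reality matches the model.
AWSTemplateFormatVersion: '2010-09-09'
Description: Sketch of a model-driven web server definition
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678        # placeholder AMI ID
      InstanceType: t2.micro
      Tags:
        - Key: role
          Value: web
```

Azure Resource Manager templates express the same idea in JSON; either way, the model can be versioned and reviewed like any other code artifact.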
In the container world, this is becoming even more customary, and is called immutable infrastructure for the systems and immutable delivery for apps. If the entire container, OS dependencies, and app code is your artifact, then you have no reason to ever change its state via configuration management. Once it's deployed, it's immutable. When it's time to upgrade, you roll out an entire new system. Of course, this doesn't always work for your data store, although some of the newer NoSQL data stores that keep multiple copies of data for resilience can be run in this model. Also, while there weren't good tools to handle the VM golden images of days of yore, that's changing. Docker repositories look a lot like any other build artifact repository, with all the same versioning and semantics you'd expect. At my current job, we use Maven to do our builds, which generate both DEB files and Docker containers that we push into Artifactory. When we provision a cloud instance, it gets the current versions of both. We had been using Rundeck and Puppet to upgrade the DEBs and containers as needed on those systems, but as more of our system migrates into the containers, we find less need for it. I anticipate one day we'll retire our CM in favor of just containers running on base OS cloud images.

So let's pause and talk about configuration management databases for a while. Back in the ITIL times was born the idea of a central CMDB. A CMDB is supposed to be a data warehouse containing information on all your IT assets and the relationships between them. It sounds like a super idea. In practice, though, CMDB implementations were hateful. They were frequently updated manually, and even automated approaches didn't keep pace with the increasingly rapid pace of change of your actual systems. Except in diehard enterprises, these have fallen out of favor. Chef and Puppet servers have basic node registries and the ability to store additional data.
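The immutable-delivery artifact described above can be sketched as a Dockerfile. The app name, version, and base image here are hypothetical, not from the course; the key property is that the version is baked into the image, so an upgrade means building and rolling out a new image rather than mutating a running one with CM.

```dockerfile
# Hypothetical immutable-delivery artifact: just enough OS plus the app.
# Nothing is reconfigured in place after deployment; to upgrade,
# you build myapp-1.4.3 as a new image and replace the running containers.
FROM eclipse-temurin:17-jre
COPY target/myapp-1.4.2.jar /opt/myapp/myapp.jar   # version baked into the artifact
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

The resulting image is versioned and stored in a registry exactly like the JAR or DEB it contains, which is what makes a Docker repository behave like any other artifact repository.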
So people just did that and called it enough. But the need for something more real-time persisted. The Apache Hadoop project needed to tightly manage and coordinate workers for MapReduce jobs, and they built out a lot of internal orchestration tooling, including a project called ZooKeeper that was used for state management and holding configuration. This garnered a lot of attention, and people started using it as the center of other orchestration solutions. In fact, the model-driven CM tool James and I built at NI used ZooKeeper as its central coordinator. Then more projects emerged around this concept: simple stores that were designed to be high-velocity central state and configuration sources and service discovery mechanisms. As containers grew in popularity, and practically mandated a store of this sort to provision and manage en masse, etcd and HashiCorp's Consul emerged as popular options.

The information that would have been in a CMDB now resides in a combination of the model used to create the system and the state information in that service discovery store. These in turn power higher-level container orchestration tools, like Mesos and Kubernetes. There are also newer stabs at traditional CMDBs, like Tumblr's Collins project. In an environment that's all cloud and Docker based, there's nothing to store that's not available programmatically as part of the service fabric. But if you have hardware, you may also need a CMDB. Resist the temptation, though, to handle both document storage and core CMDB functionality, and service discovery and configuration, with the same tool. Integrate two solutions instead; no one tool does them both well. Kubernetes and Mesos are the container answer to orchestration. Since for containers, CM is basically just a simple Dockerfile, the main work ends up being service discovery and controlling the swarm of images. Since the container basically is the app, this gets you very close to a unified model-driven solution.
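As a rough illustration of the role these stores play (this is a toy in-memory sketch, not the API of ZooKeeper, etcd, or Consul), the core service-discovery pattern is: each instance registers itself with a time-to-live and re-registers as a heartbeat, and clients query for currently live instances instead of consulting a manually maintained CMDB.

```python
import time

class ServiceRegistry:
    """Toy sketch of the service-discovery role played by stores like
    ZooKeeper, etcd, or Consul: instances self-register with a TTL,
    and stale entries simply age out when heartbeats stop."""

    def __init__(self):
        self._entries = {}  # (service, address) -> expiry timestamp

    def register(self, service, address, ttl=30):
        # Called periodically by each instance as a heartbeat.
        self._entries[(service, address)] = time.time() + ttl

    def discover(self, service):
        # Return only the instances whose TTL has not yet expired.
        now = time.time()
        return sorted(addr for (svc, addr), exp in self._entries.items()
                      if svc == service and exp > now)

registry = ServiceRegistry()
registry.register("web", "10.0.0.5:8080", ttl=30)
registry.register("web", "10.0.0.6:8080", ttl=30)
registry.register("db", "10.0.0.9:5432", ttl=0.01)  # stops heartbeating
time.sleep(0.05)
print(registry.discover("web"))  # both web instances are still live
print(registry.discover("db"))   # the db entry has aged out
```

The real tools add the hard parts (distributed consensus, watches, health checks), but the register-with-TTL / query-live-set shape is the same, and it's why this state stays accurate without anyone updating it by hand.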
This is also an attractive model that promises to eventually reduce complexity. All of these tools are in their pretty early stages, though, and there are a lot of rough edges. So that's the basics of current configuration management thought. I've mentioned a bunch of tools in context, but in the next section, we'll talk more about the infrastructure as code tool space.
In this course, well-known DevOps practitioners Ernest Mueller and James Wickett provide an overview of the DevOps movement, focusing on the core value of CAMS (culture, automation, measurement, and sharing). They cover the various methodologies and tools an organization can adopt to transition into DevOps, looking at both agile and lean project management principles and how old-school principles like ITIL, ITSM, and SDLC fit within DevOps.
The course concludes with a discussion of the three main tenets of DevOps—infrastructure automation, continuous delivery, and reliability engineering—as well as some additional resources and a brief look into what the future holds as organizations transition from the cloud to serverless architectures.
- What is DevOps?
- Understanding DevOps core values and principles
- Choosing DevOps tools
- Creating a positive DevOps culture
- Understanding agile and lean
- Building a continuous delivery pipeline
- Building reliable systems
- Looking into the future of DevOps