In this video, learn about configuration management, automated provisioning, deployment, and orchestration.
- The heart of infrastructure automation, and the area best served by tools, is configuration management. There are many approaches to building systems, maintaining and upgrading their configuration, and deploying applications to them. The space can be confusing because many of the tools can be used to perform multiple functions in different ways, and sometimes that's a good idea and sometimes it's not. So let's start with some definitions of common CM terms and then examine techniques. First, provisioning is the process of making a server ready for operation, including hardware, OS, system services, and network connectivity. Deployment is the process of automatically deploying and upgrading applications on a server. And then orchestration is the act of performing coordinated operations across multiple systems. Configuration management itself is an overarching term dealing with change control of system configuration after initial provisioning, but it's also often applied to maintaining and upgrading applications and application dependencies. There are also a couple of important terms describing how tools approach configuration management. Imperative, also known as procedural, is an approach where the commands needed to produce a desired state are defined and then executed. Declarative, also known as functional, is an approach where you define the desired state and the tool converges the existing system on that model. Idempotent is the ability to execute the CM procedure repeatedly and end up in the same state each time. And finally, self-service is the ability for an end user to kick off one of these processes without having to go through other people. I'll be using these terms as we continue to discuss configuration management. These definitions are also on the course handout for easy reference. So let's take a look at the evolution of config management. In the early days, the dev and ops approaches were very separate.
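To make the imperative-versus-declarative distinction concrete, here's a small illustrative sketch in plain Ruby. The task (managing a config line) and the function names are invented for illustration; they're not from the course.

```ruby
# Imperative/procedural: a fixed sequence of commands. Running it
# twice appends the line twice -- it is not idempotent.
def imperative_configure(file_lines)
  file_lines << "ntp_server=pool.ntp.org"
  file_lines
end

# Declarative/idempotent: state the desired end state and converge
# toward it. Repeated runs leave the system in the same state.
def declarative_configure(file_lines)
  desired = "ntp_server=pool.ntp.org"
  file_lines << desired unless file_lines.include?(desired)
  file_lines
end

lines = []
2.times { lines = imperative_configure(lines) }
puts lines.length        # 2 -- the config has drifted from intent

lines = []
2.times { lines = declarative_configure(lines) }
puts lines.length        # 1 -- converged; same state every run
```

The second form is what lets CM tools run on a schedule against a fleet: re-applying the model is always safe.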
CFEngine gave rise to Puppet, which gave rise to Chef. These were primarily provisioning tools used by operations teams to configure systems. Meanwhile, developers used either simple push frameworks like Capistrano or ad hoc code to automate their application deployment. Commercial IT provisioning tools like Ghost were common, and large integrated suites like Tivoli or HP's offerings were the enterprise's answer. An early conceptual shift was driven by a 2009 article called "Golden Image or Foil Ball?" by Luke Kanies, founder of Puppet. In it, he argued that image management, especially of more or less completely prebuilt VM and system images, led to image sprawl and configuration drift. The community largely agreed and shifted toward a stem cell system approach, where initial provisioning is as minimal as possible, and then the CM tool picks up to provision the rest of the system and runs incrementally later to prevent configuration drift and provide later updates using the same mechanism. This is an example of the Chef DSL you use to configure your systems. Puppet and Chef, and to a lesser degree CFEngine and other CM tools, became the de facto standard. These tools use declarative, idempotent DSLs to define desired system configuration, and then the systems automatically converge their state to them. When virtualization gave way to cloud, all these tools enjoyed a huge jump in popularity. When you're getting in new servers only once in a while, and it takes weeks to get them set up anyway, many people just continued to use manual runbooks or ad hoc automation for setup. But with cloud instances coming and going, CM becomes table stakes for a well-managed environment. There was a problem, though: the problem of orchestration. The default run pattern of Puppet and Chef is to wake up every 15 minutes or so from cron, check a master for changes, and then pull and apply them, with each system acting as an independent agent.
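As a stand-in for the example shown on screen, here is a minimal Chef-style recipe illustrating the declarative, idempotent pattern described above. The nginx resources are an illustrative assumption, not the course's actual example; Chef converges the node to match each declared state.

```ruby
# Minimal Chef-style recipe: declare desired state, not steps.
package 'nginx' do
  action :install            # no-op if nginx is already installed
end

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'    # rendered from a cookbook template
  notifies :reload, 'service[nginx]'   # reload only when content changes
end

service 'nginx' do
  action [:enable, :start]   # ensure it runs now and at boot
end
```

Because every resource describes an end state, this recipe can be re-run every 15 minutes without side effects, which is exactly the pull-based pattern discussed below.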
For a lab full of systems without high availability requirements, that's fine. But for a more typical three-tier web application system, where you need application servers to not all go down at the same time and need to orchestrate database changes with app changes, it's not so fine. Requests for orchestration, to be honest, were initially met by the CM vendors with "you don't need orchestration, and if you think you do, you don't understand configuration management." As a result, some people packaged up their applications as OS-level packages like debs or RPMs and used the CM tools to deploy them, but others continued to deploy their applications via alternate means. This led to another wave of tools, like Ansible and SaltStack, that switched to a push mechanism to perform more explicitly orchestrated deployments, joining the earlier dev-friendly push deployments of Capistrano with the idempotency ideas of other CM tools. These tools cross over with pure-play runbook orchestration tools like Rundeck; they can be used to automate common tasks across your fleet of servers. Since all these tools adhere well to the toolchain model, many implementations are a mix and match. For example, one of the projects I'm working on uses Puppet manifests, but without a Puppet master server. Instead, we use Rundeck to execute them in an orchestrated manner on demand. A variety of orchestrated deployment techniques have arisen. There's the canary deployment, where you upgrade one server in a fleet and see how it works before upgrading the rest. There's the blue-green deployment, where you have two identical environments, one of which is production and one of which is staging. New code is put onto the staging environment, and then the two environments are swapped. There are variants on this practice, like cluster immune system deployment. There are also immutable deployments, where you never upgrade software in production at all. You discard old virtual systems and put new ones in place.
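A toy Ruby model of the blue-green swap described above can make the mechanics clearer. The `BlueGreen` class and version strings are hypothetical, invented only to show the pointer flip; in practice the "pointer" is a load balancer or DNS entry.

```ruby
# Two identical environments; traffic points at one ("live"),
# new code goes to the other, then the pointer flips.
class BlueGreen
  attr_reader :live

  def initialize
    @envs = { blue: 'v1.0', green: 'v1.0' }
    @live = :blue
  end

  # The idle environment, where new code is staged.
  def staging
    live == :blue ? :green : :blue
  end

  # Deploy only to the idle environment, then swap atomically.
  def deploy(version)
    target = staging
    @envs[target] = version
    @live = target
  end

  def live_version
    @envs[live]
  end
end

lb = BlueGreen.new
lb.deploy('v2.0')
puts lb.live          # green
puts lb.live_version  # v2.0
```

Note that the old environment stays intact after the swap, so rolling back is just flipping the pointer again; that same property is what makes immutable deployments attractive.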
Netflix is a big proponent of this method. That's the basics of configuration management and orchestration. But as I'll discuss in the next video, the golden image is making a comeback.
In this course, well-known DevOps practitioners Ernest Mueller and James Wickett provide an overview of the DevOps movement, focusing on the core value of CAMS (culture, automation, measurement, and sharing). They cover the various methodologies and tools an organization can adopt to transition into DevOps, looking at both agile and lean project management principles and how old-school principles like ITIL, ITSM, and SDLC fit within DevOps.
The course concludes with a discussion of the three main tenets of DevOps—infrastructure automation, continuous delivery, and reliability engineering—as well as some additional resources and a brief look into what the future holds as organizations transition from the cloud to serverless architectures.
- What is DevOps?
- Understanding DevOps core values and principles
- Choosing DevOps tools
- Creating a positive DevOps culture
- Understanding agile and lean
- Building a continuous delivery pipeline
- Building reliable systems
- Looking into the future of DevOps