What exactly does an infrastructure build pipeline look like? Learn how to go from infrastructure code to artifacts.
- [Instructor] In this video, we're going to talk about the pipeline you'll set up for your systems to take them from code to artifacts to running systems. Don't worry, you don't have to learn Java to do DevOps. You probably already write code. Maybe it's shell scripts, Apache config files, build definitions, or OS configuration files. The task ahead is to realize that these are code, and to apply coding best practices to them. So, for folks who aren't developers, what does a normal code flow look like? Here's an illustration of a continuous delivery flow, where code gets checked in, goes through successive levels of testing, and finally gets released.
First, you'll want to use source control. If you're not experienced with it, it's easy. We'll use Git in this course, and there are plenty of simple online tutorials that will teach Git. But if your shop uses something else, ask a developer to show you the ropes; it won't take long to get up and running. There are two common approaches to configuration management code: imperative and declarative. Imperative, also known as procedural, is an approach where the commands to produce a desired state are defined and executed.
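As a minimal sketch of that first step, here's what putting a config file under Git version control looks like (the file name, commit message, and identity settings are made up for illustration):

```shell
#!/bin/sh
# Minimal flow: put an Apache-style config file under version control.
set -e
REPO=$(mktemp -d)            # throwaway directory standing in for your repo
cd "$REPO"
git init -q .
git config user.email demo@example.com   # placeholder identity for the demo
git config user.name "Demo User"
echo "MaxClients 100" > httpd.conf       # treat the config file as code
git add httpd.conf
git commit -q -m "Track web server config as code"
git log --oneline                        # one commit now records this change
```

From here, every change to the config is a commit you can review, diff, and roll back, exactly like application code.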
Your average Bash script is imperative. You're creating a list of things to do, and the system does them. Then there's declarative, also known as functional. This is where you define the desired state, and the tool conforms the system to the model of your desired state. Puppet manifests are declarative, as are SQL queries and makefiles. You don't describe the control flow of exactly what to do; you specify what you want done and the tool does it.
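To make the contrast concrete, here's an imperative shell sketch (the path and setting are hypothetical). Notice that we have to write our own checks to make it safe to re-run, which is exactly the bookkeeping a declarative tool like Puppet would handle for us:

```shell
#!/bin/sh
# Imperative approach: we spell out each step and each guard ourselves.
CONF_DIR="/tmp/demo-app"               # hypothetical app config directory
mkdir -p "$CONF_DIR"                   # step 1: ensure the directory exists
# step 2: only append the setting if it isn't already there,
# otherwise re-running this script would duplicate the line
if ! grep -q "max_clients" "$CONF_DIR/app.conf" 2>/dev/null; then
  echo "max_clients 100" >> "$CONF_DIR/app.conf"
fi
cat "$CONF_DIR/app.conf"
```

A declarative tool would instead let you state "this file contains `max_clients 100`" and figure out the steps and idempotence checks itself.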
Declarative is more efficient, assuming the declarative framework supports exactly what you want it to do. Like application code, you want to have tests for your infrastructure code and for your infrastructure. It's such a big and important topic unto itself that we'll cover it in depth in the next video. Code usually gets compiled, parsed, or even just bundled up into what we call artifacts. Artifacts are what get versioned, tested, and deployed.
Consider what artifacts you intend to use as the deliverables in your infrastructure code pipeline. For app code, it's usually executable binaries or JARs or similar. For infrastructure, you usually see DEBs, RPMs, or other OS packages. Docker images, AMIs, VM images, and stuff like that. You may have multiple layers of artifacts. Java WAR files that then get built into RPMs and then get built into VM images, for example.
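As a toy illustration of the idea (names and version numbers are invented), even a plain tarball becomes an artifact once it carries a version and a checksum that identify it exactly:

```shell
#!/bin/sh
# Bundle infrastructure code into a versioned, immutable artifact.
set -e
WORK=$(mktemp -d)                            # stand-in for a build workspace
mkdir -p "$WORK/myconfig-1.0.0"              # version baked into the name
echo "max_clients 100" > "$WORK/myconfig-1.0.0/app.conf"
tar -czf "$WORK/myconfig-1.0.0.tar.gz" -C "$WORK" myconfig-1.0.0
# A checksum lets every later stage prove it has the identical bytes.
sha256sum "$WORK/myconfig-1.0.0.tar.gz"
ls "$WORK"
```

Real pipelines do the same thing with richer formats (DEBs, RPMs, Docker images) that add metadata and dependency handling on top.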
But deciding on what your artifacts are and how to manage and version them is very important. In my shop, we build everything from OS packages, to scripts, to Java and Python applications into DEBs, or Debian packages, so that we can leverage that format's built-in dependency management. Then as a second layer, we build VM images and AMIs using Packer, and build Docker images directly with Dockerfiles. All this is controlled through our Atlassian Bamboo build system.
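As a sketch of that second layer, a minimal Dockerfile that bakes a locally built DEB into an image might look like this (the base image tag, package file name, and command are all hypothetical):

```dockerfile
# Hypothetical example: bake a previously built DEB artifact into an image.
FROM debian:bookworm-slim
COPY myapp_1.2.3_amd64.deb /tmp/
RUN apt-get update \
    && apt-get install -y /tmp/myapp_1.2.3_amd64.deb \
    && rm -rf /var/lib/apt/lists/* /tmp/myapp_1.2.3_amd64.deb
CMD ["myapp"]
```

The DEB carries the dependency metadata, and the image layer on top of it gives you an immutable, deployable artifact.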
We then use Artifactory as our artifact repository for both infrastructure and application artifacts so that our provisioning and deployment can leverage that single source of truth. We can't stress enough the importance of creating and managing artifacts. After they're built, they never get changed, and as they're deployed from tier to tier, they're provably identical. Don't just pull code out of Git, use artifacts. But how do you know these artifacts are any good? Testing.
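"Provably identical" can be as simple as recording a checksum at build time and verifying it at deploy time. A minimal sketch, with an invented package name standing in for a real artifact:

```shell
#!/bin/sh
# Build side records a checksum; the deploy side verifies it before install.
set -e
WORK=$(mktemp -d)
echo "artifact bytes" > "$WORK/app-1.2.3.deb"        # stand-in artifact
( cd "$WORK" && sha256sum app-1.2.3.deb > app-1.2.3.deb.sha256 )
# Later, on the deploy tier: refuse to proceed if the bytes changed.
( cd "$WORK" && sha256sum -c app-1.2.3.deb.sha256 )  # prints "app-1.2.3.deb: OK"
```

Repository managers like Artifactory store and check these hashes for you, so every tier can trust it deployed exactly what was built.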
In the next video, James will lead you through the fine points of testing your infrastructure code, then I'll be back to talk about the second half of the deployment pipeline, where we take the artifacts and create running systems from them.