Automate application management, deployments, and scaling through containerization technologies. We discuss the differences between these technologies and when you would consider Kubernetes.
- [Instructor] Let's look at container orchestration. Containers are a way to package code and its dependencies, so you can run it in any number of environments. So how is this different from using VMs? As you can see from this diagram, referenced from Google, VMs require an underlying hypervisor and an OS. After this, we deploy applications and libraries onto the system. In the container scenario, we aren't concerned with the OS; rather, we share those resources. Each container installs its application within its sandbox. Now this makes them incredibly lightweight, as you can have multiple containers in a single VM sharing resources, such as RAM. They are horizontally scalable, since we can deploy them onto VMs, servers, basically any environment with a container runtime. We're also not concerned about the OS they're running on, which means they can be provisioned, replicated, or destroyed within seconds. Let's talk about an example where it makes sense to use containers. Development teams often have people working with different systems, so it's possible to have engineers on Windows and Linux-based systems. We want our engineers to be able to work on their preferred systems, but the deployment for UAT and production needs to be on Linux. Through containerization, we don't have to worry about the binaries in Windows working differently than those in Linux. We could provide the team with containers for development and can deploy those same containers into UAT and production. This allows teams to release faster. Let's look at another example in which we have a microservices architecture. Imagine there's a lucrative sale and everyone is trying to add their items to the cart before stock runs out. We may have a service checking for inventory, and it may be at capacity. With containerization, we can deploy additional instances of that service to alleviate its load.
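The packaging idea described above can be sketched with a minimal Dockerfile. This is illustrative only and not from the course; it assumes a hypothetical Node.js service with a `server.js` entry point, and the base image and port are assumptions:

```dockerfile
# Base image supplies OS-level dependencies; the container
# shares the host kernel rather than running its own OS
FROM node:18-alpine

WORKDIR /app

# Copy the dependency manifest first so the install step
# is cached as its own layer
COPY package*.json ./
RUN npm install --omit=dev

# Copy the application code into the container's sandbox
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

An image built from a file like this runs identically on a developer's Windows machine and a Linux UAT or production server, which is the cross-platform benefit described above.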
Now there are several container runtimes that allow us to run our containerized applications in any environment. The most popular of these is Docker, and it's the one we'll be using for this course. Once we start deploying a large number of containers, we need the ability to manage them at scale. This brings up several areas we need to address, such as monitoring the health of those containers and their resources, providing high availability and load balancing to meet demand for traffic, understanding the number of containers we need and where they will be deployed, and providing an easy method to configure applications within those containers. Popular providers in this space include Docker Swarm, which comes from the same company that creates Docker containers; Kubernetes, which is an open-source project from Google and is part of the Cloud Native Computing Foundation; Mesosphere, which originated from Apache Mesos and allows you to run both containerized and non-containerized workloads; and finally, container orchestration services from cloud vendors, including Azure Container Service and Amazon Elastic Container Service. These two can run any of the clusters we just talked about in their environments. Azure even provides its own service, Azure Service Fabric, to host containers. Now we understand what containerization is, why you might use it, and the need for orchestration for those containers.
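The orchestration concerns listed above (health monitoring, replica counts, load balancing, application configuration) map directly onto fields in a Kubernetes Deployment manifest. This is a minimal sketch, not from the course; the `inventory-service` name, image path, port, and environment variable are all hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-service        # hypothetical service from the sale example
spec:
  replicas: 3                    # how many container instances we need
  selector:
    matchLabels:
      app: inventory-service
  template:
    metadata:
      labels:
        app: inventory-service
    spec:
      containers:
      - name: inventory-service
        image: registry.example.com/inventory-service:1.0   # assumed image
        ports:
        - containerPort: 8080
        env:                     # configuring the application in the container
        - name: DB_HOST
          value: "inventory-db"
        livenessProbe:           # health monitoring: restart if this fails
          httpGet:
            path: /healthz
            port: 8080
```

Kubernetes continuously reconciles the cluster toward this declared state, replacing unhealthy containers and scheduling replicas across nodes; a separate Service object in front of the Deployment would provide the load balancing mentioned above.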
- Developing an infrastructure strategy
- Managing technical debt
- Managing Azure Kubernetes Service (AKS)
- Deploying applications on AKS
- Scaling your Kubernetes clusters
- Implementing infrastructure as code
- Deploying Azure resources via Terraform
- Implementing security and compliance