Join Brandon Rich for an in-depth discussion in this video Understand AWS EC2, part of Amazon Web Services: Deploying and Provisioning.
- [Narrator] When we talk about compute resources in AWS, it all comes down to Elastic Compute Cloud, or EC2. EC2 instances are virtual machines that you launch in the AWS Cloud. They're created on demand or, when combined with more advanced AWS services, in response to conditions you define. EC2 instances use a pay-as-you-go pricing model: you only pay for the time your instances are running, and the cost varies with the type of instance you create. For example, an instance with more RAM costs more to run.
You may choose from many different operating systems, such as Windows, Ubuntu Linux, Red Hat, and AWS's own Amazon Linux. The flexibility of the pay-as-you-go model and the ability to resize instances, combined with services that let you create and destroy instances when you need them, give you truly elastic capacity, hence the name Elastic Compute Cloud. One of the first choices you make when creating a new EC2 instance is what machine image to use as a foundation. Amazon Machine Images, or AMIs, are the molds from which new instances are built.
They consist of basic installations of operating systems like Windows Server 2012 and Red Hat Linux. Some, as in the case of Windows, require you to have a license for that OS. Finally, you can create your own AMIs. So if you want to create an instance, install and configure the software you need, and then use that as the basis for future machine creations, you can. Creating an EC2 instance isn't like building a computer from scratch. Instead of deciding on the components, such as CPU, RAM, and network interface, AWS allows you to choose from a number of predefined hardware configurations.
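To make that concrete, here's a sketch of what a launch request looks like when expressed as parameters. The AMI ID and key pair name are hypothetical placeholders, not real resources:

```python
# Illustrative sketch of the parameters an EC2 launch request needs.
# The AMI ID and key pair name below are hypothetical placeholders.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",   # the AMI (the "mold") to build from
    "InstanceType": "t2.micro",           # a predefined hardware configuration
    "MinCount": 1,                        # launch exactly one instance
    "MaxCount": 1,
    "KeyName": "my-key-pair",             # SSH key pair for connecting later
}

# With the boto3 SDK, this dict would be passed as keyword arguments:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.run_instances(**launch_params)
```

The request names an image and an instance type rather than individual components, which is the trade-off the narration describes.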
These are called instance types. Each type has a different on-demand hourly rate, and they fall into several general categories, or families. The families are as follows: General purpose, storage optimized, GPU instances, compute optimized, and memory optimized. Within a family, instance types have names like t2.micro, and m4.large, which are both members of the general purpose family. The t2.nano is the smallest machine you can provision with one CPU and half a gig of RAM.
The r3.large, belonging to the memory optimized family, gives you two virtual CPUs and 15 gigabytes of RAM. If you have the money, you can even provision the incredible x1.32xlarge, which boasts 128 virtual CPUs and nearly two terabytes of RAM. The T2 series are the workhorses of AWS instance types. They are general purpose machines that come in a few different sizes but all share one characteristic: their CPUs are burstable, meaning you get extra CPU power when you need it.
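The types and families mentioned so far can be laid out as a small lookup table. The vCPU and memory figures are the ones quoted in this video; current AWS specifications may differ:

```python
# Instance types mentioned above, grouped by family.
# Figures are as quoted in the narration; check AWS docs for current specs.
INSTANCE_TYPES = {
    "t2.nano":     {"family": "general purpose",  "vcpus": 1,   "ram_gib": 0.5},
    "t2.micro":    {"family": "general purpose",  "vcpus": 1,   "ram_gib": 1.0},
    "m4.large":    {"family": "general purpose",  "vcpus": 2,   "ram_gib": 8.0},
    "r3.large":    {"family": "memory optimized", "vcpus": 2,   "ram_gib": 15.0},
    "x1.32xlarge": {"family": "memory optimized", "vcpus": 128, "ram_gib": 1952.0},
}

def family_of(instance_type):
    """Return the family a given instance type belongs to."""
    return INSTANCE_TYPES[instance_type]["family"]
```

Notice how the name itself encodes the family and generation (t2, m4, r3, x1) plus a size suffix (nano, micro, large, 32xlarge).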
Your instance's ability to burst depends on a system of what AWS calls CPU credits, which you accrue over time during normal operation. You should also know that you can change an EC2 instance's type at any time from the AWS console. All it takes is a simple restart. You can head to docs.aws.amazon.com for more details, but suffice it to say that the t2.micro is a fine place to start when provisioning a web server, small database, or similar environment. When you create an EC2 instance, where's it actually physically located? Regions are geographical areas that are logically isolated from each other.
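The credit system can be modeled roughly like this. The earn rate of 6 credits per hour and the 24-hour cap are the figures documented for a t2.micro; treat them as illustrative assumptions and consult the AWS docs for exact numbers per instance size:

```python
def credits_after(hours, earn_rate=6.0, spent=0.0, max_balance=144.0):
    """Rough model of T2 CPU-credit accrual.

    Credits accumulate at a fixed hourly rate (assumed 6/hour, the
    documented t2.micro rate) up to a cap of roughly 24 hours' worth,
    and bursting above the baseline spends them.
    """
    balance = hours * earn_rate - spent
    return max(0.0, min(balance, max_balance))
```

So an idle t2.micro reaches its full balance after about a day, and sustained bursting draws that balance back down toward zero.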
You choose the region in which to launch your resources. In the United States there are four regions: Northern California (us-west-1), Oregon (us-west-2), Virginia (us-east-1), and the newest, Ohio (us-east-2). Moving to Europe, we have regions in Ireland and Frankfurt. There are also regions in Singapore, Seoul, Tokyo, Mumbai, and São Paulo. Almost everything you build in AWS is built within the context of a region.
Only a very few resources are cross-region. When selecting your region, you want to think about latency. Where will your customers be? Where will the resources be that your instances need to connect to? This is especially important if your AWS instances will need to talk to hosts in an on-premises data center. A good rule of thumb is to pick the region closest to you. Within a region, AWS still gives you options to geographically separate your resources by dividing each region into availability zones.
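That rule of thumb amounts to picking the minimum of your measured latencies. The numbers below are hypothetical measurements, just to show the idea:

```python
def closest_region(latencies_ms):
    """Pick the region with the lowest measured round-trip latency.

    Input is a mapping of {region: milliseconds}; the measurements
    here are hypothetical examples, not real benchmarks.
    """
    return min(latencies_ms, key=latencies_ms.get)

# e.g. measurements taken from a US east coast office:
measured = {"us-east-1": 18, "us-west-2": 74, "eu-west-1": 95}
```

For the example measurements, `closest_region(measured)` picks us-east-1, the Virginia region.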
Availability zones, or AZs, are the physical data centers that make up an AWS region. They're named by letter, so the first AZ in the us-east-1 region is called us-east-1a. Geographical separation is important when you need to build applications that must not go down even if a disaster strikes. Each AZ is located far enough apart that a physical threat to one AZ, such as a fire or flood, will not affect the others, but close enough together that network latency between them is negligible.
This makes multi-AZ load balancing highly effective for building fault tolerance into your applications. What about disk storage? You can either use what's called instance-backed storage, basically the hard drive on the physical host where your VM lives, or a more sophisticated option called Elastic Block Store, or EBS. EBS is a bit like an automatic hard drive installer. You specify what you'd like in terms of drive size, type, and volume mapping, and AWS handles everything for you. You can map EBS as one or more volumes, and conveniently, EBS lives independently of EC2, so you can reuse these drives with other instances.
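A volume mapping of that kind looks like the structure below when spelled out in a launch request. The device name and sizes are example values, not requirements:

```python
# Illustrative EBS block-device mapping, as it would appear in an EC2
# launch request: one 20 GiB general-purpose SSD as the root volume.
# The device name and size here are example values.
block_device_mappings = [
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
            "VolumeSize": 20,             # size in GiB
            "VolumeType": "gp2",          # general-purpose SSD
            "DeleteOnTermination": False, # keep the volume so it can be
                                          # reattached to another instance
        },
    }
]
```

Setting `DeleteOnTermination` to false is what lets the volume outlive the instance and be reused, as described above.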
Unlike instance-backed storage, whose data is lost when the instance is stopped or terminated, EBS data lives on until you explicitly delete it. As we continue using EC2, we'll learn even more: how to connect to instances with SSH, how to define network security using AWS security groups, and how to use tags to add metadata to resources to help you track them better. So those are the basics of EC2. We'll learn even more about this fundamental service as we delve into the AWS deployment and provisioning functions that build upon it.
This course is also part of a series designed to help you prepare for the AWS Certified SysOps Administrator – Associate certification exam.
- Understanding AWS EC2
- Creating an EC2 instance
- Provisioning with CloudFormation
- Architecting apps for horizontal scaling
- Creating an Elastic Beanstalk environment and app
- Using OpsWorks
- Deploying apps with CodeDeploy