Beginning with an overview of the AWS global infrastructure, explore the differences among regions, availability zones, and edge locations. This video also explores the core network, compute, database, and storage services offered by AWS, as well as the AWS shared responsibility model.
- [Instructor] You've decided to delve into security concepts in Amazon Web Services. Before we get hands-on in the AWS console, it's important to understand the AWS shared responsibility model. You want to be confident you understand what AWS is responsible for, and where that responsibility ends. AWS provides infrastructure services on a global scale. AWS subdivides the world into regions. A region is a completely independent physical location, such as ap-northeast-1, located in Tokyo, Japan, or us-east-2, located in the US state of Ohio. Each region contains at least two availability zones, or AZs. Each AZ is a completely independent data center, connected through low-latency, high-speed network links. Independent of region or availability zone, AWS also has edge locations throughout the world. These edge locations are found in places like Osaka, Japan; Milan, Italy; São Paulo, Brazil; and South Bend, Indiana. These locations power CloudFront, the AWS content delivery network. This combination of regions, availability zones, and edge locations represents the AWS global infrastructure. AWS is completely responsible for the physical security controls of these layers. Riding on top of these layers are the services AWS makes available to its customers. From a broad categorical perspective, these offerings include various compute, database, networking, and storage services. For example, there are a number of compute services provided by AWS. These include Elastic Compute Cloud, or EC2, for virtual servers. EC2 Container Service supports container workloads. Lambda is a serverless offering for purely event-driven programming. Elastic Beanstalk is a low-friction way to get web applications up and running. AWS also has a number of storage offerings. The oldest and best known is Simple Storage Service, or S3, which is used for object storage. I like to think of S3 as the hard drive of the internet. 
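The region and availability zone names mentioned above follow a simple convention: an AZ name is just a region code with a zone letter appended. A minimal sketch of that naming scheme (the AZ names used here are illustrative):

```python
def split_az(az_name: str) -> tuple[str, str]:
    """Split an AZ name like 'us-east-2a' into (region, zone letter)."""
    return az_name[:-1], az_name[-1]

# AZ names are the region code plus a letter, e.g. zones in us-east-2 (Ohio):
for az in ["us-east-2a", "us-east-2b", "us-east-2c"]:
    region, zone = split_az(az)
    print(region, zone)
```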
Elastic Block Store, or EBS, is used to provide block storage to EC2 servers inside of AWS. Elastic File System, or EFS, is a pay-per-use file system storage offering, while Glacier is a cost-effective service intended for archival storage. AWS has many database offerings. These include Relational Database Service, or RDS, which provides managed relational databases like MySQL, SQL Server, and Oracle. DynamoDB is a managed NoSQL database offering. ElastiCache is used for in-memory caching. Redshift is available for data warehousing. And there's also Neptune, available for your graph database needs. Among the networking services, you'll find Virtual Private Cloud, or VPC. VPCs let you create independent, isolated virtual networks. You can think of a VPC as your virtual data center in the cloud. Route 53 provides domain name system services, and CloudFront provides a global content delivery network. AWS is responsible for the security of the cloud. That means the responsibility for securing all of the global infrastructure and core services I just described lies on the shoulders of AWS. This is liberating for you, since physical security controls, and auditing them, can be added to the list of things you are not worried about. That said, what you decide to put in the cloud is another matter entirely. Most enterprises provide services whose application- and platform-level controls are implemented using identity and access management. Services you operate rely on appropriate network, firewall, and operating system configurations. The data within these systems can carry varied data classifications, some of which may require encryption. And of course, all enterprises possess valuable data. The security of how you configure AWS, as well as what you place and operate within it, is your responsibility. Let's explore an example. Suppose one of your services is a typical three-tier application. 
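In practice, much of that customer-side responsibility comes down to authoring policy documents. As a hedged sketch only, here is what an S3 bucket policy requiring encrypted transport might look like, built as a Python dictionary; the bucket name `example-bucket` is a placeholder, not something from this video:

```python
import json

# Sketch of an S3 bucket policy: deny any request to the placeholder
# bucket "example-bucket" that does not arrive over TLS.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Attaching a document like this to a bucket is your job, not AWS's; AWS only guarantees that the S3 service enforcing it is itself secure.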
Looking at the AWS tools available, you decide to use Route 53 for DNS, CloudFront as your content delivery network, and S3 for storing static resources. You also decide to go with a load-balanced web tier, a load-balanced application tier, and RDS for your database needs. The security of the Route 53, CloudFront, S3, load balancing, EC2, and RDS services is the responsibility of AWS. Securing the data and appropriately configuring each tool is on you. More specifically, what's on your plate? Your responsibility lies in the configuration and patching of the operating systems and software packages you put on EC2 servers. You are also responsible for the security controls of each of the compute, database, storage, and networking components you use. For example, if you use the S3 storage offering, you are responsible for its access controls. You are also responsible for configuring identity and access management. This is true for administrative access for your sysops personnel as well as application access for your end users. Looking ahead, you can focus your attention on securing what you put into the cloud, while resting easy knowing that Amazon Web Services is maintaining the security of the components you use.
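To make "configuring identity and access management" a little more concrete, here is a minimal sketch of an identity-based IAM policy granting an application read-only access to the static resources in one bucket. The bucket name `example-app-assets` is hypothetical, chosen only for illustration:

```python
import json

# Hypothetical least-privilege policy for application access: read-only
# access to the placeholder bucket "example-app-assets" and its objects.
app_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadAppAssets",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-assets",
                "arn:aws:s3:::example-app-assets/*",
            ],
        }
    ],
}

print(json.dumps(app_read_policy, indent=2))
```

Unlike the resource-based bucket policy, a policy like this would be attached to the user or role making the request; scoping sysops personnel and end users to only the actions they need is the kind of IAM configuration that falls on your side of the shared responsibility model.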