Learn how to attach multiple EBS volumes to an EC2 instance and understand the relationship between EBS volumes and the resulting device mount.
- [Instructor] In this video, we're going to look at how to create and configure multiple EBS volumes for an Amazon Linux EC2 instance. So we'll head into the EC2 console of AWS and choose Launch Instance, the big blue button here. We're going to go with Amazon Linux, which is Amazon's distribution of Linux. Hit Select. For an instance size, we'll go with the very cheap t2.nano; we don't need a very big machine to show off what EBS can do. Click Next: Configure Instance Details, and keep everything here as is.
Now we're at the Add Storage screen, which is what we're really interested in. You'll see that this instance already has one volume attached: the root volume. You can see its snapshot ID, and the fact that it's not encrypted and that value can't be changed. So what's going on there? Well, recall that when we selected Amazon Linux, we were actually selecting an AMI to form the basis for this instance. That AMI is backed by a snapshot, and since EBS snapshots cannot change their encryption status, we have to go with not encrypted because that's what the AMI has.
However, we can add a new volume. The device name is selected for us by default, and that's fine. We could base this volume on an existing snapshot; in fact, if you click the field, you can see a lot of public snapshots available for you to choose from. However, that's not what we want to do right now. This will be a blank block store. We're going to set its size at six gigabytes so it will be a little easier to tell apart later. The SSD volume type is fine for our purposes. We can, however, go ahead and encrypt this volume.
And one other thing: see the Delete on Termination checkbox here? We want to leave that unchecked. That way, if we terminate the EC2 instance to which we're attaching this volume, the EBS volume will not go with it. The next screen is tags for the EC2 instance, and we'll give it a Name tag; we'll call the instance ebs demo one. I'm going to configure security groups to allow this instance to accept traffic on port 22 from the world. That's the default rule we already have, so I'm going to leave this screen as is under Create a New Security Group.
AWS warns you that it's dangerous to leave port 22 open to the world, and it is, but for the purposes of this demo, I'll allow it. Click Review and Launch, and you can see the summary of what we selected; click Launch. Now we'll be prompted to choose a key pair. I need to create a new SSH key for this instance, so I'll call it demo ssh key, and I'll download that key pair so I can connect to the instance later. Now I'll click Launch Instances. At this point, we need to wait for the instance to start running before we can get onto it.
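For readers who prefer the command line, the same launch can be sketched with the AWS CLI. This is an illustrative sketch, not part of the video: the AMI ID and security group ID are placeholders, and the key pair and Name tag mirror the names chosen in the console walkthrough.

```shell
# Hypothetical CLI equivalent of the console launch.
# ami-xxxxxxxx and sg-xxxxxxxx are placeholders; substitute real IDs.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.nano \
  --key-name demo-ssh-key \
  --security-group-ids sg-xxxxxxxx \
  --block-device-mappings '[
    {"DeviceName": "/dev/sdb",
     "Ebs": {"VolumeSize": 6,
             "VolumeType": "gp2",
             "Encrypted": true,
             "DeleteOnTermination": false}}]' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ebs-demo-1}]'
```

Note how the block device mapping captures the two choices we made on the Add Storage screen: `Encrypted` is true and `DeleteOnTermination` is false, so the six-gigabyte volume survives instance termination.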
Now that the instance has been created and is up and running, we can take a look in the EC2 console at some of the things that have been done to it. Down here in the Description tab, we can scroll all the way to the bottom and see the volumes that we created: /dev/xvda, that's our root volume, and /dev/sdb, that's the additional six-gigabyte volume we added in the creation process. Scroll up a little bit, and you'll see the IPv4 public IP. This is the IP we'll need to SSH into this box, so now I'll head to the terminal to do just that.
Don't forget that you need to have downloaded the private key that you used to create the instance, and, if you're on a Linux-based system, you need to have run chmod 400 on it. That sets the appropriate permission bits so that we can use it to SSH. Run ssh ec2-user@&lt;ip address&gt;, and on a Mac or a Linux-based system you can usually use the -i flag to pass in the SSH key, which stands in for your password. Now that we've logged in to the instance, we can take a look around at the storage situation.
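Putting those connection steps together looks like the sketch below. The key file name matches the key pair created at launch, and the IP address is a documentation placeholder; substitute the IPv4 public IP from your instance's Description tab.

```shell
# demo-ssh-key.pem is the key pair downloaded at launch; the IP is a placeholder.
chmod 400 demo-ssh-key.pem                      # ssh refuses keys with loose permissions
ssh -i demo-ssh-key.pem ec2-user@203.0.113.25   # ec2-user is the Amazon Linux default user
```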
I'm going to run df -h, that's the disk-free command with the human-readable flag, and we'll see what we have here. Well, we were expecting two volumes, xvda and sdb, and neither one of them is exactly here. We have xvda1, but that's not exactly the same. What's going on here? Let's run lsblk to list block devices. Now we can see that xvda1 is actually a partition of the xvda device.
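The two inspection commands used here are standard Linux tools and are runnable on any Linux box, not just this instance:

```shell
df -h    # disk free: mounted filesystems with human-readable sizes
lsblk    # list block devices, their partitions, and mount points
```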
It's mounted on /, so that means that all the storage on this instance right now is mounted at that point to the device xvda. This other block device, xvdb, what's that? It doesn't match the /dev/sdb that we expected, but it has the six-gigabyte size we specified for our additional volume. If you look in the Amazon documentation for EBS, it says that Amazon Linux AMIs will create a symbolic link with the name you specify at launch that points to the renamed device path.
So we need to look at that device path. We'll do an ls on /dev/sdb, and look at that: a symbolic link that maps to xvdb. So this volume and our expected name are one and the same. Now that we've identified the block device, we need to format it, mount it, and update its file system ownership so that it's usable on this instance. Let's find out if we need to format this device. We expect to, because it's a brand-new EBS volume not restored from any snapshot, so it should have no file system on it.
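You can reproduce the link-resolution behavior locally without any real devices. This sketch just fabricates the same shape of symlink in a scratch directory to show how ls and readlink reveal the mapping:

```shell
# Recreate the /dev/sdb -> xvdb relationship in a scratch directory.
tmp=$(mktemp -d)
touch "$tmp/xvdb"        # stand-in for the kernel's device node
ln -s xvdb "$tmp/sdb"    # the kind of symlink Amazon Linux creates at boot
ls -l "$tmp/sdb"         # shows: sdb -> xvdb
readlink "$tmp/sdb"      # prints the link target: xvdb
```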
We can find out for sure by using the file command with sudo privileges. We'll run it on /dev/xvdb. If you get a response back that just says data, that means this is an unformatted drive. So what we need to do at this point is format the drive so it's usable by Linux. We're going to apply the ext4 file system using another sudo command: make file system, mkfs. The -t flag tells it what sort of format to use.
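You can see the same "data" response locally: file reports just "data" when the bytes carry no recognizable filesystem signature. Here blank.img is only a zero-filled scratch file standing in for the raw volume:

```shell
# A raw, unformatted volume has no magic bytes for `file` to recognize,
# so it reports "data". Simulate that with a zero-filled scratch file.
dd if=/dev/zero of=blank.img bs=1024 count=64 2>/dev/null
file -b blank.img
```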
We'll say ext4, and then we just give the path to the device, /dev/xvdb. After a moment you can see the output, and you can see that we have a file system. We can check by re-running the file command. Now, instead of just saying data, it says we have a Linux rev 1.0 ext4 filesystem. Very good. Now let's look at df -h again, the disk-free command, to see what things look like. Hm, not much has changed. We still have the xvda1 partition, which takes up 100% of the root volume, but no sign of xvdb.
What do we need to do here? Well, it's time to create a mount point. As you can see, the root mount is on /, so we need to create another folder path. We'll use sudo and mkdir to make a directory, and we'll call it ebs-encrypted, because that's the kind of volume we have. Right now that's just a folder, and you can see it right here. What we want to do is mount our file system to that folder. The command we'll use is called mount: using sudo, we'll call mount, then give the path to the volume, /dev/xvdb, and the path where we want to mount it, ebs-encrypted.
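On the instance itself, the format-and-mount sequence from this section can be summarized as below. These commands need root and a real attached device, so run them only on the EC2 instance; the mount point path /ebs-encrypted is an assumption about where the folder was created, and file needs its -s flag to read a block device rather than describe the device node.

```shell
sudo file -s /dev/xvdb           # "data" means no filesystem yet
sudo mkfs -t ext4 /dev/xvdb      # lay down an ext4 filesystem
sudo file -s /dev/xvdb           # now reports an ext4 filesystem
sudo mkdir /ebs-encrypted        # create the mount point
sudo mount /dev/xvdb /ebs-encrypted
df -h                            # /dev/xvdb now appears, mounted on /ebs-encrypted
```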
Run that and, ta-da! I can go to ebs-encrypted and it's a file system. But how do I really know that it's a file system? Well, we need to be able to create some sort of file. Let's try it. I'll use the text editor vi to try to create a file and save it. But now I'm getting an error: can't open the file for writing. That's because there's one more step: we have to set the permission bits on this new mount point. So exit out cleanly, and I'll do an ls -lart to get a detailed look at what's going on here.
Here we go. The dot, which represents the current directory, is owned by user root and group root. We need to change that. I'm logged in as ec2-user, so I'm going to run the change owner command, chown. I'm going to make this file system be owned by user ec2-user and group ec2-user, and even though it's not really necessary right now, I'll give it the -R flag to make it recursive. Now let's run that same ls command again. Now we can see that the dot folder, our current folder, is owned by ec2-user.
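The ownership change is ordinary chown behavior, so you can sketch it in a scratch directory. On the instance, the equivalent would be sudo chown -R ec2-user:ec2-user on the mount point; here we change ownership to the current user and group, which needs no root:

```shell
# Demonstrate chown -R in a scratch directory. Chowning to yourself is a
# no-op permission-wise, but it exercises the same recursive mechanics.
tmp=$(mktemp -d)
mkdir "$tmp/mnt"
touch "$tmp/mnt/hello.txt"
chown -R "$(id -un):$(id -gn)" "$tmp/mnt"   # recursive owner and group change
stat -c '%U %G' "$tmp/mnt/hello.txt"        # prints the owning user and group
```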
So I should be able to create that hello.txt file that I wanted. I'll write it out, and there we go. Now we can see that hello.txt is created, and it's owned by ec2-user. Great: now this encrypted EBS volume is mounted and usable on this instance.
Join AWS architect Brandon Rich and learn how to configure object storage solutions and lifecycle management in Simple Storage Service (S3), a web service offered by AWS, and migrate, back up, and replicate relational data in RDS. Find out how to leverage flexible network storage with Elastic File System (EFS), and use the new AWS Glue service to move and transform data. Plus, learn how Snowball can help you transfer truckloads of data in and out of the cloud.
- What is data management?
- AWS S3 basics
- S3 bucket creation
- S3 upload and logging
- S3 event notifications
- S3 data lifecycle configuration
- Working with Amazon Elastic Block Store volumes
- Creating and mounting an EFS
- Creating an AWS RDS instance
- RDS backup and recovery
- Moving data with AWS Database Migration Service
- Moving data with Data Pipeline and Glue