In this video, learn how vSAN in vSphere 6.7 pools the local storage devices of an ESXi host cluster to create a shared datastore.
- [Rick] In this video I'll walk you through some of the basic architecture of Virtual SAN. And we'll start with the most basic building block, the host cluster. A cluster is simply a logical grouping of ESXi hosts. So, let's say you have a group of ESXi hosts and you want to allow virtual machines to automatically fail over to another host if their host fails; that's High Availability, and we have to create a cluster in order to enable High Availability. We also have to create a cluster in order to enable DRS, and with DRS we can have virtual machines automatically get vMotioned from host to host for load-balancing purposes. Those are a couple of features that require a host cluster in order for us to enable them, and another feature that requires it is Virtual SAN. So, step one of setting up vSAN is to create an ESXi host cluster. That's going to be the very first step in our process.

Now that being said, there are some prerequisites. We have to be at the right version of vSphere, we have to have the right version of vCenter, we have to have supported hardware, and we also need to set up some VMkernel ports. So, on each of these ESXi hosts, you can see we've got a couple of things going for us. Let's focus on ESXi01 for a moment. ESXi01 has two vmnics, and a vmnic is a physical Ethernet port on the ESXi host. So, this host has two physical Ethernet adapters. Let's say that they're 10 gigabit per second Ethernet adapters, and each one of these physical adapters is connected to a different physical switch. And you can say the same thing about host ESXi02, and the same thing about ESXi03. All three hosts have this in common: they have two physical 10 Gb vmnics, and each of those vmnics is connected to a different physical switch. And then, on each of these ESXi hosts, we have also created a VMkernel port and tagged that VMkernel port for vSAN traffic.
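The prerequisites just described can be captured in a small sketch. This is a toy Python model, not VMware tooling or the pyVmomi API; the host names, switch names, and check logic are illustrative assumptions that mirror the diagram: each host needs redundant uplinks to different physical switches and a VMkernel port tagged for vSAN traffic.

```python
# Toy model (not VMware tooling) of the vSAN networking prerequisites:
# two vmnics on different physical switches, plus a vSAN-tagged VMkernel port.
from dataclasses import dataclass, field

@dataclass
class VMkernelPort:
    ip: str
    services: set  # traffic types this port is tagged for, e.g. {"vsan"}

@dataclass
class Host:
    name: str
    vmnic_switches: list                    # physical switch each vmnic uplinks to
    vmk_ports: list = field(default_factory=list)

def ready_for_vsan(host):
    """A host qualifies if its vmnics reach two different physical
    switches and at least one VMkernel port is tagged for vSAN."""
    redundant = len(set(host.vmnic_switches)) >= 2
    vsan_tagged = any("vsan" in p.services for p in host.vmk_ports)
    return redundant and vsan_tagged

cluster = [
    Host("ESXi01", ["switch-A", "switch-B"], [VMkernelPort("10.0.0.1", {"vsan"})]),
    Host("ESXi02", ["switch-A", "switch-B"], [VMkernelPort("10.0.0.2", {"vsan"})]),
    Host("ESXi03", ["switch-A", "switch-B"], [VMkernelPort("10.0.0.3", {"vsan"})]),
]

print(all(ready_for_vsan(h) for h in cluster))  # True: every host meets the checks
```

In a real environment these same checks are performed against actual host configuration; the sketch just makes the two requirements from the slide explicit.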
So, if you're not really familiar with VMkernel ports, what this basically means is that we've created this little port, given it an IP address, and said: if there is vSAN-related traffic that needs to be transmitted from host to host, use this VMkernel port. We have to have that network under the surface in order for vSAN to work properly, and we'll see it in action in a couple of slides.

Now, one final thing I want to note about the network I've shown you here: there are a couple of design best practices that I have incorporated. Number one, I've got physical redundancy. If either of these switches fails, there is still another switch up and running that can pass all of the necessary traffic. Number two, I've got nothing else connected to these switches. This is a dedicated physical network, specifically for vSAN traffic.

Okay, so how are my virtual machine objects actually stored, and how do these VMkernel ports come into the picture? Here we see VM1, one of my virtual machines that is stored on vSAN, and as VM1 has reads or writes that need to be executed, they are pushed over the physical network, using this vSAN VMkernel port, to the appropriate destination host. Here we can see the active VMDK for this particular virtual machine, and there's also going to be another copy of the VMDK over here. This is a mirror copy, just in case the primary copy is on a host that fails. So, the vSAN VMkernel port is there to handle all of the traffic that flows over this vSAN network. The virtual machine is running on one host while its virtual disk is on another host, so when it wants to read from and write to that virtual disk, we leverage a VMkernel port to push that traffic over the network.
And hopefully what'll end up happening is that the majority of the read operations will be satisfied by our flash capacity. What we see here is something called a hybrid configuration. We'll have a lesson that breaks down the difference between hybrid and all-flash, but for the moment we're focused strictly on the hybrid configuration. So, what does that mean? Well, on each of these ESXi hosts we have some traditional magnetic storage devices. These are what we call our capacity devices. We've got traditional hard disks, and then we've also got a cache tier here, which is SSD, and the SSD is a lot faster than the traditional hard disks. So, on each of these hosts I've got these big capacity devices, hard disks that are going to store a whole lot of data, and sitting in front of them I've got this cache tier of SSD, which is much faster and more expensive.

So now, let's look at what happens when VM1 wants to read some data from its virtual disk. The VMkernel port is used to push that read over the physical network, and it eventually hits the destination host where the active VMDK resides. And look what's happening: it's hitting this SSD on host ESXi02, and you'll notice it's happening very quickly. The read is hitting the SSD, and the SSD is acting as a read cache. The purpose of the read cache is to store the most frequently read data on SSD; 70% of this SSD is dedicated to the read cache. A copy of all of the most frequently read data is located in that SSD. There's also a copy of that same data, along with a whole lot of other data, on the capacity device, but the hope is that when data is read from the VMDK, most of the time it will get read from that SSD because it's so fast. If the data is not present on the SSD, that's what we call a cache miss, and you can see that read operation happens much more slowly.
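The cache-hit versus cache-miss behavior can be sketched in a few lines. This is a toy simulation of the idea, not vSAN's internal caching algorithm; the latency numbers and eviction policy are made-up assumptions chosen purely to show the gap between a read served from SSD and one served from the hard-disk capacity tier.

```python
# Toy simulation (not vSAN internals) of the hybrid read path:
# a small SSD read cache in front of a large hard-disk capacity tier.
SSD_READ_MS = 0.1    # illustrative SSD latency, not a measured value
HDD_READ_MS = 10.0   # illustrative hard-disk latency, not a measured value

class HybridReadPath:
    def __init__(self, cache_capacity):
        self.cache_capacity = cache_capacity  # the ~70% of SSD devoted to reads
        self.read_cache = {}                  # block -> data, recently read
        self.capacity_tier = {}               # block -> data, full copy on HDD

    def read(self, block):
        if block in self.read_cache:          # cache hit: served from SSD
            return self.read_cache[block], SSD_READ_MS
        data = self.capacity_tier[block]      # cache miss: served from HDD
        if len(self.read_cache) >= self.cache_capacity:
            # simplistic FIFO eviction, just for the sketch
            self.read_cache.pop(next(iter(self.read_cache)))
        self.read_cache[block] = data         # promote hot data into the cache
        return data, HDD_READ_MS

vmdk = HybridReadPath(cache_capacity=2)
vmdk.capacity_tier = {"b1": "data1", "b2": "data2"}

_, first = vmdk.read("b1")    # miss: has to go to the capacity device
_, second = vmdk.read("b1")   # hit: the same block is now served from SSD
print(first, second)          # 10.0 0.1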
The virtual machine needed some data that was not present in the read cache, and therefore the data had to be served up by the capacity device. In the hybrid configuration our capacity device is a hard disk, so this read is going to be much slower than a read from SSD.

How about writes? We've been talking about reads so far. What if my virtual machine needs to write some data to its disk? Well, here's the first thing we have to consider: there are multiple copies of this VMDK. This virtual machine has one copy of the VMDK on ESXi02, but we have to prepare for the possibility that ESXi02 could fail. So, in this case, another copy of that VMDK is being mirrored to ESXi03, and that way, if ESXi02 fails, my virtual machine's data is not lost. When the virtual machine needs to execute a write, the write is sent to both of those ESXi hosts; it's going to be mirrored. If you're familiar with RAID, this is very similar to the way writes are mirrored across a RAID array. One copy of the data is sent to each of these ESXi hosts, so that both always have a current version of that virtual machine's VMDK, just in case one of the hosts fails.

And the other thing you may notice here: watch this write. It's going to hit the SSD first. That's what we call the write buffer. Anytime these virtual machines on vSAN need to write some data, the writes are carried out against the write buffer on SSD; 30% of my SSD is dedicated to being a write buffer. I sort of equate this to checking a book back into the library. If I want to return a book, I can just walk in, drop it on the front desk, and I'm done. The librarian is going to take that book and re-shelve it; they do the hard, time-consuming work. My experience is that I simply drop it on the desk and walk away.
It's very quick for me. And it's the same thing with this write operation. When the virtual machine needs to write some data to its VMDK, that data is written to the write buffer, and that happens very quickly. So, from the perspective of the virtual machine, once the write hits the write buffer, it's done. Then, on the back end, the data is actually written from the write buffer to the capacity device. To our virtual machines, it always feels like they're writing to SSD; the write speeds are always really quick, and after the fact, Virtual SAN handles getting that object written from the SSD to the capacity tier.

Okay, so in review: Virtual SAN can only be enabled on a cluster of ESXi hosts, and each one of those hosts has to have a VMkernel port that is tagged for vSAN traffic. All of our Virtual SAN reads and writes flow over that VMkernel network. Virtual machine objects are striped and mirrored across hosts in case we have a host failure, and read caches and write buffers are used to improve performance. Then, on the back end, we have the actual capacity devices.
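The write path described above, mirrored to two hosts, buffered on SSD, and destaged later, can also be sketched. Again, this is a toy illustration of the concept, not vSAN's actual replication protocol; the class names and the 30% comment are assumptions that just restate the slide.

```python
# Toy sketch (not vSAN internals) of the write path: every write is
# mirrored to two hosts, lands in each host's SSD write buffer, and is
# acknowledged immediately; destaging to the capacity device happens later.
class HostStorage:
    def __init__(self, name):
        self.name = name
        self.write_buffer = {}   # SSD: the ~30% of the device used for writes
        self.capacity = {}       # hard-disk capacity tier

    def accept_write(self, block, data):
        self.write_buffer[block] = data          # fast SSD write; ack to the VM

    def destage(self):
        self.capacity.update(self.write_buffer)  # background move to HDD
        self.write_buffer.clear()                # like the librarian re-shelving

def mirrored_write(block, data, replicas):
    # Like a RAID-1 mirror: the write completes only once every replica's
    # write buffer has accepted a copy.
    for host in replicas:
        host.accept_write(block, data)

esxi02, esxi03 = HostStorage("ESXi02"), HostStorage("ESXi03")
mirrored_write("b1", "payload", [esxi02, esxi03])

# Both mirrors hold the data even before destaging...
print(esxi02.write_buffer["b1"] == esxi03.write_buffer["b1"])  # True

# ...so if ESXi02 failed right now, ESXi03 would still have a current copy.
esxi03.destage()
print(esxi03.capacity["b1"])  # payload
```

Note how the guest's write is "done" as soon as both write buffers have it; the slower move to the capacity tier happens entirely out of the virtual machine's sight, which is exactly the library-desk analogy from the narration.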
Note: This series is also designed to prepare candidates for the VMware Certified Professional - Data Center Virtualization 2019 (VCP-DCV 2019) exam.
This course was created by Rick Crisci. We are pleased to host this training in our library.
- Storage performance
- VMFS and NFS data stores
- Connecting hosts and storage with iSCSI
- iSCSI and ESXi 6.7
- Storage DRS clusters
- vSAN disk groups
- Virtual Volumes
- Storage IO Control (SIOC)