This video provides a description of advancements starting in vSphere 6.0 that improved the scalability of compute resources on the host.
- [Instructor] In regard to compute enhancements in vSphere 6 and vSphere 6.5, there are a tremendous number of them, and they are sweeping in their impact. Most of them actually arrived in 6.0, but there are also some advances in 6.5. There isn't anything for me to demonstrate here; it's just a collection of exciting facts. The first is that we went from 32 hosts per cluster to 64 hosts per cluster, which means that large companies managing hundreds or even thousands of hosts can now create larger DRS clusters.
Fewer clusters to manage overall, and larger pools of resources to work with. So the increase in hosts per cluster changed things for companies that can take advantage of it. We also have larger hosts. A single host can now support up to 4,096 virtual CPUs; in other words, the virtual CPUs used by all the virtual machines on a host can total up to 4,096. That's higher than it's ever been.
As a matter of fact, with the overhead reduction that was part of 6.0 and 6.5, we can potentially have 1,024 virtual machines on the same host sharing the same resources. We've got physical RAM support all the way up to 12 terabytes. We talked about that earlier, but it bears mentioning again: physical RAM. We're not talking about disk, we're talking about RAM.
Physical RAM support up to 12 terabytes of RAM; that's a whole lot of RAM. And again, would you need that for every host? No. Would you need it for any host? Maybe not, but if you do, then the capability is there. We talked about that earlier: VMware does that because they don't like to say no. If somebody can do it in the physical world, then they want to make it possible with a virtual machine as well. And there's physical CPU support for up to 480 CPUs per host.
So we've got a tremendous amount of resource available on each host, but wait, there's more. We also have USB 3.0 support. We kind of had it in 5.5, but support for CAC, Common Access Cards, used by the government and the military, is fully there in vSphere 6. We also have NVM Express (NVMe) support.
These are flash storage cards that can be added to physical hosts and are automatically recognized; the host can recognize those. We also have SMART support for SSDs: Self-Monitoring, Analysis, and Reporting Technology, which tells you when a drive is getting sick. It used to be that when we just had spinning disks, we had SMART support for those, and they could tell you when they were getting sick, but solid-state drives really couldn't do that; we didn't have the tools to get that information.
Now we have commands to retrieve data about SSDs, so we can get an idea of when one is going to fail on us. We don't want that to be all of a sudden; we want fault-tolerant support for that. And speaking of fault tolerance, beginning with vSphere 6.0, we have Fault Tolerance support for virtual machines with up to four virtual CPUs. That takes things into a different realm as well. Whereas before, some companies would have said, well, we'd love to use Fault Tolerance, because it actually allows the virtual server to run in two different physical environments at basically the same time, within milliseconds of each other, so if one fails then the other can take over.
So we'd like to use that for our very important machines, like our SQL and our Exchange and our Java. And then VMware had to say, well, those are multi-threaded applications, and you're going to get a whole lot better performance if you have multiple virtual CPUs, and at this time, before vSphere 6, we do not have support for multiple virtual CPUs. And now we do; we can support up to four virtual CPUs, so that takes things into a whole new realm. As I said, I'm saying 6.x here because most of this actually happened at 6.0.
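The SSD health retrieval mentioned above can be done from the ESXi shell with esxcli; here's a minimal sketch, where the `naa.` device identifier is a placeholder you would replace with one of your own host's device names:

```shell
# List storage devices to find the SSD's device identifier
esxcli storage core device list

# Retrieve SMART data for a specific SSD
# (the naa identifier below is a hypothetical placeholder)
esxcli storage core device smart get -d naa.xxxxxxxxxxxxxxxx
```

The output typically includes parameters such as Reallocated Sector Count and a media wearout indicator, which give you early warning before a drive fails outright.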
But there are some things that happened right at 6.5. As the technology changes, VMware keeps up with it. Raw Device Mapping support has been increased. Now, do you always use Raw Device Mappings? No, you should use a datastore. And with all the new storage technologies coming up, Raw Device Mapping should be a one-off kind of thing anyway, where I want a virtual machine that connects to a disk that's not actually a datastore but sits directly on the SAN. And if I want to go through Ethernet to get to it rather than some type of Fibre Channel connection, then I've got that capability in 6.5.
There are all the latest input/output drivers, support for the latest Intel chipsets, and, back to Fault Tolerance for a second, there's now multi-NIC support. If I'm going to set up Fault Tolerance, then what I want to do is eliminate every single point of failure. So being able to have multiple physical NICs associated with the VMkernel ports used for Fault Tolerance certainly makes sense.
That's one less single point of failure, and I don't want any single points of failure. I want separate NICs, separate switches, the whole path separate from everything else, or at least two separate paths, so that I actually have that fault tolerance in place. These compute enhancements in vSphere 6 and 6.5 allow us to do those kinds of things. So as you can see, there are a tremendous number of compute enhancements that put things in a different realm for some companies.
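As a sketch of how the Fault Tolerance logging network is set up from the ESXi shell (the vmk number and the portgroup name "FT-Logging" are assumptions for illustration, not names from this course):

```shell
# Create a VMkernel interface on a portgroup dedicated to FT logging
# (vmk2 and "FT-Logging" are hypothetical names)
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=FT-Logging

# Tag the interface so it carries Fault Tolerance logging traffic
esxcli network ip interface tag add -i vmk2 -t faultToleranceLogging
```

The multi-NIC redundancy discussed above then comes from teaming multiple physical uplinks onto the switch or portgroup behind that VMkernel port, so no single NIC or switch failure breaks the FT logging path.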
What that means to your company remains to be seen, but it's there if you need it.