This segment describes the components and features of Virtual Machine Queues (VMQ) and the introduction of Virtual Machine Multi-Queue (VMMQ) in Windows Server 2016.
- [Narrator] As is the case with any high performance project, as soon as you get one thing running as fast as it can, you're going to find something else to work on. In the first chapter, we looked at the infrastructure. In the second chapter, we explored how to amp up the sharing service. Well now we're going to turn our attention to virtualization and what Microsoft has given us to open up that bottleneck. We've looked at how RSS and even virtual RSS can speed up network communication by distributing the processing of network traffic among all processors.
And in a single server, RSS looks a little like this. The network traffic all comes into the NIC which has a logic unit that uses RSS technology to sort the traffic into queues and then feed those queues to different processors on our server. And that's a pretty effective way to distribute that load. But let's take a look at that same server if it's a Hyper-V host running two virtual machines that are each using virtual RSS to balance their load.
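On a Windows server, you can inspect and tune that RSS behavior with the NetAdapter cmdlets. A minimal sketch, assuming an adapter named "Ethernet" (substitute your own adapter name):

```powershell
# Check whether RSS is enabled and how its queues map to processors
Get-NetAdapterRss -Name "Ethernet"

# Enable RSS on the adapter
Enable-NetAdapterRss -Name "Ethernet"

# Constrain RSS to a range of logical processors, starting at
# processor 2 and spreading across up to 8 processors
Set-NetAdapterRss -Name "Ethernet" -BaseProcessorNumber 2 -MaxProcessors 8
```

The processor range shown here is illustrative; the right values depend on how many cores the host has and what else is running on them.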
From the network through the NIC to the logical processors, everything looks good. And here's our problem: coming off the logical processors, everything has to get funneled through a single virtual NIC to get to the switch and out to our virtual machines. Well, this course has been all about performance, so let me tell you what kind of bottleneck that really is. In the last chapter, we talked about physical NICs that are capable of 10 to 50 gigabits per second. The virtual NIC inside Hyper-V is capable of no more than about three and a half.
Now, three and a half gigabits per second is nothing to sneeze at, unless you're trying to send multiple streams of 40 gigabits through that tiny pipe. So now both sides of this diagram are working so fast that our bottleneck is right here in the middle. In Windows Server 2012, Microsoft introduced something called Virtual Machine Queue (VMQ). It changed the game by working with the NIC queues differently: the virtual NIC for the host moves out of the way and becomes nothing more than a link for network communication between the host and the virtual machines, opening up a path for the operating system to send information directly to the virtual switch.
One NIC queue is assigned to each virtual machine so traffic can be sorted at the physical NIC. That helps cut down on processing later. In fact, the MAC address of the virtual machine's NIC is assigned to the virtual machine queue. This gives each virtual machine its own path through the host and through the virtual switch, making the switch less of an obstructive layer in the communication. With Windows Server 2016, we received another boost.
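You can see that queue assignment at work with the NetAdapter VMQ cmdlets. A minimal sketch, assuming a Hyper-V host whose virtual switch is bound to an adapter named "Ethernet":

```powershell
# See which physical adapters support VMQ and whether it is enabled
Get-NetAdapterVmq

# Enable VMQ on the physical adapter bound to the virtual switch
Enable-NetAdapterVmq -Name "Ethernet"

# Inspect the allocated queues, including the MAC address each
# queue has been assigned to filter on
Get-NetAdapterVmqQueue
```

The queue listing is where you can confirm the behavior described above: each active virtual machine NIC shows up with its MAC address tied to its own hardware queue.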
We received better queues and, more importantly, more of them. They're specifically designed for virtualization environments, and they can be combined in the same way that RSS uses multiple processor cores. So virtual machine queue has become Virtual Machine Multi-Queue (VMMQ). Network cards are getting more feature-rich, and more and more of them have the capacity for more queues than you have virtual machines on a host server. VMMQ takes better advantage of that hardware by using receive-side scaling along with virtual machine queues to give each virtual machine multiple paths from the physical network to the virtual machine.
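Enabling VMMQ happens on the virtual machine's network adapter rather than on the physical NIC. A minimal sketch, assuming a virtual machine named "VM01" (a hypothetical name; substitute your own):

```powershell
# Enable VMMQ on the VM's network adapter and request multiple
# queue pairs so its traffic can spread across several queues
Set-VMNetworkAdapter -VMName "VM01" -VmmqEnabled $true -VmmqQueuePairs 4

# Verify the settings took effect
Get-VMNetworkAdapter -VMName "VM01" |
    Select-Object Name, VmmqEnabled, VmmqQueuePairs
```

The number of queue pairs is a tuning choice; how many the VM actually gets depends on what the physical NIC supports and how many other VMs are competing for queues.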
Using this technology, we can start to see traffic to and from the virtual machines on separate hosts really start to use the high performance hardware.
- Configuring a network interface controller team
- Switch embedded teaming
- Remote direct memory access (RDMA)-enabled NICs
- Configuring virtual machine queues
- Enabling and configuring SR-IOV
- Understanding software-defined networks (SDN)
- Reviewing SDN network requirements and deployment scenarios