Join Brandon Neill for an in-depth discussion in this video Understand Network I/O Control, part of VMware vSphere: Configure and Manage VDS.
- [Voiceover] Network I/O Control version 3 was introduced in vSphere 6.0. Both version 2 and version 3 are supported in 6.0, so if you are upgrading from an earlier version of vSphere, you can maintain your old I/O configuration until you are ready to upgrade to version 3. The upgrade process to version 3 is disruptive. There are some major changes between version 2 and version 3. For example, user-defined resource pools are removed, and instead resource pools are created underneath the Virtual Machine system pool. The configuration interface for version 3 is slightly different than version 2, and it adds reservations, making Network I/O Control more similar to resource allocations for CPU and memory.
I/O Control is primarily intended for 10 gig adapters. It is possible to use I/O Control with one gig adapters, however at one gig it's easier to simply put different traffic types on different physical uplinks. With 10 gig adapters it would be prohibitively expensive to use separate adapters for each traffic type, so we need a different method to divide up the available bandwidth. There are nine system traffic pools. For each pool we can control shares, limits, and reservations. System traffic reservations are based on a single uplink.
Specifically, they are based on the smallest uplink on the distributed switch, so it is a good idea to have identical uplinks across the entire distributed switch. We can reserve up to 75% of the bandwidth of a single uplink for system pool reservations. The reservations are divided among the system traffic pools, and we don't have to reserve all of the available bandwidth, nor do we have to assign reservations to all system pools. In this example, I've assigned one gig to iSCSI, vMotion, and Virtual Machines, and half a gig to Management. This allocation is propagated to every uplink on every host of the distributed switch.
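The 75% cap described above can be sketched as a quick check. This is a minimal illustration, not a VMware API; the 10 gig uplink speed and the example reservation values are taken from the narration.

```python
def max_system_reservation(smallest_uplink_gbit: float) -> float:
    """NIOC v3 caps total system-pool reservations at 75% of the
    bandwidth of a single (smallest) uplink."""
    return 0.75 * smallest_uplink_gbit

# Example reservations from the narration: 1 Gbit each for iSCSI,
# vMotion, and Virtual Machines, plus half a gig for Management.
example_total = 1.0 + 1.0 + 1.0 + 0.5

# On a 10 gig uplink, up to 7.5 Gbit may be reserved; 3.5 Gbit fits easily.
print(example_total <= max_system_reservation(10.0))  # True
```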
This means that for every uplink that is connected to the distributed switch, one gig of that adapter is going to be reserved for iSCSI. However, if there is no iSCSI traffic on that uplink, the unused reservation allocation is available to other types of traffic, but it can't be reserved by any other system traffic pool. To calculate the size of the Virtual Machine reservation pool, I take the reservation I assigned to the Virtual Machine system pool and multiply it by the number of uplinks in the distributed switch. If I have four adapters, this means I'll have four one gig reservations for a total of four gig available for Virtual Machine traffic reservations.
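The Virtual Machine reservation pool calculation above is just a multiplication; a minimal sketch, with the 1 Gbit per-uplink reservation and four uplinks taken from the example:

```python
def vm_reservation_pool(per_uplink_gbit: float, uplinks: int) -> float:
    """Total bandwidth available for Virtual Machine traffic reservations:
    the per-uplink VM system-pool reservation times the uplink count."""
    return per_uplink_gbit * uplinks

# Four uplinks, each carrying a 1 Gbit VM reservation → 4 Gbit pool.
print(vm_reservation_pool(1.0, 4))  # 4.0
```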
There are two ways I can allocate reservations to Virtual Machines: using network resource pools, or using individual reservations. For network resource pools, I take the total available Virtual Machine traffic reservation and assign it to Virtual Machine reservation pools. I then attach distributed port groups to the resource pools, and any Virtual Machines that I power on in those port groups will share the available reservation. My second option for allocating reservations is to allocate them to Virtual Machines individually.
Individual Virtual Machine reservations are done on a per uplink basis. This means the maximum reservation I can create is the Virtual Machine pool reservation assigned to a single uplink. In my example, the largest possible Virtual Machine reservation is one gig. If I create multiple Virtual Machines, each with a one gig reservation on this host, I can start two Virtual Machines, one per uplink adapter. At that point I've consumed all of the available reservation and I can't power on a third Virtual Machine with a reservation.
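The per-uplink constraint above can be sketched as a capacity check. This is an illustrative helper under the transcript's assumptions (each individual VM reservation must fit entirely within one uplink's VM pool), not a vSphere function:

```python
def max_vms_with_reservation(per_uplink_pool_gbit: float,
                             uplinks: int,
                             vm_reservation_gbit: float) -> int:
    """How many VMs with a given individual reservation can power on:
    each VM's reservation must fit within a single uplink's VM pool."""
    if vm_reservation_gbit > per_uplink_pool_gbit:
        return 0  # reservation larger than any single uplink's pool
    per_uplink = int(per_uplink_pool_gbit // vm_reservation_gbit)
    return per_uplink * uplinks

# Two uplinks, 1 Gbit VM pool per uplink, 1 Gbit per VM → only two VMs.
print(max_vms_with_reservation(1.0, 2, 1.0))  # 2
```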
It is possible to do both individual Virtual Machine reservations and system pool reservations, however both reservations would have to be met for that Virtual Machine to be able to power on. For instance, I would have to have available reservations in the pool and be able to meet the entire reservation on a single uplink for that Virtual Machine. It's generally best to use only one reservation option or the other. Unreserved bandwidth is allocated using shares. Shares are calculated on a per uplink basis.
Shares are calculated based on the system pools that are active on an uplink, and they represent the relative priority of a system traffic type compared to other system traffic types. To do the calculation, we look at the total number of active shares on the uplink. For example, if I have three active pools, iSCSI, vMotion, and Virtual Machine traffic, with iSCSI and vMotion assigned 50 shares and Virtual Machine assigned 100 shares, the total outstanding shares is 200. If all three pools are fully utilizing their bandwidth, the first two will each get 25% of the available bandwidth and the third will get 50%.
If I then add a fourth pool, the total outstanding shares is 250 and the relative percentages become 20, 20, 40, and 20. Keep in mind that shares and reservations only affect traffic when there is congestion, and that the shares calculation occurs after any reservations are met. For example, this uplink currently has the iSCSI, vMotion, and Virtual Machine pools active. Based on our earlier reservations, my reserved bandwidth is one gig for each of the pools, for a total of three gig of reservations.
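The share percentages above follow directly from each pool's shares divided by the total active shares. A minimal sketch; the fourth 50-share pool is unnamed in the narration, so "Fourth" below is a placeholder:

```python
def share_percentages(shares: dict) -> dict:
    """Relative bandwidth priority of each active pool on an uplink:
    its shares divided by the total shares of all active pools."""
    total = sum(shares.values())
    return {pool: 100 * s / total for pool, s in shares.items()}

# Three active pools, 200 total shares → 25%, 25%, 50%.
print(share_percentages({"iSCSI": 50, "vMotion": 50, "VM": 100}))

# Adding a fourth 50-share pool, 250 total → 20%, 20%, 40%, 20%.
print(share_percentages({"iSCSI": 50, "vMotion": 50, "VM": 100, "Fourth": 50}))
```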
This leaves unreserved bandwidth of seven gig, which is divided among the active pools based on their shares. We can now calculate the total bandwidth used by each pool as 2.75, 2.75, and 4.5 gigabits respectively. Now let's take a look at configuring Network I/O Control in the lab.
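The full per-uplink calculation above, reservation first, then a share-weighted slice of what remains, can be sketched as follows. The 10 gig link speed, 1 gig reservations, and 50/50/100 share values are the example figures from the narration:

```python
def pool_bandwidth(reservations: dict, shares: dict, link_gbit: float) -> dict:
    """Each active pool gets its reservation plus a share-weighted
    slice of the uplink's remaining (unreserved) bandwidth."""
    unreserved = link_gbit - sum(reservations.values())
    total_shares = sum(shares.values())
    return {p: reservations[p] + unreserved * shares[p] / total_shares
            for p in shares}

alloc = pool_bandwidth(
    {"iSCSI": 1.0, "vMotion": 1.0, "VM": 1.0},   # 3 Gbit reserved
    {"iSCSI": 50, "vMotion": 50, "VM": 100},     # 200 total shares
    10.0,                                        # 10 gig uplink
)
print(alloc)  # {'iSCSI': 2.75, 'vMotion': 2.75, 'VM': 4.5}
```

Each pool's 1 gig reservation plus its share of the 7 gig of unreserved bandwidth reproduces the 2.75, 2.75, and 4.5 gigabit totals from the narration.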