Join Brandon Neill for an in-depth discussion in this video Connect VNICs, part of VMware vSphere: Configure and Manage Networking.
- [Voiceover] In order to connect a VM to a network, it has to have a VNIC configured. By default, when creating a VM, it will have one VNIC, but we can add up to 10, either during VM creation or later by editing the settings on the virtual machine. I'm going to edit the settings on my powered-off virtual machine here; there are more options available when a virtual machine is powered off. Right-click on the VM and go down to Edit Settings; this will open up the hardware configuration for the virtual machine. And we can see Network adapter 1 already listed. I can click on the triangle to open up the settings for that.
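The one-default-plus-up-to-ten rule above can be sketched in a few lines. This is a toy model for illustration only, not a vSphere API; the class and field names are my own.

```python
# Toy model (not a vSphere API): a VM starts with one default VNIC and
# accepts at most 10 in total, matching the limit mentioned above.
MAX_VNICS = 10

class VirtualMachine:
    def __init__(self):
        # a new VM starts with one VNIC on a default port group
        self.vnics = [{"port_group": "VM Network"}]

    def add_vnic(self, port_group: str):
        if len(self.vnics) >= MAX_VNICS:
            raise ValueError(f"a VM supports at most {MAX_VNICS} VNICs")
        self.vnics.append({"port_group": port_group})

vm = VirtualMachine()
vm.add_vnic("Workstations")
print(len(vm.vnics))  # 2
```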
The first setting is which port group this network adapter is currently connected to. I can select from all of the available port groups if I want to change it to a different network. Next up, I select whether or not I want this virtual machine to actually be connected. This would be the functional equivalent of unplugging it from the physical switch port. You'll notice the adapter type is currently grayed out, this is because I can't change an adapter type once it's already been created. If I want to switch to a different type of adapter, I would have to delete this network adapter, and then add in a new one.
The MAC address is automatically configured for us; however, if we need to assign a specific MAC address to a virtual machine, we can select Manual and then type in our own MAC address. Notice that by default, all virtual machine MAC addresses are going to start with 00:50:56. Be cautious when creating your own manual MAC addresses, and make sure that you don't end up with a MAC address collision on your network, or it will create communication problems for your virtual machines. I'm going to switch mine back to automatic.
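One way to avoid the collision problem above is to sanity-check a manual MAC before assigning it. This is an illustrative sketch, not a VMware tool; it checks the VMware-documented range for manually assigned static MACs (00:50:56:00:00:00 through 00:50:56:3F:FF:FF), which keeps manual addresses out of the space vCenter uses for automatic assignment.

```python
# Illustrative sketch (not a VMware API): validate that a manually
# assigned MAC falls in VMware's reserved static range
# 00:50:56:00:00:00 - 00:50:56:3F:FF:FF.

def is_valid_manual_vmware_mac(mac: str) -> bool:
    octets = mac.lower().split(":")
    if len(octets) != 6:
        return False
    try:
        values = [int(o, 16) for o in octets]
    except ValueError:
        return False
    if any(not 0 <= v <= 0xFF for v in values):
        return False
    # VMware OUI prefix 00:50:56, fourth octet capped at 0x3F for static MACs
    return values[:3] == [0x00, 0x50, 0x56] and values[3] <= 0x3F

print(is_valid_manual_vmware_mac("00:50:56:3f:ff:01"))  # True
print(is_valid_manual_vmware_mac("00:50:56:40:00:01"))  # False (4th octet > 0x3F)
```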
To add a new network adapter to this virtual machine, down here at the bottom, I select Network, and then click on Add, and we'll see a new device gets created. The first thing I want to do is switch this over to a different port group. The only reason to create additional VNICs on a virtual machine is to connect it to different port groups or VLANs. There's no performance advantage to connecting two VNICs to the same port group, as we don't support link aggregation on the VNICs, and the VNICs aren't bound by a wire speed anyway.
So I'm going to move this one over to the Workstations port group. And again, click on the triangle so we can see additional options here. And you'll see that now I can change the adapter type. The adapter types available are going to be determined by the version of ESXi that you're running, the guest operating system that you selected, and the hardware version that this virtual machine is currently running. So for this particular virtual machine you'll see we have four options. We have the E1000 and the E1000E adapters. These are emulated Intel network adapters.
They do actually have a physical counterpart that you could go out and purchase. The E1000 is a one-gig adapter and the E1000E is a 10-gig adapter. One thing to be aware of is that the reported speed has no relation to the actual throughput possible on a VNIC. Because there's no actual wire or clock speed that it is attached to, it is theoretically possible to push speeds faster than the reported NIC speed, as all we're doing essentially is moving packets around in memory, not across an actual physical wire.
The biggest determining factors will be CPU performance and virtual machine configuration. However, configuring a 10-gig NIC will affect the window size and buffer size calculations inside of the virtual machine. So if you do actually have 10-gig physical NICs, it's a good idea to configure 10-gig virtual NICs. The third option available here is SR-IOV passthrough. SR-IOV stands for single root I/O virtualization, and essentially what this option allows us to do is to take a physical network adapter that is installed on our ESXi host and present it directly to the virtual machine.
This will be the lowest-latency option available to us, but it's also going to prevent us from moving the virtual machine to another ESXi host, or from recovering the virtual machine using HA if this ESXi host crashes. So in most cases, that is not the preferred option unless very low latency is a major factor for this virtual machine. The final option here is the VMXNET 3 adapter. The VMXNET 3 adapter is a paravirtualized adapter.
This means it doesn't have a physical counterpart, and the adapter is aware of the fact that it is running inside of a virtualized environment. This means that the adapter can pass offload functions, such as TCP checksum offloading and TCP segmentation offloading, to the actual physical adapters. This reduces the CPU load on the ESXi host, as it's not having to emulate all adapter functions, and it will therefore potentially increase the throughput available to the adapter. I'm going to select the VMXNET 3, I'll let it assign a MAC address automatically, and just click on OK, and my new network adapter will be added to the virtual machine.
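The four adapter types walked through above, and the rule of thumb for choosing between them, can be summarized in a small sketch. The dictionary keys and field names here are my own shorthand for illustration, not a vSphere API; the trade-offs encoded (SR-IOV blocks vMotion/HA, VMXNET 3 as the general-purpose default) are the ones stated in the video.

```python
# Illustrative summary (my own names, not a vSphere API) of the four
# adapter types discussed above and their key trade-offs.
ADAPTER_TYPES = {
    "E1000":    {"kind": "emulated",        "reported_gbps": 1,    "vmotion_ha_ok": True},
    "E1000E":   {"kind": "emulated",        "reported_gbps": 10,   "vmotion_ha_ok": True},
    "SR-IOV":   {"kind": "passthrough",     "reported_gbps": None, "vmotion_ha_ok": False},
    "VMXNET 3": {"kind": "paravirtualized", "reported_gbps": 10,   "vmotion_ha_ok": True},
}

def pick_adapter(need_lowest_latency: bool = False) -> str:
    """Rule of thumb from the video: SR-IOV only when latency dominates
    (it blocks vMotion and HA recovery); otherwise prefer VMXNET 3."""
    return "SR-IOV" if need_lowest_latency else "VMXNET 3"

print(pick_adapter())      # VMXNET 3
print(pick_adapter(True))  # SR-IOV
```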
Next up, I'll take a look at connecting and configuring our physical adapters.