In the ESXi Host video, Daniel discusses the differences between type one and type two hypervisors, as well as the differences between traditional and virtualized server deployments. He also covers physical hardware and its virtual counterparts, followed by common ESXi configuration maximums.
- [Voiceover] The primary component which allows us to virtualize our hardware and create virtual machines is known as an ESXi host. The ESXi software is known as a type one hypervisor. It is meant to run on bare metal hardware. A type two hypervisor, by contrast, runs as an application within a host operating system. When we first launch ESXi, a Linux-based kernel driver launches and then loads up the hardware drivers before loading the VMkernel. The VMkernel contains the functionality for our hypervisor and how it interacts with our hardware as well as our virtual machines.
ESXi has a minimal footprint, less than 150 megabytes, which means the installation media can be very small or even embedded into the server hardware and accessed through the BIOS. While we can run most guest operating systems on ESXi, starting with ESXi 5.0 Update 1, we're able to virtualize modern operating systems such as Windows 8 and Windows Server 2012. With ESXi 6, there's Windows 10 and Windows Server 2016 virtualization support available.
Traditionally, we would take the server hardware, install the server operating system, and then install the application on top of that operating system. We were limited to just one guest operating system and a few apps per server. In many cases, we were only utilizing 30% or less of the server's hardware resources. With a type one hypervisor such as ESXi, we can allow our hypervisor to access the bare metal hardware and provide access to that hardware directly to our virtual machines.
We can have several guest operating systems running on a single physical server, each running one or more apps within. This allows us to make use of previously underutilized hardware resources. The hypervisor, via the VMkernel, performs memory and CPU management as well as directing storage and networking resources to our virtual machines. This allows us to maximize the full capacity of our server hardware. The communication from physical to virtual hardware is pretty straightforward.
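The utilization argument above can be made concrete with a rough back-of-the-envelope calculation. This is an illustrative sketch: the 30% figure comes from the narration, while the 80% target and the resulting ratio are assumptions for the example.

```python
# Rough consolidation estimate: how many lightly loaded physical
# servers could fit onto one virtualized host of equal capacity?
# Only the 30% utilization figure comes from the narration; the
# 80% target is an illustrative assumption.

legacy_utilization = 0.30   # typical pre-virtualization usage
target_utilization = 0.80   # leave headroom on the ESXi host

# Each legacy server effectively consumes 30% of one server's
# resources, so a host run at 80% can absorb 0.80 / 0.30 of them.
consolidation_ratio = target_utilization / legacy_utilization
print(f"~{consolidation_ratio:.1f} legacy servers per host")  # ~2.7
```

Real sizing would account for peak (not average) load and per-resource bottlenecks, but the arithmetic shows why virtualization recovers so much idle capacity.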
There is a virtual counterpart for every physical component as it relates to CPU, memory, storage, and networking. For every physical CPU, there is a virtual CPU along with virtual CPU cores. The amount of physical RAM is represented as virtual RAM. Our storage devices, such as hard disks, are represented as data stores, which creates a layer of abstraction between the different types of storage and the storage that the virtual machine actually sees.
This allows us to have flexible storage capabilities and capacities. The networking component is broken down into a virtual NIC, which connects to a virtual switch with uplinks and port groups. By virtualizing these components, ESXi can maximize the utilization of resources amongst the virtual machines that need them. This means we can run as many virtual machines as we need on a single server, as long as there's capacity and resources available, without causing any negative impact to performance or stability.
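The physical-to-virtual mapping described above can be modeled in a few lines. This is a minimal sketch, not a VMware API: the class names and all the numbers are illustrative assumptions.

```python
# Minimal model of the physical-to-virtual counterparts described
# above. Names and figures are illustrative, not a VMware API.
from dataclasses import dataclass

@dataclass
class Host:
    physical_cores: int
    ram_gb: int

@dataclass
class VM:
    vcpus: int     # virtual CPUs drawn from the host's cores
    vram_gb: int   # virtual RAM drawn from physical RAM

host = Host(physical_cores=16, ram_gb=128)
vms = [VM(vcpus=4, vram_gb=16), VM(vcpus=8, vram_gb=32), VM(vcpus=4, vram_gb=16)]

# Total vCPUs may even exceed physical cores (overcommitment);
# the VMkernel schedules virtual CPUs onto physical ones.
cpu_ratio = sum(vm.vcpus for vm in vms) / host.physical_cores
ram_used = sum(vm.vram_gb for vm in vms)
print(f"vCPU:pCPU ratio = {cpu_ratio:.2f}")
print(f"vRAM allocated: {ram_used} of {host.ram_gb} GB")
```

The point of the model is the layer of abstraction: virtual machines see only their virtual hardware, while the VMkernel decides how that maps onto the physical pool.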
One thing to know about our ESXi host is that it does all the heavy lifting. You can have up to 480 logical CPUs and six terabytes of RAM. You can also have 1,024 virtual machines per host. Our ESXi host can also handle 64-terabyte VMFS LUNs, or 62-terabyte raw device mapping (RDM) LUNs. Our ESXi host can also handle 4,096 virtual switch ports per host, or 4,088 ports per standard switch.
In total, we can have 60,000 ports per distributed switch, with up to 16 distributed switches per host, and that's the core information that you'll need to understand your ESXi host.
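The per-host maximums quoted above can be collected into a simple validation helper. This is an illustrative sketch, not a VMware tool; the limit values are the vSphere 6 figures from the narration, and the function and key names are assumptions.

```python
# vSphere 6 per-host configuration maximums quoted in the narration.
ESXI6_HOST_MAXIMUMS = {
    "logical_cpus": 480,
    "ram_tb": 6,
    "virtual_machines": 1024,
    "vmfs_lun_tb": 64,
    "rdm_tb": 62,
    "virtual_switch_ports": 4096,
    "standard_switch_ports": 4088,
}

def within_limits(planned: dict) -> list[str]:
    """Return the keys of any planned values exceeding the host maximums."""
    return [key for key, value in planned.items()
            if value > ESXI6_HOST_MAXIMUMS[key]]

# Example: a plan that overshoots the per-host VM count.
violations = within_limits({"logical_cpus": 64, "virtual_machines": 1500})
print(violations)  # ['virtual_machines']
```

A check like this is handy during capacity planning, since the configuration maximums change between vSphere releases and are easy to misremember.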