In the world of server virtualization there are two types of hypervisors. Layer 2 hypervisors are installed as an application (or service) on an existing operating system (such as Microsoft Windows). Layer 1 hypervisors are themselves operating systems, installed on the ‘bare metal’ – directly on the hardware.
The hypervisor is the virtualization layer – the platform on which the virtual servers are hosted. Because all operating systems require resources (some more than others), it is axiomatic that Layer 1 Hypervisors – those that are themselves thin operating systems – will be more efficient than Layer 2 Hypervisors, which must first allow the parent operating system to take the resources it requires, and then meter out the available resources to its applications and services as it sees fit.
It used to be easy enough to know which type a virtualization platform was based on how you installed it. So when Microsoft released Hyper-V as a role on Windows Server 2008 (and all subsequent versions), it was an easy mistake to assume that it, like its predecessors, was a Layer 2 Hypervisor. That assumption, however, is wrong.
As with all other Roles on Windows Server, Hyper-V is installed by first installing the operating system and then adding the role. It requires a total of ten clicks and two reboots, and it is done.
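For reference, on more recent versions of Windows Server (2012 and later) the same role can be added from PowerShell rather than by clicking through Server Manager. A quick sketch of the commands:

```powershell
# Add the Hyper-V role plus its management tools; the -Restart switch
# lets the host reboot automatically to finish the installation.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# Afterwards, verify that the role shows as Installed:
Get-WindowsFeature -Name Hyper-V
```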
Two reboots… that is a bit unusual, isn’t it? Usually Roles either do not require a reboot, or occasionally a single reboot. Only when you install multiple roles would you need to reboot multiple times, and even then only occasionally. So why does Hyper-V require two?
The following is going to feel, for a couple of paragraphs, as if I accidentally cut and pasted a completely irrelevant article below. Please read on, I will tie it all together in a few paragraphs!
If you have ever been to downtown Montreal you may have seen Christ Church Cathedral. According to the church’s website the building was completed in 1859 and consecrated in 1867 (I am not sure why the eight-year lag… but then, I am not entirely sure why a building needs to be consecrated). In other words, it recently celebrated its 150th birthday… and despite the efforts of the best architects (Frank Wills, Thomas S. Scott) and masons, older buildings tend to require a certain level of care to maintain. They may have built them well back then, but ask any Egyptologist to confirm that the pyramids are crumbling… slowly.
Now, the following story is my interpretation of a historical discussion that I have no insight into. The facts are there, but the story behind it is simply pure guesswork. In the mid-1980s the church (which it should be mentioned is also the home of the Anglican Diocese of Montreal) evaluated its resources and holdings and determined that financially they were lacking. Their most prominent holding – the plot of land on which the church was built – was worth millions (at the heart of downtown Montreal, in the booming building economy of the 1980s), and they needed a way to leverage that if they were to remain (or return to) financially healthy.
The board called for ideas of how to leverage the property… remember, this was before Matt Groening gave us the idea to commercialize the church. Some of the ideas were certainly money-makers, but unrealistic.
- They could tear down the church and build a commercial property. Unfortunately, this would essentially eliminate the point of the church… couldn’t do that!
- They could build OVER the church… however there were several issues with that, not the least of which was that building over an architectural wonder like the cathedral would mean masking its true beauty. From a more practical standpoint, building onto a building that old would raise all sorts of concerns, some of them involving the scary words ‘building could fall down.’
- The strangest idea is what they actually ended up doing… they dug under the church, essentially putting the building on stilts, and built an underground shopping mall, which today is known as Promenades de la Cathedrale. It is a multi-level mall with over fifty stores and a food court, along with underground parking. It is an architectural feat that must have taken a year to design and longer to plan. The steeple of the cathedral, however, is no higher than it was in 1867, and the project was executed successfully with movements never exceeding 3/16 of an inch.
Hyper-V installs in much the same way. It lifts the base operating system up off the bare metal, injects the thin-layer hypervisor directly onto the hardware, and instead of placing the original back where it was, it condenses it into what I call a para-virtual machine and creates the Parent Partition, a concept unique to Microsoft. The Parent Partition is the ‘first among equals’: it controls the drivers and allows the administrator to use the console rather than remoting into the system. It does not use a .vhd (virtual hard drive) for storage, but rather writes directly to the hard drive. There is no way to differentiate it from a non-virtual machine… except that the system boots to Hyper-V and then loads the Parent Partition.
The hypervisor loads in Ring –1… there are no hooks into it for any external code – it is purely written by Microsoft and read-only. However on top of that the virtual machines (or Child Partitions) are all created equally… or at least three of the four types have equal access to the distribution of resources, with the fourth type (the Parent Partition) being the only partition that can reserve its own resources off the top – by default 20% of the CPU and 2GB of memory, but those numbers are adjustable.
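To make those default numbers concrete, here is a trivial back-of-the-envelope sketch in Python. The function (and the 32 GB host it assumes) is purely illustrative, not any real Hyper-V API:

```python
# Illustration only: compute what is left for the Child Partitions after
# the Parent Partition takes its default reservation off the top
# (20% of the CPU and 2 GB of RAM, per the defaults described above).

def resources_left_for_children(total_cpu_pct=100, total_ram_gb=32,
                                parent_cpu_pct=20, parent_ram_gb=2):
    """Return (CPU %, RAM in GB) available to the Child Partitions."""
    return (total_cpu_pct - parent_cpu_pct, total_ram_gb - parent_ram_gb)

cpu, ram = resources_left_for_children(total_ram_gb=32)
print(cpu, ram)  # prints: 80 30
```

On a hypothetical host with 32 GB of RAM, the Child Partitions would share 80% of the CPU and 30 GB of memory – and, as noted above, those reservation numbers are adjustable.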
One primary difference between the Parent Partition and the Child Partitions is seen in the following graphics. In the first graphic (Image1) we see the Device Manager for the Parent Partition. The expanded information is what you would expect – HP LOGICAL VOLUME denotes the HP RAID Array, the Display Adapter is ATI, there are two HP NC371i Multifunction Gigabit NICs, and the iLO Management Controller driver. The second graphic (Image2) is a similar screenshot from an operating system running in a Child Partition on the same physical box. It is the same ACPI x64-based PC… and it even has the same Dual-Core AMD Opteron™ Processor 8220 SE CPUs… it just has fewer of them (while Hyper-V allows us to assign up to four virtual CPUs to a VM, this one only has two). Where the Parent Partition has HP LOGICAL VOLUMES, ATI ES1000 video, and HP NC371i network adapters, the corresponding drivers for the Child Partition are MSFT Virtual Disk Devices, Microsoft Virtual Machine Bus Video Device, and Microsoft Virtual Machine Bus Network Adapters. While its performance is similar to the physical machine’s, the Child Partition has virtual hardware, unlike the para-virtual machine, which has physical hardware… sort of.
Because the actual drivers for the physical hardware run in the Parent Partition, it also hosts a feature called the ‘Virtual Service Provider (VSP).’ The VSP communicates with a corresponding feature in the Child Partitions called the ‘Virtual Service Client (VSC).’ This is how the virtual machines can perform nearly as well as their physical counterparts, with the only limitation of their virtual hardware being how many of the resources are allocated to (or shared with) the VM.
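The VSP/VSC relationship can be sketched as a toy model. Every class and method name below is invented for illustration and is not a real Microsoft API; in real Hyper-V the channel between the two is the VMBus:

```python
# Toy model of Hyper-V's enlightened I/O path (illustration only --
# these names are invented, not a real Microsoft API).

class PhysicalDisk:
    """Stands in for the real storage driver, which lives in the Parent Partition."""
    def __init__(self, blocks):
        self.blocks = blocks

    def read(self, block_no):
        return self.blocks[block_no]

class VirtualServiceProvider:
    """VSP: runs in the Parent Partition and owns the physical driver."""
    def __init__(self, disk):
        self.disk = disk

    def handle_request(self, request):
        # In Hyper-V the request arrives over the VMBus; here it is a dict.
        if request["op"] == "read":
            return self.disk.read(request["block"])
        raise ValueError("unsupported operation")

class VirtualServiceClient:
    """VSC: runs in a Child Partition; it has no physical driver of its
    own and forwards every I/O request to the VSP."""
    def __init__(self, vsp):
        self.vsp = vsp  # stands in for the VMBus channel

    def read(self, block_no):
        return self.vsp.handle_request({"op": "read", "block": block_no})

# The child's "MSFT Virtual Disk Device" ends up reading real data:
disk = PhysicalDisk(blocks=["boot", "data1", "data2"])
vsp = VirtualServiceProvider(disk)
vsc = VirtualServiceClient(vsp)
print(vsc.read(2))  # prints: data2
```

The design point the sketch captures: only the Parent Partition touches the physical driver, so the Child Partitions stay thin and portable while still reaching the real hardware.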
Because of how the hypervisors differ, ESX (and ESXi) does not have a Parent Partition – its ‘operating system’ is its hypervisor. With Microsoft Windows the hypervisor kernel is still Windows, so it works differently. However, benchmark performance tests of both show that there is little to no difference in performance between ESX and Hyper-V**, whether testing against the full installation of Windows Server, Server Core, or Hyper-V Server.
Incidentally, I mentioned earlier that there are three types of Child Partitions… while this is true, the only differentiator is the operating system installed in the Child Partition… so the three types are:
- Child Partition with Hyper-V supported OS
- Child Partition with a non-supported (Legacy) version of Windows (or non-supported x86 OS)
- Child Partition with a supported Xen-Enabled Linux Kernel (SLES, RHEL, CentOS)
Where VMware claims to support many more versions of many more operating systems than Hyper-V does, Microsoft is more realistic. For example, Microsoft wrote Windows NT, but stopped supporting it years ago. It, like any other x86 operating system, will install in a Hyper-V virtual machine, but it will not have Integration Components. You will not be able to fully leverage the gigabit Ethernet adapter or high-resolution video… but if you are still running NT, chances are you didn’t have that anyway. Microsoft also recognizes that it would be impossible to support every Linux build, especially the ones that are primarily supported by the community. On the other hand, the three kernels that are supported account for well over 90% of the Linux running in professional datacenters. Chances are more kernels will be supported in the future… but the majority are covered today.
If your operating system of choice is Linux, then vSphere may be your best bet. However, if you run a Windows-centric datacenter, but happen to have a number of Linux machines that you need to run, then Hyper-V with System Center is definitely for you… especially since you now understand why Hyper-V is really a Layer 1 Hypervisor, despite what some may claim!
**Although I have performed these tests, the End User License Agreements of vSphere 4.0, 4.1, and 5.0 all prohibit the publication of these benchmarks, and I would be stripped of my VMware certifications and subject myself to legal action if I did. The solution… build them for yourself!