Technology has a way of changing, and change is necessary. We have witnessed many advances in the world of computing, with improvements and innovations released in rapid succession, be it room-sized hard drives squeezed down to thumbnail-sized memory cards, or mainframes giving way to distributed servers and then to virtualized workloads. With virtualization at its fore, cloud computing has now taken the IT world by storm. Microsoft became a major stakeholder in it with its earlier releases of Hyper-V Server 2008 R2 and Azure. Later on, it grabbed the attention of medium and enterprise businesses with Windows Server 2012 Hyper-V. Now it has put its best foot forward with Windows Server 2012 R2.
In the forthcoming pages, we will look into the Hyper-V architecture, which will help you understand what runs under the hood and realize what to fix if the setup does not deliver as expected. We will also look at the technical prerequisites, scalable options, and features introduced with Windows Server 2012 R2 Hyper-V.
Some features are new to this hypervisor platform, while others are improvements to earlier offerings with Windows Server 2012 Hyper-V, with more support for Linux VMs now.
There is also a basic overview of the licensing aspects and the Automatic Virtual Machine Activation (AVMA) feature released with Windows Server 2012 R2. It's imperative to understand the licensing requirements when designing a solution and ensure that you pay for what you use.
A discussion on Hyper-V always invites a comparison with the market leaders—VMware's ESXi servers. After almost a decade of catching up, Microsoft has delivered a product that matches up to its worthy competitor. We will close this chapter with a comparison chart of VMware's latest offering, ESXi 5.5, and Citrix XenServer 6.2 in order to show the features' differences and similarities.
In this chapter, we will broadly discuss the following topics:
An insight into virtualization
Cloud computing
The Hyper-V architecture and technical requirements
Features of Windows Hyper-V 2012 R2
Before we proceed further with the technical know-how about Windows Hyper-V 2012 R2 and the concepts of virtualization, it's necessary to know where it all started and how it grew into what we see today.
The origin of virtualization dates back to the 1960s, when IBM was building its mainframes as single-user systems to run batch jobs. Thereafter, they moved their focus to designing time-sharing solutions in mainframes, and invested a lot of time and effort in developing these robust machines. Finally, they released CP-67, the first commercial mainframe system to support virtualization. The system employed a Control Program (CP) that was used to spawn virtual machines, utilizing resources based on the principle of time-sharing. Time-sharing is the shared use of system resources among a large group of users. The goal was to increase the efficiency of both the users and the expensive computer resources. This concept was a major breakthrough in the technology arena, and reduced the cost of providing computing capabilities.
The 1980s saw the debut of microprocessors and the beginning of the era of personal computers. The demerits of mainframes, primarily their maintenance cost and inflexibility, saw personal computers and small servers move into the main scene. The low cost of implementation, performance, and scalability of networked computers gave rise to the client-server model of computation and pushed virtualization to the backseat. During the 1990s, the cost of computing soared again, and the need to rein in these rising costs made the IT industry come full circle and revisit virtualization. Several disadvantages of client-server technology showed up with time, primarily low infrastructure utilization, increasing IT management and physical infrastructure costs, and insufficient failover and disaster management.
The 1990s saw the rise of two major players in virtualization history, namely Citrix and VMware. Citrix started off with desktop virtualization, pioneering the concept of remote desktops with a product developed along with Microsoft that was then known as WinFrame. Since its release, WinFrame has evolved into MetaFrame, then Presentation Server, and nowadays it is called XenApp. VMware introduced server virtualization for x86 systems and transformed them into a shared hardware infrastructure, which allowed isolation and operating system choice for application workloads, as well as defined rules for their mobility.
The reasons for the return of virtualization to industry-standard computing were the same as those perceived decades ago. The resource capacity of a single server is large nowadays, and it is rarely used effectively by the installed workloads. Virtualization has turned out to be the best way to improve resource utilization and simplify data center management at the same time. This is how server virtualization evolved.
Virtualization has a broader scope nowadays and can be applied to different resource levels. The following are a few ideal forms of it:
Server virtualization
Storage virtualization
Network virtualization
Desktop virtualization
Application virtualization
Let's look at their purposes and meanings, though in this book we will focus primarily on server virtualization, with the focus shifting towards desktop virtualization at the end.
Traditionally, a role or application would be installed on a Windows-based server (or any other OS platform), which may have been a blade or a rack server. As and when requirements grew, the number of physical servers increased, which in turn raised the requirements for real estate, maintenance, electricity, and data center cooling. However, the workloads were mostly underutilized, thereby causing a higher OPEX (short for operational expenditure).
Server virtualization software, better known as a hypervisor, allows the abstraction of physical hardware on a server/computer and creates a pool of resources consisting of compute, storage, memory, and network. The same resources are offered to end consumers as consolidated virtual machines. A virtual machine is an emulation of a physical computer; it runs as an isolated operating system container (partition) and behaves like a physical machine. At any point in time, there can be one or more virtual machines (VMs, or guest machines) running on a physical machine (the host), whose resources are allocated among the VMs as per their specified hardware profiles. The hardware profile of a VM is similar to the real-life hardware specification of a physical computer. All running VMs are isolated from each other and from the host; however, they can be placed on the same or different network segments.
The task of hosting VMs is handled by the virtualization stack and the hypervisor. The hypervisor creates the platform on which VMs are created and hosted. It makes it possible to install the same or different operating systems on the virtual machines and to share resources as deemed fit by hardware profiles or dynamic scheduling. Hypervisors are classified into two types:
Type 1: This is also referred to as a bare-metal or native hypervisor. The software runs directly on the hardware and has better control over it. Also, since there's no layer between the hypervisor and the hardware, the hypervisor has direct access to the hardware. Type 1 is thin and optimized to have a minimal footprint, which allows us to give most of the physical resources to the hosted guests (VMs). One more advantage is a reduced attack surface, which makes the system harder to compromise. A few well-known names are Microsoft's Hyper-V, VMware's ESXi, and Citrix's XenServer.
Type 2: This is also referred to as a hosted hypervisor. It is more like an application installed on an operating system, not directly on the bare metal. The hosted hypervisor is a handy tool for lab or testing purposes. A Type 2 hypervisor has its merits: it is very easy to use, and the user does not have to worry about the underlying hardware, since the OS on which it is installed controls hardware access. However, it is not as robust and powerful as a Type 1 hypervisor. Popular examples are Microsoft Virtual Server, VMware Workstation, Microsoft Virtual PC, Linux KVM, Oracle VirtualBox, and a few others.
The following diagrams should illustrate these concepts better:

Figure 1-1: Differentiating the Type 1 and Type 2 Hypervisors
Storage virtualization allows abstraction of the underlying operations of storage resources and presents it transparently to consumer applications, computers, or network resources. It also simplifies the management of storage resources and enhances the abilities of low-level storage systems.
In other words, it introduces a flexible procedure wherein storage from multiple sources can be used as a single repository and managed without knowledge of the underlying complexity. The virtualization can be implemented at multiple layers of a SAN, which assists in delivering a highly available storage solution or presenting high-performing storage on demand, with both instances being transparent to the end consumer. The closest example is Storage Spaces, offered with Windows Server 2012 and 2012 R2, which enables you to abstract numerous physical disks into one logical pool (a short PowerShell sketch follows the note below).
Note
For more information, refer to www.snia.org/education/storage_networking_primer/stor_virt (Storage Virtualization: The SNIA Technical Tutorial).
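Though storage is not the focus of this chapter, a minimal sketch can illustrate the abstraction, assuming a Windows Server 2012 R2 host with unallocated physical disks; the pool and disk names here are hypothetical:

```powershell
# Find physical disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Abstract them into a single logical pool
New-StoragePool -FriendlyName "VMPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve a resilient virtual disk out of the pool; the consumer sees
# one disk rather than the underlying spindles
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMData" `
    -ResiliencySettingName Mirror -Size 500GB
```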
Network virtualization is the youngest of the lot. With network virtualization, it is possible to put all network services into the virtualization software layer. It introduced software-defined networking (SDN), which uses virtual switches, logical routers, logical firewalls, and logical load balancers, and allows network provisioning without any disruption to the physical network while running traffic over it. So, it not only helps utilize the complete virtual network feature set, from layer 2 to layer 7, but also provides isolation and multi-tenancy (yes, cloud!). It also allows VMs to retain their security properties when moved from one host server to another, which may be located on a different network. Network Virtualization using Generic Routing Encapsulation (NVGRE) is the network virtualization mechanism leveraged by Hyper-V Network Virtualization.
Desktop virtualization is a software technology that separates a desktop environment and any associated application programs from the physical client device that is used to access it. Each user retains their own instance of the desktop operating system and applications, but that stack runs in a virtual machine on a server, accessed through a low-cost thin client. The fundamentals are similar to those of mainframes, which were later inherited by Remote Desktop Services (RDS; also known as Terminal Services) and finally evolved into true desktop virtualization, called Virtual Desktop Infrastructure (VDI). In principle, VDI is different from a remote desktop, and it is expensive. In VDI, users get their own small VMs from the desktop pool (Windows 7 or 8), whereas a remote desktop is a shared environment with the desktop experience of a Windows Server. In RDS, users can't customize their user experience as they can with virtual machines or real desktops.
Application virtualization allows applications to run seamlessly on unsupported platforms, or along with their own older or newer conflicting versions on the same device. There can be two variants for this, namely hosted or packaged:
In a hosted app virtualization, servers are used to host applications and allow users to connect to the server from their device. A good example of this is RemoteApp.
In a packaged app virtualization, as the name indicates, an application is packaged with a pre-created environment that assures the execution of the app on an operating system different from the one where it was packaged. In practice, you may run a Windows XP application on a Windows 7 or 8 desktop without having to customize the app for the newer platform. A few contenders are Microsoft App-V and VMware ThinApp (integrated with the VMware Horizon Suite). One more example was Citrix's application streaming feature in XenApp, but that has since been deprecated by Citrix.
Cloud computing is one phrase that has captured everyone's imagination in the 21st century. The debate on the topic of whether cloud computing is really a revolution or an evolution won't settle anytime soon. However, during the last couple of years, there have been multiple new start-ups based around this "new" form of technology, as well as some big players joining the league of service providers on the cloud.
Cloud computing is a way of delivering hosted services. However, it is more than just outsourcing, as it offers more flexibility, scalability, and automation. Another interesting aspect is self-service, wherein the consumer can request a VM on the fly, build an app and host it on the cloud, or request an infrastructure, and the service gets provisioned in a transparent way. Of course, given the abilities, limitations, and possibilities of cloud computing, vendors have coined their own definitions for it, which may send mixed signals to the end consumer.
The National Institute of Standards and Technology (NIST) is the federal technology agency in the USA that works with the industry to develop and apply technology, measurements, and standards. It published its definition of cloud computing, which has general acceptance among cloud adopters and IT gurus.
The link to download the documentation is http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
As per NIST, the cloud model is composed of five essential characteristics, three service models, and four deployment models.
Every cloud model has these five characteristics, regardless of the deployment or service model:
On-demand self-service: Consumers can request and get services provisioned without any intervention by IT teams or service providers.
Broad network access: There is support for most kinds of client platforms over the network.
Resource pooling: Providers pool their resources and allocate and remove the allocation to the consumers as per their requirement. This process is transparent to the consumer.
Rapid elasticity: Consumers notice that resources are available in abundance. These can be committed to them as and when required and released accordingly.
Measured service: Optimization and metering of resources for chargebacks.
Cloud computing services can be availed primarily as per the following service models:
Software as a Service (SaaS): This service model enables a consumer to use an application hosted by the service provider on the cloud rather than deploy it on their premises. Applications using this service model are messaging and collaboration apps, Office apps, finance apps, and a few others. Google Docs and Microsoft Office 365 are good examples of this model.
Platform as a Service (PaaS): This service model grants more flexibility to the consumer, and they can upload and deploy a custom app or database. They also get to control the configuration of the application-hosting environment. Cloud Foundry, one of the subsidiaries of VMware/EMC, is a PaaS provider.
Infrastructure as a Service (IaaS): In this model, the providers offer a subset of infrastructure that may consist of both virtual and, at times, physical machines, with complete control over the OS and installed apps and limited control over the storage and other networking components (host firewalls). There are a few contenders in this league, such as Amazon Web Services, Rackspace, and Microsoft Azure that provide both PaaS and IaaS service models.
A cloud environment setup is determined by factors such as cost, ownership, and location. So, there are different deployment models for different sets of requirements. The following are the deployment models for a cloud-based implementation:
Private: A private cloud is provisioned and dedicated to a single consumer. It can be managed by the organization on-premises or off-premises, or it can be run and managed by a service provider. The Microsoft System Center suite of products assists customers in setting up an in-house private cloud with manageability over hypervisors from different vendors, namely Hyper-V, ESXi, and XenServer.
Public: A public cloud is provisioned and shared by many tenants and is managed by the service provider off-premise.
Community: A community cloud is a rarer, collaborative environment spanning participants with a common objective. The participants are consumer organizations, and they put their resources into a common pool. This model is managed and maintained by one or more members of the community.
Hybrid: As the name indicates, this is a merger of two or more cloud models, and the usability is decided by the consumer. In principle, the cloud models are unique but connected by proprietary technology, and allow portability of relevant data between the models.
Cloud computing has generated a lot of excitement in recent years. However, it is still less mature than regular outsourcing. Nonetheless, the winning bid from cloud models lies in the concept of automation and self-service, giving the consumer the freedom of choice and manageability.
Moreover, we have seen some new acronyms being coined around service models in recent times, in addition to what has been stated earlier, such as XaaS and NaaS. For example, XaaS, or anything-as-a-service, makes the SPI (short for Software/Platform/Infrastructure) model converge as demanded and delivers it as a service. NaaS, or network-as-a-service, is based around network virtualization and allows provisioning of virtual network service to consumers. This is just an indication that cloud computing is changing and will be changing the face of IT in the times to come.
The year 2012 saw one of the biggest platform and system management releases from Microsoft: Windows Server 2012 and System Center 2012. The new face of IT and new expectations and requirements from customers made Microsoft develop a mature product in Windows Server 2012, with an objective to "cloud-optimize IT." There were notable advancements made in the hypervisor's third release, Hyper-V 3.0, and the virtualization stack.
Windows Server 2012 was focused not only on virtualization and cloud aspects, but also on improvements to other OS aspects and their integration with Hyper-V, Azure, and VDI. Here are a few important ones: dynamic memory management and smart paging, domain controller cloning, automation with PowerShell 3.0, SMB 3.0 with a Scale-Out File Server over Cluster Shared Volumes (which has found many use cases), Storage Spaces, data deduplication, VHDX, IPAM, NIC teaming, the Hyper-V Extensible Switch, Hyper-V Replica (MS's answer to VMware's SRM), and so on.
Windows Server 2012 and Hyper-V Server 2012 swept the market. However, there were still some missing pieces in the puzzle. Windows Server 2012 R2 (Release 2.0) was released in October 2013, with some key improvements to Hyper-V and other aspects. For starters, R2 brought back the forgotten Start button to the Metro UI. There were also significant improvements from the networking and storage perspectives:
Networking: With a clear focus and vision for the Cloud OS, Microsoft has worked hard on improvements in this area. New PowerShell cmdlets have been included for Windows networking roles for better automation and control. Windows Azure has progressed from being just a PaaS provider to an IaaS provider as well. Windows Server 2012 introduced the capability of hosting a multi-tenant cloud, and with R2, Microsoft took network virtualization further. Windows Azure Pack for Windows Server and System Center 2012 R2 Virtual Machine Manager provide virtual network creation and management.
Storage: Microsoft has focused on providing better manageability and control over storage options for admins. Many noteworthy features were brought in with Windows Server 2012, namely SMI-S, data deduplication, Storage Spaces, iSCSI Target Server, and DFSR enhancements. In R2, we saw further improvements to these features. Data deduplication is now supported on CSVs and proves to be a boon for VDI setups (see the sketch after this list). Storage Spaces allows storage tiers, which facilitate movement of data between faster and slower media based on the frequency at which the data is accessed. Moreover, the old and reliable replication engine, FRS, along with the VDS provider, has been deprecated.
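As an illustration of the deduplication improvement mentioned above, here is a hedged sketch enabling deduplication on a CSV volume hosting VDI workloads; the volume path is hypothetical, and the HyperV usage type assumes the 2012 R2 deduplication cmdlets:

```powershell
# Enable data deduplication on a cluster shared volume hosting VDI VMs
# (the HyperV usage type arrived with Windows Server 2012 R2)
Enable-DedupVolume -Volume "C:\ClusterStorage\Volume1" -UsageType HyperV

# Report the space reclaimed once the optimization jobs have run
Get-DedupStatus -Volume "C:\ClusterStorage\Volume1" |
    Select-Object Volume, SavedSpace, OptimizedFilesCount
```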
It has been a late realization, but after a decade of research, understanding customer requirements, and multiple releases, Microsoft has finally come out with a stable, feature-rich, and yet economical virtualization platform in the third release of Hyper-V. The software vendor's goal and vision, as per their data sheet, is to provide a consistent platform for infrastructure, apps, and data—the Cloud OS. They are almost there, but the journey up to this point was an interesting one.
Hyper-V 1.0, released with the Windows Server 2008 64-bit platform, was mocked by the IT community and treated as little more than a prototype. Hyper-V does not ship with 32-bit (x86) Windows platforms, though incidentally an x86 build did appear in the beta versions. The next version was Hyper-V 2.0, which came out with Windows Server 2008 R2. This also marked the end of 32-bit server OS releases from Microsoft; Windows Server 2008 R2 was only available on x64 (64-bit) platforms. The second release of Hyper-V was quite stable, with dynamic memory and the familiarity of the Windows GUI. It was well received and adopted by the IT community. However, it lacked the scalability and prowess of VMware's ESX and ESXi servers, and its primary use case was cost: setting up an economical but not workload-intensive infrastructure. Windows Server 2012 came out with the third release of Hyper-V. It almost bridged the gap between ESXi and Hyper-V and tilted the market shares in Microsoft's favor, though VMware is still the market leader for now. There were many new features and major enhancements introduced to the virtualization stack, and features such as virtual SAN were added, which reduced the dependency of VMs on the parent partition. Windows Server 2012 R2 was not a major release, but it brought some improvements and innovations to the third release. However, before we discuss the features and technical requirements of Hyper-V 2012 R2, let's first cover the architecture of Hyper-V.
It's imperative to know the underlying components that make up the architecture of Hyper-V, and how they function in tandem. This not only helps in designing a framework, but more importantly assists in troubleshooting a scenario.
In one of the previous sections, we discussed what hypervisors are and also that they run either bare-metal or hosted. However, before we proceed further with the terms related to Hyper-V, let's check out what OS Protection rings or access modes are. Rings are protection boundaries enforced by the operating system via the CPU or processor access mode. In a standard OS architecture, there are four rings.
The innermost ring, Ring 0, runs just above the hardware; this is where the OS kernel runs, with the most privileged CPU access. Rings 1 and 2 host device drivers, or privileged code, and Ring 3 is for user applications. On the Windows OS, there are just two rings: Ring 0 for kernel-mode and Ring 3 for user-mode processor access. Refer to the following diagram to understand this:

Figure 1-2: OS Protection Rings
Hyper-V is a Type-1 hypervisor. It runs directly on hardware, ensures allocation of compute and memory resources for virtual machines, and provides interfaces for administration and monitoring tools. It is installed as a Windows Server role on the host, and moves the host OS into the parent or root partition, which now holds the virtualization stack and becomes the management operating system for VM configuration and monitoring.
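Since Hyper-V ships as a server role, enabling it is a one-liner. The following is a minimal sketch, assuming a Windows Server 2012 R2 installation with hardware-assisted virtualization enabled in the BIOS:

```powershell
# Install the Hyper-V role with its management tools; after the reboot,
# the existing OS instance becomes the parent (management) partition
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```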
Since Hyper-V runs directly on hardware and handles CPU allocation tasks, it needs to run in Ring 0. However, this also indicates a possible conflict with the OS kernels of both the parent partition and the VMs, whose kernel modes are designed to run in Ring 0 only. To sort this out, Intel and AMD facilitate hardware-assisted virtualization on their processors, which provides an additional privilege mode called Ring -1 (minus 1), and Hyper-V (a Type 1 hypervisor) slips into this ring. In other words, Hyper-V will run only on processors that support hardware-assisted virtualization. The following diagram depicts the architecture and various components that are the building blocks of the Hyper-V framework:

Figure 1-3: Hyper-V Architecture
Let's define some of the components that build up the framework:
Virtualization stack: This is a collection of components that make up Hyper-V, namely the user interface, management services, virtual machine processes, providers, emulated devices, and so on.
Virtual Machine Management service (VMM Service): This service maintains the state of the virtual machines hosted in the child partitions, and controls the tasks that can be performed on a virtual machine based on its current state (for example, taking snapshots). When a virtual machine is booted up, the VMM Service creates a virtual machine worker process for it.
Virtual Machine Worker Process (VMWP): The VMM Service creates a VMWP (vmwp.exe) for every corresponding Hyper-V virtual machine, and it manages the interaction between the parent partition and the virtual machines in the child partitions. The VMWP manages all VM operations, such as creating and configuring, snapshotting and restoring, running, pausing and resuming, and live migrating the associated virtual machine.
WMI Provider: This allows the VMMS to interface with virtual machines and management agents.
Virtualization Infrastructure Driver (VID): This is responsible for providing partition management services, virtual processor management services, and memory management services for virtual machines running in partitions.
Windows Hypervisor Interface Library (WinHv): This binary assists operating system drivers in the parent and child partitions to communicate with the hypervisor via standard Windows API calls rather than hypercalls.
VMBus: This is responsible for interpartition communication and is installed with integration services.
Virtualization/Virtual Service Providers (VSP): This resides in the management OS and provides synthetic device access through the VMBus to virtualization service clients in child partitions.
Virtualization/Virtual Service Clients (VSC): This is another one of the integration components that reside in child partitions, and communicates child partitions' device I/O requests over VMBus.
One entity that is not explicitly depicted in the preceding diagram is Integration Services, also referred to as Integration Components. It is a set of utilities and services—some of which have been mentioned in the preceding list—installed on the VMs to make them hypervisor-aware, or enlightened. This includes a hypervisor-aware kernel, Hyper-V-enlightened I/O, virtualization service client (VSC) drivers, and so on. Integration Services, along with driver support for virtual devices, provides the following five services for VM management (a PowerShell sketch for inspecting them follows the list):
Operating system shutdown: The service allows the management agents to perform a graceful shutdown of the VM.
Time synchronization: The service allows a virtual machine to sync its system clock with the management operating system.
Data exchange: The service allows the management operating system to detect information about the virtual machine, such as its guest OS version, FQDN, and so on.
Heartbeat: The service allows Hyper-V to verify the health of the virtual machine, whether it's running or not.
Backup (volume snapshot): The service allows the management OS to perform a VSS-aware backup of the VM.
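The state of these services can be inspected and toggled per VM from the management OS. Here is a hedged PowerShell sketch; the VM name is hypothetical:

```powershell
# List the integration services offered to a VM and whether they are enabled
Get-VMIntegrationService -VMName "SRV-APP01"

# Enable an individual service, for example time synchronization
Enable-VMIntegrationService -VMName "SRV-APP01" -Name "Time Synchronization"
```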
Here's a glimpse of the first Hyper-V setting for this title. The following screenshot shows the Integration Services section from a Virtual Machine Settings applet:

Figure 1-4: Integration Services
Hyper-V allows hosting of multiple guest operating systems in child partitions. Based on whether the VMs have Integration Services installed, we can identify them as follows:
Enlightened Windows guest machines: The Windows virtual machines that are Hyper-V aware are referred to as enlightened. They either have the latest Integration Components built in by default (for example, a Windows Server 2012 R2 VM is considered enlightened when hosted on a Windows Server 2012 R2 host), or have Integration Services installed on them. Integration Components install VSCs, as stated earlier, which act as device drivers for virtual devices. VSCs communicate and transfer VM device requests to VSPs via the VMBus.
Enlightened non-Windows guest machines: Beyond Windows, Microsoft supports multiple flavors of Linux (for example, RHEL, SUSE, and a few others), contrary to rumors spread by some communities that Hyper-V does not support Linux. Linux guest machines are very much supported, and MS provides LIS (short for Linux Integration Services) drivers for optimum performance from Hyper-V virtual devices integrated with them.
At the time of writing this book, the latest release of LIS is version 3.5. The LIS ISO is available for download for older Linux distributions. The newer distributions of Linux are pre-enlightened, as they have LIS built into them by default.
Unenlightened guest machines: Windows, Linux, or other platforms that are not enlightened or have Integration Services installed are unaware of Hyper-V. However, Hyper-V allows emulation for device and CPU access. The demerit is that emulated devices do not provide high performance and cannot leverage the rich virtual machine management infrastructure via Integration Services.
Before we move on to the feature review of Hyper-V 2012 R2, let's consider the prerequisites of a Hyper-V host implementation. In the next chapter, we will look at it in detail.
Ever since its first RTM release, Hyper-V has run on the x64 (64-bit) platform and requires an x64 processor. The CPU should fulfill the following criteria:
Hardware-assisted virtualization: These processors include a virtualization option that provides an additional privilege mode below Ring 0 (Ring -1). Intel calls this feature Intel VT-x, and AMD brands it as AMD-V on their processors.
Hardware-enforced Data Execution Prevention (DEP): This feature is a security requirement from a Windows standpoint, preventing malicious code from being executed from system memory locations. With DEP, the system memory locations are tagged as non-executable. The setting is enabled from the BIOS. On Intel processors, the setting for DEP is called the XD bit (Execute Disable bit), and in the case of AMD, it is called the NX bit (No Execute bit). In Hyper-V, this setting is imperative, as it prevents the VMBus from being used as a vulnerable connection to attack the host OS.
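A quick way to verify both criteria before installing the role is to query WMI. This is a minimal sketch, assuming a Windows 8/Server 2012 or later host where these properties are exposed:

```powershell
# Is hardware-assisted virtualization (Intel VT-x / AMD-V) enabled in firmware?
Get-WmiObject Win32_Processor |
    Select-Object Name, VirtualizationFirmwareEnabled

# Is hardware-enforced DEP (the XD/NX bit) available to the OS?
(Get-WmiObject Win32_OperatingSystem).DataExecutionPrevention_Available
```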
Windows Server 2012 was released with a box full of goodies for admins and architects, but there was room for more. In the previous section, we took a brief look at the features that were rolled out with Windows Server 2012. R2 introduced very few but significant changes, as well as some noteworthy improvements to previously introduced features. In the last section of this chapter, there will be a long list of features and gotchas from Hyper-V 2012 R2 compared against VMware's ESXi, but here let's look at a few important features for consideration (a combined PowerShell sketch follows this list):
Generation 2 virtual machines: This has been one of the most talked-about inclusions in this release. In Hyper-V 2012 R2, there are two supported generations for virtual machines:
Generation 1: This still uses the old virtual hardware recipe available from previous Hyper-V releases, emulating the old Intel chipset.
Generation 2: This introduces a new set of virtual hardware, breaking the dependency on the older virtual hardware. It offers UEFI 2.0 firmware support and allows a VM to boot off a SCSI virtual disk or DVD. It also adds PXE boot capability to a standard network adapter (doing away with the legacy NIC). For now, four operating systems are supported on Generation 2 VMs: the client OSes Windows 8 and 8.1, and the server OSes Windows Server 2012 and 2012 R2.
Hyper-V Replica: The disaster recovery solution inside Hyper-V has finally included the change requested by many admins. Previously, administrators could create an offline copy of a VM on a second Hyper-V server; if the first server failed, the replica would be brought online as the disaster recovery process. With 2012 R2, it is possible to extend replication to a third replica server, which ensures further business continuity coverage. Earlier, a replica could only be configured via Hyper-V Manager, PowerShell, or WMI, but now the feature has been extended to Azure, and you need VMM (Virtual Machine Manager) to push a replica to the cloud.
Automatic Virtual Machine Activation (AVMA): This feature saves a lot of activation overhead for admins when it comes to activating product keys on individual virtual machines. AVMA allows a VM to be installed on a licensed virtual server and activates the VM when it starts. The supported operating systems on VMs for AVMA are Windows Server 2012 R2 Essentials, Windows Server 2012 R2 Standard, and Windows Server 2012 R2 Datacenter. Windows Server 2012 R2 Datacenter is required on the Hyper-V host for this function. This feature has a few use cases:
Virtual machines in remote locations can be activated
Virtual machines with or without an Internet connection can be activated
Virtual machine licenses can be tracked from the Hyper-V Server without requiring any additional access rights or privileges to the virtual machines
Shared virtual disks: With this exciting feature, admins may give iSCSI, pass-through disks, or even virtual SAN a miss. When enabled on a VHDX file, it allows the file to act as shared storage for guest machine failover clustering.
Storage QoS: This is an interesting addition wherein the admin can specify minimum and maximum IOPS per virtual disk, so that storage throughput stays in check.
Linux support: Microsoft has put a lot of focus on building an OS-independent virtual platform for hosting providers. Now, new Linux releases are Hyper-V aware, with Integration Services built in, and for older Linux platforms, MS has released LIS 3.5. This new IS enables many feature additions for Linux VMs, including dynamic memory, online VHD resize, and online backup (Azure Online Backup, SCDPM, or any other backup utility that supports backup of Hyper-V virtual machines).
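To make the preceding features more concrete, here is a hedged PowerShell sketch touching Generation 2 VMs, replica configuration, shared virtual disks, and storage QoS. All VM names, paths, and server names are hypothetical, and the cmdlets assume the Hyper-V module on Windows Server 2012 R2:

```powershell
# Create a Generation 2 VM that boots from a SCSI virtual disk
New-VM -Name "GEN2-VM01" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VMs\GEN2-VM01.vhdx" -NewVHDSizeBytes 60GB

# Enable replication of the VM to a replica server
Enable-VMReplication -VMName "GEN2-VM01" -ReplicaServerName "HV-REPLICA01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Attach one VHDX to two guest-cluster nodes as shared storage
Add-VMHardDiskDrive -VMName "NODE1" -SupportPersistentReservations `
    -Path "C:\ClusterStorage\Volume1\Quorum.vhdx"
Add-VMHardDiskDrive -VMName "NODE2" -SupportPersistentReservations `
    -Path "C:\ClusterStorage\Volume1\Quorum.vhdx"

# Guarantee and cap IOPS on a virtual disk (storage QoS)
Set-VMHardDiskDrive -VMName "GEN2-VM01" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 1000
```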
Microsoft made an aggressive move with licensing in Windows Server 2012 and maintained the same rhythm with Windows Server 2012 R2. There are just two primary editions: Standard and Datacenter; the "Enterprise" edition was dropped from the listing. The other two editions, namely "Essentials" and "Foundation", are small-business editions with almost no VOSE (Virtual Operating System Environment) rights. Our focus will be on the primary editions only, since our agenda is virtualization.
Note
POSE stands for Physical Operating System Environment, wherein the running instance is on the physical server. VOSE indicates a virtual machine instance.
In principle, the "Standard" and "Enterprise" editions carry the same features as each other. However, the Enterprise version offers unlimited VOSE, and the Standard edition licenses only two virtual machine instances, or VOSE. For each edition, the license covers two processors or sockets (not cores). If the server has more than two processors, then for each edition, one additional license has to be purchased. An additional license purchase for the Standard edition can also provide for two more VOSEs as well as two processor sockets. Take a look at the the following table to understand this better:
| Licensing examples | Datacenter licenses required | Standard licenses required |
| --- | --- | --- |
| One 1-processor, non-virtualized server | 1 | 1 |
| One 4-processor, non-virtualized server | 2 | 2 |
| One 2-processor server with three virtual OSEs | 1 | 2 |
| One 2-processor server with 12 virtual OSEs | 1 | 6 |
In the previous section, we looked at one of the new features, called Automatic Virtual Machine Activation (AVMA). If the host OS is Windows Server 2012 R2 Datacenter, this feature can be utilized by installing the AVMA key on the virtual machine with slmgr, in the same way as a KMS client key. The virtual machine can then be deployed as a template. As stated earlier, this is available only for Windows Server 2012 R2 editions running as VMs.
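A hedged usage sketch, run inside the guest; the key placeholder stands for the generic AVMA key Microsoft publishes for the guest edition being activated:

```powershell
# Inside the guest VM: install the AVMA key; activation then happens
# automatically against the licensed 2012 R2 Datacenter host
slmgr /ipk <AVMA-key-for-guest-edition>

# Verify the activation status
slmgr /dlv
```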
Note
This section is not a licensing guide, but an effort to help learners understand the basics. For more information, always contact your Microsoft reseller or refer to http://www.microsoft.com/en-us/server-cloud/buy/pricing-licensing.aspx.
It's a norm now: over the last couple of years, whenever there has been a discussion of the features of Hyper-V, experts and admins have pitted it against the other two leading competitors, though the focus has primarily been on VMware. From being mocked to becoming a serious competitor, and now almost on par, the Hyper-V development team has pulled the reins strongly to catch up with VMware's ESXi. The community, which was once split in its opinion, is now adopting and becoming aware of both hypervisors, with experts from the other side honing their skills on Hyper-V. The rise of Hyper-V, in a way, can be attributed to VMware and its vision of server virtualization. The next notable contender in the list of hypervisors is Citrix's XenServer, which went fully open source last year with its XS 6.2 release. There are similarities, yet there are differences between the products, from both the architecture and feature standpoints. Let's look at some of those striking features stacked together.
The following table depicts the hypervisor's host attributes and configuration limits. These are considered as guidelines for setting up a virtual data center:
| Host attribute | Microsoft Windows Server 2012 R2 / System Center 2012 R2 Datacenter Edition | VMware vSphere 5.5 Enterprise Plus with Operations Management / vCenter Server 5.5 | Citrix XenServer 6.2 Single Product Edition / XenCenter 6.2 management console |
| --- | --- | --- | --- |
| Hypervisor type and footprint | | | |
| Maximum memory (per host) | 4 TB | 4 TB | 1 TB |
| Maximum number of processors (per host) | 320 (logical) | 320 (logical) | 160 (logical) |
| Maximum number of active VMs / consolidation (per host) | 1,024 VMs | 512 VMs | 450 VMs (Windows); 650 VMs (paravirtualized, Linux-based) |
| Maximum number of virtual CPUs (per VM) | 64 | 64 | 16 |
| Hot-adding virtual CPU to VM | Partial support, by allowing alterations to virtual machine limits | Supported (limitations from VOSE and VMware FT) | Not supported |
| Maximum virtual RAM (per VM) | 1 TB | 1 TB | 128 GB |
| Hot-adding virtual RAM to VM | Supported (via dynamic memory) | Supported | Not supported |
| Dynamic memory management | Supported (via dynamic memory) | Supported (via memory ballooning and transparent page sharing) | Supported (via Dynamic Memory Control, or DMC) |
| Virtual NUMA support for VMs | Supported | Supported | Not supported |
| Maximum number of physical hosts per cluster | 64 nodes | 32 nodes | 16 nodes |
| Maximum number of VMs per cluster | 8,000 VMs | 4,000 VMs | 800 VMs |
| VM snapshots | Supported; 50 snapshots per VM | Supported; 32 snapshots per VM (VMware, as a best practice, recommends two to three snapshots) if VMs are using an iSCSI initiator | Supported; one snapshot per VM |
| Bare-metal/automated host deployment | Supported (System Center 2012 R2 Virtual Machine Manager) | Supported (VMware's Auto Deploy and host profiles make it possible to perform bare-metal deployment of new hosts into a pre-existing cluster; however, they will not perform bare-metal deployment of new clusters) | Supported (no integrated deployment application; however, possible via unattended installation from a network repository) |
| GPU advancements | Supported, via RemoteFX and VDI features in the RDS role | Supported, via vDGA and vSGA features | Supported, via HDX and vGPU (Kepler architecture K1/K2) features |
| Boot from SAN | Supported, via the iSCSI Target Server or third-party iSCSI/FC storage arrays | Supported, via third-party iSCSI/FC storage arrays | Supported, via third-party iSCSI/FC storage arrays |
| Boot from USB/flash | Supported | Supported | Not supported |
The following table shows comprehensively how each hypervisor supports various operating system platforms as virtualized workloads:
Note
For the most recent and complete list of supported operating systems, please refer to these links:
Microsoft: Supported server and client guest operating systems on Hyper-V; http://technet.microsoft.com/library/hh831531.aspx
VMware: A compatibility guide for guest operating systems supported on VMware vSphere; http://www.vmware.com/resources/compatibility
Citrix: XenServer 6.2.0 Virtual Machine User's Guide; http://support.citrix.com/article/CTX137830
| Guest operating system | Microsoft Windows Server 2012 R2 / System Center 2012 R2 Datacenter Edition | VMware vSphere 5.5 Enterprise Plus / vCenter Server 5.5 Standard Edition | Citrix XenServer 6.2 Single Product Edition / XenCenter 6.2 |
| --- | --- | --- | --- |
| CentOS 5.5-5.6, 5.7-5.8, 5.9, 6.0-6.3, and 6.4-6.5 | Supported | Supported | Supported |
| CentOS Desktop 5.5-5.6, 5.7-5.8, 5.9, 6.0-6.3, and 6.4-6.5 | Supported | Supported | Supported |
| Oracle Linux 6.4 and 6.5 with UEK | Supported (Oracle certified) | Supported (Oracle has not certified any of its products to run on VMware) | Supported |
| Mac OS X 10.7.x and 10.8.x | Not supported | Supported | Not supported |
| Red Hat Enterprise Linux 5.5-5.6, 5.7-5.8, 5.9, 6.0-6.3, and 6.4-6.5 | Supported | Supported | Supported |
| Red Hat Enterprise Linux Desktop 5.5-5.6, 5.7-5.8, 5.9, 6.0-6.3, and 6.4-6.5 | Supported | Supported | Supported |
| SUSE Linux Enterprise Server 11 SP2 and SP3 | Supported | Supported | Supported |
| SUSE Linux Enterprise Desktop 11 SP2 and SP3 | Supported | Supported | Supported |
| OpenSUSE 12.3 | Supported | Supported | Supported |
| Sun Solaris 10 and 11 | Not supported | Supported (Oracle has not certified any of its products to run on VMware) | Not supported |
| Ubuntu 12.04, 12.10, 13.04, and 13.10 | Supported | Supported | Supported |
| Ubuntu Desktop 12.04, 12.10, 13.04, and 13.10 | Supported | Supported | Supported |
| Windows Server 2012 R2 | Supported | Supported | Supported (with SP1) |
| Windows 8.1 | Supported | Supported | Supported (with SP1) |
| Windows Server 2012 | Supported | Supported | Supported |
| Windows 8 | Supported | Supported | Supported |
| Windows Server 2008 R2 SP1 | Supported | Supported | Supported |
| Windows Server 2008 R2 | Supported | Supported | Supported |
| Windows 7 with SP1 | Supported | Supported | Supported |
| Windows 7 | Supported | Supported | Supported |
| Windows Server 2008 SP2 | Supported | Supported | Supported |
| Windows Home Server 2011 | Supported | Not supported | Supported |
| Windows Small Business Server 2011 | Supported | Not supported | Supported |
| Windows Vista with SP2 | Supported | Supported | Supported |
| Windows Server 2003 R2 SP2 | Supported | Supported | Supported |
| Windows Server 2003 SP2 | Supported | Supported | Supported |
| Windows XP with SP3 | Supported | Supported | Supported |
| Windows XP x64 with SP2 | Supported | Supported | Supported |
The following table depicts various storage-related features, from both the host and the VM perspective, promoted by each hypervisor platform:
| Storage feature | Microsoft Windows Server 2012 R2 / System Center 2012 R2 Datacenter Edition | VMware vSphere 5.5 Enterprise Plus / vCenter Server 5.5 Standard Edition | Citrix XenServer 6.2 Single Product Edition / XenCenter 6.2 management console |
| --- | --- | --- | --- |
| Maximum number of SCSI virtual disks per VM | 256 | 60 (PVSCSI disks) and 120 (virtual SATA disks) | 16 (VDI via VBD) |
| Maximum size per virtual disk | 64 TB (VHDX) and 2 TB (VHD) | 62 TB | 2 TB |
| 4K native (4K logical sector size) disk support | Supported | Not supported | Not supported |
| Boot VM from SCSI virtual disks | Supported (Generation 2 VMs onwards) | Supported | Supported |
| Hot-adding virtual SCSI (running VMs) | Supported | Supported | Supported |
| Hot-extending virtual SCSI (running VMs) | Supported | Supported (except 62 TB VMDKs) | Supported (via XenConvert) |
| Hot-shrinking virtual SCSI (running VMs) | Supported | Not supported | Supported (via XenConvert) |
| Storage migration (running VMs) | Supported, with an unlimited number of simultaneous live storage migrations and the flexibility to cap them at a maximum limit appropriate for your data center | Supported, with two simultaneous Storage vMotion operations per ESXi host, or eight simultaneous Storage vMotion operations per datastore; the feature cannot be extended to VM guest clusters with MSCS | Supported, with three simultaneous Storage XenMotion operations and a cap of one snapshot per VM undergoing migration |
| Virtual FC to VMs | Supported (four virtual FC NPIV ports per VM) | Supported (four virtual FC NPIV ports per VM); however, the feature cannot be extended to VM guest clusters with MSCS | Not supported |
| Storage quality of service | Supported (Storage QoS) | Supported (Storage I/O Control) | Supported (I/O priority on virtual disks) |
| Flash-based read cache | Supported | Supported | Not supported |
| Flash-based write-back cache | Supported (Storage Spaces) | Supported (Virtual SAN) | Not supported |
| Storage virtualization abilities | Supported (Storage Spaces) | Supported (Virtual SAN) | Not supported |
| Deduplication of shared storage hosting VMs | Supported (VDI workloads) | Not supported | Not supported |
This table mentions the networking features provided by each hypervisor model, which can help architects design their environments:
| Networking feature | Microsoft Windows Server 2012 R2 / System Center 2012 R2 Datacenter Edition | VMware vSphere 5.5 Enterprise Plus / vCenter Server 5.5 Standard Edition | Citrix XenServer 6.2 Single Product Edition / XenCenter 6.2 management console |
| --- | --- | --- | --- |
| Distributed switch | Logical switch in System Center VMM 2012 R2 | vDS (vNetwork Distributed Switch) | Open vSwitch (the distributed vSwitch is deprecated) |
| Extensible virtual switch | Supported; extensions are offered by Cisco, InMon, and 5nine | Replaceable, and not truly extensible | Supported, via Open vSwitch |
| NIC teaming | Supported; 32 NICs per team, utilizing dynamic load balancing | Supported; 32 NICs per team, utilizing a Link Aggregation Group | Supported; four NICs per bond, utilizing a Link Aggregation Group |
| PVLANs (private VLANs) | Supported | Supported | Supported |
| ARP spoofing security | Supported | Supported, via the additional paid add-on vCloud Networking and Security (vCNS) or the vCloud Suite | Supported |
| DHCP snooping security | Supported | Supported, via the additional paid add-on vCloud Networking and Security (vCNS) or the vCloud Suite | Not supported |
| Router Advertisement (RA) guard protection | Supported | Supported, via the additional paid add-on vCloud Networking and Security (vCNS) or the vCloud Suite | Not supported |
| Virtual port ACLs | Built-in support for extended ACLs | Supported, via traffic filtering and marking policies in the vSphere 5.5 vDS | Supported |
| Software-defined networking (SDN) / network virtualization | Supported (the NVGRE protocol) | Supported, via the additional paid add-on VMware NSX | Supported, via the paid add-on CloudPlatform SDN controller and SDN plugins |
The final table depicts the high availability and mobility offerings by each hypervisor platform:
| High availability and mobility feature | Microsoft Windows Server 2012 R2 / System Center 2012 R2 Datacenter Edition | VMware vSphere 5.5 Enterprise Plus / vCenter Server 5.5 Standard Edition | Citrix XenServer 6.2 Single Product Edition / XenCenter 6.2 management console |
| --- | --- | --- | --- |
| Live migration (running VMs) | Supported, with unlimited simultaneous live VM migrations, depending on the data center's capacity | Supported, but limited to four simultaneous vMotions on 1 GbE and eight simultaneous vMotions on 10 GbE network adapters | Supported, but one at a time, in sequence |
| Live migration (running VMs without shared storage) | Supported | Supported | Supported |
| Live migration with compression of VM state | Supported | Not supported | Not supported |
| Live migration over RDMA network adapters | Supported | Not supported | Not supported |
| VM guest cluster (Windows Failover Clustering) live migration | Supported | Not supported, as per the vSphere MSCS setup documentation | Not supported |
| Highly available (HA) VMs | Supported | Supported | Supported |
| Affinity rules for HA VMs | Supported | Supported | Not supported (workload balancing is a retired feature) |
| Orchestrated updating of hypervisor hosts | The Cluster-Aware Updating (CAU) role service | vSphere 5.5 Update Manager, at additional cost | XenCenter management, at additional license cost |
| Application monitoring and management for HA VMs | System Center 2012 R2 Operations Manager | VM Monitoring Service and vSphere App HA | Not supported |
| VM guest clustering (shared virtual hard disk) | Shared VHDX | Shared VMDK | Not supported (shared VDI) |
| Maximum number of nodes in a VM guest cluster | 64 VM nodes | 5 VM nodes | Not supported |
| Fault-tolerant (lockstep) VMs | Not supported; as per Microsoft, application availability can be well managed via highly available VMs and VM guest clustering, which is more economical and easier to manage; in the case of stringent requirements, fault-tolerant hardware solutions can be opted for | VMware FT | Not supported |
These tables list a subset of feature considerations to bring to your attention how well the aforementioned products are placed against each other, with Hyper-V edging out VMware and Citrix in the race with its recent release. In later chapters, we will look at some of these features closely.
This brings us to the end of the first chapter, so let's revisit what we have discussed so far. We saw how virtualization was first perceived and developed, and its evolution in recent times. It is now a building block of cloud computing. We also discussed different forms of virtualization based on resource layering. Thereafter, we looked at the characteristics and models of cloud computing.
In the section after that, we saw how well Windows 2012 was adopted, with the entire new arsenal it had to offer, from the OS and virtualization perspective. Windows 2012 R2 raised the bar further, with remarkable improvements to the original version.
We then delved into the component architecture of Hyper-V and discussed how the underlying entities communicate. This gave a little insight into OS protection rings and the place where the hypervisor (aka Hyper-V) sits in the circles of trust (Ring -1). After that, we looked at the new features and improvements delivered with Windows Hyper-V 2012 R2, and how they are going to benefit modern data centers.
Further, we looked a bit into licensing considerations of Windows 2012 R2, from both operating system and Hyper-V standpoints. We closed the chapter with a showdown between Windows Hyper-V 2012 R2, VMware ESXi 5.5, and Citrix XenServer 6.2, and depicted the major areas of comparison. Now it is evident how well Hyper-V is placed in terms of its features and why it is gradually taking over market shares.
In the next chapter, we will delve further into the technical side of things, discussing how to identify virtualization needs and how to plan, design, and deploy Hyper-V in an environment.