Hyper-V has evolved since its release back in 2008. At that time, Hyper-V was released as an update to Windows Server 2008, KB950050 to be precise (which can be found at https://support2.microsoft.com/kb/950050/en-us). Many of the features available today were not present at that point. Virtualization has been one of Microsoft's major areas of investment, not only in Hyper-V itself, but also in ensuring that all of its major products run well in a virtualized environment. As an example of how Hyper-V has evolved, Microsoft Azure runs entirely on it. In the first release, Hyper-V did not have Live Migration, Storage Live Migration, Replica, Dynamic memory, and many other features. It also supported only four virtual processors and 64 GB of virtual RAM per Virtual Machine (VM). At first, Hyper-V's only appeal was its price, or rather, the fact that it was not charged for at all.
Nowadays, Hyper-V is the leading virtualization solution in many markets and is rapidly gaining market share over its competitors. The reason behind this is actually simple: Hyper-V meets the high expectations of large enterprises, and since it is delivered for free, even small companies can benefit from all of its features. Moreover, Microsoft Hyper-V Server is a totally free virtualization platform with no restrictions compared to the Hyper-V role in Windows Server, which also makes it a perfect fit for open source users. Licensing and utilization options will be explained in detail in Chapter 3, Licensing a Virtualization Environment with Hyper-V, so for now, all you have to keep in mind is that Microsoft delivers all its virtualization technologies at no cost.
However, before we go through all the Hyper-V features that this book will cover, it's important to understand the architecture and components of Hyper-V, so you'll have a better understanding of how it all works and will be able to make better decisions when planning your virtualization environment.
In this chapter, we will cover the following topics:
Type 1 and 2 Hypervisors
Microkernel and Monolithic Type 1 Hypervisors
Processor and memory configuration
Looking back in history, Hyper-V is not the first virtualization technology from Microsoft. In fact, virtualization, emulation, and other such techniques have been in use since the earliest computers; even mainframes used them. Virtualization as we know it today was conceived to solve a common problem: the average utilization of a server is extremely low. Even though some components are used more than others, the total utilization of a server is minimal. That happens because when you plan a server that will run an application, you have to plan for the moment of peak utilization, when the application is stressed. But this utilization peak will occur just a few times during the month. The rest of the time, your server will be either idle or using 5 to 10 percent of its capacity. Before virtualization, another technique was also used: server consolidation. This technique consists of running multiple applications on the same server. The problem with this option is that there is no isolation between the application environments, and often, you can't combine too many different applications on the same server, as they may have totally different requirements. Another problem with server consolidation is that two applications may hit their utilization peaks concurrently on the same server. This technique is hardly used today, as virtualization addresses these issues in a much better way.
Microsoft has played in this field of better hardware utilization since its first operating system. Even Microsoft DOS had options for doubling RAM, and Windows 3.x introduced paging, also known as virtual memory, which carried over to later Operating Systems (OS). The game started to change in 2003, when Microsoft acquired two products from Connectix: Virtual PC, which already had released versions for Mac OS and Windows, and Virtual Server, which was still in development at the time. With the acquisition, part of the staff from Connectix came to Microsoft, and, in 2004, Microsoft released Microsoft Virtual Server 2005.
Compared to the first version of Hyper-V, Microsoft Virtual Server is a dinosaur. That's not only because Hyper-V implements new features, but also because there is a major architectural difference between these products. This is the Hypervisor architecture.
If you've used Microsoft Virtual Server or Virtual PC, and then moved to Hyper-V, I'm almost sure that your first impression was: "Wow, this is much faster than Virtual Server". You are right. And there is a reason why Hyper-V performance is much better than Virtual Server or Virtual PC. It's all about the architecture.
There are two types of Hypervisor architectures. Hypervisor Type 1, like Hyper-V and ESXi from VMware, and Hypervisor Type 2, like Virtual Server, Virtual PC, VMware Workstation, and others. The objective of the Hypervisor is to execute, manage and control the operation of the VM on a given hardware. For that reason, the Hypervisor is also called Virtual Machine Monitor (VMM). The main difference between these Hypervisor types is the way they operate on the host machine and its operating systems. As Hyper-V is a Type 1 Hypervisor, we will cover Type 2 first, so we can detail Type 1 and its benefits later.
Hypervisor Type 2, also known as hosted, is an implementation of the Hypervisor on top of the OS installed on the host machine. As a result, the OS imposes some limitations on how the Hypervisor can operate, and these limitations are reflected in the performance of the VMs.
To understand this, let me explain how a process is placed on the processor: the processor has what we call Rings, on which processes are placed based on prioritization. The main Rings are 0 and 3. Kernel processes are placed on Ring 0, as they are vital to the OS. Application processes are placed on Ring 3, and, as a result, they have less priority when compared to Ring 0. The issue with Type 2 Hypervisors is that the Hypervisor is treated as an application and runs on Ring 3. Let's have a look at it:
The impact is immediate. As you can see, Hypervisor Type 1 has total control of the underlying hardware. In fact, when you enable Virtualization Assistance (hardware-assisted virtualization) at the server BIOS, you are enabling what we call Ring -1, or Ring decompression, on the processor and the Hypervisor will run on this Ring.
The question you might have is: "And what about the host OS?" If you install the Hyper-V role on a Windows Server for the first time, you may note that after installation, the server will restart. But, if you're really paying attention, you will note that the server actually reboots twice. This behavior is expected, and the reason is that the OS is not only installing and enabling the Hyper-V bits, but also changing its architecture to that of a Type 1 Hypervisor. In this mode, the host OS operates in the same way a VM does, on top of the Hypervisor, but on what we call the parent partition. The parent partition plays a key role as the boot partition and in supporting the child partitions, or guest OS, where the VMs are running. The main reason for this partition model is the key attribute of a Hypervisor: isolation.
For Microsoft Hyper-V Server you don't have to install the Hyper-V role, as it will be installed when you install the OS, so you won't be able to see the server booting twice.
With isolation, you can ensure that a given VM will never have access to another VM. That means that if you have a compromised VM, with isolation, the VM will never infect another VM or the host OS. The only way a VM can access another VM is through the network, like all other devices in your network. Actually, the same is true for the host OS. This is one of the reasons why you need an antivirus for the host and the VMs, but this will be discussed later.
The major difference between Type 1 and Type 2 now is that kernel processes from both host OS and VM OS will run on Ring 0. Application processes from both host OS and VM OS will run on Ring 3. However, there is one piece left. The question now is: "What about device drivers?"
Have you tried to install Hyper-V on a laptop? What about an all-in-one device? A PC? A server? An x64 based tablet? They all worked, right? And they're supposed to work. As Hyper-V is a Microkernel Type 1 Hypervisor, all the device drivers are hosted on the parent partition. A Monolithic Type 1 Hypervisor hosts its drivers on the Hypervisor itself. VMware ESXi works this way. That's why you should never use a standard ESXi media to install an ESXi host. The hardware manufacturer will provide you with an appropriate media with the correct drivers for the specific hardware.
The main advantage of the Monolithic Type 1 Hypervisor is that, as it always has the correct driver installed, you will never have a performance issue due to an incorrect driver. On the other hand, you won't be able to install this on any device.
The Microkernel Type 1 Hypervisor, on the other hand, hosts its drivers on the parent partition. That means that if you installed the host OS on a device, and the drivers are working, the Hypervisor, and in this case Hyper-V, will work just fine.
The other side of this is that if you use a generic driver, or a wrong version of it, you may have performance issues, or even driver malfunction. What you have to keep in mind here is that Microsoft does not certify drivers for Hyper-V. Device drivers are always certified for Windows Server; if a driver is certified for Windows Server, it is also certified for Hyper-V. But you always have to ensure the use of the correct driver for your hardware. Let's take a better look at how Hyper-V works as a Microkernel Type 1 Hypervisor:
As you can see from the preceding diagram, there are multiple components to ensure that the VM will run perfectly. However, the major component is the Integration Components (IC), also called Integration Services. The IC is a set of tools that you should install or upgrade on the VM, so that the VM OS will be able to detect the virtualization stack and run as a regular OS on a given hardware.
To understand this more clearly, let's see how an application accesses the hardware and understand all the processes behind it.
When the application tries to send a request to the hardware, the kernel is responsible for interpreting this call. As this OS is running on an Enlightened Child Partition (meaning that the IC is installed), the kernel will send this call to the Virtual Service Client (VSC), which operates as a synthetic device driver. The VSC is responsible for communicating with the Virtual Service Provider (VSP) on the parent partition through the VMBus, a channel-based communication mechanism, so that the VSC can use the hardware resources. The VMBus is responsible for the communication between the child partition, the parent partition, and the hardware.
For the VMBus to access the hardware, it communicates with the Hypervisor through an interface called hypercalls. These hypercalls are then redirected to the hardware. However, only the parent partition can actually access the physical processor and memory; the child partitions access a virtual view of these components that is translated between the guest and host partitions.
New processors have a feature called Second Level Address Translation (SLAT) or Nested Paging. This feature is extremely important on high performance VMs and hosts, as it helps reduce the overhead of the virtual to physical memory and processor translation. On Windows 8, SLAT is a requirement for Hyper-V.
It is important to note that Enlightened Child Partitions, or partitions with IC, can be Windows or Linux OS. If the child partitions have a Linux OS, the name of the component is Linux Integration Services (LIS), but the operation is actually the same.
Another important fact regarding ICs is that they are already present on Windows Server 2008 or later. But, if you are running a newer version of Hyper-V, you have to upgrade the IC version on the VM OS. For example, if you are running Hyper-V 2012 R2 on the host OS and the guest OS is running Windows Server 2012 R2, you probably don't have to worry about it. But if you are running Hyper-V 2012 R2 on the host OS and the guest OS is running Windows Server 2012, then you have to upgrade the IC on the VM to match the parent partition version. Running a Windows Server 2012 R2 guest OS on a VM on top of Hyper-V 2012 is not recommended. For a Linux guest OS, the process is the same. Linux kernel version 3 and later already have LIS built in. If you are running an older version of Linux, you should verify the correct LIS version for your OS. To confirm the Linux and LIS versions, you can refer to the article at http://technet.microsoft.com/library/dn531030.aspx.
Another situation is when the guest OS does not support IC or LIS, that is, an Unenlightened Child Partition. In this case, the guest OS and its kernel will not be able to run as an Enlightened Child Partition. As the VMBus is not present, hardware access is performed through emulation and performance is degraded. This only happens with old versions of Windows and Linux, such as Windows 2000 Server, Windows NT, and CentOS 5.8 or earlier, or with any other guest OS that does not support IC. Now that you understand how the Hyper-V architecture works, you may be thinking: "Okay, so for all of this to work, what are the requirements?"
At this point, you can see that a lot of effort went into making all of this work. In fact, this architecture is only possible because hardware and software companies worked together in the past. The main goal of both types of companies was to enable the virtualization of operating systems without changing them.
Intel and AMD created, each with its own implementation, a processor feature called virtualization assistance so that the Hypervisor could run on Ring 0, as explained before. But this is just the first requirement. There are other requirements as well, which are as follows:
Virtualization assistance (also known as Hardware-assisted virtualization): This feature was created to remove the necessity of changing the OS for virtualizing it.
On Intel processors, it is known as Intel VT-x. All recent processor families support this feature, including Core i3, Core i5, and Core i7. The complete list of processors and features can be found at http://ark.intel.com/Products/VirtualizationTechnology. You can also check whether your processor meets this requirement with a tool that can be downloaded at https://downloadcenter.intel.com/Detail_Desc.aspx?ProductID=1881&DwnldID=7838.
On AMD Processors, this technology is known as AMD-V. Like Intel, all recent processor families support this feature. AMD provides a tool to check processor compatibility that can be downloaded at http://www.amd.com/en-us/innovations/software-technologies/server-solution/virtualization.
Data Execution Prevention (DEP): This is a security feature that marks memory pages as either executable or nonexecutable. For Hyper-V to run, this option must be enabled in the System BIOS. On Intel-based processors, this feature is called the Execute Disable bit (Intel XD bit); on AMD processors, it is called the No Execute bit (AMD NX bit). This configuration will vary from one System BIOS to another, so check with your hardware vendor how to enable it.
x64 (64-bit) based processor: This processor feature uses a 64-bit memory address. Although you may find that all new processors are x64, you might want to check if this is true before starting your implementation. The compatibility checkers above, from Intel and AMD, will show you if your processor is x64.
Second Level Address Translation (SLAT): As discussed before, SLAT is not a requirement for Hyper-V to work. This feature provides much better performance for VMs, as it removes the need for translating between physical and virtual pages of memory. It is highly recommended to have the SLAT feature on the processor, especially on high performance systems. As also discussed before, SLAT is a requirement if you want to use Hyper-V on Windows 8 or 8.1. To check if your processor has the SLAT feature, use the Sysinternals tool Coreinfo, which can be downloaded at http://technet.microsoft.com/en-us/sysinternals/cc835722.aspx.
There are some specific processor features that are not used exclusively for virtualization. But when the VM is initiated, it will use these specific features from the processor. If the VM is initiated and these features are allocated on the guest OS, you can't simply remove them. This is a problem if you are going to Live Migrate this VM from a host to another host; if these specific features are not available, you won't be able to perform the operation. Live Migration and Share Nothing Live Migration will be covered in later chapters. At this moment, you have to understand that Live Migration moves a powered-on VM from one host to another. If you try to Live Migrate a VM between hosts with different processor types, you may be presented with an error.
Live Migration is only permitted between the same processor vendor: Intel-Intel or AMD-AMD. Intel-AMD Live Migration is not allowed under any circumstance. If the processor is the same on both hosts, Live Migration and Share Nothing Live Migration will work without problems.
But even within the same vendor, there can be different processor families. In this case, you can remove these specific features from the Virtual Processor presented to the VM. To do that, open Hyper-V Manager | Settings... | Processor | Processor Compatibility. Mark the Migrate to a physical computer with a different processor version option. This option is only available if the VM is powered off.
Keep in mind that enabling this option will remove processor-specific features for the VM. If you are going to run an application that requires these features, they will not be available and the application may not run.
Now that you have checked all the requirements, you can start planning your server for virtualization with Hyper-V. This is true from the perspective that you understand how Hyper-V works and what are the requirements for it to work. But there is another important subject that you should pay attention to when planning your server: memory.
I believe you have heard this one before: "The application server is underperforming". In the virtualization world, there is an obvious answer to it: give more virtual hardware to the VM. Although this seems to be the logical solution, the real effect can be totally the opposite.
During the early days, when servers had just a few sockets, processors, and cores, a single channel handled the communication between logical processors and memory. But server hardware has evolved, and today we have servers with 256 logical processors and 4 TB of RAM. To provide better communication between these components, modern servers with multiple logical processors and large amounts of memory use a design called Non-Uniform Memory Access (NUMA) architecture.
NUMA is a memory design that consists of allocating memory to a given node, or a cluster of memory and logical processors. Accessing memory from a processor inside the node is notably faster than accessing memory from another node. If a processor has to access memory from another node, the performance of the process performing the operation will be affected. Basically, to solve this problem, you have to ensure that the process inside the guest VM is aware of the NUMA node and is able to use the best available option.
When you create a virtual machine, you decide how many virtual processors and how much virtual RAM the VM will have. Usually, you assign the amount of RAM that the application needs to run and meet the expected performance. For example, you may ask the software vendor about the application requirements and be told that the application needs at least 8 GB of RAM. Suppose you have a server with 16 GB of RAM. What you don't know is that this server has four NUMA nodes. To find out how much memory each NUMA node has, divide the total amount of RAM installed on the server by the number of NUMA nodes on the system. The result is the amount of RAM in each NUMA node. In this case, each NUMA node has a total of 4 GB of RAM.
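As a quick sanity check, the arithmetic above can be sketched in a few lines (Python is used here purely for illustration; the figures match the 16 GB, four-node example in the text):

```python
import math

# Example from the text: a 16 GB host with four NUMA nodes.
total_ram_gb = 16
numa_nodes = 4

# Memory available in each NUMA node.
ram_per_node_gb = total_ram_gb / numa_nodes
print(ram_per_node_gb)  # 4.0

# A VM sized to the vendor's 8 GB requirement cannot fit in one node;
# it has to span two nodes when NUMA spanning is allowed.
vm_ram_gb = 8
nodes_spanned = math.ceil(vm_ram_gb / ram_per_node_gb)
print(nodes_spanned)  # 2
```

The same division applies to any host: only a VM whose vRAM fits within one node's share can be kept entirely local to that node.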
Following the instructions of the software vendor, you create a VM with 8 GB of RAM. The Hyper-V standard configuration is to allow NUMA spanning, so you will be able to create the VM and start it. Hyper-V will accommodate 4 GB of RAM on each of two NUMA nodes. This NUMA spanning configuration means that a processor can access memory on another NUMA node. As mentioned earlier, this will have an impact on performance if the application is not aware of it. On Hyper-V versions prior to 2012, the guest OS was not informed about the NUMA configuration. Basically, in this case, the guest OS would see one NUMA node with 8 GB of RAM, and memory would be allocated without NUMA restrictions, impacting the final performance of the application.
Hyper-V 2012 and 2012 R2 have the same feature—the guest OS will see the virtual NUMA (vNUMA) presented to the child partition. With this feature, the guest OS and/or the application can make a better choice on where to allocate memory for each process running on this VM.
NUMA is not a virtualization technology. In fact, it has been used for a long time, and even applications like SQL Server 2005 already used NUMA to better allocate the memory that its processes are using.
Prior to Hyper-V 2012, if you wanted to avoid this behavior, you had two choices:
Create the VM and allocate at most the vRAM of a single NUMA node to it, as Hyper-V will always try to allocate the memory inside a single NUMA node. In the above case, the VM should not have more than 4 GB of vRAM. But for this configuration to really work, you should also combine it with the next option.
Disable NUMA Spanning on Hyper-V. With this configuration disabled, you will not be able to run a VM if its memory configuration exceeds a single NUMA node. To do this, clear the Allow virtual machines to span physical NUMA nodes checkbox under Hyper-V Manager | Hyper-V Settings... | NUMA Spanning. Keep in mind that disabling this option will prevent you from running a VM if no single node has enough memory available.
You should also remember that even with Hyper-V 2012, if you create a VM with 8 GB of RAM using two NUMA nodes, the application on top of the guest OS (and the guest OS itself) must understand the NUMA topology. If the application and/or guest OS are not NUMA aware, vNUMA will have no effect and the application can still have performance issues.
At this point you are probably asking yourself: "How do I know how many NUMA nodes I have on my server?" This was harder to find in previous versions of Windows Server and Hyper-V Server. In versions prior to 2012, you had to open Performance Monitor and check the available counters under Hyper-V VM Vid NUMA Node. The number of instances represents the number of NUMA nodes.
In Hyper-V 2012, you can check the settings for any VM. Under the Processor tab, there is a new feature available for NUMA. Let's have a look at this screen to understand what it represents:
In Configuration, you can easily confirm how many NUMA nodes the host running this VM has. In the case above, the server has only 1 NUMA node. This means that all memory will be allocated close to the processor.
Multiple NUMA nodes are usually present on servers with high amount of logical processors and memory.
In the NUMA topology section, you can ensure that this VM will always run with the specified configuration. This is presented to you because of a new Hyper-V 2012 feature called Share Nothing Live Migration, which will be explained in detail later. This feature allows you to move a VM from one host to another without turning the VM off, with no cluster and no shared storage. As you can move the VM while it is turned on, you might want to force the processor and memory configuration based on the hardware of your least capable server, ensuring that your VM will always meet your performance expectations.
The Use Hardware Topology button will apply the hardware topology in case you moved the VM to another host or in case you changed the configuration and you want to apply the default configuration again.
To summarize, if you want to make sure that your VM will not have performance problems, you should check how many NUMA nodes your server has and divide the total amount of memory by it; the result is the total memory on each node. Creating a VM with more memory than a single node will make Hyper-V present a vNUMA to the guest OS. Ensuring that the guest OS and applications are NUMA aware is also important, so that the guest OS and application can use this information to allocate memory for a process on the correct node.
NUMA is important to ensure that you will not have problems caused by host configuration or VM misconfiguration. But, in some cases, even when you plan the VM size carefully, you will come to a moment when the VM memory is stressed. In these cases, Hyper-V can help with another feature called Dynamic memory.
Dynamic memory is a feature that was released in Service Pack 1 for Windows Server 2008 R2 and Hyper-V Server 2008 R2. It was a long-awaited feature for Hyper-V, as its major competitors had other features for managing VM memory, and, until that point, Hyper-V's only option was static VM memory.
This feature allows you to configure not only the amount of memory for a VM, but also a Minimum and Maximum amount of vRAM. In case the VM memory is stressed, Hyper-V can provide more memory to the VM, as long as the host has physical memory available. An important point about Dynamic memory is that it only uses physical memory. Other memory management techniques on the market use processors (page sharing) or disks (second-level paging) to address memory issues. Microsoft decided not to use these techniques, as they bring overhead to the host or degrade the performance of the VM, except for some VM restart operations with Smart Paging that will be explained later in this section. Instead, Dynamic memory uses another technique from the market called ballooning.
To better understand the ballooning technique, imagine an airline's ticketing and sales process. Let's say an aircraft has 200 seats. The airline will sell a given percentage above those 200 seats, but in the end, only 200 passengers are allowed to check in. With Dynamic memory, you can create any number of VMs whose combined maximum exceeds the physical memory on the host, but only the available physical memory will be allocated to these VMs. If a VM requires more memory and there is no available memory on the host, the VM will have to wait for the ballooning process to work on other VMs and for the host to reclaim unused memory.
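The overbooking analogy reduces to simple arithmetic. The following sketch uses hypothetical VM sizes (not taken from the book) to show that the sum of the Maximum RAM values may exceed the host's physical memory, while the VMs can still all start as long as their Startup RAM fits:

```python
# Hypothetical host and VM sizes, for illustration only.
host_physical_mb = 8192

vms = {
    "VM01": {"startup": 1024, "maximum": 4096},
    "VM02": {"startup": 2048, "maximum": 8192},
    "VM03": {"startup": 2048, "maximum": 8192},
}

total_startup = sum(v["startup"] for v in vms.values())
total_maximum = sum(v["maximum"] for v in vms.values())

# All VMs can start: the combined Startup RAM fits in physical memory...
print(total_startup <= host_physical_mb)   # True
# ...even though the combined Maximum RAM "overbooks" the host, just as
# the airline sells more tickets than there are seats.
print(total_maximum > host_physical_mb)    # True
```

Only when the VMs actually try to claim memory up to their maximums at the same time does the ballooning and reclaim process described above come into play.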
The standard configuration, unless you changed it during VM creation, has the Enable Dynamic Memory checkbox cleared. With that configuration, you can specify the Startup RAM size and the VM will run with this amount of RAM all the time. This is what we call static memory.
Keep in mind that static memory is the recommended option. Dynamic memory is an alternative that can be used for consolidating VMs on environments with idle or low-load VMs. Dynamic memory is only recommended for Virtual Desktop Infrastructure (VDI) with Pooled VMs scenario. Additionally, some applications, such as SharePoint, do not support Dynamic memory.
If you enable Dynamic memory, you can also specify the Minimum RAM and Maximum RAM. This is a new configuration for Dynamic memory since Hyper-V 2012; prior to that, you were able to set only the Startup and Maximum RAM. With this change, the Startup RAM is assigned to the VM at the moment it starts. As the VM continues to run, if it doesn't need that amount of RAM, Hyper-V will reclaim memory down to the value configured as Minimum RAM. This reclaimed memory will be available for use by other VMs. This new combination of startup and minimum memory can create a problematic scenario that will be explained in detail later in this section.
The Maximum RAM will limit how much memory a VM can allocate as the application inside the VM requests more memory. When the limit is reached, Hyper-V stops giving memory to the VM.
The two remaining options are as important as the others. Memory buffer and Memory weight are usually overlooked, but they require attention, as they influence the operation of the VM. The Memory buffer option defines how much additional memory Hyper-V will try to reserve and assign to the VM on the host. When the VM requests more memory, Hyper-V will deliver the reserved memory to the VM based on the percentage and the memory requested by the application. After that, Hyper-V will reserve the percentage again. To determine the amount of memory that will be assigned to the VM, Hyper-V uses performance counters to identify the committed memory. You can check this by opening Performance Monitor and adding the Hyper-V Dynamic Memory VM / Average Pressure performance counter. With that information, Hyper-V uses the following formula to determine how much memory to assign to the VM:
Amount of Memory Buffer = How much memory the virtual machine actually needs × (Memory Buffer value / 100)
For example, suppose that the memory allocation is 2000 MB and the memory buffer is 20 percent. In this case, Hyper-V will try to allocate 400 MB of RAM, with a total of 2400 MB, for the VM. But all of this will only make sense if you also configure Memory weight.
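The worked example above can be checked directly. The function below is an illustrative sketch of the formula as stated, not Hyper-V's actual implementation:

```python
def memory_buffer_mb(needed_mb, buffer_percent):
    # Buffer = memory the VM actually needs * (Memory Buffer value / 100)
    return needed_mb * (buffer_percent / 100)

needed = 2000                          # MB the VM actually needs
buffer = memory_buffer_mb(needed, 20)  # 20 percent Memory buffer
print(buffer)           # 400.0 MB of buffer
print(needed + buffer)  # 2400.0 MB in total for the VM
```

Raising the buffer percentage simply scales the extra reservation: a 50 percent buffer on the same 2000 MB would reserve 1000 MB.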
Memory weight is the configuration that protects this reserved memory. Once the VM requests memory and Hyper-V delivers it, Hyper-V is unable to remove the memory from the VM, as this could crash the VM. If the Memory weight is the same for all VMs, then all VMs can request all the available memory on the host, as long as their maximum memory allows it, even if this memory is reserved for another VM. If a VM is more important in your environment, and you want to ensure that its reserved memory is not consumed by other VMs, you can increase its Memory weight. This configuration ensures that VMs with a lower Memory weight do not use the memory reserved for VMs with a higher Memory weight.
After configuring Dynamic memory, you can verify the VM memory on Hyper-V Memory, as seen in the following screenshot:
As you can see from the image above, there is some important information on Hyper-V Manager for you to take note of, and to verify that your VM is correctly configured. If you need more details on Dynamic memory, there are other performance counters under Hyper-V Dynamic Memory VM in Performance Monitor.
With the configuration above, if all VMs are turned on at the same time, Hyper-V will be able to allocate the correct amount of memory for each VM.
The preceding configuration is an example. In fact, due to memory management overhead, you need more memory on the host to be able to turn on the VMs with the VM configuration illustrated in this table. A minimum of 512 MB is recommended to be reserved on the host.
In case you turn off VM04 and VM01 requests more memory, VM01 will be able to use 2 GB of RAM. In this case, if you try to turn on VM04 again, you won't be able to power it on, as there are not enough resources. The following error will be presented:
But there is another case where a situation like this can happen even without turning any VM off. With the same configuration from the previous table, imagine that, after a while, Hyper-V tries to reclaim memory from the VMs, and VM01 has low utilization. As VM01 is configured with 512 MB of Minimum RAM, Hyper-V will reclaim another 512 MB of RAM. At this moment, VM04 is stressed and requests more memory. Hyper-V will then allocate the available 512 MB of RAM to VM04.
Everything is okay until VM01 needs to restart. As VM01 is configured with 1 GB of Startup RAM, the VM won't be able to initialize. To avoid this scenario, Microsoft introduced Smart Paging.
Smart Paging is a feature released in Hyper-V 2012. It creates a Smart Paging file on the host disk for allocating memory to the VM so that the VM can use the correct Startup memory configuration. This feature can be used only under certain conditions (all must be true), which are as follows:
The virtual machine is being restarted
There is no available physical memory
No memory can be reclaimed from other virtual machines running on the host
If all of the above are true, Smart Paging will work to allow the VM to restart. You can configure the Smart Paging location by navigating to Hyper-V Manager | VM Settings... | Smart Paging File Location. It is recommended to use a Solid-State Disk (SSD), or disks which are not over-utilized, for better performance.
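The three conditions above amount to an all-of check. The following is a hedged sketch of that decision logic, not Hyper-V's internal code; the function name and parameters are hypothetical:

```python
def uses_smart_paging(vm_is_restarting, free_physical_mb, reclaimable_mb):
    # Smart Paging is used only when ALL three conditions hold:
    #  1. the VM is being restarted,
    #  2. there is no available physical memory,
    #  3. no memory can be reclaimed from other running VMs.
    return vm_is_restarting and free_physical_mb == 0 and reclaimable_mb == 0

# A restarting VM on an exhausted host falls back to the Smart Paging file:
print(uses_smart_paging(True, 0, 0))    # True
# A VM that is merely stressed while running never uses Smart Paging:
print(uses_smart_paging(False, 0, 0))   # False
# Nor does a restart when memory can still be reclaimed elsewhere:
print(uses_smart_paging(True, 0, 512))  # False
```

This is why Smart Paging is a last-resort restart mechanism rather than a general second-level paging scheme: ordinary memory pressure on a running VM is handled by ballooning and reclaim instead.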
With all the Dynamic memory configuration in place, it is important to understand its behavior from the architectural perspective and guest OS limitations. As you were probably able to figure out, for Hyper-V to be able to reclaim unused memory, the guest OS will be prompted to return this memory. This can only be achieved by using IC or LIS, so it is extremely important to have IC/LIS updated on the guest OS.
The IC will create the balloon driver, and this driver will be utilized by the VSC on the guest OS. This driver will inflate when an application requests memory and Hyper-V delivers it. For this driver to work, there are some requirements on the guest OS. On Windows Server 2012 and 2012 R2, both the Standard and Datacenter editions support it. But in some previous releases of Windows Server, not all editions did: for Windows Server 2008 and 2008 R2, the Standard edition did not support Dynamic memory. For Windows 8 and 8.1, only the Professional and Enterprise editions support it. For a complete list of guest OSes that support Dynamic memory, check the official article at http://technet.microsoft.com/en-us/library/hh831766.aspx.
After understanding all the information on architecture presented in this chapter, you should now be equipped to understand how Hyper-V works, and to make a better choice of physical servers for virtualization. To summarize, a physical server must meet certain requirements to be able to run Hyper-V. When you install the Hyper-V role, the architecture is modified to a Microkernel Type 1 Hypervisor. The processor families on all hosts must be identified to allow VM Live Migration. Memory configuration must be examined to avoid VM misconfigurations leading to NUMA issues, and Dynamic memory can help with memory allocation among idle or low-load VMs.
In the next chapter, we will cover the installation of Hyper-V on multiple scenarios, and options for Windows Server and Hyper-V Server, so you will be able to see all of the information presented here in action.