Most IT professionals would define VDI as a collection of Virtual Machines (VMs), each running a Windows client OS (Windows 8.1 in this book) that our users will connect to from a variety of devices, such as thin clients, tablets and smart phones, laptops, and so on. In fact, there are other ways of doing this, but these can also rely on VMs, so a basic knowledge of this server virtualization is essential before we can understand Virtual Desktop Infrastructure (VDI) itself.
Strictly speaking, the business of running multiple operating systems, each in its own VM on one physical host is called server virtualization. This may seem to be picky, but as we'll see in later chapters, there is also application and user virtualization, both of which have their part to play in Windows-based VDI.
In this book, we'll be using Hyper-V, the Microsoft solution to virtualize the operating system away from the underlying hardware. Hyper-V is now in its fourth generation and is included inside Windows Server 2012 R2, but may be new to some readers. So, in this chapter, we'll explore the basics of Hyper-V by creating a simple VM. We can then see how to manage this VM and lay the foundation for deploying VDI by implementing a Domain Controller (DC) in the VM.
The Hyper-V role has been a part of the Windows Server operating system since Windows Server 2008. It is also included in the Windows 8 and Windows 8.1 client operating systems. This enables developers to run virtualized servers on a Windows client, but in Windows 8/8.1, the Hyper-V role does not contain advanced features that are available in the full Server edition, which we need for VDI.
It's technically possible to run nearly any x86 or x64 OS in Hyper-V, but if you want support from Microsoft, the Guest OS must either be a supported version of Windows Server or Client, or a supported version and distribution of Linux. For example, Windows 2000 Server and Windows XP are no longer supported by Microsoft at all, so they aren't supported as guests on Hyper-V either. The full details of Guest OS support in Hyper-V can be found at http://technet.microsoft.com/en-us/library/cc794868(v=WS.10).aspx.
Hyper-V can be deployed in a variety of ways, and the easiest of these is simply to add the role into Windows Server running on a physical server. When the hypervisor is installed, the host operating system sits in a parent partition, which is essentially like a VM, and is just there to manage the hypervisor. Technically speaking, Hyper-V is like VMware's ESXi, a true Type 1 or bare-metal hypervisor. This deep integration with the latest virtualization hardware is reflected in the high performance limits for Hyper-V in Windows Server 2012 R2, as shown in the following table:
Host logical processors: 320
Host memory: 4 TB
Virtual processors that can be assigned to VMs per host: 2,048
Host cluster nodes: 64
Highly available VMs per host cluster: 8,000
Virtual processors per VM: 64
Memory per VM: 1 TB
Virtual disk size: 64 TB (VHDX)
These limits are beyond the capabilities of most modern servers and enable all but the largest workloads to be virtualized, including substantial VDI deployments.
Most modern servers are designed to run Hyper-V and, by association, support VDI. However, you should check that your servers have been tested with Windows Server 2012 R2 before using it for VDI in production, either with your hardware vendor or directly on the Microsoft Hardware Compatibility List (HCL) at http://windowsservercatalog.com/results.aspx?bCatId=1283&avc=10.
For Hyper-V servers running VDI VMs, there's also a further hardware prerequisite: Second Level Address Translation (SLAT) on the CPUs. This is needed for RemoteFX, the technology used to virtualize a Graphics Processing Unit (GPU). Advanced Micro Devices (AMD) refers to this as Nested Page Table (NPT) or Rapid Virtualization Indexing (RVI), and Intel calls it Extended Page Table (EPT). Either way, you'll need a CPU that supports this if you want a good graphics experience for users in Windows VDI.
If Hyper-V is installed without these supporting features in place, the Hyper-V service will not start, and we'll get errors in the event log of the server.
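You can check for these prerequisites before adding the role. On Windows 8/8.1 and Windows Server 2012 onwards, systeminfo reports a Hyper-V Requirements section (once Hyper-V is installed, it simply reports that a hypervisor is running), so a quick sketch is:

```powershell
# Run on the prospective host before installing Hyper-V.
# Look for "Second Level Address Translation: Yes" and
# "Virtualization Enabled In Firmware: Yes" in the output.
systeminfo.exe | Select-String 'Hyper-V', 'Second Level', 'Virtualization', 'Data Execution'
```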
In server virtualization, a physical disk is represented by a single file, a Virtual Hard Disk (VHD), and a VM will have one or more of these. In Hyper-V, VHDs have either the VHD or VHDX extension, whereas VMware uses the VMDK format. The VHD format came out with Hyper-V in Windows Server 2008, while the newer VHDX format was introduced with Windows Server 2012. VHDX has a number of advantages over the older format, but both are supported in Windows Server 2012 R2. They have the following properties:
VHDX has the concept of physical and logical block sizes to run much faster on larger physical disks of 4 KB block size
VHDX is more resilient to failure and is extensible for use by third parties, as it has additional XML metadata to track updates and store custom information
VHDs have other uses outside of server virtualization, for example, Windows backups are in VHD format and now, with Windows Server 2012, they are in VHDX format. In this book, I will refer to Virtual Hard Disks as VHDs, and I'll point out when there is a specific need to use VHDX.
Originally, fixed disks were much faster than dynamically expanding disks, but at the expense of reserving their full size on the physical disk up front, making them larger and more difficult to move around. The gap has closed in Windows Server 2012 R2, but it's worth remembering that a fixed disk on a clean physical disk will be contiguous and not fragmented, whereas a dynamic disk has to store all the information about where all its physical blocks are. So, there will always be a performance penalty.
Differencing disks: These are based on a parent disk that never changes; all changes are written to the differencing disk. The parent disk can be either a fixed or dynamic disk, and the differencing disk will inherit this property.
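All three disk types can be created with the New-VHD cmdlet; a sketch with example paths and sizes:

```powershell
# Example paths and sizes only - adjust for your lab.
New-VHD -Path 'E:\VHDs\Fixed.vhdx'   -Fixed   -SizeBytes 40GB   # full size allocated up front
New-VHD -Path 'E:\VHDs\Dynamic.vhdx' -Dynamic -SizeBytes 40GB   # grows as data is written
New-VHD -Path 'E:\VHDs\Child.vhdx'   -Differencing -ParentPath 'E:\VHDs\Dynamic.vhdx'
```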
In order to explore Hyper-V and then overlay VDI on it, we are going to need a server to work with. For evaluation purposes, this could simply be a modern laptop; I have a 16 GB laptop with an Intel i7 processor, which I have modified by adding a 750 GB Solid State Drive (SSD) to reduce disk contention when I am running multiple VMs. I am assuming that you have something like this to work on. The key thing is that it can support Hyper-V, and you have enough RAM and disk space to support the running of multiple VMs on it. Microsoft's Windows Server 2012 R2 is available as a free evaluation, which is good for 180 days once activated. This can be downloaded from the following link:
An option is available to alter the Boot Configuration Database (BCD) to boot directly from a VHD or VHDX (the VHD or VHDX must be running Windows 7/Windows Server 2008 R2 or later). One advantage of this option is that it lets us swap different VHDs or VHDXs quickly and create alternative OS configurations. The full details are available on TechNet at http://technet.microsoft.com/en-us/library/dd799299(v=WS.10).aspx.
Our first task is to add the Hyper-V role from Server Manager. Perform the following steps to achieve this:
Navigate to Manage | Add Roles and Features.
Click on Next on the Before You Begin page.
On the Installation Type page, select the option Role-based or feature-based installation and click on Next.
Select the server you wish to work on and click on Next.
Select Hyper-V from the Server Roles page and click on Next.
You'll see that the option to install the Hyper-V management tools is already selected, so click on Next.
Don't create any virtual switches on this screen. We'll want to explore this and create our own as we configure Hyper-V after it's installed.
On the Confirmation page, be sure to select Restart the Destination Server Automatically If Required and click on Install.
Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart
No other server roles should be installed on a server with the Hyper-V role installed, with one exception: failover clustering. Failover clustering is needed to provide High Availability (HA) of Virtual Machines.
After the server reboots twice, we are ready to use Hyper-V.
Before we can use Hyper-V to create VMs, we need to perform some initial configuring to provide the resources that our VMs will need, such as CPU, graphics, storage, and networking. This is done through the Hyper-V Settings option in Hyper-V Manager, the Microsoft Management Console (MMC) snap-in that we've just installed, as shown in the following screenshot:
The first two options are where we set the default locations for VMs and their VHDXs. Notice that, by default, new VMs and their associated disks get stored on C:, which is probably the last place we want them, especially if we have just configured Boot to VHD, as the VHDXs for the VMs would then reside inside another VHDX file. So, we need to change this to a large, fast disk that will be a suitable home for our VMs (in my case, e:\TempVMStore, which is sitting on an SSD). We'll return to the other settings for Hyper-V in later chapters, as it would be good to get a basic VM up and running quickly so that we can explore Hyper-V. The final step before we create a VM is to configure networking in Hyper-V by creating one or more virtual switches that we can then connect to our VMs. There are three types of virtual switches, as follows:
External virtual switch: This is bound to an actual network adapter on our host (physical NIC). There's an important setting for external networks to allow the management operating system to share this network adapter. If this is set, then a host server can communicate with other physical and virtual servers over the physical NIC. If it's left unchecked, then the physical host cannot use that physical NIC.
Internal virtual switch: This type of virtual switch is not bound to a physical switch at all. We could use this for our first look at VDI if we are doing everything on one physical server, as it allows the physical host and the guest VMs to connect over it.
Private virtual switch: This can also be useful for some types of sandbox or restricted services as it allows the VMs to communicate over it, but there is no communication with the physical host at all.
Any internal virtual switches, and any external virtual switches that are set to be shared with the management operating system, will be visible in the network connections on the host, where they appear as vEthernet followed by the name of the virtual switch in Hyper-V. Private switches, and external virtual switches that are not shared, will only show up as virtual switches in Hyper-V and are completely hidden from the host.
On my demo setup, shown in the following screenshot, we can see what's going on if we look at the network connections and the Hyper-V switches:
Here, we can see that I have created two virtual switches: RDS-Switch and RDS Internet in Hyper-V. These correspond to the two network connections: vEthernet (RDS-Switch) and vEthernet (RDS Internet). What isn't obvious until we examine its settings is that the Ethernet network connection is controlled by Hyper-V. The only protocol that is enabled when Hyper-V is managing a physical NIC is the Hyper-V Extensible Virtual Switch. The easiest way to see all of this is using PowerShell, as I can get all the relevant information on one screen using the following command:
Get-NetAdapterBinding | Where-Object Enabled -EQ $true | Select-Object -Property Name, DisplayName, ifDesc | Out-GridView
This works by looking for all the protocols on each of the network connections and filtering out the ones that are enabled before outputting the specific properties. I am interested in a grid view, like the one shown in the following screenshot:
My suggestion for a simple lab setup is to create an external switch and allow the management operating system to share it so that we can reach our VMs from outside the physical server.
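These host settings and switches can also be configured with PowerShell. A sketch, assuming the default paths and switch names from my lab and a physical NIC called Ethernet (check yours with Get-NetAdapter):

```powershell
# Default locations for new VMs and their virtual disks (example paths).
Set-VMHost -VirtualHardDiskPath 'E:\TempVMStore' -VirtualMachinePath 'E:\TempVMStore'

# An external switch bound to the physical NIC and shared with the host OS.
New-VMSwitch -Name 'RDS-Switch' -NetAdapterName 'Ethernet' -AllowManagementOS $true

# Internal and private switches are not bound to a physical NIC at all.
New-VMSwitch -Name 'RDS Internet' -SwitchType Internal
New-VMSwitch -Name 'Isolated' -SwitchType Private
```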
One or more Virtual Hard Disks (VHDX), one of which will have the OS.
Its memory execution state. If a VM is running and has 2 GB of RAM, then this is what is in that 2 GB of RAM. VMs can be paused/saved in the same way that a physical laptop hibernates, and can be set to be automatically saved when the host is shut down. When a VM is resumed, this memory state is copied back into RAM. The file for this is the BIN file, and its size is related to the amount of memory allocated to the VM.
One of the simplest ways to create a VHDX complete with an operating system is with PowerShell. Actually, the PowerShell code is pretty complex, but it's encapsulated in a script available from the TechNet gallery (http://gallery.technet.microsoft.com/scriptcenter/Convert-WindowsImageps1-0fe23a8f).
You can either work out the command-line switches for this script or invoke it with -ShowUI, as shown in the following command:
.\Convert-WindowsImage.ps1 -ShowUI
The preceding command will give you a simple UI to fill in to create an image. One reason for doing it this way is that there are multiple installation options on a given Windows ISO; for example, in Windows Server, there will be the option to install Server Core or the full user interface, and the dropdown in the UI will allow you to select that. We'll be using this script later to create a VHDX with Windows 8.1 as a template for our VDI VMs. For now, we can use this script to make a VHDX from the Windows Server 2012 R2 ISO. I suggest that we call this WS2012R2 Sysprep.VHDX to denote that the OS is in a sysprepped state.
If we just copied the disk of an existing VM, its Security Identifier (SID) would remain the same even though we might rename the server or client, and we wouldn't be able to have both VMs in the same domain. We use sysprep to remove the SID; when each copy is started, a unique SID is created so that each VM can properly join a domain.
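The Convert-WindowsImage script produces a VHDX that is already in this state, but if you ever need to generalize a reference machine yourself, the tool is Sysprep:

```powershell
# Run inside the reference OS: generalize (remove the SID), set up the
# out-of-box experience for the next boot, then shut down.
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```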
We could at this stage go and build a VM around it, but as we will be creating several VMs and the disk space is limited in our demo lab, we will make use of differencing disks. To create a differencing disk from this VHDX, we can use the New Virtual Hard Disk Wizard dialog box in Hyper-V Manager. We will perform the following steps:
From Hyper-V Manager, navigate to New | Hard Disk in the task pane. Click on Next to skip past the Before You Begin screen.
In the Choose Disk Format screen, select the VHDX format and click on Next.
In the Choose Disk Type screen, select Differencing.
In the Specify Name and Location screen, give the disk a name and path. It's a good idea to name disks used to store and run a server OS on a VM with the same name as the VM they belong to. We'll use Test.VHDX for this and put it on a path where you have good disk speed, away from the host server OS if possible (I am using e:\TempVMStore throughout this book). Click on Next to continue.
In the Configure Disk screen, we need to identify the sysprepped VHDX we created earlier, WS2012R2 Sysprep.VHDX, and click on Next to continue.
We can then confirm our choices on the Summary screen and click on Finish to create the disk.
The equivalent command in PowerShell is as follows:
New-VHD -Path "E:\TempVMStore\Test.VHDX" -Differencing -ParentPath "E:\TempVMStore\WS2012R2 Sysprep.VHDX"
If we look at the Test.VHDX disk that we just created, it's just 4 MB in size. Because it's dynamically expanding (thin provisioned), we have declared its maximum size (in this case, inherited from the parent disk), but as yet we haven't used any of it, so it's just a placeholder. Now we need to specify the other resources that our test VM will need. The New Virtual Machine Wizard dialog box could be used for this, but if you do decide to do this, you'll realize that there are a lot of settings to specify, such as the number of processors, memory, disks, and networking. Rather than fill this book with lots of screenshots of dialog boxes like this, it's simpler and more efficient to use PowerShell. In fact, we can create a working VM with just one (rather long) command line, as follows:
New-VM -Name Test -VHDPath 'E:\TempVMStore\Test.vhdx' -MemoryStartupBytes 1GB -BootDevice IDE -SwitchName HostNetLogicalSwitch
The parameters used in the command line are as follows:
-Name: The name of the new VM (Test)
-VHDPath: The path to the existing virtual hard disk to attach
-MemoryStartupBytes: The amount of memory the VM starts with
-BootDevice: The device the VM boots from (IDE here, as first-generation VMs boot from the IDE controller)
-SwitchName: The virtual switch to connect the VM's network adapter to
If we go back to Hyper-V Manager, we can see our new VM, Test. It's currently turned off, and before we start it, we can review the settings we have made. This is shown in the following screenshot:
Here, we can see that it has 1 GB of memory, that the VHDX we made is connected to the IDE controller, and that the network adapter is connected to our virtual switch. Note that some of the hardware settings here can only be changed while the VM is off; for example, we can add another network adapter, or connect a second hard disk to the IDE controller. Some operations can be performed online: it is possible to add or remove virtual disks that are connected to the Small Computer System Interface (SCSI) adapter while the machine is on. However, in this type of VM, we can only boot from a disk connected to an IDE controller. There is actually no difference in performance between connecting a VHD or VHDX to either type of controller.
There is a new type of VM in Windows Server 2012 R2, a second-generation VM that removes a lot of these restrictions. It's based on UEFI rather than BIOS and, if we look at its settings alongside the first-generation VM we created, there are a number of changes. This is shown in the following screenshot:
As you can see, there is Firmware instead of BIOS. In a second-generation VM, we are booting from a file. It's not shown in the screenshot, but we can also enable Secure Boot so that the VM won't start if the VHDX with the OS has been modified externally. Second-generation VMs only have SCSI connectors to attach virtual disks and DVD drives. Only VHDX format virtual disks can be used, and these can be expanded while the VM is online. There are other changes that are not visible here; the VMs make fewer demands on the hypervisor, and the attack surface of the VMs is reduced. However, I am going to stop there because, at the time of writing, we can't use a second-generation VM as a VDI template. So, all the desktop VMs that we are going to create in this book will have to be first-generation VMs. Hopefully, this will change soon so that VDI can make use of the benefits of this new type of VM.
Second-generation VMs don't support RemoteFX, so USB redirection won't work. If we try to use a second-generation VM as a template for VDI, then the process will fail. For more on second-generation VMs, refer to http://technet.microsoft.com/en-us/library/dn282285.aspx.
We can now start our VM either by right-clicking on it or using the following simple PowerShell command:
Start-VM -Name Test
In the preceding command, -Name identifies our test VM, Test. When we run this command, our new VM will come out of sysprep, and all the settings and changes made during that process will be written to our new differencing disk, not to the read-only parent disk (WS2012R2 Sysprep.VHDX). The differencing disk will now take up about 700 MB. It's also worth noting that the VM is using resources on the host via the Hyper-V integration components (similar to VMware Tools), which provide driver support for the virtual devices we have specified, such as the SCSI and network adapters. We can connect to our VM at any time after startup by right-clicking on it in Hyper-V Manager, and we can watch it complete its initial configuration. After that, we can enter the administrator password and see that it is now running Windows Server 2012 R2.
In Hyper-V, it's possible to capture the state of a VM at a point in time and roll back by creating a checkpoint (in VMware and older versions of Hyper-V, this is called a snapshot). This uses the same differencing system. However, this is controlled by Hyper-V, so don't delete or merge these disks directly! When a checkpoint is made, the state of the VHD is frozen by making it read only, and a new VHD (an AVHD in Hyper-V) is created, into which any changes are written. Reverting to a checkpoint removes the AVHD and re-enables the original VHD. When checkpoints are deleted, the checkpoint AVHDs are merged back into the original VHD or chain of AVHDs if you have more than one checkpoint. It's also important to understand that checkpoints will impact the performance of a VM just as differencing disks do, and that creating checkpoints for a VM is not a substitute for creating a backup.
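Checkpoints are just as easy to work with from PowerShell; a sketch against the Test VM we created (in Windows Server 2012 R2, most of the cmdlets still use the older "snapshot" noun):

```powershell
Checkpoint-VM -Name Test -SnapshotName 'Before-Config'                  # create a checkpoint
Get-VMSnapshot -VMName Test                                             # list checkpoints
Restore-VMSnapshot -VMName Test -Name 'Before-Config' -Confirm:$false   # revert to it
Remove-VMSnapshot -VMName Test -Name 'Before-Config'                    # delete it; Hyper-V merges the AVHD
```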
The Test VM we created is largely useless as it is; it has no roles and features on it and we can only connect to it using the Hyper-V console, the virtual equivalent of wandering into a data center and logging on to it directly. Windows Server is designed to be remotely managed, whether it's used for Hyper-V or performing a role in VDI. There are several ways of doing this from another server or desktop with the Remote Server Administration Tools (RSAT) via System Center, and of course with command-line tools such as PowerShell.
In older versions of Windows Server, remote management had to be enabled, but now it's already configured for servers and desktops in the same Active Directory (AD) domain. This has greatly increased the power of Server Manager as it's now possible to see how groups of servers are performing either by their function or by some grouping of your own, as shown in the following screenshot:
The boxes with the dark gray (red in reality) headers have issues, and these can be identified by simply clicking on the problem identified in the box: Manageability, Events, Performance, or BPA results. With RSAT installed, all the individual servers can be managed using tools appropriate to the roles they are performing. For example, if the server is running Hyper-V, you are presented with the option to use Hyper-V Manager, whereas for a DNS server you would get the DNS console option. You can also remotely connect to servers and run those RSAT tools from the server you are working on. Server Manager can also add and remove roles and features on any of the managed servers, or on a VHDX while the VM is turned off.
Remote management is very important when it comes to putting Hyper-V into production. We have already established that having any roles and features other than Hyper-V on a Hyper-V host is not best practice. So why have the management tools, or any kind of user interface, on these servers at all? There is no good reason to, and there are two ways of avoiding it, as follows:
If we are going to use a physical server to host VMs running Windows Server, we should use the default way of installing Windows Server, Server Core.
If the physical host is going to host VMs running Windows client VMs for VDI or we plan to deploy Linux VMs, then we should use the free, cut-down version of Windows Server, Hyper-V Server 2012 R2 (http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx). This only has the Hyper-V role, the file server role, and the failover clustering feature included in it.
SConfig, a lightweight shell for basic tasks, is the only interface available in Hyper-V Server. Both Server Core and Hyper-V Server require far fewer patches and updates (only about 25 percent compared to a full installation of Windows Server), and at about 4 GB, they are under half the size of a full installation of Windows Server. The other benefit is that with no Internet Explorer, no one can go surfing the Web on our servers! New in Windows Server 2012 is the option to post-install the server graphical shell and management tools on top of Server Core, or remove them just like any other role or feature. Have a look at the following screenshot:
The effects of changing the options under User Interfaces and Infrastructure are as follows:
If we keep just the Graphical Management Tools and Infrastructure feature, we can still run MMC snap-ins and Server Manager, but there won't be a modern desktop, File Explorer, or Internet Explorer; this is often referred to as "MinShell"
If we elect to install Windows with a user interface, we'll get the Server Graphical Shell and Graphical Management Tools and Infrastructure features
If we add in the Desktop Experience feature, our server will look even more like Windows 8.1, complete with the Windows Store
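These user interface levels map directly to removable features, so a full installation can be slimmed down with PowerShell; a sketch:

```powershell
# Remove the graphical shell but keep the management tools ("MinShell").
Uninstall-WindowsFeature Server-Gui-Shell -Restart

# Remove both features to end up with Server Core.
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart
```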
The simplest way to show the power of management in Windows Server is to have some domain-joined servers that we can manage, as remote management in Windows Server 2012 R2 is enabled by default across a domain. Domain membership is also needed for VDI as we'll see in the next chapter, so we need to build a DC. We already know that we shouldn't add roles like AD to a server running Hyper-V, but we could deploy the AD role to our test VM and set it up as a DC. However, I suggest we make a new VM from scratch and configure this as a DC, if there is room on your server for that, so we can see how to manage another VM and a host. We'll also make this VM a Dynamic Host Configuration Protocol (DHCP) server so that any new VMs we create can be assigned IP addresses. Finally, we'll add the RSAT tools in here so that we can manage everything from this one VM.
We can manually create a new VM called RDS-DC by following these steps:
Create a new internal virtual switch using the following command:
New-VMSwitch -Name "RDS Switch" -SwitchType Internal
Create a new differencing disk called RDS-DC.VHDX, based on the sysprepped parent disk, using the following command:
New-VHD -Path "E:\TempVMStore\RDS-DC.VHDX" -Differencing -ParentPath "E:\TempVMStore\WS2012R2 Sysprep.VHDX"
Create a new VM as we did for the test VM, but with the following properties:
Give it at least 512 MB RAM and one logical processor
Connect it to the RDS switch we just created
The command line will look like the following:
New-VM -Name RDS-DC -VHDPath 'E:\TempVMStore\RDS-DC.vhdx' -MemoryStartupBytes 512MB -BootDevice IDE -SwitchName "RDS Switch"
Add the roles and features directly into the new VHD that we just made without starting it.
In the Select destination server screen, check the Select a virtual hard disk option, as shown in the following screenshot:
In this dialog box, we select our physical host to mount the VHD on to (Orange.contoso.com in the screenshot), and the path to the VHD we just created. Click on Next to continue.
From the Select Server Roles screen, select Active Directory Domain Services, DHCP Server, and DNS Server. We'll get warnings as we add these roles, such as the need to install DNS and not having a static IP address. These can be ignored as we'll be addressing them once the VM is started. Click on Next to continue.
From the Features screen, expand the Remote Server Administration Tools option and select the features as shown in the following screenshot:
On the Confirm installation selections screen, note that we can export our choices to an XML file and use it again to create identical server installations with the Install-WindowsFeature PowerShell cmdlet. We'll do that and save the file as RDS-DC installation config.xml. Then click on Install to add the features to the VHD.
At this point, the new VM has the features installed on it, but they aren't configured. The VM has not been turned on and it's still in a sysprepped state. This allows VMs to be configured exactly as we want and stored as templates. We can even patch the OS in the same way, using the Deployment Image Servicing and Management tool (DISM.exe). We can now start to configure the options we have installed by performing the following steps:
Start the VM from Hyper-V Manager and wait for it to come out of sysprep. You should see the preview screen at the bottom of Hyper-V Manager change from the Windows logo to the initial post sysprep setup screen.
Connect to the VM from Hyper-V Manager by double-clicking on it.
Set the language for your region and accept the license agreement.
Set the administrator password (we'll be using Passw0rd! as the standard for all passwords). The VM will complete its initial setup.
Log in to the VM, and in Server Manager, select the local server in the navigation pane and rename the VM to RDS-DC. Leave it in a workgroup and reboot it.
Configure the network card with a static IP address (192.168.0.1).
From Server Manager, select the local server and click on the link next to Ethernet that reads IPv4 address assigned by DHCP, IPv6 enabled.
Right-click on the Ethernet adapter and select Properties. Then select Internet Protocol Version 4 (TCP/IPv4) and click on Properties.
Select the option Use the following IP address. Set the IP address field to 192.168.0.1 with a Subnet mask value of 255.255.255.0. Leave the Default gateway field blank and set the Preferred DNS server field to 127.0.0.1. Click on OK and then Close to accept these settings, and close the Network Connections window.
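Inside the VM, the same static IP configuration can be applied with two PowerShell commands; a sketch that assumes the adapter is named Ethernet (check with Get-NetAdapter):

```powershell
# Static address for the DC, matching the settings above.
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.0.1 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 127.0.0.1
```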
In Server Manager, select the yellow warning flag that displays Promote this server to a domain controller to open the Active Directory Domain Services Configuration Wizard window. Note that DCPromo (dcpromo.exe) is gone, and this is how domain controllers are created in Windows Server 2012 R2. This is shown in the following screenshot:
On the DNS Options screen, ignore the warning and click on Next to continue.
In the Additional Options screen, leave the NetBIOS domain name as Contoso and click on Next to continue.
Click on Next through the next couple of screens, and in the Prerequisites Check screen, ignore the warnings about NT4 and DNS delegation and click on Install. The server will now reboot.
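If you prefer to script the promotion, the wizard's work can be done with the AD DS deployment cmdlets. A sketch, assuming the contoso.com domain used in this lab (the command prompts for a Directory Services Restore Mode password and reboots the server when it finishes):

```powershell
# Add the AD DS role binaries (already present if installed offline into the VHD),
# then create a new forest with contoso.com as its root domain.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName contoso.com -DomainNetbiosName CONTOSO
```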
Reconnect to the RDS-DC VM and log in as Administrator. Then configure the new VM as a DHCP server by selecting the Complete DHCP Configuration warning flag in Server Manager, which will launch the DHCP Post-Install configuration wizard.
Authorize the service in our new domain with the Administrator domain credentials and click on Commit. Close the wizard.
From the Tools menu in Server Manager, select DHCP to open the DHCP MMC snap-in. Expand rds-dc.contoso.com in the left navigation pane.
Right-click on IPv4 and select New Scope to launch the New Scope Wizard window. Call the scope VDI-Scope and set its description to Pool for VDI desktop virtual machines.
In the IP address screen, set the Start IP address to 192.168.0.100 and the End IP address to 192.168.0.254, and click on Next.
Click on Next on the Add Exclusions and Delay and Lease Duration screens.
Click on Next on the Router screen.
On the Domain Name and DNS Servers screen, you should see the parent domain already set to Contoso.com. The only address we need in the IP addresses list is 192.168.0.1, so that the DHCP clients are given the DNS server address as well as an IP address.
On the WINS scope screen, click on Next.
On the Activate Scope screen, select Yes, I want to activate this scope now and click on Next.
Click on Finish to complete the DHCP scope wizard.
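For reference, the whole scope can also be built with the DHCP Server cmdlets in Windows Server 2012 R2; a sketch using the same addresses:

```powershell
# Create and activate the scope, set its DNS options, and authorize the server in AD.
Add-DhcpServerv4Scope -Name 'VDI-Scope' -Description 'Pool for VDI desktop virtual machines' `
    -StartRange 192.168.0.100 -EndRange 192.168.0.254 -SubnetMask 255.255.255.0 -State Active
Set-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -DnsServer 192.168.0.1 -DnsDomain contoso.com
Add-DhcpServerInDC -DnsName rds-dc.contoso.com
```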
Refresh the IP address on the host and confirm that the host can get an IP address in this range and can ping the new DC at 192.168.0.1.
Now that we have a DC, it would be good to have some users and groups in there that we can use in our VDI lab setup in the next chapter. We will add users and groups by performing the following steps:
Three new users, RDSUser1 to RDSUser3, each with the password Passw0rd! and the password set to never expire
Two new groups
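With the Active Directory module (part of the RSAT tools we installed), the users and a group can be scripted too. A sketch; the group name RDSUsers here is illustrative, not from the lab setup:

```powershell
# Create the three lab users and put them in a group (name is illustrative).
Import-Module ActiveDirectory
$securePwd = ConvertTo-SecureString 'Passw0rd!' -AsPlainText -Force
1..3 | ForEach-Object {
    New-ADUser -Name "RDSUser$_" -AccountPassword $securePwd -Enabled $true -PasswordNeverExpires $true
}
New-ADGroup -Name 'RDSUsers' -GroupScope Global
Add-ADGroupMember -Identity 'RDSUsers' -Members RDSUser1, RDSUser2, RDSUser3
```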
The included script to create a DC stops here, but for VDI we need to have our physical host in our new domain. Our physical host will be able to register in the domain because it can resolve the new DC: it's connected to the internal switch we created and is set to use DHCP by default, and our new DC is a DHCP server, which means that the domain and domain controller can be resolved in DNS. So, all we have to do is perform the following steps:
In Server Manager, on the host, select the local server on the left-hand navigation pane and click on the name of the server to bring up its system properties.
Click on Change and select the contoso.com domain. When asked to enter the domain credentials, use the administrator account with the password Passw0rd! to join the domain.
Reboot the host. Note that the VMs on the host will restart automatically if they were on when the host is shut down. This is a default that can be overridden in the setting for each VM.
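The domain join can also be done from an elevated PowerShell prompt on the host; a minimal sketch, assuming the same domain administrator credentials as above:

```powershell
# Join the host to the new domain and reboot; you'll be
# prompted for the contoso\administrator password
Add-Computer -DomainName contoso.com `
    -Credential contoso\administrator -Restart
```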
Next, we'll join the TestVM we made earlier to the domain in the same way so that we can contrast managing a physical host with a VM. To do this, we perform the following steps:
In Hyper-V Manager, on the host, right-click on the TestVM we made already and select Settings. Change the network adapter to RDS-Switch.
Connect to the VM and complete the initial setup of this VM (set the local administrator password to Passw0rd!).
Log in to the TestVM and in Server Manager, select the local server in the navigation pane and click on the server name. Rename this server to RDS-Test and join it to the contoso.com domain.
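These steps are also scriptable: the switch change runs on the host, while the rename and join run inside the VM. A sketch, assuming the same names used above:

```powershell
# On the host: move TestVM's network adapter to the internal switch
Connect-VMNetworkAdapter -VMName TestVM -SwitchName RDS-Switch

# Inside the VM: rename it and join the domain in one step, then reboot
Add-Computer -DomainName contoso.com -NewName RDS-Test `
    -Credential contoso\administrator -Restart
```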
Log in to RDS-DC and open Server Manager.
Select all the servers in the Contoso domain. Note that we can manage other servers that aren't in our domain if we can discover them and have the credentials to log in to them.
Now you should see something like the following screenshot:
From here, we can manage all our servers. You'll notice that when you added the host server, the Hyper-V server group was added to the navigation pane. If you select it, you'll see your host server, and if you right-click on it, you can get to Hyper-V Manager because Server Manager knows which roles and features are on each server it's managing. Hyper-V Manager is on our DC only because we installed RSAT, and if you go to the Tools menu in Server Manager, you can see all of the consoles and utilities that are provided.
Underneath any list of servers in Server Manager, there is a complete readout of the servers in a given group, which includes event logs and the roles and features on each server by default. We can also turn on performance counters for our servers and enable the Best Practices Analyzer (BPA) to get meaningful advice and guidance on any issues that affect all our managed servers.
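The BPA scans mentioned above can also be run from PowerShell. A sketch, assuming the Hyper-V model is available on the server being scanned:

```powershell
# List the available Best Practices Analyzer models
Get-BpaModel | Select-Object Id, Name

# Run a scan of the Hyper-V model and show any warnings or errors
Invoke-BpaModel -ModelId Microsoft/Windows/Hyper-V
Get-BpaResult -ModelId Microsoft/Windows/Hyper-V |
    Where-Object Severity -ne 'Information'
```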
It's possible to manage earlier versions of Windows Server (2008 and 2008 R2) by adding in PowerShell 3 and Windows Remote Management 3. The only thing we can't do with these older versions is remotely install roles and features.
PowerShell also has extensive support for managing multiple servers, and we have the ability to run remote PowerShell sessions that can persist after a reboot to carry out complex tasks. Many of the operations we carry out in Server Manager are actually PowerShell scripts running behind the scenes, such as when we promoted the
RDS-DC VM to being a DC.
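For example, a remote session to the DC lets us run commands there without logging on to it; a minimal sketch:

```powershell
# Open a remote session to the DC and run a command in it
$session = New-PSSession -ComputerName RDS-DC
Invoke-Command -Session $session { Get-WindowsFeature AD-Domain-Services }
Remove-PSSession $session
```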
While Server Manager can show us what's happening with all our servers, it doesn't actually fix anything itself; it just shows us what problems there might be and gives us the tools to investigate and fix them. One aspect of this is that we often want our servers to stay in exactly the configuration we set, and with Windows Server 2012 R2, PowerShell 4 can now be used for Desired State Configuration (DSC). This allows servers to be kept in a known configuration with respect to the following aspects:
The roles and features that are enabled
Files and directories
DSC can both check and enforce, depending on how it is set up. Behind the scenes, a Managed Object Format (MOF) file is created based on your script. It's then possible to configure a central server with the DSC service (Add-WindowsFeature -Name DSC-Service), which can then check the configurations on other servers for application installation files, specific files, and settings. If they are not there, then DSC can replace them to put the servers back into the required state.
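A minimal DSC configuration covering the two aspects listed above might look like this sketch; the node name, share path, and file names are illustrative:

```powershell
Configuration VDIHostBaseline
{
    Node "RDS-Test"
    {
        # Ensure the Hyper-V role stays installed
        WindowsFeature HyperV
        {
            Name   = "Hyper-V"
            Ensure = "Present"
        }

        # Ensure a settings file is present and matches the master copy
        File VDISettings
        {
            DestinationPath = "C:\VDI\settings.xml"
            SourcePath      = "\\rds-dc\config\settings.xml"
            Ensure          = "Present"
        }
    }
}

# Compiling the configuration produces one MOF file per node
VDIHostBaseline -OutputPath C:\DSC
```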
No doubt, Microsoft and third-party vendors such as Dell and Idera will be bringing out tools to simplify the creation of these files to the point where a given template server can be examined, as well as the MOF files and content built from it, so that a series of servers can be kept in a given configuration. That's pretty much what we need to do in VDI, except that we want to manage lots of VMs running desktops and keep them in a desired state.
In this chapter, you acquired basic knowledge of how Microsoft performs server virtualization with Hyper-V. You saw how to perform an initial setup and configuration of a Hyper-V host, sufficient to create a simple virtual machine running on that host. This allowed us to create a virtual DC that then gave us a quick way to manage our virtualization hosts and other VMs joined to that domain. This is very important because it's so simple to create lots of VMs, and we can only manage them at scale by managing them from one place rather than connecting to each in turn to check their health and configuration. Another key part of the management story is PowerShell, and it's important that we all get used to it, as it makes our actions repeatable and gives us the tools to quickly create VMs with a given set of specifications and keep them in a desired configuration.
Besides being an exercise in itself, all of the practical sections in this chapter provide the foundation we need to create a VDI deployment; we need virtualization hosts, we need a domain controller, and we need those hosts to belong to our new domain. We are now ready to start looking at VDI itself and, in the next chapter, we'll examine the different forms of VDI and its architecture. On the way, we'll also create a simple VDI lab setup.
We'll also be using more of PowerShell!