So, what is Microsoft Hyper-V Server 2008 R2?

by Vicente Rodriguez Eguibar | September 2013 | Enterprise Articles

Welcome to the world of virtualization. In this article by Vicente Rodriguez Eguibar, the author of Instant Hyper-V Server Virtualization Starter, we will explain in simple terms what virtualization is, where it comes from, and why this technology is amazing. So let's start.



The concept of virtualization is not really new; as a matter of fact, it is in some ways an inheritance from the mainframe world. For those of you who don't know what a mainframe is, here is a short explanation: a mainframe is a huge computer that can have from several dozen up to hundreds of processors, tons of RAM, and enormous storage space. Think of it as one of the supercomputers used by international banks, car manufacturers, or even aerospace companies.

These monster computers have a "core" operating system (OS), which creates logical partitions of the resources and assigns them to smaller OSes. In other words, the full hardware power is divided into smaller chunks, each with a specific purpose. As you can imagine, not many companies can afford this kind of equipment, and this is one of the reasons why small servers became so popular. You can learn more about mainframes on the Wikipedia page at http://en.wikipedia.org/wiki/Mainframe_computer.

Starting in the 80s, small servers (mainly based on Intel and/or AMD processors) became quite popular, and almost anybody could buy a simple server, so even mid-sized companies began to pile up more and more of them. In later years the power provided by new servers became enough to satisfy the most demanding applications and, guess what, even to support virtualization.

But you will be wondering, what is virtualization? Well, the virtualization concept, even if it sounds a bit bizarre, is that a program runs as a normal application on the host OS, asking it for CPU, memory, disk, and network (to name the main four subsystems), while presenting virtualized hardware on which a brand new OS can be installed. In the diagram that follows, you can see a physical server, including CPU, RAM, disk, and network. This server needs an OS on top, and from there you can install and execute programs such as Internet browsers, databases, spreadsheets, and of course virtualization software. The virtualization software behaves the same way as any other application: it sends requests to the OS for a file stored on the disk, access to a web page, or more CPU time; so to the host OS, it is just a standard application that demands resources. But inside the virtualization application (also known as a hypervisor), some virtual hardware is created; in other words, some fake hardware is presented at the top end of the program.

At this point we can start the OS setup on this virtual hardware, and the OS can recognize the hardware and use it as if it were real.

So, coming back to the original idea, virtualization is a software-based technique to execute several servers and their corresponding OSes on the same physical hardware. Virtualization can be implemented on many architectures, such as IBM mainframes, many distributions of Unix and Linux, Windows, Apple, and so on.

We already mentioned that virtualization is based on software, but there are two main kinds of software you can use to virtualize your servers. The first type behaves like any other application installed on the server and is also known as workstation or software-based virtualization. The second is part of the kernel of the host OS and is enabled as a service; this type is also called hardware virtualization, and it uses special CPU features (such as Data Execution Prevention and hardware virtualization support), which we will discuss in the installation section. The main difference between the two is performance. In software/workstation virtualization, every request for hardware resources has to travel from the application down through the OS into the kernel before the resource is obtained. In the hardware solution, the virtualization software, or hypervisor layer, is built into the kernel and makes extensive use of the CPU's virtualization capabilities, so resource requests are served faster and more reliably, as in Microsoft Hyper-V Server 2008 R2.

Reliability and fault tolerance

By placing all the eggs in the same basket, we want to be sure that the basket is protected. Now imagine that instead of eggs we have virtual machines, and instead of the basket we have a Hyper-V server. We need this server to be up and running most of the time, so that the virtual machines on it are reliable and can run for a long time.

For that reason we need a fault tolerant system, that is to say, a system that keeps running normally even if a fault or a failure arises. How can this be achieved? Well, just use more than one Hyper-V server. If a single Hyper-V server fails, all the VMs running on it fail with it, but if we have a couple of Hyper-V servers running hand in hand, then when the first one becomes unavailable, its twin brother takes over the load. Simple, isn't it? It is, if it is correctly dimensioned and configured. In Hyper-V this is delivered by failover clustering, with Live Migration used to move running VMs between hosts.

In a previous section we discussed how to migrate a VM from one Hyper-V server to another, but that import/export technique causes some downtime for our VMs. You can imagine how long it would take to move all our machines if a host server fails; even worse, if the host server is dead, you can't export your machines at all. Well, this is one of the reasons we should create a cluster.

As we already stated, a fault tolerant solution basically duplicates everything in the given solution. If a single hard disk may fail, we configure additional disks (be it RAID 1 or RAID 5); if a NIC is prone to failure, teaming two NICs may solve the problem. Of course, if a single server may fail (dragging with it all the VMs on it), the solution is to add another server; but here we face a storage problem: each disk can only be physically connected to a single data bus (consider this the cable, for simplicity), and every server must be able to reach the disk in order to operate correctly. This is solved by using shared storage, be it directly connected SCSI storage, a SAN (Storage Area Network, connected by optical fiber), or the very popular NAS (Network Attached Storage), connected through NICs.

As we can see in the preceding diagram, the red circle contains two servers; each is a node within the cluster. When you connect to this infrastructure, you don't even see the number of servers, because a cluster exposes shared resources such as the server name, IP address, and so on.

So you connect to the first available physical server, and in the event of a failure, your session is automatically transferred to the next available physical server. Exactly the same happens at the server's backend: we can define certain resources as shared cluster resources, and the cluster then decides which physical server uses them. For example, in the preceding diagram there are several iSCSI targets (Internet SCSI targets) defined on the NAS, and the cluster accesses them through the active physical node, thus making your service (in this case, your configured virtual machines) highly available. You can see the iSCSI FAQ on the Microsoft web site (http://go.microsoft.com/fwlink/?LinkId=61375).

In order to use a failover cluster solution, the hardware must be marked as Certified for Windows Server 2008 R2 and it should be identical across nodes (in some cases the solution may work with dissimilar hardware, but maintenance, operation, and capacity planning, to name a few, will suffer, making the solution more expensive and harder to run). The full solution also has to successfully pass the Validate a Configuration wizard when creating the cluster. The storage must be certified as well and has to be Windows Cluster compliant (mainly supporting the SCSI-3 Persistent Reservations specification), and it is strongly recommended that you implement an isolated LAN exclusively for storage traffic. Remember that for a fault tolerant solution, all infrastructure devices have to be duplicated, even networks. The configuration wizard will let us configure our cluster even if the network is not redundant, but it will display a warning about this point.

OK, let's get down to business. To configure a fault tolerant Hyper-V cluster, we need to use Cluster Shared Volumes, which, in simple terms, lets Hyper-V run as a clustered service. As we are using a NAS, we have to configure both ends: the iSCSI initiator (on the host server) and the iSCSI target (on the NAS). You can watch the Microsoft TechNet video at http://technet.microsoft.com/en-us/video/how-to-setup-iscsi-on-windows-server-2008-11-mins.aspx or read the Microsoft article on how to configure iSCSI initiators at http://technet.microsoft.com/en-us/library/ee338480(v=ws.10).aspx. To configure the iSCSI target on the NAS, please refer to the NAS manufacturer's documentation. Apart from the iSCSI disks we present to our virtual machines, we need to provide a witness disk (known in the past as the quorum disk). This disk (1 GB will do the trick) is used to orchestrate and synchronize our cluster.
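If you prefer the command line, the initiator side can also be wired up with the built-in iscsicli.exe tool from an elevated PowerShell prompt (the dedicated iSCSI PowerShell cmdlets only arrived in later Windows Server releases). This is a minimal sketch; the portal IP address and the target IQN shown here are placeholders that you must replace with the values from your NAS.

    # Make sure the Microsoft iSCSI Initiator service is running and starts at boot
    Set-Service msiscsi -StartupType Automatic
    Start-Service msiscsi

    # Register the NAS portal, list the targets it exposes, and log in to ours
    # (192.168.10.50 and the IQN are placeholder values)
    iscsicli QAddTargetPortal 192.168.10.50
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2000-01.com.example:nas.hyperv-csv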

Once our iSCSI disk is configured and visible (you can check this by opening the Computer Management console and selecting Disk Management) in our servers, we can proceed to configure our cluster.
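If you would rather check and prepare the disk without the GUI, the following sketch drives diskpart from PowerShell. The disk number (1), the volume label, and the drive letter are assumptions; always confirm the number with list disk first, because formatting the wrong disk is destructive.

    # The new iSCSI LUN should show up in this listing
    "list disk" | diskpart

    # Bring it online and format it (this destroys any data on disk 1!)
    $script = "select disk 1", "online disk", "attributes disk clear readonly",
              "create partition primary", "format fs=ntfs quick label=CSV1", "assign letter=S"
    $script | diskpart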

To install the Failover Clustering feature, we open the Server Manager console, select the Features node on the left, then select Add Features, and finally select the Failover Clustering feature (this is very similar to the procedure we used when we installed the Hyper-V role in the Requirements and Installation section). We have to repeat this step on every node participating in the cluster. At this point we should have both the Failover Clustering feature and the Hyper-V role set up on the servers, so we can open the Failover Cluster Manager console from Administrative Tools and validate our configuration. Check that Failover Cluster Manager is selected and, in the center pane, select Validate a Configuration (a right-click does the trick as well). Follow all the instructions and run all of the tests until no errors are shown. When this step is completed, we can proceed to create our cluster.
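The same steps can be scripted. The sketch below assumes the ServerManager and FailoverClusters PowerShell modules that ship with Windows Server 2008 R2 and uses placeholder node names; it mirrors the wizard but is no substitute for actually reading the validation report.

    # Run on every node: add the Failover Clustering feature
    # (it is a feature, not a role, so it lives under Add Features)
    Import-Module ServerManager
    Add-WindowsFeature Failover-Clustering

    # Then, from one node, validate the future cluster members
    # (HV-NODE1 and HV-NODE2 are placeholder names)
    Import-Module FailoverClusters
    Test-Cluster -Node HV-NODE1, HV-NODE2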

In the same Failover Cluster Manager console, in the center pane, select Create a Cluster (a right-click does the trick as well). This wizard will ask you for the following (a PowerShell equivalent is sketched after the list):

  • All servers that will participate in the cluster (a maximum of 16 nodes and a minimum of 1, which is useless, so better go for two servers):

  • The name of the cluster (this name is how you will access the cluster and not the individual server names)
  • The IP configuration for the cluster (same as the previous point):
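
The same three answers can be given to PowerShell in a single line; this is a hedged sketch in which the cluster name, node names, and IP address are placeholders for your own environment.

    Import-Module FailoverClusters
    New-Cluster -Name HV-CLUSTER -Node HV-NODE1, HV-NODE2 -StaticAddress 192.168.1.100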

We still need to enable Cluster Shared Volumes. To do so, right-click the failover cluster and then click Enable Cluster Shared Volumes. The Enable Cluster Shared Volumes dialog opens; read and accept the terms and restrictions, and click OK. Then select Cluster Shared Volumes and, under Actions, select Add Storage and pick the disks (the iSCSI disks) we previously configured.
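For reference, here is a hedged PowerShell equivalent; setting the cluster's EnableSharedVolumes property should switch CSV on in 2008 R2, and the disk name below is whatever your disk is called under Storage (typically Cluster Disk 1).

    Import-Module FailoverClusters
    # Turning CSV on programmatically implies accepting the same usage restrictions
    # that the Enable Cluster Shared Volumes dialog presents
    (Get-Cluster).EnableSharedVolumes = "Enabled"
    Add-ClusterSharedVolume -Name "Cluster Disk 1"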

Now the only thing left is to make the VM highly available: the one we created in the Quick start – creating a virtual machine in 8 steps section (or any other VM you have created, or any new VM you want to create, be imaginative!). The OS in the virtual machine can then fail over to another node with almost no interruption. Note that the virtual machine must not be running when you make it highly available through the wizard (a PowerShell equivalent is sketched after the following steps).

  1. In the Failover Clustering Manager console, expand the tree of the cluster we just created.
  2. Select Services and Applications.
  3. In the Action pane, select Configure a Service or Application.
  4. In the Select Service or Application page, click Virtual Machine and then click Next.
  5. In the Select Virtual Machine page, check the name of the virtual machine that you want to make highly available, and then click Next.
  6. Confirm your selection and then click Next again.
  7. The wizard will show a summary and the ability to check the report.
  8. And finally, under Services and Applications, right-click the virtual machine and then click Bring this service or application online. This action will bring the virtual machine online and start it.
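
And here is the promised PowerShell sketch of the same eight steps. The VM name is a placeholder and, just as with the wizard, the virtual machine should be turned off before you make it highly available; the cluster group created for it normally takes the VM's name.

    Import-Module FailoverClusters
    Add-ClusterVirtualMachineRole -VirtualMachine "MyFirstVM"   # steps 1 to 7
    Start-ClusterGroup -Name "MyFirstVM"                        # step 8: bring it online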

Integrating the virtual host

When we speak of integrating the virtual machine, what we mean is that the host server is able to communicate directly with the virtual machine and the other way around. This internal communication has to be reliable, fast, and secure.

The hypervisor provides a special mechanism to facilitate this communication: the Hyper-V VMBus. As the name states, it is a dedicated communication bus between the parent partition and the child partition, or, following the naming convention in this book, the host server and the virtual machines, which provides high-speed, point-to-point, secured communication.

But what about the virtual machine? Well, as the VMBus is the Hyper-V part, we also need the client part. As you may expect, the component that facilitates this communication on the guest VM is a set of drivers called Integration Services; in other words, a set of agents running inside our virtual machine that communicates with the Hyper-V host in a secure and fast way. Once these components are installed in the OS of the virtual machine, it becomes aware that it is a virtual partition and can coordinate with the host beneath it. With that said, you may be wondering, how is this valuable for me? Well, consider a small example: if the tools are installed on the virtual machine, we can send a shutdown request from the Hyper-V management console and the VM will shut down as cleanly as if we had shut it down from its own desktop.
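As an illustration of that shutdown scenario, the following hedged sketch sends the request from the host through WMI (the root\virtualization namespace is Hyper-V's management interface on 2008 R2); the VM name TestVM is a placeholder, and the call only succeeds when Integration Services are running inside the guest.

    # Find the virtual machine and its shutdown integration component
    $vm = Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem `
                        -Filter "ElementName = 'TestVM'"
    $shutdown = Get-WmiObject -Namespace root\virtualization -Class Msvm_ShutdownComponent `
                              -Filter "SystemName = '$($vm.Name)'"

    # Ask the guest to shut down gracefully (force flag plus a reason string)
    $shutdown.InitiateShutdown($true, "Requested from the Hyper-V host")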

Now that we know what the integration topic is about, we can talk about the two types of device drivers that Hyper-V provides: emulated and synthetic. As the name suggests, emulated drivers are a very basic way of providing the service; they mimic standard physical hardware, so every request has to be translated and emulated by the host before it reaches the real device, which is slow. Synthetic drivers do not need any such translation; they are aware of Hyper-V and act as a direct gate onto the VMBus, all the way down to the physical device on the host.

Before you ask, emulated drivers exist to provide basic functionality to every guest OS installed in a VM; in the initial setup stage of our VM we do need some kind of device driver (such as display or network). You can think of emulated drivers as those cheap flip-flops that you can find almost anywhere at the beach: they are neither comfortable nor fancy (they don't even last long), but they fit almost everybody and so fulfill their goal. Of course you will want to change from such an uncomfortable flip-flop to a more comfortable shoe, one that fits your foot perfectly, looks nice, and even lets you run in it. Well, think of that shoe as a synthetic driver.

As you may have already guessed, synthetic device drivers are not available for every OS, only for a more select group. Linux fans, don't worry: there are synthetic drivers for some of the most common distributions. The drivers for Linux can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=11674 and are intended for Red Hat Enterprise Linux, CentOS, and SUSE Linux Enterprise. For a complete list of supported OSes (Linux and Microsoft), you can visit the Microsoft TechNet site at http://technet.microsoft.com/en-us/library/cc794868(v=ws.10).aspx.

We are just one topic away from the installation of these drivers. But first we will describe in more detail what these drivers do:

  • VM connection enhancements: If we connect to a machine without Integration Services, the mouse pointer gets trapped inside the VM and we have to use a key combination (by default Ctrl + Alt + left arrow) to release it. This enhancement makes the VM window behave like any other window.
  • Drivers with Hyper-V knowledge: Remember the synthetic drivers? This is another name for them.
  • Time Synchronization service: A mechanism to keep the time synchronized between the host and the guest. Because the guest has no BIOS battery, it uses the host clock to synchronize.
  • Heartbeat service: The host sends heartbeat messages at regular intervals and waits for a heartbeat answer from the guest VM. If any of these messages are not answered, Hyper-V considers that the virtual machine has a problem and logs it as an error.
  • Shutdown service: As mentioned earlier, a graceful shutdown of the VM without the need to log in and shut it down manually.
  • Volume Shadow Copy requestor: Provides an interface to the Volume Shadow Copy Service (VSS) in order to create shadow copies of the volume, but only when the guest OS supports it.
  • Key/Value Pair Exchange: A set of registry keys used by the VM and the host to identify each other and exchange information. The host can directly access and modify these keys, and you can see the values in the Hyper-V management console (in the VM properties); a small example of reading them from inside the guest follows this list.
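As a small, hedged example of the Key/Value Pair Exchange service, the snippet below is run inside a Windows guest that has Integration Services installed; it reads the host-supplied values from the guest-side registry key (the exact set of values can vary with the host and guest OS versions).

    Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters' |
        Select-Object HostName, PhysicalHostName, VirtualMachineName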

The Integration Services drivers are fully supported on the following OSes; for other OSes not mentioned here, not all services and/or features may be available:

Table 1

Now for the main dish: installing Integration Services is quite simple. The VM has to be running, and we have to connect to it using the Connect… option from the Hyper-V Manager console. From there we select the Action menu and then Insert Integration Services Setup Disk, as shown in the following screenshot:

Depending on whether the CD-ROM autoplay feature is enabled or not, you may get a pop-up window asking to run the inserted media; if you do, select the install option. If autoplay is not enabled, browse the CD-ROM and manually execute the Integration Services setup file.
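If you need to launch it by hand, the installer sits in an architecture-specific folder on the mounted Integration Services disc; in this sketch the drive letter D: is only an assumption, so use whatever letter the guest assigned to the virtual DVD drive.

    # 64-bit guests
    & "D:\support\amd64\setup.exe"
    # 32-bit guests would use the x86 folder instead:
    # & "D:\support\x86\setup.exe"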

A progress window will show the components being installed on the virtual machine. Once finished, it will ask for a reboot to complete the operation, as shown here:

After the virtual machine is rebooted, the drivers start working and your machines (both the hypervisor host and the virtual machine) will be integrated. To review the configuration or make changes, we can use the Services console from Control Panel inside the virtual machine.

Or, from the Hyper-V Manager console, select the corresponding virtual machine, right-click on it, select Settings..., and then click Integration Services in the settings window.

How much will it cost?

By now you should have a clear idea of what virtualization is, why it's so popular, and many other nice features. But we live in a business world, and because of this we will face the moment when we are asked: how much is this going to cost? And if, like me, you prefer nice toys such as Hyper-V to playing with numbers and ROI calculations, you will try to avoid it. Sorry to say it, but any effective economic quote must start with people like us.

Return on Investment (ROI) is, in simple terms, the profit we make over a given period relative to the investment. Nowadays everybody wants to increase their ROI. This can be accomplished by introducing new technologies that make our life easier, by consolidating and reducing the infrastructure, or by simplifying administration. Our challenge is to identify such investments and treat these numbers so that we can present the ROI in different ways.

Don't misunderstand me when I say treat those numbers; it's not manipulation but rather a different understanding. We have the hard costs of our solution; understand that hard costs are hardware, software, or anything else that is easily accounted for, most of the time by a single invoice. And then you have the infamous soft costs, which can be as simple as how many watts my server is using or as complicated as the percentage of operational cost (including help desk) that one single Windows server consumes.

There are many ways to calculate these things, but the procedure used may vary from company A to company B, because what is important to A may not be useful to B and vice versa. You may be wondering, how should I calculate this? And how do I know whether it is correct or not? Well, if you do it and whoever you report to understands and agrees with it, then it is correct. As you have already figured out, in this section we are not going to go through the whole exercise, but instead give you a good baseline to start this task according to your own environment.

Let's start with the calculations, beginning with the hardware investment. In a typical (call it physical) environment, you buy a CPU (to name one component) that is capable of delivering its full power to the application on top, even though that application rarely demands it. Simply put, you bought a CPU that is utilized at an average of 25 percent (or 35 percent, or any other figure well below the full capacity of the chip). By virtualizing, you can share the CPU load across the configured virtual machines, achieving much better utilization; the challenge is to assign an economic value to both scenarios and compare them. The hardware cost is then not assigned to a single server (as when you have a dedicated database server for HR), but spread over every virtual machine running on that hardware.
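To make the comparison concrete, here is a toy calculation with invented figures (the server price, the 25 percent utilization, and the consolidation ratio are all illustrative assumptions, not real data):

    $serverCost = 4000      # price of one physical host (made-up figure)
    $avgCpuUse  = 0.25      # the 25 percent average utilization mentioned above
    $vmsPerHost = 8         # how many VMs we assume we can consolidate on that host

    "Physical: {0:C} buys one workload that uses {1:P0} of the CPU" -f $serverCost, $avgCpuUse
    "Virtual:  {0:C} of hardware per workload when {1} VMs share the host" -f ($serverCost / $vmsPerHost), $vmsPerHost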

Continuing with the calculations, we have to account for housing (that is, the facilities required to host the servers: the computer room, the air conditioning that keeps our devices cool, the electricity used, the cabling, and so on), which can be very simple if we have a single closet or very complex if we have a dedicated room. As a rule of thumb, we consider the devices plus setup cost plus running expenses, divided between every service provided. As we are speaking about virtualization, a single server may host several virtual machines, so the calculated cost per service decreases as we add more VMs, even if the host servers are bigger and more powerful (and likely more expensive).

Then we have the software cost. Traditionally, we need one OS license per physical server, or 100 in case we have 100 servers, but Microsoft has developed a very interesting licensing scheme.

If we deploy any other virtualization technology, we have to buy a license for each VM, no matter what. If we are planning to deploy Microsoft Hyper-V Server 2008 R2 (which, by the way, is free), we still need an OS license per VM. But for the remaining three versions of Windows Server 2008 (Standard, Enterprise, and Datacenter) we get a nice deal, as long as they are installed and dedicated as Hyper-V hosts (yes, unfortunately we cannot even install a simple DNS server on the host).

For the Standard edition, one OS license for a VM is included (say, one for the host OS plus one for a VM); for Enterprise we get four OS licenses for VMs (again, one for the host plus four for VMs); and for Datacenter… mmm, I even get nervous… we get unlimited licenses (well, that is a slight lie, because by design there is a limit of 384 VMs per node).

The bottom line is that choosing the wrong brand will bring no savings, while choosing the right brand and the right version will lower the licensing cost significantly (free Hyper-V Server plus 30 VMs at, let's say, $200 each comes to $6,000, but Datacenter plus 200 VMs costs only the price of Datacenter, even if that is $5,000).
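Putting the same illustrative numbers into a short sketch (none of these are real Microsoft prices; they are only the example figures used above):

    $vmCount         = 30      # VMs to license
    $perVmOsLicense  = 200     # example OS license price per VM on the free Hyper-V Server
    $datacenterPrice = 5000    # example price of the Datacenter edition (unlimited VMs)

    "Free Hyper-V Server + {0} VM licenses : {1:C}" -f $vmCount, ($vmCount * $perVmOsLicense)
    "Datacenter edition, unlimited VMs     : {0:C}" -f $datacenterPrice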

Dealing with licenses is always a bit chaotic, so it is strongly recommended that you call your local Microsoft representative, who will be pleased to help you on your licensing journey.

And last but not least (and the biggest cost within our IT service): the manpower. Imagine you and your colleagues having to visit the server on the second floor to reboot a device, and then running to the sixteenth floor because another server is not accessible over the network. With our new Hyper-V infrastructure we will not face this (or if we do, we just have to go to the single place where the host servers reside) because of the consolidated state of our machines. We will be optimizing our IT support even as we add more VMs.

Don't trust my calculations, or even the Microsoft ones claiming that their solution is six times cheaper than the competition. Take a pen and a piece of paper and create your own numbers; I'm completely sure you will love the final result, and so will your boss!

Summary

And that's it!!

By this point, you should have a fully working virtual machine running on Microsoft Hyper-V Server 2008 R2.


