Microsoft Hyper-V Cluster Design
Plan, design, build, and maintain Microsoft Hyper-V Server 2012 and 2012 R2 clusters.
This article by Eric Siron, the author of Microsoft Hyper-V Cluster Design, presents holistic and specific methods to determine how well your system performs. It then guides you through balancing virtual machines across cluster nodes.
Now that we've covered how to design and plan your virtual machines, we're going to turn to the host's view of things. There are add-on and third-party tools that can perform automatic load balancing, but a failover cluster of Hyper-V Servers will only perform balancing in response to a failover event. Whether you'll use automated tools or not, you'll need to have an understanding of your host's abilities.
Balancing is not the entire story. Even if you have additional tools that can perform load balancing for you, you'll still need to keep abreast of the performance metrics of your cluster. As new virtual machines are added, your total capacity will be lessened and you'll want to know well in advance if you need to add hardware. Remember that your cluster is probably intended to survive the loss of at least one host without negatively impacting virtual machines, so just having a fully functional cluster with sufficient capacity may not be adequate.
There are two basic components to proper balancing. The first is being aware of what your hosts are capable of. The second is being aware of what they're doing. This article will work through a number of ways to satisfy these needs. You'll be introduced to the following concepts and activities:
- General system testing
- Disk I/O testing
- Memory testing
- Network testing
- Preferred and possible owners
Initial and ongoing performance measurement
Performance measurements begin prior to system deployment. In terms of a failover cluster of Hyper-V systems, it begins prior to creating any virtual machines. Your first goal is to obtain baselines. The term baseline has different meanings in different contexts; in this case it means gathering data on a system during a known healthy period. Its purpose is to serve as a point of comparison for later data gathering operations.
The first set of performance measurements you take will be with no virtual machines. Once you have reached your target deployment level, you will obtain another. These will be your baselines. All future performance measurements will be compared to these in order to determine how your systems are working.
Microsoft provides a thorough document for performance tuning of Windows Server 2012. These concepts carry forward to R2 and many apply to Hyper-V Server as well. Download it from the following site: http://download.microsoft.com/download/0/0/B/00BE76AF-D340-4759-8ECD-C80BC53B6231/performance-tuning-guidelines-windows-server-2012.docx
General performance measurement
Baselines and ongoing performance evaluations tend to be fairly generic in nature. They can be carried out in a number of ways. This section will examine two of them. The first is the free Server Performance Advisor ( SPA ) provided by Microsoft. The second is the Performance Monitor tool built into Windows operating systems.
Server Performance Advisor
This tool can be run quickly to determine the performance characteristics of a new system and on a schedule to track the performance trends of an active system.
Do not install or run Server Performance Advisor directly on a Hyper-V host or any guests that are to be measured. Doing so adds a load that will make the results inaccurate.
The following instructions can be used to quickly set up SPA to run in a basic environment. They assume that you'll be running the application with a domain account that has administrative privileges on the systems to be measured. To scan a system that has an active firewall, run the following cmdlet:
Enable-NetFirewallRule -DisplayName "Performance Logs and Alerts (TCP-In)"
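If you have several nodes to prepare, the same rule can be enabled on all of them in one pass with PowerShell Remoting. This is a sketch that assumes remoting is already enabled on the targets; the node names are placeholders for your own:

```powershell
# Enable the inbound Performance Logs and Alerts rule on every cluster node.
# SV-HYPERV1 and SV-HYPERV2 are placeholder names; substitute your own nodes.
$nodes = 'SV-HYPERV1', 'SV-HYPERV2'
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Enable-NetFirewallRule -DisplayName 'Performance Logs and Alerts (TCP-In)'
}
```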
Server Performance Advisor is published on the developer center, which is accessible at http://msdn.microsoft.com/en-us/library/windows/hardware/hh367834.aspx. For best results, this tool should be run from a remote computer, not the host being measured. It can be run from any modern Windows system. It requires a connection to an installation of Microsoft SQL Server 2008 R2 or newer. The Express edition is perfectly acceptable. The latest version can be obtained at no charge from the Microsoft download center at http://search.microsoft.com/en-us/downloadresults.aspx?q=sql%20express.
There is another requirement that's listed on the download page but not in the included documentation. The CAB file that SPA is delivered in must be extracted with its directory structure intact. If you use Windows Explorer to open the CAB, it will not extract the files properly. Use the built-in extrac32 tool according to the directions (they're on the download page) or use another extraction application that can reproduce the proper folder structure.
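A proper extraction with the built-in tool looks something like the following. Treat this as a sketch: the CAB file name and destination folder are placeholders, so match them to the file you actually downloaded and confirm the switches against the directions on the download page.

```
rem /Y suppresses overwrite prompts, /E extracts all files with their
rem directory structure intact, /L sets the destination folder.
rem "C:\SPA" and "SPA.cab" are placeholders for your own paths.
extrac32 /Y /E /L C:\SPA SPA.cab
```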
The final prerequisite you must satisfy is the creation of a folder to hold the results. This folder can be in any location on the system you'll be running SPA from, and it can have any name. This folder must be shared. Determine the domain account that you'll be running SPA with and give that account full permissions to the folder and its share.
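The folder and share can be created in a couple of lines of PowerShell. This is a sketch; the path, share name, and service account are placeholders for your own choices:

```powershell
# Create the results folder, share it, and grant the SPA account full
# control on both the share and the NTFS folder (inheritable).
# Path, share name, and DOMAIN\spa-admin are placeholders.
New-Item -Path C:\SPAResults -ItemType Directory
New-SmbShare -Name SPAResults -Path C:\SPAResults -FullAccess 'DOMAIN\spa-admin'
icacls C:\SPAResults /grant "DOMAIN\spa-admin:(OI)(CI)F"
```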
All that's left is to run SPA. In the folder where you extracted the CAB's contents, run SPAConsole.exe. When it opens, choose File and then New Project to get started. The first screen is just a basic introductory screen. Click on Next and you'll see the following screen, which has been filled in with examples:
The previous entries direct the application to create a database on the local computer, in this case an instance of SQL Server Express. For a large environment with many systems to scan, it is recommended to use SQL Server Standard instead. The database name can be anything you like; this one has been named to reflect that it will contain data on the first Hyper-V cluster in the sample organization. Be aware that this will create a new database on the selected server. Once you have selected the database server, instance, and name, click on Next to move to the following screen:
This screen allows you to select the advisor packs that you'd like to make available in this project. Even though you only need the Hyper-V advisor and perhaps the CoreOS advisor, it's best to select all three. The interface sometimes hangs if only a subset is selected. You won't be required to use all three during a scan. Click Next . This will bring you to the final screen:
On this screen, enter the servers that you want to scan. The File Share Location is a file share that will hold the results of the scan. As with the SQL database, it's not required to be on the same system as the scanner. Servers can be added to the list later. You can use Test Configuration to ensure that the indicated servers are reachable. Once you're happy with the entries, click on Finish .
You'll be returned to the main screen of SPA. Now, you should see the host(s) that you selected for this project. Select their checkboxes, and then press the Run Analysis button in the lower-right corner. Here, you'll be able to select the actual advisor packs that you want to use. At the bottom of the screen, you'll be able to enter how long you want the scan to run, and if you wish to collect numerous data points over a period of time—how often you want it to run. Click on OK when you're satisfied with your selections and the data collection process will begin.
Once it is complete, you can click on the small down arrow in the Analysis Result column of one of the hosts. This will show three buttons, indicated in the following screenshot:
These buttons are, from left to right:
- View Latest Report : See the report from the latest analysis. This is the screen you're most likely to be interested in after a one-time scan of a new system. It will show warnings for any items and settings it finds that might impede optimal performance of Hyper-V. It can also compare one report against another and export result sets to XML.
- Find Reports : Search through all result sets for this host according to the criteria that you choose.
- View Charts : These are detailed charts that examine and graph very specific performance metrics of the host.
The wording of the Logical Processor count limit when Hyper-V is enabled warning is misleading. The management operating system is restricted to using 64 logical processors, but Hyper-V itself can still schedule guest processes up to the maximum of 320 logical processors.
The first two buttons are very simple to understand and you should have no trouble navigating them on your own. Do remember to check the various tabs inside the report. The third button, View Charts , brings you to a tool that isn't as easy to decipher. You'll begin by picking a range of dates, and assuming that you've got more than one report to chart, you'll get a screen that looks something like the following screenshot:
The sheer amount of data shown can make this difficult to interpret. In the lower section, you'll notice that there is a large number of performance counters. Select only those that you're actually interested in viewing and you'll find that the chart becomes much easier to understand. To deselect all items, select the first item and press Ctrl + A , and then press the Space bar.
The items marked as 90% remove all utilization above the 90 percent mark. These are assumed to be momentary spikes that can skew the outcome in a way that makes the data meaningless. Compare these to the same metrics marked as Max .
Use the Pick Series button at the bottom of the window if you wish to reduce the number of selectable items. This button is more useful on the other two tabs; in fact, they'll have no data to display if you do not select an item. As indicated, these two tabs show the way that the selected metrics have been trending over a specified period of time. These can show you how your systems behave differently during the day or across a week. Comparing these reports against those generated by other servers can help you to determine how your guests should be load balanced.
Performance Monitor
The built-in Performance Monitor tool is much more powerful than most third-party offerings, but it's up to you to choose what to measure. One of its major strengths is that there's nothing to install. All you need is a Windows system with a GUI.
As with Server Performance Advisor, it is not recommended that you run this on a Hyper-V host or guest that you are going to measure.
There are two ways to run Performance Monitor. One is as a real-time tool that graphs the monitored performance counters as they occur. The second is as a collector that gathers metrics and stores them for trend analysis. The features that differentiate Performance Monitor from Server Performance Advisor are:
- Real-time graphing
- Precise selection of metrics
- No software downloads required
- No database system required
- Performance logs can be opened on any Windows system
Performance Monitor is found in Administrative Tools. Depending on how your system is configured, Administrative Tools may be found on the Start screen or menu. It's available in the Control Panel in all versions of Windows. It's also available under the Performance node of Computer Management . If you will be running it for real-time graphing, ensure that you start it with an account that has administrative privileges on the target system. For collectors, you'll be able to specify the account to run it under. You may also need to modify the firewall as indicated under the Server Performance Advisor section mentioned earlier.
Real-time monitoring with Performance Monitor
To start a real-time monitoring session, expand the Monitoring Tools node and click on Performance Monitor . In the center pane, click on the button with the green plus, which will open the Add Counters window. In this window, you'll want to change the counters' source to the target computer. Your screen should look like the following screenshot:
Navigate through the various counters in the upper list box. When you click on one, it will show the instances of that counter that are available to be monitored. Double-click on an instance or highlight it and click on the Add >> button to move it to the list box on the right. These are the objects that will be tracked. When you are satisfied with your selection, click on OK . See Step 4 in the next section for a screenshot of this window and more information about its contents.
You will be returned to the main screen. The display will be updated every second. Each counter you picked will be displayed as a line of a different color. The legend will be shown at the bottom. You can uncheck an item to hide it from the running display; however, its counter will still be monitored.
Using the buttons across the top of the graph pane, you can modify the output. Most of the options are self-explanatory; change them until the display suits your desires. You have the ability to modify the graph from its default line output to a histogram or to a running digital display. Click on the Highlight button and then select a counter to make it stand out against the others. Several of the buttons open various tabs on the Performance Monitor Properties window where you can change many settings, such as the delay between samples. Of interest here is the Source tab, which will be used in the next section.
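The same real-time counters can also be sampled from the command line with PowerShell's Get-Counter cmdlet, which is handy for quick spot checks without opening the console. A minimal sketch, assuming a placeholder host name:

```powershell
# Sample the hypervisor's total CPU load on a remote host every 5 seconds,
# 12 times (one minute of data). SV-HYPERV1 is a placeholder host name.
Get-Counter -ComputerName SV-HYPERV1 `
    -Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time' `
    -SampleInterval 5 -MaxSamples 12
```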
Trend tracking with Performance Monitor
The second use for Performance Monitor is to pull performance statistics across a span of time. In active deployments, it can be used to track the performance of Hyper-V hosts. You'll create scheduled data collector sets for this. What makes Performance Monitor especially useful for this is that a single collector set can gather from all the hosts in your cluster simultaneously.
Before you start, ensure that the Performance Monitor console is not connected to the target computer system as it would be for a real-time monitor. For instance, if you are using Computer Management as shown in the first screenshot in this article, the tree root should say Computer Management (Local) and not contain the name of another system. The first reason is that running and managing the collector sets creates a small drain on the system's resources. Second, you're going to be running collectors against multiple systems and it's better to use a single remote computer for those purposes. Third, it's easiest to look at the results of performance logs on the system that took them. Otherwise, you have to move them around.
Look under the Data Collector Sets tree item. There are a number of predefined collector sets and you can add more. Just right-click on the User Defined node and choose New and Data Collector Set. The following steps will take you through the creation of a collector set:
- On the first screen, come up with a name for the set, then choose to manually create the set, then click on Next :
This wizard will create a data collector named DataCollector01 which cannot be renamed. If you wish, you can skip through the wizard to the end, delete the generic collector, and then create new ones with friendlier names.
- On the second screen, you want to create performance counter data logs:
- On the third screen, you can change how often the collector polls for data. As you can see in the following screenshot, the default is every 15 seconds:
Click on the Add… button in the previous screen to pick the counters that you want to poll. This is the same screen that you see when selecting counters in the real-time screen. Enter the name of the computer you want to poll data from in the Select counters from computer text box. Upon pressing Tab or Enter or clicking on another control, it will load the counters from that system. Select the counters and instances that you desire and click on Add >> . You can monitor counters from multiple computer systems in the same collector set if you like, but you may also choose to use one collector per computer per set. Remember that you'll want to select Hyper-V related counters for CPU, memory, and networking or you'll be retrieving collectors from the parent partition only. Physical disk counters are read from the management operating system.
You cannot retrieve statistics for pass-through disks by setting performance counters on the management operating system.
- If you click on the Add >> button and nothing happens, it is because instances are required but didn't load. Click on another counter and then back on the desired counter until the instances are displayed.
- On clicking OK , you'll be returned to the previous screen that will now be populated with the counters that you chose. Ensure everything looks as you wish and click on Next .
- You'll now be asked for a location to save the logs to. Although it will allow you to enter a UNC, logfile creation is usually unsuccessful anywhere but on the local system. You may place them in a local folder that is shared for easy accessibility from other systems, if you wish.
- The final screen will have you provide the credentials that the set will use. If you leave it on its default setting, it will use the Local System account that will not have the necessary rights to run the collection on the target computer. You have two choices: you can add the computer account of the collector machine to the Performance Log Users group on all target machines or you can use an account that is a member of that group on all machines. For the purposes of this step-through, we're just going to use the domain administrator account:
- Before clicking on Finish , you are encouraged to set the radio button to Open properties for this data collector set . This will allow you to jump straight to the properties window where you can schedule the scan. Alternately, you can open the properties window by right-clicking on the completed collector set and clicking on Properties .
- In the properties window, change the options as you like. The Schedule tab is where you establish the Start and End times. You can create multiple schedules for a collector set:
- If you want to use a separate collector in this set for another host, right-click on the new Collector Set in the left pane and click on New and Data Collector . The wizard is very nearly identical to the one you just completed.
You aren't required to follow a schedule. You can manually start and stop collector sets by right-clicking on the menu in the left pane.
Once the collector has begun its work, you can go back to the real-time monitor screen and open the Performance Monitor Properties window to the Source tab. Select the logfile that you instructed the collector set to use. The display will switch to the static output of the logfile. However, it will be blank because by default, no counters are selected. Add counters with the green plus button just as you did with the real-time display. This time, you'll only be able to choose from counters that are contained in the logfiles. You can now manipulate the log contents as you did with the live display. Note that you can view a log of an actively running collector, but the screen will not update in real time.
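Collector logs can also be read programmatically with PowerShell's Import-Counter cmdlet, which is useful for summarizing a long run without scrolling through the graph. A sketch, with a placeholder log path and counter filter:

```powershell
# Read a collector set's binary log and summarize one counter.
# The .blg path is a placeholder; point it at your own logfile.
$samples = Import-Counter -Path 'C:\PerfLogs\HyperV_000001.blg'
$cpu = $samples.CounterSamples |
    Where-Object Path -like '*\% Total Run Time'
$cpu | Measure-Object -Property CookedValue -Average -Maximum
```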
Selecting counters practically
If you use the exact counters as shown in the example, you'll notice that some of them aren't very useful. For instance, the number of processors in a host is highly unlikely to change during a monitoring session, although the number of virtual processors might. Not all of the available counters are well documented, but there is a Show description check box on the counter selection screen that provides a bit of information. Also, some of the counters you can pull don't compare well from one host to another. In the sample, we instructed SV-HYPERV1 to monitor the amount of data traveling across the virtual adapter in SV-DC1. This is useful data in its own right, but only in isolation, not as a comparator across hosts. Of course, if the virtual machine migrates to another host, the counter will no longer be readable. You may find the aggregate counters to be more useful than specific virtual machine counters.
The counters that are truly useful are simply too numerous to make a meaningful list out of, and not all counters are universally useful in all organizations. The four generic categories you're likely to be interested in are CPU, disk, memory, and networking. Be judicious about selecting counters that look at specific highly available virtual machines.
Alternative ways to read performance logs
Performance logs can be confusing, especially when you first encounter them. There are a number of tools on the market designed to aid you. One free and popular tool is the Performance Analysis of Logs ( PAL ) Tool. It is a free and open source tool downloadable from Codeplex at http://pal.codeplex.com.
Along with generic system testing, you can examine specific subsystems. This may be more useful in situations in which you suspect an issue, rather than for general knowledge or trend analysis. It's also useful if you just want to know the capabilities of your systems, since the tools you'll be shown focus more on what the system can do than on how it has been performing. The biggest barrier to testing subsystems is that many of the tools won't work from the hypervisor level. The subsystems we'll examine are disk, memory, and network.
Disk I/O testing
A tool commonly used for examining disk performance is IOMeter, which was created by Intel and is now an open source project. The official site and download are accessible at http://www.iometer.org. More recent but somewhat less stable versions are available at http://sourceforge.net/projects/iometer/. It runs on all current Windows operating systems including Hyper-V Server, and all non-Windows operating systems that can run as guests under Hyper-V. Unfortunately, it does not work directly with Cluster Shared Volumes or mapped network drives. You'll need to assign drive letters to them for testing. It can, of course, test iSCSI and Fibre Channel LUNs that have been assigned a drive letter in the local system. So, you can test a LUN by using a drive letter prior to moving it into Cluster Shared Volumes. You can also use Storage Live Migration to evacuate a CSV, remove it from Cluster Shared Volumes, and test it.
IOMeter is a stress-test tool. It will impact performance during use. If measuring a production system, consider doing so during a maintenance window.
To assign a drive letter to a CSV, you can use either DISKPART or SUBST. The DISKPART method will survive a reboot while the SUBST method will not. SUBST is a basic built-in command; use SUBST /? if you are unfamiliar with its usage. For DISKPART, perform the following steps:
- Execute DISKPART.EXE at an elevated command prompt.
- Type LIST VOLUME.
- Items with CSVFS in the Fs column are your CSVs. Note the volume number of the CSV that you want to test.
- Type SELECT VOLUME 3, substituting the number for the CSV you wish to test.
- Type ASSIGN LETTER J, substituting the desired drive letter. To reverse this later, use REMOVE LETTER J in the same fashion.
- Type EXIT to end your DISKPART session.
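The interactive steps above can also be scripted with DISKPART's /s switch, which is convenient if you test CSVs regularly. The volume number and drive letter below are placeholders for your environment:

```
rem assign-letter.txt -- run with: diskpart /s assign-letter.txt
rem Volume 3 and letter J are placeholders; check LIST VOLUME output first.
select volume 3
assign letter=J
exit
```

Reversing it is the same script with `remove letter=J` in place of the assign line.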
To measure the performance of an SMB share, use standard drive mappings:
NET USE J: \\SV-STORAGE\VMs
The previous steps are non-standard and unsupported for both CSVs and shared locations. Remove assigned drive letters and mappings once testing is complete.
IOMeter can also be used inside guests to test pass-through disks. You may also choose to use this tool inside a virtual machine to test shared storage. The results of this test will be subject to contention for the shared location, but it will give a fairly accurate idea of how well the particular machine can utilize that storage.
The following screenshot shows the screen you'll see when starting IOMeter:
The red line through the J: indicates that it needs to be prepared, which simply means that it is not a raw drive (because it has one or more partitions) and does not contain an iobw.tst file at its root. This file will be automatically created once the job starts. If undirected, this file will consume all available space on the drive. To keep it to a reasonable size, you can use the Maximum Disk Size field as a limiter. Unfortunately, that field is by sector, so you'll have to do some math to arrive at a reasonable size. For a 10 GB file on a standard 512 bytes-per-sector drive system, you would enter 20971520.
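The sector arithmetic is simple enough to script. This sketch, in Python purely for illustration, converts a desired test-file size into the sector count to enter in the Maximum Disk Size field:

```python
def sectors_for_size(size_gib, bytes_per_sector=512):
    """Convert a test-file size in GiB to the sector count that
    IOMeter's Maximum Disk Size field expects."""
    return (size_gib * 1024 ** 3) // bytes_per_sector

# A 10 GiB iobw.tst file on a standard 512-byte-sector drive:
print(sectors_for_size(10))          # 20971520
# The same file on a 4K-native (4096 bytes-per-sector) drive:
print(sectors_for_size(10, 4096))    # 2621440
```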
A clumsier, but effective method is to simply allow the file creation to proceed automatically. After a few minutes of Preparing Drives status, stop the job and exit the program. The file will be truncated at whatever size IOMeter reached when you cancelled its creation. When you start the program again, it will use that file at its current size. Of course, you may also use file generation software to create a file of any size you like; it should contain mixed bits (preferably random), must be named iobw.tst, and must be created in the root of the drive to be tested.
By default, two workers are generated. You only need to use one but you can add as many as you like. Each worker maintains its own disk targets and test criteria. You can have multiple workers targeting the same disk.
To set how your worker will test the file, switch to the Access Specifications tab. Predefined test types are in the right column. You can edit these entries to create whatever test you like, and a single worker can run multiple tests. The following screenshot shows an Edit Access Specification screen:
The final step to perform prior to starting your test is to configure the duration and test conditions. These are all found on the Test Setup tab and are self-explanatory.
IOMeter has the ability to save test configuration files using the relevant toolbar items and buttons. It is highly recommended that you save the configurations that you use so that you can make meaningful comparisons in the future.
Once you've set up the test as desired, click on the button with the green flag icon to start the test. You'll be asked for a location to save the results.csv file, which can be imported into Excel or any other application that can process comma-separated values. Switch to the results display screen if you'd like to watch the gatherer work. You'll need to set Update Frequency to something other than infinite in order to see the numbers change. You can also click on the > button at the end of each row to see a gauge display of that statistic. Click on the text button at the start of each row to change the statistic that it displays.
Practical IOMeter usage for disk analysis
The official site provides documentation for the tool that is very thorough. For all but the most simplistic testing, it is worth your time to read through it. One thing it contains that is not in this article is instructions on importing your results into Excel and using them to create a graph.
The prerelease software on SourceForge does not contain an installer. Remember to unblock the files in Windows Explorer or with the Unblock-File PowerShell cmdlet or they will not function. IOMeter requires elevated privileges to run.
For the most accurate results, use a raw disk that has not been partitioned or formatted. This will cause IOMeter to utilize the entire space of the disk or disk system for its testing, which can then include such factors as maximum read-head travel. If you cannot test the raw disk, use a very large file. Small files do not span enough of the disk to get an accurate understanding of the effects of read-head motion and will result in a high rate of cache hits. The more the cache is utilized, the more optimistic the results will be and will not reflect the real-world capabilities of the system.
For a worst-case scenario test, set Access Specifications to use small (512 or 4KB) chunks of 100 percent random access. Sequential operations of higher sizes will be somewhat more reflective of real-world scenarios, but remember that in a virtualization environment, disk access is scheduled across all virtual machines and will therefore be unpredictable. Most real-world workloads are heavier on reads than writes. 75 percent read is a good expectation for standard systems. IOMeter's documentation indicates using a 66 percent read pattern to simulate a database load. In reality, a database system's disk access will be as dependent on its usage scenario as any other system. Your application vendors may be able to help you design reasonable metrics. Use a varied combination of Access Specifications to simulate real-world loads.
Testing shared storage is dependent not only on the capabilities of the disk system but also on the connections to that disk system. In most cases, they won't present a meaningful bottleneck but keep their limitations in mind as you test.
If you are using automatic-tiered storage that will move heavily-accessed data to a faster disk, ensure that you consult with your manufacturer prior to running any disk metric tests. Otherwise, it may inappropriately place low-utilization data on high-speed storage in response to a stress test.
Make sure to save your test configuration. You will be unable to perform a proper comparative analysis without using the same criteria in other tests.
Network testing
Network testing is most useful at deployment time and when you suspect that there may be an issue in the hardware that is undetectable by other means. Due to their solid-state nature, network adapters and switching hardware have very fixed limits on performance that do not have the variances of spinning disks. We will look at two tools for network testing.
IOMeter for network testing
Since you've already got IOMeter for testing your disks, there's not much more to using it for testing a network connection. You'll need to install or place IOMeter's executables on another system. By setting that remote system as a target, you can test the network between them.
Before setting up IOMeter for network testing, make a firewall exception for IOMeter's traffic on the system running the IOMeter GUI. You can use the /p switch in the following commands to indicate exactly which port you wish to use; otherwise, a port above 53,000 will be selected at random.
To use IOMeter to test the network connection between two systems:
- On the remote system, open an elevated command prompt. Navigate to the folder that contains the IOMeter executables. Run dynamo.exe with the /i switch targeting the system that is running the IOMeter application and the /m switch naming the remote system itself. In this test environment, SV-HYPERV2 is running IOMeter and SV-HYPERV1 is the remote system. The command for that is as follows:
Dynamo.exe /i SV-HYPERV2 /m SV-HYPERV1
- It will indicate that it is attempting to log in to the target system and will show the port it is attempting to use.
- On the IOMeter system, either add a new manager or change the existing one. The managers are displayed in the Topology pane at the left. To add a manager, use the button with a computer as its icon. To change a manager, you have to click on its name and then after a brief pause, click on it again. If done properly, the manager name will become an editable text box. Change it to the name of the remote system. For our example, this would be SV-HYPERV1.
- With that manager selected, add a new Network Worker using the relevant toolbar button.
- Switch to the Network Targets tab. At this point, your screen should look similar to the following screenshot:
- Under the remote machine's entry in the Targets pane, check the adapters you wish to test.
- Configure, save, and execute the tests as you would for a disk target.
Practical IOMeter usage for network analysis
Unlike the disk scan, you won't get as much useful data out of changing the Access Specification metrics. What you want to watch are throughput (MBps) and latency. These will indicate whether your network speeds are as expected and whether there is high delay between the measured nodes. Another interesting metric to watch is CPU utilization. The efficiency of IOMeter itself is unknown, but this is a test of an application fully utilizing a network channel. By changing the enabled network options and re-running the test, you can see the effect that various offload technologies have (or do not have).
NTttcp for network testing
NTttcp is a Microsoft tool used for measuring network performance. It has been made available to the public through the Microsoft TechNet gallery. The version that was most recent as of this writing is available at http://gallery.technet.microsoft.com/NTttcp-Version-528-Now-f8b12769.
In contrast to IOMeter, this tool is command line only and can export results only to XML files. The benefit is that it is easier to run multiple simultaneous instances, which means that you can test using multiple ports. With multiple tests running on multiple ports, you can more accurately exercise a hashed load-balancing algorithm for a single virtual adapter on a network team.
NTttcp usage is straightforward. Instructions are provided directly on the download page.
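As an illustrative sketch only (verify the switches against the instructions on the download page, as they can change between versions), a basic throughput test starts a receiver on one node and a sender on the other. The thread count, duration, and XML file names below are arbitrary choices, and the -m mapping uses this article's example host names:

```powershell
# On the receiving system (SV-HYPERV1 in our examples): 8 threads, 60 seconds
.\ntttcp.exe -r -m 8,*,SV-HYPERV1 -t 60 -xml receiver.xml

# On the sending system, targeting the receiver by the same name
.\ntttcp.exe -s -m 8,*,SV-HYPERV1 -t 60 -xml sender.xml
```

Run several instances with different ports simultaneously if you want to observe how a teamed adapter distributes the streams.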
Memory testing
Testing memory for its performance capabilities is generally not a useful undertaking. Because memory is solid state and its performance characteristics are not dependent on external factors, it will behave in exactly the same way throughout its life. Only a faulty memory module will show variances. For actual performance testing, a number of benchmarking applications are commercially available.
Testing memory for faults is usually much more valuable than testing it for performance. Often, administrators will perform a burn-in procedure on new hardware. Such a procedure runs all portions of all memory modules through a repeated series of stress tests over a period of time. If any of the modules produce inconsistent results, it is a sign that the module is faulty and needs to be replaced. In general, memory modules will fail shortly after being placed into service or they will last a number of years. The burn-in process helps to single out those that will fail early, before they enter production.
A commonly used tool for testing memory is the Prime95 software. This tool's primary purpose is actually to use a distributed computing model to find Mersenne prime numbers. You can read about that endeavor and download the software at http://www.mersenne.org/freesoft/. You do not need to sign up for an account or participate in the search to use this software. If you will be running it from within a Windows Server or Hyper-V Server installation, use the 64-bit download. If you will be placing the file on a bootable CD or USB flash drive, use the 32-bit download.
To use it, simply extract, unblock, and run the Prime95.exe program from within Windows or a command prompt. When running the 64-bit GUI, you can use the Custom setting to specify how you want the utility to run, although the default stress test settings are adequate. There isn't a lot to see as the tests run: one child window is created for each master thread, and a line item is entered for each test as it runs. The following is a screenshot of the application running on Hyper-V Server 2012:
Creating a bootable CD or USB flash drive is beyond the scope of this article. Rather than going to the effort of making a special-use bootable disk, it is possible to boot from other bootable media and then replace it with media that contains the 32-bit Prime95.exe program. Many prebuilt bootable images are readily available; they are not specifically referenced in this text because very few are both free and directly supported by any manufacturer.
Baseline and comparative performance measures
Using the previous tools, conduct the tests of your systems that can be duplicated later. You'll want to know how they perform right out of the box and how they perform under normal load. Store these results as baseline measures. Then, if you or your users suspect that there are performance problems, you can run the exact same tests and compare them against the baselines. You will then have an easier time determining if there is a problem and where that problem may be.
As your deployment grows, these comparative measurements will help you determine how quickly your systems are expanding, which can help you plan well in advance for any scale-out operations.
Cluster load balancing
The next phase beyond node resource usage is resource usage across your cluster. As previously noted, an extra application layer, such as System Center Virtual Machine Manager, can perform this load balancing automatically. Without this software, you will need to perform load balancing manually. Also, in some cases, you may wish to exert manual control over specific virtual machines. For instance, if you are using the built-in high availability features of some applications, such as Microsoft SQL Server or Exchange Server, you may want to ensure that particular virtual machines never share the same host. You may also be using clustering with Hyper-V Server with the intent of keeping particular application servers logically arranged more than providing failover. For instance, the following is a planning diagram for a Microsoft Lync system running on a Hyper-V Server cluster:
A Lync frontend server makes very heavy use of CPU, and having two on the same physical host would probably overload it. The mediation servers are less demanding, so in this theoretical design, one mediation server and one frontend server have been paired on each of two hosts. The challenge is to provide automatic failover capabilities while ensuring that the two frontend servers are never on the same hardware. Preventing the mediation servers from sharing hardware isn't quite as important; but because the purpose of having two is to introduce high availability at the application level, keeping them apart is desirable. There are two features built into Hyper-V Server that address these issues.
Lync does not support Live Migration. This scenario represents a failover build in which the cluster service could revive a crashed guest on another host.
Preferred owners
By marking particular hosts as the preferred owners of a given virtual machine, you give those hosts top priority when a cluster-initiated move event occurs. Usually, this is in response to a host shutdown (including crashes) or a Drain Roles operation. It is not considered when an administrator manually initiates a Live Migration or Quick Migration. Preferred owners can be designated in the GUI or in PowerShell.
Setting preferred owners using Failover Cluster Manager
Select the Roles node in Failover Cluster Manager. In the center pane, right-click on the virtual machine whose settings you wish to change and click on Properties. In the properties dialog, check the boxes next to the host(s) you wish to designate as preferred for this particular virtual machine.
There is also a Preferred Owners link in the grey divider area between the top and bottom panes of Failover Cluster Manager as seen in the following screenshot:
Setting preferred owners using PowerShell
This can be a somewhat involved process, but it looks more difficult than it actually is. For example, the one-line method to set the preferred owners of the virtual machine named sv-spa to sv-hyperv1 and sv-hyperv3 is as follows:
Get-ClusterGroup -VMId ((Get-VM -Name "sv-spa").VMId) | Set-ClusterOwnerNode -Owners "sv-hyperv1", "sv-hyperv3"
The command has this depth because you actually set the owner property on the group of resources that the virtual machine belongs to. The group's other member is the configuration data for that guest. Together, these form a resource group that the cluster controls. Usually, this group has exactly the same name as the virtual machine, so you're probably safe just using:
Set-ClusterOwnerNode -Group "sv-spa" -Owners "sv-hyperv1", "sv-hyperv3"
Of course, there's no guarantee that the names are the same, so the previous longer command is safer. We'll examine that piece by piece:
First, it retrieves the identifier for the specific virtual machine with (Get-VM -Name "sv-spa").VMId. If the virtual machine were running on a different node, you would also have to use the ComputerName parameter. The retrieved VMId is then used in the VMId parameter of Get-ClusterGroup. If you execute that portion by itself, it will show you the group that contains the virtual machine in question. What we've done is use the pipeline to ship that group to the Set-ClusterOwnerNode cmdlet, where the preferred owners are set.
If you'd just like to see all the available groups, use Get-ClusterGroup with no parameters. These names are the same that are displayed in the Roles tree of Failover Cluster Manager. Any of these can be passed directly to Set-ClusterOwnerNode from any node in the cluster.
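To confirm a setting after the fact, the FailoverClusters module also includes a Get-ClusterOwnerNode cmdlet that accepts the same group names. A minimal sketch, assuming the group is named sv-spa as in the earlier examples:

```powershell
# Display the preferred owner list for the virtual machine's cluster group
Get-ClusterOwnerNode -Group "sv-spa"
```

The OwnerNodes column of the output reflects whatever you set with Set-ClusterOwnerNode.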
Possible owners
Possible owners go one step beyond preferred owners by setting which nodes are allowed to host a virtual machine at all. Any node not included cannot be a destination for that guest by any means, automatic or manual.
Setting possible owners using Failover Cluster Manager
To use Failover Cluster Manager to set Possible Owners:
- Click on the Roles node. In the lower section of the center pane, switch to the Resources tab.
- In the upper portion of this screen, under the Virtual Machine heading, right-click on the virtual machine object and click on Properties.
- In the Properties window, uncheck any server that you do not wish to host this virtual machine.
Setting possible owners using PowerShell
As with the Preferred Owners PowerShell setting, we need to go a little deeper into the system. These cmdlets look very similar to the preferred owners cmdlets:
Get-ClusterResource -VMId ((Get-VM -Name "sv-spa").VMId) | Set-ClusterOwnerNode -Owners "sv-hyperv1", "sv-hyperv3"
This time, instead of setting the owners on the overall group, we are setting it on the virtual machine resource itself. Unlike the group, the resource probably does not have the same name as the virtual machine, although it will probably be similar. A virtual machine resource created using the GUI tools will usually have a name such as Virtual Machine sv-spa. You can see these by executing Get-ClusterResource by itself or by looking in the GUI in the location mentioned for setting possible owners.
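If the full resource list is too noisy, you can filter Get-ClusterResource down to just the virtual machine resources. This sketch assumes the default "Virtual Machine" resource type name that the cluster assigns to Hyper-V guests:

```powershell
# List only the virtual machine resources in the cluster
Get-ClusterResource | Where-Object { $_.ResourceType -eq "Virtual Machine" }
```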
Anti-affinity
Where preferred owners and possible owners set restrictions on specific virtual machines in relation to specific nodes, the anti-affinity setting is applied to a group of virtual machines. When the cluster service takes automatic actions that move virtual machines, it will attempt to prevent members of the same anti-affinity group from being started on the same host. As with preferred owners, this setting is not guaranteed to be honored, and you can override it with manual migrations.
The property itself is a string array that is attached to the cluster resource group. String arrays are more commonly seen in programming than in systems administration. An array is a group and a string is a set of characters. A common example would be an array that contains colors. Such an array might just be named Colors and it would contain values such as blue, green, and red. Those values have meaning to you as a human but the computer just sees them as assorted characters. As you'll recall from Preferred Owners, the cluster resource group contains the virtual machine and its configuration information. So in order to set anti-affinity, you first need to come up with a name for an anti-affinity class. Then you need to insert that name into this array on the cluster resource group for the virtual machines you want to be members of that class. The reason that this field is an array and not simply a string value is so that you can assign the same virtual machine to multiple anti-affinity classes.
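If string arrays are unfamiliar, the colors example from the preceding paragraph looks like this in PowerShell (a generic illustration only, not a cluster command):

```powershell
# An array of three strings; the computer sees only characters, not colors
$Colors = @("blue", "green", "red")
$Colors += "yellow"   # += appends a value, just as it does for anti-affinity classes
$Colors               # displays all four values
```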
Despite the lengthy explanation, setting anti-affinity is very simple:
(Get-ClusterGroup "sv-lyncms1").AntiAffinityClassNames = "Lync Mediation Servers"
(Get-ClusterGroup "sv-lyncms2").AntiAffinityClassNames = "Lync Mediation Servers"
The previous commands set the two indicated cluster groups to the same anti-affinity class so that automated failovers will attempt to keep them separated. They assume that the cluster groups have the same names as the virtual machines. Refer to the Setting preferred owners using PowerShell section earlier to learn how to find the cluster group name for a virtual machine.
Of course, you can perform a similar operation for multiple groups on one line:
Get-ClusterGroup | Where-Object { $_.Name -like "*lyncms*" } | ForEach-Object { $_.AntiAffinityClassNames = "Lync Mediation Servers" }
You can then add a system into another anti-affinity class with the += operator:
(Get-ClusterGroup "sv-lyncfe1").AntiAffinityClassNames += "High CPU"
To view the classes that a virtual machine belongs to, read the property instead of setting it:
(Get-ClusterGroup "sv-lyncfe1").AntiAffinityClassNames
Remove the guest from all classes by setting an empty array:
(Get-ClusterGroup "sv-lyncms1").AntiAffinityClassNames = @()
Unfortunately, removing a single class is difficult. You can replace the entire array, specifying only the classes that you wish to remain:
(Get-ClusterGroup "sv-lyncfe1").AntiAffinityClassNames = @("Lync Front-End Servers", "High CPU")
Cluster.exe also contains switches for setting anti-affinity, but that module is deprecated and should no longer be used. System Center Virtual Machine Manager can modify this setting through Availability Sets.
Summary
In this article, we looked at performance monitoring and load balancing.
First, we learned how to use Server Performance Advisor and Performance Monitor to learn how hosts are functioning. We then learned about various tools to monitor the capabilities and stability of the core components of your systems.
When discussing performance, it is always important to remember that the goal is to achieve a level that provides a satisfactory experience for the system's users, not to achieve an arbitrary score on a benchmark. Provision and balance your resources wisely.
After performance monitoring, we showed how to set restrictions on your virtual machines to control which hosts they can operate on. This is an important tactic for keeping host systems properly load balanced.
Resources for Article:
- Insight into Hyper-V Storage [Article]
- Configuring Clusters in GlassFish [Article]
- So, what is Microsoft © Hyper-V server 2008 R2? [Article]
About the Author:
Eric Siron has over fifteen years of professional experience in the information technology field. Eric has architected solutions across the spectrum, from two-user home offices to thousand user enterprises. He began working with Microsoft Hyper-V Server, version R2 in 2010, and has focused on Microsoft virtualization technologies ever since. He is currently employed as a Senior Systems Administrator at The University of Iowa Hospitals and Clinics, in Iowa City, Iowa, and is a regular contributor to the Hyper-V blog hosted by Altaro Software.