
How-To Tutorials - Virtualization

115 Articles

Introduction to XenConvert

Packt
04 Sep 2013
3 min read
(For more resources related to this topic, see here.)

System requirements

Since XenConvert can only convert Windows-based hosts and installs on the host being converted, the requirements are essentially those of the source machine:

- Operating system: Windows XP, Windows Vista, Windows 7, Windows Server 2003 (SP1 or later), or Windows Server 2008 (R2)
- .NET Framework 4.0
- Disk space: 40 MB of free disk space
- XenServer version 6.0 or 6.1

Converting a physical machine to a virtual machine

Let's take a quick look at how to convert a physical machine to a virtual machine. First, we need to install XenConvert on the source physical machine; we can download it from http://www.citrix.com/downloads/xenserver/tools/conversion.html. Once the standard Windows installation process is complete, launch the XenConvert tool, but before that we need to prepare the host machine for the conversion. To know more about XenConvert, refer to the XenConvert guide at http://support.citrix.com/article/CTX135017.

Preparing the host machine

For best results, prepare the host machine as follows (a quick PowerShell check for the free-space and .NET prerequisites follows this list):

- Enable Windows Automount on Windows Server operating systems.
- Disable Windows Autoplay.
- Remove any virtualization software before performing a conversion.
- Ensure that adequate free space exists at the destination: approximately 101 percent of the used space of all source volumes.
- Remove any network interface teams; they are not applicable to a virtual machine.
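Most of these preparation points are manual, but two of them lend themselves to a quick look from PowerShell before launching the tool. The following is a minimal sketch, not part of the original walkthrough: it assumes the source volume is C: and uses the standard .NET 4.0 Full-profile registry key to detect the framework.

# Destination should have roughly 101 percent of the used space of the source volumes
$disk = Get-WmiObject Win32_LogicalDisk -Filter "DeviceID='C:'"
$usedGB = ($disk.Size - $disk.FreeSpace) / 1GB
"Used on C: {0:N1} GB; destination needs about {1:N1} GB free" -f $usedGB, ($usedGB * 1.01)

# .NET Framework 4.0 presence (True if the Full profile is installed)
Test-Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'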
We need to run the XenConvert tool on the host machine to start the physical-to-virtual conversion. We can convert the physical machine directly to our XenServer if the host is accessible. The other options are to convert to VHD, OVF, or vDisk, which can be imported into XenServer later; these options are more useful if we don't have enough disk space or connectivity to the XenServer. I chose XenServer and clicked on Next.

We can select multiple partitions to be included in the conversion, or select None from the drop-down menu in Source Volume to exclude a disk from the conversion. We can also increase or decrease the size of the new virtual partition to be allocated for this virtual machine. Click on Next.

We'll be asked to provide the details of the XenServer host. The hostname needs either an IP address or an FQDN of the XenServer; a username and password are standard login requirements. In the Workspace field, enter the path to the folder where XenConvert will store the intermediate OVF package it uses during the conversion process. Click on Next to select among the storage repositories found on the XenServer and continue to the last step, which presents a summary of the conversion. Soon after the conversion is completed, the new machine appears in XenCenter. We'll need to install XenServer Tools on this new virtual machine.

Summary

In this article we covered an advanced topic that explained the process of converting a physical Windows server to a virtual machine using XenConvert.

Resources for Article:

Further resources on this subject:
- Citrix XenApp Performance Essentials [Article]
- Defining alerts [Article]
- Publishing applications [Article]


Creating Custom Reports and Notifications for vSphere

Packt
24 Mar 2015
34 min read
In this article by Philip Sellers, the author of PowerCLI Cookbook, you will cover the following topics:

- Getting alerts from a vSphere environment
- Basics of formatting output from PowerShell objects
- Sending output to CSV and HTML
- Reporting VM objects created during a predefined time period from the VI Events object
- Setting custom properties to add useful context to your virtual machines
- Using PowerShell native capabilities to schedule scripts

(For more resources related to this topic, see here.)

This article is all about leveraging the information available to you in PowerCLI. As much as any other topic, figuring out how to tap into the data that PowerCLI offers is as important as understanding the cmdlets and syntax of the language. Once you obtain your data, you will need to control its formatting and how it is returned. PowerShell, and by extension PowerCLI, offers a rich set of ways to control the formatting and display of information returned by its cmdlets and data objects. You will explore all of these topics in this article.

Getting alerts from a vSphere environment

Discovering the data available to you is the most difficult thing you will learn and adopt in PowerCLI after the initial cmdlets and syntax. There is a large amount of data available through PowerCLI, and there are techniques to extract it in a usable way. The Get-Member cmdlet is a great tool for discovering the properties available to you. Sometimes, just listing the data returned by a cmdlet is enough; however, when a property contains other objects, Get-Member can provide the context to know, for example, that the Alarm property is a Managed Object Reference (MoRef) data type. As returned objects have properties that contain other objects, you can have multiple layers of data available to expose using PowerShell dot notation ($variable.property.property). The ExtensionData property found on most objects holds a lot of data and objects related to the primary object. Sometimes, the data found in a property is an object identifier that doesn't mean much to an administrator but represents an object in vSphere; in these cases, the Get-View cmdlet can take that identifier and return human-readable data. This topic will walk you through methods of accessing data and converting it into usable, human-readable form wherever needed so that you can leverage it in scripts.

To explore these methods, we will take a look at vSphere's built-in alert system. While PowerCLI has native cmdlets to report on the defined alarm states and actions, it doesn't have a native cmdlet to retrieve the triggered alarms on a particular object. To do this, you must get the datacenter, VMHost, VM, and other objects directly and look at data from the ExtensionData property.

Getting ready

To begin this topic, you will need a PowerCLI window and an active connection to vCenter. You should also check the vSphere Web Client or the vSphere Windows Client to see whether you have any active alarms and to know what to expect. If you do not have any active VM alarms, you can simulate an alarm condition using a utility such as HeavyLoad. For more information on generating an alarm, see the There's more... section of this topic.

How to do it…

In order to access data and convert it to usable, human-readable data, perform the following steps:

1. The first step is to retrieve all of the VMs on the system. A simple Get-VM cmdlet will return all VMs on the vCenter you're connected to.
2. Within the VM object returned by Get-VM, one of the properties is ExtensionData. This property is an object that contains many additional properties and objects, one of which is TriggeredAlarmState:

Get-VM | Where {$_.ExtensionData.TriggeredAlarmState -ne $null}

3. To dig into TriggeredAlarmState more, take the output of the previous cmdlet and store it in a variable. This will allow you to enumerate the properties without having to wait for the Get-VM cmdlet to run. Add a Select -First 1 cmdlet to the command string so that only a single object is returned; this will help you look inside without having to deal with multiple VMs in the variable:

$alarms = Get-VM | Where {$_.ExtensionData.TriggeredAlarmState -ne $null} | Select -First 1

4. Now that you have extracted an alarm, how do you get useful data about what type of alarm it is and which vSphere object has a problem? In this case, you have VM objects, since you used Get-VM to find the alarms. To see what is in the TriggeredAlarmState property, output its contents and pipe them to Get-Member, or its alias GM:

$alarms.ExtensionData.TriggeredAlarmState | GM

5. List the data in the $alarms variable without the Get-Member cmdlet appended and view the data in a real alarm. The data returned tells you the time when the alarm was triggered, the OverallStatus property (the severity of the alarm), and whether the alarm has been acknowledged by an administrator, by whom, and at what time. You will see that the Entity property contains a reference to a virtual machine. You can use the Get-View cmdlet on a reference to a VM, in this case the Entity property, and return the virtual machine name and other properties. Alarm is referenced in a similar way, and usable information can be extracted with Get-View as well:

Get-View $alarms.ExtensionData.TriggeredAlarmState.Entity
Get-View $alarms.ExtensionData.TriggeredAlarmState.Alarm

6. You can see how the output from these two views differs. The Entity view provides the name of the VM. You don't really need this data, since the top-level object contains the VM name, but it's good to understand how to use Get-View with an entity. The data returned by the Alarm view, on the other hand, does not show the name or type of the alarm, but it does include an Info property. Since this is the most likely property to hold additional information, list its contents. To do so, enclose the Get-View cmdlet in parentheses and use dot notation to access the Info property:

(Get-View $alarms.ExtensionData.TriggeredAlarmState.Alarm).Info

7. In the output from the Info property, you can see that the example alarm here is a Virtual Machine CPU usage alarm. Your alarm may differ, but it should appear similar to this.

8. After retrieving PowerShell objects that contain the data you need, the easiest way to return that data is to use calculated expressions. Since the Get-VM cmdlet was the source of all lookup data, you will use this object with calculated expressions to display the data. To do this, append a Select statement after the Get-VM and Where statements. Notice that you use the same Get-View statement, except that the variable changes to $_, the current object being passed into Select:

Get-VM | Where {$_.ExtensionData.TriggeredAlarmState -ne $null} |
  Select Name,
  @{N="AlarmName";E={(Get-View $_.ExtensionData.TriggeredAlarmState.Alarm).Info.Name}},
  @{N="AlarmDescription";E={(Get-View $_.ExtensionData.TriggeredAlarmState.Alarm).Info.Description}},
  @{N="TimeTriggered";E={$_.ExtensionData.TriggeredAlarmState.Time}},
  @{N="AlarmOverallStatus";E={$_.ExtensionData.TriggeredAlarmState.OverallStatus}}

How it works…

When the data you need is several levels below the top-level properties of a data object, you use calculated expressions to return it at the top level. There are other techniques where you build your own object with only the data you want returned, but in a large environment with thousands of objects in vSphere, the method in this topic executes faster than looping through many objects to build a custom object. Calculated expressions are extremely powerful, since nearly anything can be done within an expression. More than that, you explored techniques to discover the data you want; data exploration can provide you with incredible new capabilities. The point is that you need to know where the data is and how to pull it back to the top level.

There's more…

It is likely that your test environment has no alarms, in which case it might be up to you to create an alarm situation. One of the easiest conditions to create and control is heavy CPU load, using a CPU load-testing tool. JAM Software's HeavyLoad is a stress-testing utility. It can be loaded into any Windows VM on your test systems and can consume all of the CPU the VM is configured with. To be safe, configure the VM with a single vCPU, and the utility will consume all of the available CPU. Once you install the utility, go to the Test Options menu, uncheck the Stress GPU option, and ensure that Stress CPU and Allocate Memory are checked. The utility also has shortcut buttons on the menu bar for these options. Click on the Start button (which looks like a Play button) and the utility begins to stress the VM. For users who wish to run the same test on Linux, StressLinux is a great option: a minimal distribution designed to create high load on an operating system.

See also

- The HeavyLoad utility, on the JAM Software page at http://www.jam-software.com/heavyload/
- StressLinux, at http://www.stresslinux.org/sl/
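As the introduction noted, triggered alarms hang off the ExtensionData property of hosts and other inventory objects too, not just VMs. A minimal sketch, assuming an active vCenter connection, simply swaps Get-VM for Get-VMHost:

# Report triggered alarms per ESXi host using the same pattern as step 8
# (note: a host with several triggered alarms returns arrays in these columns)
Get-VMHost | Where {$_.ExtensionData.TriggeredAlarmState -ne $null} |
  Select Name,
  @{N="AlarmName";E={(Get-View $_.ExtensionData.TriggeredAlarmState.Alarm).Info.Name}},
  @{N="TimeTriggered";E={$_.ExtensionData.TriggeredAlarmState.Time}}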
Basics of formatting output from PowerShell objects

Anything that exists in a PowerShell object can be output as a report, an e-mail, or an editable file. Formatting the output is a simple task in PowerShell. Sometimes, the information you receive in the object is a long decimal number; to make it more readable, you may want to truncate the output to just a couple of decimal places. In this section, you will take a look at the Format-Table, Format-Wide, and Format-List cmdlets, dig into the Format-Custom cmdlet, and also look at the -f format operator. The truth is that native cmdlets do a great job of returning data using default formatting. When we start changing and adding our own data to the list of properties returned, the formatting can become suboptimal; even in the values returned by a native cmdlet, some columns might be too narrow to display all of the information.

Getting ready

To begin this topic, you will need the PowerShell ISE.

How to do it…

In order to format the output from PowerShell objects, perform the following steps:

1. Run Add-PSSnapIn VMware.VimAutomation.Core in the PowerShell ISE to initialize a PowerCLI session and bring in the VMware cmdlets. Connect to your vCenter server.

2. Start with a simple object from a Get-VM cmdlet. The default output is in a table format. If you pipe the object to Format-Wide, it changes the default output into a multicolumn listing of a single property, just like running a dir /w command at the Windows Command Prompt. You can also use FW, an alias for Format-Wide:

Get-VM | Format-Wide
Get-VM | FW

3. If you take the same object and pipe it to Format-Table, or its alias FT, you will receive the same output as the default for Get-VM:

Get-VM
Get-VM | Format-Table

4. However, as soon as you begin to select a different set of properties, the default formatting disappears. Select four properties and watch the formatting change:

Get-VM | Select Name, PowerState, NumCPU, MemoryGB | FT

5. To restore formatting to table output, you have a few choices. You can change the formatting of the data in the object using a Select statement and calculated expressions, or you can pass formatting through the Format-Table cmdlet. While setting formatting in the Select statement changes the underlying data, Format-Table doesn't change the data, only its display. The formatting looks essentially like a calculated expression in a Select statement: you provide Label, Expression, and formatting directives:

Get-VM | Select * | FT Name, PowerState, NumCPU, @{Label="MemoryGB"; Expression={$_.MemoryGB}; FormatString="N2"; Alignment="left"}

6. If you have data in a number data type, you can convert it into a string using the ToString() method on the object. You can try this method on NumCPU:

Get-VM | Select * | FT Name, PowerState, @{Label="Num CPUs"; Expression={($_.NumCpu).ToString()}; Alignment="left"}, @{Label="MemoryGB"; Expression={$_.MemoryGB}; FormatString="N2"; Alignment="left"}

7. The other method is to format with the -f operator, which is essentially a .NET derivative. The structure of the format string is {<index>[,<alignment>][:<formatString>]}. Index selects which of the values being passed is transformed. Alignment is a numeric value: a positive number right-aligns the value within that many characters, and a negative number left-aligns it. The formatString part defines the format to apply. In this example, take a datastore and compute the percentage of free disk space; the format specifier for percent is p:

Get-Datastore | Select Name, @{N="FreePercent";E={"{0:p}" -f ($_.FreeSpaceGB / $_.CapacityGB)}}

8. To make the FreePercent column 15 characters wide, change the format string to 0,15:p:

Get-Datastore | Select Name, @{N="FreePercent";E={"{0,15:p}" -f ($_.FreeSpaceGB / $_.CapacityGB)}}
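Because -f is plain .NET composite formatting, it can be tested in isolation at any PowerShell prompt before being embedded in a calculated expression. A quick sketch with made-up values:

# {index[,alignment][:formatString]}: index 0 and 1 pick values from the right side
"{0,-12} {1,8:n2}" -f "Datastore01", 512.6666   # name left-aligned, number as N2
"{0,15:p}" -f (250GB / 1TB)                     # percent format, padded to 15 characters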
How it works…

With the Format-Table, Format-List, and Format-Wide cmdlets, you can change the display of data coming from a PowerCLI object. These cmdlets all apply basic transformations without changing the data in the object. This is important to note, because once the data is changed, further changes can be blocked. For instance, in the percentage example, after transforming the FreePercent property, it is stored as a string and no longer as a number, which means that you can't reformat it again. Applying a similar transformation from the Format-Table cmdlet would not alter your data. This doesn't matter when you're writing a one-liner, but in a more complex script or routine, where you might need to not only output the data but also reuse it, changing the data in the object is a big deal.

There's more…

This topic only begins to tap the full potential of PowerShell's native -f format operator. There are hundreds of blog posts on this topic, with use cases and examples of how to produce the formatting you are looking for. The following link gives you more details about the operator and the formatting strings that you can use in your own code.

See also

- The PowerShell -f format operator page, available at http://ss64.com/ps/syntax-f-operator.html

Sending output to CSV and HTML

On-screen output is great, but there are many times when you need to share your results with other people. When sharing information, you want to choose a format that is easy to view and interpret; you might also want a format that is easy to manipulate and change. Comma Separated Values (CSV) files let the recipient take the output you generate and use it easily within spreadsheet software. This makes it easy to compare the results from vSphere against internal tracking databases or other systems to find differences; it can also be useful for comparing against service contracts for physical hosts, as an example. HTML is a great choice for displaying information for reading, but not for manipulation. Since e-mails can be in HTML format, converting the output from PowerCLI (or PowerShell) into an e-mail is an easy way to assemble a report for other areas of the business. What's even better about these cmdlets is their ease of use: if you have a data object in PowerCLI, all you need to do is pipe it into the ConvertTo-CSV or ConvertTo-HTML cmdlets and you instantly get the formatted data. You might not be satisfied with the generated HTML alone, but like any other HTML, you can transform its look and formatting with CSS by adding a header.

In this topic, you will examine the conversion cmdlets with a simple set of Get- cmdlets. You will also look at trimming results using Select statements and formatting HTML results with CSS. The topic pulls a list of virtual machines and their basic properties to send to a manager, who can reconcile it against internal records or system monitoring. It exports a CSV file that will be attached to the e-mail, and uses HTML to format a list in the body of the e-mail.

Getting ready

To begin this topic, you will need to use the PowerShell ISE.

How to do it…

In order to examine the conversion cmdlets using Get- cmdlets, trim results using Select statements, and format HTML results with CSS, perform the following steps:

1. Open the PowerShell ISE and run Add-PSSnapIn VMware.VimAutomation.Core to initialize a PowerCLI session within the ISE.

2. Again, you will use the Get-VM cmdlet as the base for this topic. The fields that we care about are the name of the VM, the number of CPUs, the amount of memory, and the description:

$VMs = Get-VM | Select Name, NumCPU, MemoryGB, Description

3. In addition to the top-level data, you also want to provide the IP address, hostname, and the operating system.
These are all available from the ExtensionData.Guest property:

$VMs = Get-VM | Select Name, NumCPU, MemoryGB, Description, @{N="Hostname";E={$_.ExtensionData.Guest.HostName}}, @{N="IP";E={$_.ExtensionData.Guest.IPAddress}}, @{N="OS";E={$_.ExtensionData.Guest.GuestFullName}}

4. The next step is to take this data and format it to be sent as an HTML e-mail. Converting the information to HTML is easy: pipe the variable holding the data into ConvertTo-HTML and store the result in a new variable. You will need to reuse the original data to convert it into a CSV file to attach:

$HTMLBody = $VMs | ConvertTo-HTML

5. If you output the contents of $HTMLBody, you will see that it is very plain, inheriting the defaults of the browser or e-mail program used to display it. To dress this up, define some basic CSS to add style for the <body>, <table>, <tr>, <td>, and <th> tags. You can add this by running the ConvertTo-HTML cmdlet again with the -PreContent parameter:

$css = "<style> body { font-family: Verdana, sans-serif; font-size: 14px; color: #666; background: #FFF; } table { width:100%; border-collapse:collapse; } table td, table th { border:1px solid #333; padding: 4px; } table th { text-align:left; padding: 4px; background-color:#BBB; color:#FFF; } </style>"
$HTMLBody = $VMs | ConvertTo-HTML -PreContent $css

6. It might also be nice to append the date and time the report was generated. You can use the -PostContent parameter to add this:

$HTMLBody = $VMs | ConvertTo-HTML -PreContent $css -PostContent "<div><strong>Generated:</strong> $(Get-Date)</div>"

7. Now you have the HTML body of your message. To take the same data from $VMs and save it to a CSV file, you will need a writable directory; a good choice is your My Documents folder, which you can obtain using an environment lookup:

$tempdir = [environment]::getfolderpath("mydocuments")

8. Now that you have a working directory, perform the export. Pipe $VMs to Export-CSV and specify the path and filename:

$VMs | Export-CSV "$tempdir\VM_Inventory.csv"

9. At this point, you are ready to assemble the e-mail and send it along with your attachment. Most of the cmdlets are straightforward. Set up a $msg variable holding a MailMessage object, create an Attachment object populated with the temporary filename, and create an SMTP client object with the mail server name:

$msg = New-Object Net.Mail.MailMessage
$attachment = New-Object Net.Mail.Attachment("$tempdir\VM_Inventory.csv")
$smtpServer = New-Object Net.Mail.SmtpClient("hostname")

10. Set the From, To, and Subject parameters of the message. All of these are set with dot notation on the $msg variable:

$msg.From = "fromaddress@yourcompany.com"
$msg.To.Add("admin@yourcompany.com")
$msg.Subject = "Weekly VM Report"

11. Set the body you created earlier as $HTMLBody, running it through Out-String to convert any other data types into a pure string for e-mailing. This prevents an error where System.String[] appears in parts of the output instead of your content:

$msg.Body = $HTMLBody | Out-String

12. Add the attachment to the message:

$msg.Attachments.Add($attachment)

13. Set the message to HTML format; otherwise, the HTML will be sent as plain text rather than displayed as an HTML message:

$msg.IsBodyHtml = $true

14. Finally, you are ready to send the message using the $smtpServer variable that contains the mail server object.
Pass the $msg variable to the server object's Send method and it transmits the message via SMTP to the mail server:

$smtpServer.Send($msg)

15. Don't forget to clean up the temporary CSV file you generated. Use the PowerShell Remove-Item cmdlet to remove the file from the filesystem, adding -Confirm:$false to suppress any prompts:

Remove-Item "$tempdir\VM_Inventory.csv" -Confirm:$false

How it works…

Most of this topic relies on native PowerShell and less on the PowerCLI portions of the language. This is the beauty of PowerCLI: since it is an extension of PowerShell, you lose none of the functionality of PowerShell, a very powerful set of commands in its own right.

The ConvertTo-HTML cmdlet is very easy to use. It requires no parameters to produce HTML, but the HTML it produces isn't the most legible when displayed. A bit of CSS goes a long way toward improving the look of the output; add some colors and style to the table and it becomes a quick, easy way to format a data-bearing mail message to be sent to a manager on a weekly basis.

The Export-CSV cmdlet lets you take the data returned by a cmdlet and convert it into an editable format. You can place the file on a file share, or e-mail it along as you did in this topic.

This topic took you step by step through creating a mail message, formatting it in HTML, making sure it is relayed as an HTML message, and attaching a file. To send a mail, you define a mail server as an object and store it in a variable for reuse, create a message object in a variable, and set all of the appropriate configuration on the message. For an attachment, you create a third object defining the file to be attached, set it as a property on the message object, and finally send the message object using the server object.

There's more…

ConvertTo-HTML is just one of four conversion cmdlets in PowerShell. In addition to ConvertTo-HTML, ConvertTo-XML converts data objects into XML, and ConvertTo-JSON converts a data object into JSON, a text format widely used by web applications. ConvertTo-CSV is identical to Export-CSV except that it doesn't save the content immediately to a defined file; if you have a use case to manipulate the CSV before saving it, such as stripping the double quotes or making other alterations to the contents, you can use ConvertTo-CSV and save the result to a file at a later point in your script.
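Assembled into one routine, the report fits on a single screen. The following is a sketch rather than a drop-in script: the SMTP host mail.yourcompany.com and the addresses are placeholders, $css is the style block from step 5, and the try/finally wrapper is an addition to the steps above that releases the attachment's file handle and removes the temporary CSV even if the send fails:

$VMs = Get-VM | Select Name, NumCPU, MemoryGB, Description
$tempdir = [environment]::getfolderpath("mydocuments")
$VMs | Export-CSV "$tempdir\VM_Inventory.csv"
$msg = New-Object Net.Mail.MailMessage
$msg.From = "fromaddress@yourcompany.com"
$msg.To.Add("admin@yourcompany.com")
$msg.Subject = "Weekly VM Report"
$msg.Body = ($VMs | ConvertTo-HTML -PreContent $css | Out-String)
$msg.IsBodyHtml = $true
$attachment = New-Object Net.Mail.Attachment("$tempdir\VM_Inventory.csv")
$msg.Attachments.Add($attachment)
try {
    $smtpServer = New-Object Net.Mail.SmtpClient("mail.yourcompany.com")
    $smtpServer.Send($msg)
}
finally {
    $attachment.Dispose()   # release the file lock before deleting
    Remove-Item "$tempdir\VM_Inventory.csv" -Confirm:$false
}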
Reporting VM objects created during a predefined time period from the VI Events object

An important auditing tool in your environment can be a report of when virtual machines were created, cloned, or deleted. Unlike snapshots, which store a created date on the snapshot, virtual machines don't have such a property associated with them. Instead, you have to rely on the events log in vSphere to tell you when virtual machines were created. PowerCLI has the Get-VIEvent cmdlet, which by default retrieves the last 1,000 events on the vCenter. The cmdlet can accept a parameter to include more than the last 1,000 events, and it also allows you to specify a start date, which lets you search within the past week or the past month. At a high level, this topic works the same in both PowerCLI and the vSphere SDK for Perl (VIPerl): both rely on getting the vSphere events and selecting the specific events that match your criteria. Even though you are looking for VM creation events in this topic, you will see that the code can be easily adapted to look for many other types of events.

Getting ready

To begin this topic, you will need a PowerCLI window and an active connection to a vCenter server.

How to do it…

In order to report VM objects created during a predefined time period from the VI Events object, perform the following steps:

1. You will use the Get-VIEvent cmdlet to retrieve the VM creation events for this topic. To begin, get a list of the last 50 events from the vCenter host using the -MaxSamples parameter:

Get-VIEvent -MaxSamples 50

2. If you pipe the output from the preceding cmdlet to Get-Member, you will see that this cmdlet can return many different object types; however, the object type isn't directly usable to find the VM created events. Looking through the objects, they all include a GetType() method that returns the type of the event, and inside the type there is a Name property. Create a calculated expression using GetType() and group on it, and you will get a usable list of events you can search for. This list is also good for tracking the number of events your systems have encountered or generated:

Get-VIEvent -MaxSamples 2000 | Select @{N="Type";E={$_.GetType().Name}} | Group Type

3. In the output, you will see VmClonedEvent, VmRemovedEvent, and VmCreatedEvent listed. All of these have to do with creating or removing virtual machines in vSphere. Since you are looking for created events, VmCreatedEvent and VmClonedEvent are the two needed for this script. Write a Where statement to return only these events, using a regular expression containing both event names together with the -match PowerShell comparison operator:

Get-VIEvent -MaxSamples 2000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"}

4. Next, select just the properties that you want in your output. Add a Select statement reusing the calculated expression from step 2. The VM name lives in the Vm property (of type VMware.Vim.VmEventArgument), so create a calculated expression to return it. To round out the output, include the FullFormattedMessage, CreatedTime, and UserName properties:

Get-VIEvent -MaxSamples 2000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"} | Select @{N="Type";E={$_.GetType().Name}}, @{N="VMName";E={$_.Vm.Name}}, FullFormattedMessage, CreatedTime, UserName

5. The last thing to do is to add a time frame to the Get-VIEvent cmdlet. Specify the -Start parameter along with (Get-Date).AddMonths(-1) to return the last month's events:

Get-VIEvent -Start (Get-Date).AddMonths(-1) -MaxSamples 2000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"} | Select @{N="Type";E={$_.GetType().Name}}, @{N="VMName";E={$_.Vm.Name}}, FullFormattedMessage, CreatedTime, UserName
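Because the result of step 5 is an ordinary object collection, it plugs straight into the export techniques from the previous topic. A small sketch, writing the audit report to a CSV in My Documents:

# Export last month's VM creation/clone events for auditing
$report = Get-VIEvent -Start (Get-Date).AddMonths(-1) -MaxSamples 2000 |
    Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"} |
    Select @{N="VMName";E={$_.Vm.Name}}, CreatedTime, UserName
$tempdir = [environment]::getfolderpath("mydocuments")
$report | Export-CSV "$tempdir\VM_Created_Report.csv"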
How it works…

The Get-VIEvent cmdlet drives the majority of this topic, yet you have only scratched the surface of the information you can unearth with it. As the grouped output showed, there are many types of events that can be reported, queried, and acted upon from the vCenter server. Once you know which events you are looking for, it's a matter of scoping down the results with a Where statement. Last, you use calculated expressions to pull data that is several levels deep in the returned data object. One of the primary things employed here is a regular expression used to search for the types of events you were interested in: VmCreatedEvent and VmClonedEvent. By combining a regular expression with the -match operator, you were able to use a quick and very understandable bit of code to find more than one type of object to return.

There's more…

Regular Expressions (RegEx) are a big topic of their own. These searches can match any pattern you can establish or, as in this topic, a number of defined values that you are searching for. RegEx is beyond the scope of this article, but it can be a big help any time you have a pattern to search for and match or, perhaps more importantly, replace: you can use the -replace operator instead of -match not only to find things that match your pattern but also to change them.

See also

- For more information on Regular Expressions, refer to http://ss64.com/ps/syntax-regex.html
- The PowerShell.com page Text and Regular Expressions is available at http://powershell.com/cs/blogs/ebookv2/archive/2012/03/20/chapter-13-text-and-regular-expressions.aspx

Setting custom properties to add useful context to your virtual machines

Building on the use case for the Get-VIEvent cmdlet, Alan Renouf of VMware's PowerCLI team has a useful script posted on his personal blog (refer to the See also section) that pulls the created date and the user who created a virtual machine and populates them into a custom attribute. This is a great use for a custom attribute on virtual machines, making useful information available that is not normally visible. This is a process that needs to run regularly to pick up details for newly created virtual machines. Rather than looking at a specific VM and trying to work back to its creation date, as Alan's script does, this topic takes a different approach, building on the previous topic and populating the information from the creation events that were found. Maintenance in this form is easier: find the creation events for the last week, run the script weekly, and update the VMs with the data in the object, rather than looking for VMs with missing data and searching through all of the events.

This topic assumes that you are using a Windows system joined to AD on the same domain as your vCenter. It also assumes that you have loaded the Remote Server Administration Tools for Windows so that the Active Directory PowerShell modules are available; this is a separate download for Windows 7. The Active Directory Module for PowerShell can be enabled on Windows 7, Windows 8, Windows Server 2008, and Windows Server 2012 in the Programs and Features control panel under Turn Windows features on or off.

Getting ready

To begin this script, you will need the PowerShell ISE.

How to do it…

In order to set custom properties to add useful context to your virtual machines, perform the following steps:

1. Open the PowerShell ISE and run Add-PSSnapIn VMware.VimAutomation.Core to initialize a PowerCLI session within the ISE.
2. The first step is to create the custom attributes in vCenter for the CreatedBy and CreateDate values:

New-CustomAttribute -TargetType VirtualMachine -Name CreatedBy
New-CustomAttribute -TargetType VirtualMachine -Name CreateDate

3. Before you begin scripting, run ImportSystemModules to bring in the Active Directory cmdlets that you will use later to look up the username and resolve it to a display name:

ImportSystemModules

4. Next, locate and pull out all of the creation events with the same code as in the Reporting VM objects created during a predefined time period from VI Events object topic. Assign the events to a variable for processing in a loop, and change the period to one week (7 days) instead of one month:

$Events = Get-VIEvent -Start (Get-Date).AddDays(-7) -MaxSamples 25000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"}

5. The next step is to begin a ForEach loop to pull the data and populate it into the custom attributes:

ForEach ($Event in $Events) {

6. The first thing to do in the loop is to look up the VM referenced by name in the event's Vm property, using Get-VM:

$VM = Get-VM -Name $Event.Vm.Name

7. Next, take the CreatedTime property of the event and set it as a custom attribute on the VM using the Set-Annotation cmdlet:

$VM | Set-Annotation -CustomAttribute "CreateDate" -Value $Event.CreatedTime

8. Next, use the UserName property to look up the display name of the user account that created the VM, using the Active Directory cmdlets. (For these to be available, your client system or server needs the Microsoft Remote Server Administration Tools (RSAT) installed.) The data coming from $Event.UserName is in DOMAIN\username format. You need just the username to perform a lookup with Get-AdUser, so split on the backslash and take the second item of the resulting array. After the lookup, the display name you want is in the Name property, retrievable with dot notation:

$User = ($Event.UserName.Split("\"))[1]
$DisplayName = (Get-AdUser $User).Name

9. Set this display name as a custom attribute on the VM, again using the Set-Annotation cmdlet:

$VM | Set-Annotation -CustomAttribute "CreatedBy" -Value $DisplayName

10. Finally, close the ForEach loop:

} <# End ForEach #>
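Putting the steps together, a minimal consolidated sketch of the routine looks like this, assuming an active vCenter connection, the RSAT Active Directory cmdlets, and the two custom attributes created in step 2:

# Stamp CreateDate and CreatedBy on VMs created or cloned in the last 7 days
$Events = Get-VIEvent -Start (Get-Date).AddDays(-7) -MaxSamples 25000 |
    Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"}
ForEach ($Event in $Events) {
    $VM = Get-VM -Name $Event.Vm.Name
    $VM | Set-Annotation -CustomAttribute "CreateDate" -Value $Event.CreatedTime
    $User = ($Event.UserName.Split("\"))[1]   # DOMAIN\username -> username
    $DisplayName = (Get-AdUser $User).Name
    $VM | Set-Annotation -CustomAttribute "CreatedBy" -Value $DisplayName
}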
How it works…

This topic works by leveraging the Get-VIEvent cmdlet to search for events in the log from the last several days. In larger environments, you might need to expand -MaxSamples well beyond the number in this example; there might be thousands of events per day. The script looks through the log, and the Where statement returns only the creation events. Once you have the object with all of the creation events, you loop through it and pull out the username of the person who created each virtual machine and the time it was created. Then you populate the data into the custom attributes created earlier.

There's more…

Combine this script with the next topic and you have a great solution for scheduling this routine to run on a daily basis. Running it daily would certainly cut down on the number of events you need to process to find and update the newly created virtual machines. You should absolutely go and read Alan Renouf's blog post, on which this topic is based. The primary difference between this topic and the one Alan presents is the use of native Windows Active Directory PowerShell lookups instead of the Quest Active Directory PowerShell cmdlets.

See also

- Virtu-Al.net: Who created that VM? is available at http://www.virtu-al.net/2010/02/23/who-created-that-vm/

Using PowerShell native capabilities to schedule scripts

There is potentially a better and easier way to schedule your processes to run from PowerShell and PowerCLI: Scheduled Jobs. Scheduled Jobs were introduced in PowerShell 3.0 and distributed as part of the Windows Management Framework 3.0 and higher. While Scheduled Tasks can execute any Windows batch file or executable, Scheduled Jobs are specific to PowerShell and are used to create background jobs that run once or on a specified schedule. Scheduled Jobs appear in the Windows Task Scheduler and can be managed with the scheduled job cmdlets of PowerShell; the only difference is that the scheduled job cmdlets cannot manage ordinary scheduled tasks. These jobs are stored in the Microsoft\Windows\PowerShell\ScheduledJobs path of the Windows Task Scheduler, and you can see and edit them through the management console in Windows after creation. What's even better about Scheduled Jobs in PowerShell is that you are not forced to create a .ps1 file for every new job you need to run: if you have a PowerCLI one-liner that provides all of the functionality you need, you can simply include it in the job creation cmdlet without ever needing to save it anywhere.

Getting ready

To begin this topic, you will need a PowerCLI window with an active connection to a vCenter server.

How to do it…

In order to schedule scripts using the native capabilities of PowerShell, perform the following steps:

1. If you are running PowerCLI on a system older than Windows 8 or Windows Server 2012, there is a chance that you are running PowerShell 2.0, and you will need to upgrade in order to use this. To check, inspect $PSVersionTable.PSVersion to see which version is installed on your system; if it is less than 3.0, upgrade before continuing this topic.

2. Bring back a script you have already written: the script to find and remove snapshots over 30 days old:

Get-Snapshot -VM * | Where {$_.Created -LT (Get-Date).AddDays(-30)} | Remove-Snapshot -Confirm:$false

3. To schedule a new job, the first thing you need to think about is what triggers your job to run. To define a new trigger, use the New-JobTrigger cmdlet:

$WeeklySundayAt6AM = New-JobTrigger -Weekly -At "6:00 AM" -DaysOfWeek Sunday -WeeksInterval 1

4. Like scheduled tasks, scheduled jobs have options that can be set, including whether to wake the system to run:

$Options = New-ScheduledJobOption -WakeToRun -StartIfIdle -MultipleInstancePolicy Queue

5. Next, use the Register-ScheduledJob cmdlet. This cmdlet accepts a parameter named -ScriptBlock, and this is where you specify the script that you have written. This method works best with one-liners, or scripts that execute as a single line of piped cmdlets. Since this is PowerCLI and not just PowerShell, you will need to add the VMware cmdlets and connect to vCenter at the beginning of the script block.
You also need to specify the -Trigger and -ScheduledJobOption parameters that were defined in the previous two steps:

Register-ScheduledJob -Name "Cleanup 30 Day Snapshots" -ScriptBlock { Add-PSSnapIn VMware.VimAutomation.Core; Connect-VIServer server; Get-Snapshot -VM * | Where {$_.Created -LT (Get-Date).AddDays(-30)} | Remove-Snapshot -Confirm:$false } -Trigger $WeeklySundayAt6AM -ScheduledJobOption $Options

6. You are not restricted to only running a script block. If you have a routine in a .ps1 file, you can easily run it from a scheduled job as well. For illustration, if you have a .ps1 file stored in c:\Scripts named 30DaySnaps.ps1, you can use the following cmdlet to register a job:

Register-ScheduledJob -Name "Cleanup 30 Day Snapshots" -FilePath c:\Scripts\30DaySnaps.ps1 -Trigger $WeeklySundayAt6AM -ScheduledJobOption $Options

7. Rather than defining all of the PowerShell inside the scheduled job, a more maintainable method can be to write a module and then call its function from the scheduled job. One other requirement is that Single Sign-On should be configured so that Connect-VIServer works correctly in the script:

Register-ScheduledJob -Name "Cleanup 30 Day Snapshots" -ScriptBlock { Add-PSSnapIn VMware.VimAutomation.Core; Connect-VIServer server; Import-Module 30DaySnaps; Remove-30DaySnaps -VM * } -Trigger $WeeklySundayAt6AM -ScheduledJobOption $Options
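Once registered, the job can be confirmed and managed with the same scheduled job cmdlet family. A quick sketch, using the job name from the examples above:

# Confirm the job exists and inspect its trigger
Get-ScheduledJob -Name "Cleanup 30 Day Snapshots" | Get-JobTrigger

# Remove the job if it is no longer needed
Unregister-ScheduledJob -Name "Cleanup 30 Day Snapshots"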
How it works…

This topic leverages the scheduled jobs framework developed specifically for running PowerShell as scheduled tasks. It doesn't require you to configure all of the extra settings seen in earlier scheduled-task examples; these are PowerShell-native cmdlets that know how to run PowerShell on a schedule. One thing to keep in mind is that these jobs begin in a normal PowerShell session, one that knows nothing about PowerCLI by default. You will need to include Add-PSSnapIn VMware.VimAutomation.Core in each script block or .ps1 file that you use with a scheduled job.

There's more…

There is a full library of cmdlets to implement and maintain scheduled jobs. Set-ScheduledJob allows you to change the settings of a registered scheduled job on a Windows system. You can disable and enable scheduled jobs using the Disable-ScheduledJob and Enable-ScheduledJob cmdlets; this allows you to pause the execution of a job during maintenance, or for other reasons, without needing to remove and re-create the job, which is especially helpful since the script blocks live inside the job and are not saved in a separate .ps1 file. You can also configure remote scheduled jobs on other systems using the Invoke-Command PowerShell cmdlet; this concept is shown in examples on Microsoft TechNet in the documentation for the Register-ScheduledJob cmdlet. In addition to scheduling new jobs, you can remove jobs using the Unregister-ScheduledJob cmdlet. This cmdlet requires one of three identifying properties to unschedule a job: -Name with a string, -ID with the number identifying the job, or an object reference to the scheduled job with -InputObject. You can combine it with the Get-ScheduledJob cmdlet to find and pass the object by pipeline.

See also

- To read more about the Microsoft TechNet PSScheduledJob cmdlets, refer to http://technet.microsoft.com/en-us/library/hh849778.aspx

Summary

This article was all about leveraging the information and data from PowerCLI, as well as how to format and display this information.

Resources for Article:

Further resources on this subject:
- Introduction to vSphere Distributed switches [article]
- Creating an Image Profile by cloning an existing one [article]
- VMware View 5 Desktop Virtualization [article]


The Configuration Manager Troubleshooting Toolkit

Packt
18 Jan 2016
8 min read
In this article by Peter Egerton and Gerry Hampson, the authors of the book Troubleshooting System Center Configuration Manager, you will dive deeper into troubleshooting Configuration Manager concepts. In order to successfully troubleshoot Configuration Manager, there are a number of tools that should always be kept in your troubleshooting toolkit. These include a mixture of Microsoft-provided tools, third-party tools, and some community-developed tools; best of all, they are free. As you might expect given the broad scope of functionality within Configuration Manager, there is quite a variety of different utilities out there, so we need to know the right tool for the problem. We are going to take a look at some commonly used tools and some not so commonly used ones, and see what they do and where we can use them. These are not necessarily the be-all and end-all, but they will certainly help us get on the way to solving problems and undoubtedly save some time. In this article, we are going to cover the following:

- Registry Editor
- Group Policy tools
- Log file viewer
- PowerShell
- Community tools

(For more resources related to this topic, see here.)

Registry Editor

Also worth a mention is the Registry Editor that is built into Microsoft Windows on both server and client operating systems. Most IT administrators know this as regedit.exe, and it is the default tool of choice for making changes to, or simply viewing, the contents of a registry key or value. Many of the Configuration Manager roles and clients allow us to make registry changes that enable features such as extended logging, or to manually change policy settings. It should be noted that changing the registry is not something to be taken lightly, as incorrect changes can create more problems, not just in Configuration Manager but in the operating system as a whole. If we stick to the published settings, though, we should be fine, and this can be a fine tool while troubleshooting oddities and problems in a Configuration Manager environment.

Group Policy tools

As Configuration Manager is a client management tool, there are certain features and settings on a client, such as software updates, that may conflict with settings defined in Group Policy. Particularly in larger organizations, it can often be useful to compare and contrast the settings that may conflict between Group Policy and Configuration Manager. Using integrated tools such as Resultant Set of Policy (RSoP) and Group Policy Result (gpresult.exe), or the Group Policy Management Console that is part of the Remote Server Administration Tools (RSAT), can help identify where and why clients are not functioning as expected. We can then amend group policies as and where required using the Group Policy Object Editor. Used in combination, these tools can prove essential while dealing with Configuration Manager clients in particular.

Log file viewer

Those who have spent any time at all working with Configuration Manager will know that it contains quite a few log files, literally hundreds. We will go through the log files in more detail in the next chapter, but we will need something to read them with. We can use something as simple as Notepad, and to an extent there are some advantages to this, as it is a no-nonsense text reader.
Having said that, most people generally want a little more when it comes to reading Configuration Manager logs, as they can often be long, complex, and frequently refreshed. We have already seen one example of a log viewer as part of the Configuration Manager Support Center, but Configuration Manager includes its own log file viewer that is tailored to the needs of troubleshooting the product logs. Configuration Manager 2012 versions provide CMTrace.exe; previous versions provided Trace32.exe or SMSTrace.exe. They are very similar tools, but we will highlight some of the features of CMTrace, the more modern of the two. To begin with, we can typically find CMTrace in the following locations:

<INSTALL DRIVE>\Program Files\Microsoft Configuration Manager\Tools\CMTrace.exe
<INSTALL MEDIA>\SMSSETUP\TOOLS\CMTrace.exe

Those running Configuration Manager 2012 R2 and up also have CMTrace available out of the box in WinPE when running operating system deployments. We can simply hit F8, if we have command support enabled in the WinPE image, and type CMTrace. It can also be used in the later stages of a task sequence, when running in the full operating system, by copying the file onto the hard disk.

The single biggest advantage of using CMTrace over a standard text reader is that it is a tail reader, refreshed every 500 milliseconds by default; in other words, it updates the window as new lines are logged in the log file, and we also have the ability to pause the view. CMTrace can also filter the log based on certain conditions, and there is a highlight feature that can highlight a whole line in yellow if a word we are looking for is found on that line. The program automatically highlights lines containing certain words, such as "error" or "warning", which is useful but can also be a red herring at times, so this is something to be aware of when we come across logs with these keywords. We can also merge log files; this is particularly useful when looking at time-critical incidents, as we can analyze data from multiple sources in the order events happened and understand the flow of information between the different components.

PowerShell

PowerShell is here to stay. A phrase often heard recently is "Learn PowerShell or learn golf", and like it or not, you cannot get away from the emphasis on this homegrown product from Microsoft. This is evident in just about all current products, as PowerShell is deeply embedded in them. Configuration Manager is no exception and, although we cannot quite do everything in PowerShell that we can in the console, there are an increasing number of cmdlets becoming available, more than 500 at the time of writing. So where does this come into troubleshooting? For the uninitiated in PowerShell, it may not be the first tool they turn to, but with some experience, we soon find that things like WMI queries and typical console tasks can be made quicker and slicker with PowerShell. If we prefer, we can also read log files from PowerShell and make remote changes to machines. PowerShell can be a one-stop shop for our troubleshooting needs if we spend the time to pick up the skills.
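As a flavour of the kind of one-liners this enables, here is a small sketch; it assumes a default client installation, where root\ccm is the client's WMI namespace and C:\Windows\CCM\Logs is the client log folder:

# Query the ConfigMgr client version over WMI (add -ComputerName for a remote check)
Get-WmiObject -Namespace root\ccm -Class SMS_Client | Select ClientVersion

# Follow a client log as it is written, similar to CMTrace's tail behaviour
Get-Content "C:\Windows\CCM\Logs\PolicyAgent.log" -Tail 20 -Wait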
Community tools

Finally, as user group community leaders, we couldn't leave this section out of the troubleshooting toolkit. Configuration Manager has a great collection of community contributors who have likely been through our troubleshooting pain before us and either blogged about it, posted it on a forum, or created a fix for it. There is such an array of free tools out there that we cannot ignore them. Outside of troubleshooting specifically, some of the best add-ons available for Configuration Manager are community contributions, whether from individuals or businesses. There are many utilities that are ever-evolving, and not all will suit your needs, but if you browse the Microsoft TechNet Galleries, CodePlex, and GitHub, you are sure to find a great resource to meet your requirements. Why not get involved with a user group too? In terms of troubleshooting, this is probably one of the best things I could personally recommend: it gives access to a network of people who work on the same product as us and often use it in the same way, so it is quite likely that someone has seen our problem before and can fast-forward us to a solution.

- Microsoft TechNet Galleries: https://gallery.technet.microsoft.com/
- CodePlex: https://www.codeplex.com/
- GitHub: https://www.github.com/

Summary

In this article, you learned about various Configuration Manager troubleshooting tools: Registry Editor, Group Policy tools, log file viewers, PowerShell, and community tools.

Resources for Article:

Further resources on this subject:
- Basic Troubleshooting Methodology [article]
- Monitoring and Troubleshooting Networking [article]
- Troubleshooting your BAM Applications [article]


Essentials of VMware vSphere

Packt
09 Jul 2015
7 min read
In this article by Puthiyavan Udayakumar, author of the book VMware vSphere Design Essentials, we will cover the following topics:

- Essentials of designing VMware vSphere
- The PPP framework
- The challenges and encounters faced on virtual infrastructure

(For more resources related to this topic, see here.)

Let's get started with understanding the essentials of designing VMware vSphere. Designing is nothing but assembling and integrating VMware vSphere infrastructure components to form the baseline for a virtualized datacenter. It has the following benefits:

- Saves power consumption
- Decreases the datacenter footprint and helps toward server consolidation
- Faster server provisioning
- On-demand QA lab environments
- Decreased hardware vendor dependency
- Aids the move to the cloud
- Greater savings and affordability
- Superior security and High Availability

Designing VMware vSphere

Architecture design principles are usually developed by the VMware architect in concurrence with the enterprise CIO, the Infrastructure Architecture Board, and other key business stakeholders. From my experience, I would always urge you to hold frequent meetings to capture functional requirements as thoroughly as possible. This will create a win-win situation for you and the requestor and show you how to get things done. Please follow your own approach if it works. Architecture design principles should be developed from the overall IT principles specific to the customer's demands, if they exist; if not, they should be selected to ensure that IT strategies are positioned in line with business approaches. In a nutshell, the architect should aim to form effective architecture principles that fulfill the infrastructure demands. The following high-level principles should be followed across any design:

- Design mission and plans
- Design strategic initiatives
- External influencing factors

When you release a design to the customer, keep in mind that the design must have the following qualities:

- Understandable and robust
- Complete and consistent
- Stable, yet capable of accepting continuous requirement-based changes
- Rational and controlled technical diversity

Without the preceding qualities, I wouldn't recommend you release your design to anyone, even for peer review. For every design, irrespective of the product you are about to design, try the following approach; it should work well, but if required, I would recommend you adapt it. The approach is called PPP, and it focuses on people's requirements, the product's capacity, and the process that bridges the gap between product capacity and people's requirements. These are the three entities that should be considered while designing VMware vSphere infrastructure. Keep in mind that your design is just a product designed by a process based on people's needs. In the end, using this unified framework will help you get rid of known risks and their implications. Functional requirements should be meaningful; while designing, please make sure there is a meaning to your design. Selecting VMware vSphere over its competitors should not be a random pick; you should always list the benefits of VMware vSphere.
Some of them are as follows:

- Server consolidation and easy hardware changes
- Dynamic provisioning of resources to your compute nodes
- Templates, snapshots, vMotion, DRS, DPM, High Availability, fault tolerance, auto monitoring, and solutions for warnings and alerts
- Virtual Desktop Infrastructure (VDI), building a disaster recovery site, fast deployments, and decommissions

The PPP framework

Let's explore the components that integrate to form the PPP framework. Always keep in mind that the design should consist of people, processes, and products that meet the unified functional requirements and performance benchmark. Always expect the unexpected. Without these metrics, your design is incomplete; PPP always retains its own decision metrics. What does it do, who does it, and how is it done? We will see the answers in the following diagrams.

The PPP framework helps you get started with requirements gathering, design vision, business architecture, infrastructure architecture, opportunities and solutions, migration planning, setting the tone for implementation, and design governance. The following table illustrates the essentials of the three-dimensional approach and the basic questions that need to be answered before you start designing, or documenting a design, which will in turn help you understand the real requirements for a specific design:

- Product (results of what?): In what hardware will the VM reside? What kind of CPU is required? What is the quantity of CPU, RAM, and storage per host/VM? What kind of storage is required? What kind of network is required? What are the standard applications that need to be rolled out? What kind of power and cooling are required? How much rack and floor space is demanded?
- People (results of who?): Who is responsible for infrastructure provisioning? Who manages the data center and supplies the power? Who is responsible for the implementation of hardware and software patches? Who is responsible for storage and backup? Who is responsible for security and hardware support?
- Process (results of how?): How should we manage the virtual infrastructure? How should we manage hosted VMs? How should we provision VMs on demand? How should a DR site be activated during a primary site failure? How should we provision storage and backup? How should we take snapshots of VMs? How should we monitor and perform periodic health checks?

Before we start to apply the PPP framework to VMware vSphere, we will discuss the list of challenges and encounters faced on the virtual infrastructure.

List of challenges and encounters faced on the virtual infrastructure

In this section, we will see a list of challenges and encounters faced with virtual infrastructure, which arise for the simple reason that we fail to capture the functional and non-functional demands of business users, or do not understand the fit-for-purpose concept:

- Resource estimate misfire: If you underestimate the amount of memory required up front, you may be forced to change the number of VMs you attempt to run on the VMware ESXi host hardware.
- Resource unavailability: Without capacity management and configuration management, you cannot safely create dozens or hundreds of VMs on a single host. Some of the VMs could consume all of the resources, leaving the fate of the other VMs unknown.
- High utilization: An army of VMs can also throw workflows off balance due to the complexities they can bring to provisioning and operational tasks.
- Business continuity: Unlike a PC environment, VMs cannot simply be backed up to an actual hard drive. This is why 80 percent of IT professionals believe that virtualization backup is a great technological challenge.
- Security: More than six out of ten IT professionals believe that data protection is a top technological challenge.
- Backward compatibility: This is especially challenging for certain apps and systems that are dependent on legacy systems.
- Monitoring performance: Unlike physical servers, you cannot monitor the performance of VMs with common hardware resources such as CPU, memory, and storage.
- Restriction of licensing: Before you install software on virtual machines, read the license agreements; they might not support this, and hence, by hosting on VMs, you might violate the agreement.
- Sizing the database and mailbox: Proper sizing of databases and mailboxes is really critical to the organization's communication systems and applications.
- Poor design of storage and network: A storage or networking design resulting from a failure to properly involve the required teams within an organization is a sure-fire way to ensure that the design isn't successful.

Summary

In this article, we covered a brief introduction to the essentials of designing VMware vSphere, focusing on the PPP framework. We also looked at the challenges and encounters faced on the virtual infrastructure.

Resources for Article:

Further resources on this subject:
Creating and Managing VMFS Datastores [article]
Networking Performance Design [article]
The Design Documentation [article]


Planning Desktop Virtualization

Packt
16 Oct 2014
3 min read
This article by Andy Paul, author of the book Citrix XenApp® 7.5 Virtualization Solutions, explains VDI and its building blocks in detail. (For more resources related to this topic, see here.)

The building blocks of VDI

The first step in understanding Virtual Desktop Infrastructure (VDI) is to identify what VDI means to your environment. VDI is an all-encompassing term for most virtual infrastructure projects. For this book, we will use the definitions cited in the following sections for clarity.

Hosted Virtual Desktop (HVD)

A Hosted Virtual Desktop is a machine running a single-user operating system such as Windows 7 or Windows 8, sometimes called a desktop OS, which is hosted on a virtual platform within the data center. Users remotely access a desktop that may or may not be dedicated but runs with isolated resources. This is typically a Citrix XenDesktop virtual desktop, as shown in the following figure:

Hosted Virtual Desktop model; each user has dedicated resources

Hosted Shared Desktop (HSD)

A Hosted Shared Desktop is a machine running a multiuser operating system such as Windows 2008 Server or Windows 2012 Server, sometimes called a server OS, possibly hosted on a virtual platform within the data center. Users remotely access a desktop that may share resources among multiple users. This has historically been a Citrix XenApp published desktop, as demonstrated in the following figure:

Hosted Shared Desktop model; each user shares the desktop server resources

Session-based Computing (SBC)

With Session-based Computing, users remotely access applications or other resources on a server running in the data center. These are typically client/server applications. This server may or may not be virtualized. This is a multiuser environment, but the users do not access the underlying operating system directly. This is typically a Citrix XenApp hosted application, as shown in the following figure:

Session-based computing model; each user accesses applications remotely, but shares resources

Application virtualization

In application virtualization, applications are centrally managed and distributed, but they are locally executed. This may be in conjunction with, or separate from, the other options mentioned previously. Application virtualization typically involves application isolation, allowing the applications to operate independently of any other software. Examples include Citrix XenApp offline applications as well as Citrix profiled applications, Microsoft App-V application packages, and VMware ThinApp solutions. Have a look at the following figure:

Application virtualization model; the application packages execute locally

The preceding list is not a definitive list of options, but it serves to highlight the most commonly used elements of VDI. Other options include client-side hypervisors for local execution of a virtual desktop, hosted physical desktops, and cloud-based applications. Depending on the environment, all of these components can be relevant.

Summary

In this article, we learned about VDI and understood its building blocks in detail.

Resources for Article:

Further resources on this subject:
Installation and Deployment of Citrix Systems®' CPSM [article]
Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box [article]
Introduction to Citrix XenDesktop [article]


Installing Neutron

Packt
04 Nov 2015
15 min read
We will learn about OpenStack networking in this article by James Denton, who is the author of the book Learning OpenStack Networking (Neutron) - Second Edition. OpenStack Networking, also known as Neutron, provides a network infrastructure as-a-service platform to users of the cloud. In this article, I will guide you through the installation of Neutron networking services on top of the OpenStack environment. The components to be installed include:

- Neutron API server
- Modular Layer 2 (ML2) plugin

By the end of this article, you will have a basic understanding of the function and operation of various Neutron plugins and agents, as well as a foundation on top of which a virtual switching infrastructure can be built. (For more resources related to this topic, see here.)

Basic networking elements in Neutron

Neutron constructs the virtual network using elements that are familiar to all system and network administrators, including networks, subnets, ports, routers, load balancers, and more. Using version 2.0 of the core Neutron API, users can build a network foundation composed of the following entities:

- Network: A network is an isolated layer 2 broadcast domain. Typically reserved for the tenants that created them, networks can be shared among tenants if configured accordingly. The network is the core entity of the Neutron API. Subnets and ports must always be associated with a network.
- Subnet: A subnet is an IPv4 or IPv6 address block from which IP addresses can be assigned to virtual machine instances. Each subnet must have a CIDR and must be associated with a network. Multiple subnets can be associated with a single network and can be noncontiguous. A DHCP allocation range can be set for a subnet that limits the addresses provided to instances.
- Port: A port in Neutron represents a virtual switch port on a logical virtual switch. Virtual machine interfaces are mapped to Neutron ports, and the ports define both the MAC address and the IP address to be assigned to the interfaces plugged into them. Neutron port definitions are stored in the Neutron database, which is then used by the respective plugin agent to build and connect the virtual switching infrastructure.

Cloud operators and users alike can configure network topologies by creating and configuring networks and subnets, and then instructing services such as Nova to attach virtual devices to ports on these networks; a short command-line sketch follows this section. Users can create multiple networks, subnets, and ports, but are limited to thresholds defined by per-tenant quotas set by the cloud administrator.

Extending functionality with plugins

Neutron introduces support for third-party plugins and drivers that extend network functionality and implementation of the Neutron API. Plugins and drivers can be created that use a variety of software- and hardware-based technologies to implement the network built by operators and users. There are two major plugin types within the Neutron architecture:

- Core plugin
- Service plugin

A core plugin implements the core Neutron API and is responsible for adapting the logical network described by networks, ports, and subnets into something that can be implemented by the L2 agent and IP address management system running on the host. A service plugin provides additional network services such as routing, load balancing, firewalling, and more.

The Neutron API provides a consistent experience to the user despite the chosen networking plugin. For more information on interacting with the Neutron API, visit http://developer.openstack.org/api-ref-networking-v2.html.
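Before moving on to the plugins, here is the short command-line sketch promised above, showing how a user might create the basic elements just described. This is a minimal illustration using the neutron and nova clients; the network name, CIDR, allocation pool, flavor, and image are made-up values, not ones prescribed by this article:

# neutron net-create demo-net
# neutron subnet-create demo-net 192.168.100.0/24 --name demo-subnet --allocation-pool start=192.168.100.10,end=192.168.100.200
# neutron port-create demo-net --name demo-port
# nova boot --flavor m1.small --image cirros --nic port-id=<port UUID> demo-instance

The first three commands build a network, attach a subnet with a restricted DHCP allocation range, and create a port for which Neutron assigns a MAC and IP address; the last command asks Nova to plug an instance into that existing port.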
Modular Layer 2 plugin

Prior to the inclusion of the Modular Layer 2 (ML2) plugin in the Havana release of OpenStack, Neutron was limited to using a single core plugin at a time. The ML2 plugin replaces two monolithic plugins in its reference implementation: the LinuxBridge plugin and the Open vSwitch plugin. Their respective agents, however, continue to be utilized and can be configured to work with the ML2 plugin.

Drivers

The ML2 plugin introduced the concept of type drivers and mechanism drivers to separate the types of networks being implemented from the mechanisms for implementing networks of those types.

Type drivers

An ML2 type driver maintains type-specific network state, validates provider network attributes, and describes network segments using provider attributes. Provider attributes include network interface labels, segmentation IDs, and network types. Supported network types include local, flat, vlan, gre, and vxlan.

Mechanism drivers

An ML2 mechanism driver is responsible for taking information established by the type driver and ensuring that it is properly implemented. Multiple mechanism drivers can be configured to operate simultaneously, and they can be described using three types of models:

- Agent-based: This includes LinuxBridge, Open vSwitch, and others
- Controller-based: This includes OpenDaylight, VMware NSX, and others
- Top-of-Rack: This includes Cisco Nexus, Arista, Mellanox, and others

The LinuxBridge and Open vSwitch ML2 mechanism drivers are used to configure their respective switching technologies within nodes that host instances and network services. The LinuxBridge driver supports the local, flat, vlan, and vxlan network types, while the Open vSwitch driver supports all of those as well as the gre network type. A sample ML2 driver configuration is sketched at the end of this section.

The L2 population driver is used to limit the amount of broadcast traffic that is forwarded across the overlay network fabric. Under normal circumstances, unknown unicast, multicast, and broadcast traffic floods out all tunnels to other compute nodes. This behavior can have a negative impact on the overlay network fabric, especially as the number of hosts in the cloud scales out. As an authority on what instances and other network resources exist in the cloud, Neutron can prepopulate the forwarding databases on all hosts to avoid a costly learning operation. When ARP proxy is used, Neutron prepopulates the ARP table on all hosts in a similar manner to prevent ARP traffic from being broadcast across the overlay fabric.

ML2 architecture

The following diagram demonstrates at a high level how the Neutron API service interacts with the various plugins and agents responsible for constructing the virtual and physical network:

Figure 3.1

The preceding diagram demonstrates the interaction between the Neutron API, Neutron plugins and drivers, and services such as the L2 and L3 agents. For more information on the Neutron ML2 plugin architecture, refer to the OpenStack Neutron Modular Layer 2 Plugin Deep Dive video from the 2013 OpenStack Summit in Hong Kong, available at https://www.youtube.com/watch?v=whmcQ-vHams.

Third-party support

Third-party vendors such as PLUMgrid and OpenContrail have implemented support for their respective SDN technologies by developing their own monolithic or ML2 plugins that implement the Neutron API and extended network services. Others, including Cisco, Arista, Brocade, Radware, F5, VMware, and more, have created plugins that allow Neutron to interface with OpenFlow controllers, load balancers, switches, and other network hardware. For a look at some of the commands related to these plugins, refer to Appendix, Additional Neutron Commands. The configuration and use of these plugins is outside the scope of this article. For more information on the available plugins for Neutron, visit http://docs.openstack.org/admin-guide-cloud/content/section_plugin-arch.html.
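To make the driver discussion more concrete, the following is a minimal sketch of what the ML2 options in /etc/neutron/plugins/ml2/ml2_conf.ini might look like for a LinuxBridge-based deployment using VXLAN overlay networks with L2 population. The driver list and VNI range shown here are illustrative choices rather than values mandated by this article:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_vxlan]
vni_ranges = 1:1000

With a configuration like this, operators can still create flat and vlan provider networks, while tenant networks default to VXLAN segments drawn from the specified VNI range.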
Network namespaces

OpenStack was designed with multitenancy in mind and provides users with the ability to create and manage their own compute and network resources. Neutron supports each tenant having multiple private networks, routers, firewalls, load balancers, and other networking resources. It is able to isolate many of those objects through the use of network namespaces.

A network namespace is defined as a logical copy of the network stack, with its own routes, firewall rules, and network interface devices. When using the open source reference plugins and drivers, every network, router, and load balancer that is created by a user is represented by a network namespace. When network namespaces are enabled, Neutron is able to provide isolated DHCP and routing services to each network. These services allow users to create networks that overlap with those of other users in other projects, and even with other networks in the same project.

The following naming convention for network namespaces should be observed:

- DHCP namespace: qdhcp-<network UUID>
- Router namespace: qrouter-<router UUID>
- Load balancer namespace: qlbaas-<load balancer UUID>

A qdhcp namespace contains a DHCP service that provides IP addresses to instances using the DHCP protocol. In a reference implementation, dnsmasq is the process that services DHCP requests. The qdhcp namespace has an interface plugged into the virtual switch and is able to communicate with instances and other devices in the same network or subnet. A qdhcp namespace is created for every network where the associated subnet(s) have DHCP enabled.

A qrouter namespace represents a virtual router and is responsible for routing traffic to and from instances in the subnets it is connected to. Like the qdhcp namespace, the qrouter namespace is connected to one or more virtual switches, depending on the configuration.

A qlbaas namespace represents a virtual load balancer and may run a service such as HAProxy that load balances traffic to instances. The qlbaas namespace is connected to a virtual switch and can communicate with instances and other devices in the same network or subnet.

The leading q in the names of the network namespaces stands for Quantum, the original name of the OpenStack Networking service. Network namespaces of the types mentioned earlier will only be seen on nodes running the Neutron DHCP, L3, and LBaaS agents, respectively. These services are typically configured only on controllers or dedicated network nodes.

The ip netns list command can be used to list the available namespaces, and commands can be executed within a namespace using the following syntax:

ip netns exec NAMESPACE_NAME <command>

Commands that can be executed in the namespace include ip, route, iptables, and more. The output of these commands corresponds to data specific to the namespace they are executed in. For more information on network namespaces, see the man page for ip netns at http://man7.org/linux/man-pages/man8/ip-netns.8.html.
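As a quick illustration of these namespaces in action, an operator on a node running the DHCP and L3 agents might run something like the following; the UUID portions are placeholders to be replaced with values from your own environment:

# ip netns list
# ip netns exec qdhcp-<network UUID> ip addr
# ip netns exec qrouter-<router UUID> ip route

The first command lists the namespaces present on the host, while the other two run ip inside the DHCP and router namespaces to display their interfaces and routing tables, which are entirely separate from those of the host itself.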
The compute nodes will run L2 agents that interface with the controller node and provide virtual switch connections to instances. Remember that the configuration settings recommended here and online at docs.openstack.org may not be appropriate for production systems.

To install the Neutron API server, the DHCP and metadata agents, and the ML2 plugin on the controller, issue the following command:

# apt-get install neutron-server neutron-dhcp-agent neutron-metadata-agent neutron-plugin-ml2 neutron-common python-neutronclient

On the compute nodes, only the ML2 plugin is required:

# apt-get install neutron-plugin-ml2

Creating the Neutron database

Using the mysql client, create the Neutron database and associated user. When prompted for the root password, use openstack:

# mysql -u root -p

Enter the following SQL statements at the MariaDB [(none)] > prompt:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
quit;

Update the [database] section of the Neutron configuration file at /etc/neutron/neutron.conf on all nodes to use the proper MySQL database connection string based on the preceding values, rather than the default value:

[database]
connection = mysql://neutron:neutron@controller01/neutron

Configuring the Neutron user, role, and endpoint in Keystone

Neutron requires that you create a user, role, and endpoint in Keystone in order to function properly. When executed from the controller node, the following commands will create a user called neutron in Keystone, associate the admin role with the neutron user, and add the neutron user to the service project:

# openstack user create neutron --password neutron
# openstack role add --project service --user neutron admin

Create a service in Keystone that describes the OpenStack Networking service by executing the following command on the controller node:

# openstack service create --name neutron --description "OpenStack Networking" network

The service create command will result in the following output:

Figure 3.2

To create the endpoint, use the following openstack endpoint create command:

# openstack endpoint create --publicurl http://controller01:9696 --adminurl http://controller01:9696 --internalurl http://controller01:9696 --region RegionOne network

The resulting endpoint is as follows:

Figure 3.3
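Before continuing, it can be worth sanity-checking the Keystone work just performed. One quick way to do so (an optional check, not a required step in this article) is to list the registered services and endpoints and confirm that the network service and its three URLs appear:

# openstack service list
# openstack endpoint list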
Enabling packet forwarding

Before the nodes can properly forward or route traffic for virtual machine instances, there are three kernel parameters that must be configured on all nodes:

- net.ipv4.ip_forward
- net.ipv4.conf.all.rp_filter
- net.ipv4.conf.default.rp_filter

The net.ipv4.ip_forward kernel parameter allows the nodes to forward traffic from the instances to the network. The default value is 0 and should be set to 1 to enable IP forwarding. Use the following command on all nodes to implement this change:

# sysctl -w "net.ipv4.ip_forward=1"

The net.ipv4.conf.default.rp_filter and net.ipv4.conf.all.rp_filter kernel parameters are related to reverse path filtering, a mechanism intended to prevent certain types of denial-of-service attacks. When enabled, the Linux kernel examines every packet to ensure that the source address of the packet is routable back through the interface on which it arrived. Without this validation, a router can be used to forward malicious packets from a sender who has spoofed the source address, preventing the target machine from responding properly.

In OpenStack, anti-spoofing rules are implemented by Neutron on each compute node within iptables. Therefore, the preferred configuration for these two rp_filter values is to disable them by setting them to 0. Use the following sysctl commands on all nodes to implement this change:

# sysctl -w "net.ipv4.conf.default.rp_filter=0"
# sysctl -w "net.ipv4.conf.all.rp_filter=0"

Using sysctl -w makes the changes take effect immediately. However, the changes are not persistent across reboots. To make the changes persistent, edit the /etc/sysctl.conf file on all hosts and add the following lines:

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0

Load the changes into memory on all nodes with the following sysctl command:

# sysctl -p

Configuring Neutron to use Keystone

The Neutron configuration file found at /etc/neutron/neutron.conf has dozens of settings that can be modified to meet the needs of the OpenStack cloud administrator. A handful of these settings must be changed from their defaults as part of this installation.

To specify Keystone as the authentication method for Neutron, update the [DEFAULT] section of the Neutron configuration file on all hosts with the following setting:

[DEFAULT]
auth_strategy = keystone

Neutron must also be configured with the appropriate Keystone authentication settings. The username and password for the neutron user in Keystone were set earlier in this article. Update the [keystone_authtoken] section of the Neutron configuration file on all hosts with the following settings:

[keystone_authtoken]
auth_uri = http://controller01:5000
auth_url = http://controller01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron

Configuring Neutron to use a messaging service

Neutron communicates with various OpenStack services on the AMQP messaging bus. Update the [DEFAULT] and [oslo_messaging_rabbit] sections of the Neutron configuration file on all hosts to specify RabbitMQ as the messaging broker:

[DEFAULT]
rpc_backend = rabbit

The RabbitMQ authentication settings should match what was previously configured for the other OpenStack services:

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = rabbit
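At this point, you may want to verify that the kernel parameters took effect and that the RabbitMQ credentials referenced above actually exist on the broker. A quick check might look like the following, assuming the rabbitmqctl utility is available on the host running RabbitMQ:

# sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
# rabbitmqctl list_users

The sysctl command prints the current values of the three parameters, and rabbitmqctl list_users should include the openstack user referenced in the messaging settings above.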
Configuring Nova to utilize Neutron networking

Before Neutron can be utilized as the network manager for Nova Compute services, the appropriate configuration options must be set in the Nova configuration file located at /etc/nova/nova.conf on all hosts. Start by updating the following sections with information on the Neutron API class and URL:

[DEFAULT]
network_api_class = nova.network.neutronv2.api.API

[neutron]
url = http://controller01:9696

Then, update the [neutron] section with the proper Neutron credentials:

[neutron]
auth_strategy = keystone
admin_tenant_name = service
admin_username = neutron
admin_password = neutron
admin_auth_url = http://controller01:35357/v2.0

Nova uses the firewall_driver configuration option to determine how to implement firewalling. As the option is meant for use with the nova-network networking service, it should be set to nova.virt.firewall.NoopFirewallDriver to instruct Nova not to implement firewalling when Neutron is in use:

[DEFAULT]
firewall_driver = nova.virt.firewall.NoopFirewallDriver

The security_group_api configuration option specifies which API Nova should use when working with security groups. For installations using Neutron instead of nova-network, this option should be set to neutron as follows:

[DEFAULT]
security_group_api = neutron

Nova requires additional configuration once a mechanism driver has been determined.

Configuring Neutron to notify Nova

Neutron must be configured to notify Nova of network topology changes. Update the [DEFAULT] and [nova] sections of the Neutron configuration file on the controller node, located at /etc/neutron/neutron.conf, with the following settings:

[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller01:8774/v2

[nova]
auth_url = http://controller01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova

Summary

Neutron has seen major internal architectural improvements over the last few releases. These improvements have made developing and implementing network features easier for developers and operators, respectively. Neutron maintains the logical network architecture in its database, and network plugins and agents on each node are responsible for configuring virtual and physical network devices accordingly. With the introduction of the ML2 plugin, developers can spend less time implementing the core Neutron API functionality and more time developing value-added features.

Now that OpenStack Networking services have been installed across all nodes in the environment, the configuration of a layer 2 networking plugin is all that remains before instances can be created.

Resources for Article:

Further resources on this subject:
Installing OpenStack Swift [article]
Securing OpenStack Networking [article]
The orchestration service for OpenStack [article]

Introduction to Veeam® ONE™ Business View

Packt
24 Dec 2014
3 min read
In this article, by Kevin L. Sapp, author of the book Managing Virtual Infrastructure with Veeam® ONE™, we will have a look at how Veeam® ONE™ Business View allows you to group and manage your virtual infrastructure in business containers. This is helpful for splitting machines by function, priority, or any other descriptive category you would like. Veeam® ONE™ Business View displays categorized information about VMs, clusters, and hosts in business terms. This perspective allows you to plan, control, and analyze the changes in the virtual environment. We will also have a look at data collection. (For more resources related to this topic, see here.)

Data collection

The data required to create the business topology is periodically collected from the connected virtual infrastructure servers. The data collection is usually run at a scheduled interval; however, you can also run the data collection manually. By default, after a virtual infrastructure server is connected to Veeam® ONE™, the collection is scheduled to run on a weekday at 2 a.m. If required, you can adjust the data collection schedule or switch to the manual collection mode to start each data collection session manually.

Scheduling the data collection

The best way to automate the collection of data is by creating a schedule for a specific VM server. To change the collection mode to Scheduled and to specify the time settings, use the following steps:

1. Open the Veeam® ONE™ Business View web application, either by double-clicking on the desktop icon or by connecting to the Veeam® ONE™ server using a browser with the URL http://servername:1340 (the default).
2. Click on the Configuration link located in the upper-right corner of the screen.
3. Click on the VI Management Servers menu option located on the left-hand side of the screen.
4. Select the Run mode option for the server whose schedule you would like to change.
5. While scheduling the data collection for the VM server, select the Periodically every option if you plan to run the data collection at a desired interval, or select the Daily at this time option if you plan to run the data collection at a specific time of the day or week.
6. Once the schedule has been created, click on OK.

Collecting data manually

The following steps are needed to perform a manual collection of the virtual environment data. Use this procedure to collect data manually:

1. Click on the Session History menu item on the left-hand side of the screen.
2. Click on the Run Now button for the server on which you wish to run the data collection manually. The data collection normally takes a few minutes to run; however, the duration can vary based on the size and complexity of your infrastructure.
3. View the details of the session data by clicking on the server from the list shown in Session History.

Summary

In this article, we explained Veeam® ONE™ Business View. We discussed the steps needed to plan, control, and analyze the changes in the virtual environment.

Resources for Article:

Further resources on this subject:
Configuring vShield App [article]
Backups in the VMware View Infrastructure [article]
Introduction to Veeam® Backup & Replication for VMware [article]


Setting Up the Citrix Components

Packt
03 Nov 2015
4 min read
In this article by Sunny Jha, the author of the book Mastering XenApp, we are going to implement the Citrix XenApp infrastructure components, which work together to deliver applications. The components we will be implementing are as follows:

- Setting up the Citrix License Server
- Setting up the Delivery Controller
- Setting up Director
- Setting up StoreFront
- Setting up Studio

Once you complete this article, you will understand how to install the Citrix XenApp infrastructure components for the effective delivery of applications. (For more resources related to this topic, see here.)

Setting up the Citrix infrastructure components

You must be aware of the fact that Citrix reintroduced Citrix XenApp with version 7.5, replacing the IMA-based architecture with the new FMA-based architecture. In this article, we will be setting up the different Citrix components so that they can deliver applications. As this is a proof of concept, I will be setting up almost all of the Citrix components on a single Microsoft Windows 2012 R2 machine. In a production environment, however, it is recommended that components such as the License Server, Delivery Controller, and StoreFront be installed on separate servers to avoid a single point of failure and for better performance. The components that we will be setting up in this article are:

- Delivery Controller: This Citrix component acts as the broker, and its main function is to assign users to a server based on their selection of a published application.
- License Server: This assigns licenses to the Citrix components, as every Citrix product requires a license in order to work.
- Studio: This acts as the control panel for Citrix XenApp 7.6 delivery. It is inside Citrix Studio that the administrator makes all of the configurations and changes.
- Director: This component is a web-based application used for monitoring and troubleshooting.
- StoreFront: This is the frontend of the Citrix infrastructure, by which users connect to their applications, either via Receiver or via the web.

Installing the Citrix components

In order to start the installation, we need the Citrix XenApp 7.6 DVD or ISO image. You can always download it from the Citrix website; all you need is a MyCitrix account. Follow these steps:

1. Mount the disc/ISO you have downloaded.
2. When you double-click on the mounted disc, it will bring up a screen where you have to make a selection between XenApp Deliver applications and XenDesktop Deliver applications and desktops.
3. Once you have made the selection, it will show you the next option related to the product. Here, we need to select XenApp. Choose Delivery Controller from the options.
4. The next screen will show you the License Agreement. You can go through it, accept the terms, and click on Next.
5. As described earlier, this is a proof of concept, so we will install all of the components on a single server, although it is recommended to put each component on a different server for better performance. Select all the components and click on Next.
6. The next screen will show you the features that can be installed. As we have already installed the SQL Server, we don't have to select SQL Express, but we will choose Install Windows Remote Assistance. Click on Next.
7. The next screen will show you the firewall ports that need to be allowed for communication; they can be adjusted by Citrix as well. Click on Next.
8. The next screen will show you the summary of your selection.
Here, you can review your selection and click on Install to install the components. After you click on Install, the wizard will go through the installation procedure; once the installation is complete, click on Next.

By following these steps, we completed the installation of the Citrix components, such as the Delivery Controller, Studio, Director, and StoreFront. We also adjusted the firewall ports as per the Citrix XenApp requirements.

Summary

In this article, you learned about setting up the Citrix infrastructure components, including how to install the Delivery Controller, License Server, Citrix Studio, Citrix Director, and Citrix StoreFront.

Resources for Article:

Further resources on this subject:
Getting Started – Understanding Citrix XenDesktop and its Architecture [article]
High Availability, Protection, and Recovery using Microsoft Azure [article]
A Virtual Machine for a Virtual World [article]


Configuring placeholder datastores

Packt
20 May 2014
1 min read
(For more resources related to this topic, see here.)

Assuming that each of these paired sites is geographically separated, each site will have its own placeholder datastore. The following figure shows the site and placeholder datastore relationship.

This is how you configure placeholder datastores:

1. Navigate to vCenter Server's inventory home page and click on Site Recovery.
2. Click on Sites in the left pane and select a site.
3. Navigate to the Placeholder Datastores tab and click on Configure Placeholder Datastore, as shown in the following screenshot.
4. In the Configure Placeholder Datastore window, select an appropriate datastore and click on OK. To confirm the selection, exit the window.
5. Now, the Placeholder Datastores tab should show the configured placeholder. Refer to the following screenshot.
6. If you plan to configure a failback, repeat the procedure at the recovery site.

Summary

In this article, we covered the steps to be followed in order to configure placeholder datastores.

Resources for Article:

Further resources on this subject:
Disaster Recovery for Hyper-V [Article]
VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article]
Disaster Recovery Techniques for End Users [Article]


Managing Pools for Desktops

Packt
07 Oct 2015
14 min read
In this article by Andrew Alloway, the author of VMware Horizon View High Availability, we will review strategies for providing High Availability for various types of VMware Horizon View desktop pools. (For more resources related to this topic, see here.)

Overview of pools

VMware Horizon View provides administrators with the ability to automatically provision and manage pools of desktops. As part of our provisioning of desktops, we must also consider how we will continue service for individual users in the event of a host or storage failure. Generally, High Availability requirements fall into two categories for each pool: stateless desktops, where the user information is not stored on the VM between sessions, and stateful desktops, where the user information is stored on the desktop between sessions.

Stateless desktops

In a stateless configuration, we are not required to store data on the virtual desktops between user sessions. This allows us to use local storage instead of shared storage for our HA strategies, as we can tolerate host failures without the use of a shared disk. We can achieve a stateless desktop configuration using roaming profiles and/or View Persona profiles. This can greatly reduce the cost and maintenance requirements of View deployments. Stateless desktops are typical in the following environments:

- Task workers: A group of workers whose tasks are well known and who all share a common set of core applications. Task workers can use roaming profiles to maintain data between user sessions. In a multi-shift environment, having stateless desktops means we only need to provision as many desktops as will be used concurrently. Task worker setups are typically found in data entry, call centers, finance (accounts payable, accounts receivable), classrooms (in some situations), laboratories, and healthcare terminals.
- Kiosk users: A group of users that do not log in, or whose logins are typically automatic or without credentials. Kiosk users are typically untrusted users, so kiosk VMs should be locked down and restricted to only the core applications that need to run. Kiosks are typically refreshed after logoff or at scheduled times after hours. Kiosks can be found in situations such as airline check-in stations, library terminals, classrooms (in some situations), customer service terminals, customer self-serve stations, and digital signage.

Stateful desktops

Stateful desktops have some advantages, such as reduced IOPS and higher disk performance, due to the ability to choose thick provisioning. Stateful desktops are desktops that require user data to be stored on the VM or desktop host between user sessions. These machines are typically required by users who will extensively customize their desktop in non-trivial ways, require complex or unique applications that are not shared by a large group, or require the ability to modify their VM. Stateful desktops are typically used in the following situations:

- Users who require the ability to modify the installed applications
- Developers
- IT administrators
- Unique or specialized users
- Department managers
- VIP staff/managers

Dedicated pools

Dedicated pools are View desktops provisioned using thin or thick provisioning. Dedicated pools are typically used for stateful desktop deployments. Each desktop can be provisioned with a dedicated persistent disk used for storing the user profile and data. Once assigned a desktop, a user will always log into the same desktop, ensuring that their profile is kept constant.
During OS refreshes, rebalances, and recomposes, the OS disk is reverted back to the base image. Dedicated pools with persistent disks offer simplicity for managing desktops, as minimal profile management takes place; it is all managed by the View Composer/View Connection Server. They also ensure that applications that store profile data will almost always be able to retrieve that data on the next login, meaning that the administrator doesn't have to track down applications that incorrectly store data outside the roaming profile folder.

HA considerations for dedicated pools

Dedicated pools unfortunately have very difficult HA requirements. Storing the user profile with the VM means that the VM has to be stored and maintained in an HA-aware fashion. This almost always results in a shared disk solution being required for dedicated pools. In the event of a host outage, other hosts connected to the same storage can start up the VM. For shared storage, we can use NFS, iSCSI, Fibre Channel, or VMware Virtual SAN storage. Consider investing in storage systems with primary and backup controllers, as we will be dependent on the disk controllers being always available. Backups are also a must with this setup, as there are very few recovery options in the event of a storage array failure.

Floating pools

Floating pools are pools of desktops where any user can be assigned to any desktop in the pool upon login. Floating pools are generally used for stateless desktop deployments. They can be used with roaming profiles or View Persona to provide a consistent user experience on login. Since floating pool desktops are treated as disposable VMs, we open up additional options for HA.

Floating pool desktops are given two local disks: the OS disk, which is a replica of the assigned base VM, and the disposable disk, where the page file, hibernation file, and temp drive are located. When floating pools are refreshed, recomposed, or rebalanced, all changes made to the desktops by the users are lost. This is due to the disposable disk being discarded between refreshes and the OS disk being reverted back to the base image. As such, any session information such as the profile, temp directory, and software changes is lost between refreshes. Refreshes can be scheduled to occur after logoff or after every X days, or they can be performed manually.

HA considerations for floating pools

Floating pools can be protected in several ways, depending on the environment. Since floating pools can be deployed on local storage, we can protect against a host failure by provisioning the floating pool VMs on multiple separate hosts. In the event of a host failure, the remaining virtual desktops will be used to log users in. If there is free capacity in the cluster, more virtual desktops will be provisioned on other hosts.

For environments with shared storage, floating pools can still be deployed on the shared storage, but it is a good idea to have a secondary shared storage device or a highly available storage device. In the event of a storage failure, the VMs can be started on the secondary storage device. VMware Virtual SAN is inherently HA safe, and there is no need for a secondary datastore when using Virtual SAN.

Many floating pool environments will utilize a profile management solution such as roaming profiles or View Persona Management. In these situations, it is essential to set up a redundant storage location for View Persona profiles and/or roaming profiles.
In practice, a Windows DFS share is a convenient and easy way to guard profiles against loss in the event of an outage. DFS can be configured to replicate changes made to the profiles between hosts in real time. If the Windows DFS servers are provisioned as VMs on shared storage, make sure to create a DRS rule to separate the VMs onto different hosts. Where possible, DFS servers should be stored on separate disk arrays to ensure the data is preserved in the event of a disk array or storage processor failure. For more information regarding Windows DFS, you can visit https://technet.microsoft.com/en-us/library/jj127250.aspx.

Manual pools

Manual pools are custom, dedicated desktops for each user. A VM is manually built for each user who is using the manual pool. Manual pools are stateful pools that generally do not utilize profile management technologies such as View Persona or roaming profiles. As with dedicated pools, once a user is assigned to a VM, they will always log into the same VM. As such, the HA requirements for manual pools are very similar to those of dedicated pools. Manual desktops can be configured in almost any manner desired by the administrator; there is no requirement for more than one disk to be attached to a manual pool desktop.

Manual pools can also be configured to use physical hardware as the desktop, such as blade servers, desktop computers, or even laptops. In this situation, there are limited high availability options without investing in exotic and expensive hardware. As a best practice, the physical hosts should be built with redundant power supplies, ECC RAM, and mirrored hard disks, depending on budget and HA requirements. There should also be a good backup strategy for managing the physical hosts connected to the manual pools.

HA considerations for manual pools

Manual pools, like dedicated pools, have difficult HA requirements. Storing the user profile with the VM means that the VM has to be stored and maintained in an HA-aware fashion. This almost always results in a shared disk solution being required for manual pools. In the event of a host outage, other hosts connected to the same storage can start up the VM. For shared storage, we can use NFS, iSCSI, Fibre Channel, or VMware VSAN storage. Consider investing in storage systems with primary and backup controllers, as we will be dependent on the disk controllers being always available. Backups are also a must with this setup, as there are very few recovery options in the event of a storage array failure. VSAN deployments are inherently HA safe and are excellent candidates for manual pool storage.

Manual pools, given their static nature, also have the option of using replication technology to back up the VMs onto another disk. You can use VMware vSphere Replication for automatic replication, or use one of a variety of storage replication solutions offered by storage and backup vendors. In some cases, it may be possible to use Fault Tolerance on the virtual desktops for truly high availability; note that this would limit the individual VMs to a single vCPU, which may be undesirable.

Remote Desktop Services pools

Remote Desktop Services pools (RDS pools) are pools where the remote session or application is hosted on a Windows Remote Desktop Server. The application or remote session runs under the user's credentials. Usually, all the user data is stored locally on the Remote Desktop Server, but it can also be stored remotely using roaming profiles or View Persona profiles. Folder redirection to a central network location is also used with RDS pools.
Typical uses for Remote Desktop Services are migrating users off legacy RDS environments, hosting applications, and providing access to troublesome applications or applications with large memory footprints. The Windows Remote Desktop Server can be either a VM or a standalone physical host. It can be combined with Windows clustering technology to provide scalability and high availability. You can also deploy a load balancer solution to manage connections between multiple Windows Remote Desktop Servers.

Remote Desktop Services pool HA considerations

Remote Desktop Services HA revolves around protecting individual RDS VMs or provisioning a cluster of RDS servers. When a single VM is deployed with RDS, it is generally best to use vSphere HA and clustering features to protect the VM. If the RDS resources are larger than practical for a VM, then we must focus on protecting the individual host or clustering multiple hosts.

When the Windows Remote Desktop Server is deployed as a VM, the following options are available:

- Protect the VM with VMware HA, using shared storage: This allows vCenter to fail the VM over to another host in the event of a host failure. vSphere will be responsible for starting the VM on another host, and the VM will resume from a crashed state.
- Replicate the virtual machine to separate disks on separate hosts using VMware Virtual SAN: Same as above, but in this case, the VM has been replicated to another host using Virtual SAN technology. The remote VM will be started up from a crashed state, using the last consistent hard drive image that was replicated.
- Use replication technologies such as vSphere Replication: The VM will be periodically synchronized to a remote host. In the event of a host failure, we can manually activate the remotely synchronized VM.
- Use a vendor's storage-level replication: In this case, we allow our storage vendor to provide the replication technology for a redundant backup. This protects us in the event of a storage or host failure. The failover can be automated or manual; consult your storage vendor for more information.
- Protect the VM using backup technologies: This provides redundancy in the sense that we won't lose the VM if it fails. Unfortunately, you are at the mercy of your restore process to bring the VM back to life, and the VM will resume from a crashed state. Always keep backups of production servers.

For RDS servers running on a dedicated physical server, we could utilize the following:

- Redundant power supplies: Redundant power supplies will keep the server going while a PSU is being replaced or becomes defective. It is also a good idea to have two separate power sources for each power supply; something as simple as a faulty power bar or a tripped breaker could bring down the server if there are not two independent power sources.
- Uninterruptible Power Supply: Battery backups are always a must for production-level equipment. Make sure to scale the UPS to provide adequate power and duration for your environment.
- Redundant network interfaces: In rare circumstances, a NIC can go bad or a cable can be damaged. In these cases, redundant NICs will prevent a server outage. Remember that to protect against a switch outage, we should plug the NICs into separate switches.
- Mirrored or redundant disks: Hard drives are one of the most common failure points in computers. Mirrored hard drives or RAID configurations are a must for production-level equipment.
- Two or more hosts: Clustering physical servers will ensure that host failures won't cause downtime.
Consider multi-site configurations for even more redundancy.

The following strategies are shared between VMs and physical hardware:

- Provide High Availability to the RDS using Microsoft Network Load Balancer (NLB): Microsoft Network Load Balancing can provide load balancing to the RDS servers directly. In this situation, the clients connect to a single IP managed by the NLB and are randomly assigned to a server.
- Provide High Availability using a load balancer to manage sessions between RDS servers: A hardware or software load balancer can be used instead of Microsoft Network Load Balancing. Load balancer vendors provide a wide variety of capabilities and features for their load balancers; consult your load balancer vendor for best practices.
- Use DNS round robin to alternate between RDS hosts: This is one of the most cost-effective load balancing methods. It has the drawback of not being able to balance the load or to direct clients away from failed hosts. Updating DNS may delay adding new capacity to the cluster or delay removing a failed host from the cluster.
- Remote Desktop Connection Broker with High Availability: We can provide RDS failover using the Connection Broker feature of our RDS server. For more information regarding the Remote Desktop Connection Broker with High Availability, see https://technet.microsoft.com/en-us/library/ff686148%28WS.10%29.aspx.

Here is an example topology using physical or virtual Microsoft RDS servers. We use a load balancing technology for the View Connection Servers, as described in the previous chapter. We then connect to the RDS via either a load balancer, DNS round robin, or a cluster IP.

Summary

In this article, we covered the concepts of stateful and stateless desktops and the consequences of, and techniques for, supporting each in a highly available environment.

Resources for Article:

Further resources on this subject:
Working with Virtual Machines [article]
Storage Scalability [article]
Upgrading VMware Virtual Infrastructure Setups [article]

Upgrading from Previous Versions

Packt
23 Jun 2014
8 min read
(For more resources related to this topic, see here.)

This article guides you through the requirements and steps necessary to upgrade your VMM 2008 R2 SP1 to VMM 2012 R2. There is no direct upgrade path from VMM 2008 R2 SP1 to VMM 2012 R2; you must first upgrade to VMM 2012 and then to VMM 2012 R2. VMM 2008 R2 SP1 -> VMM 2012 -> VMM 2012 SP1 -> VMM 2012 R2 is the correct upgrade path.

Upgrade notes:

- VMM 2012 cannot be upgraded directly to VMM 2012 R2; upgrading it to VMM 2012 SP1 first is required
- VMM 2012 can be installed on a Windows 2008 Server
- VMM 2012 SP1 requires Windows 2012
- VMM 2012 R2 requires a minimum of Windows 2012 (Windows 2012 R2 is recommended)
- Windows 2012 hosts can be managed by VMM 2012 SP1
- Windows 2012 R2 hosts require VMM 2012 R2
- The System Center App Controller version must match the VMM version

To debug a VMM installation, the logs are located in %ProgramData%\VMMLogs, and you can use the CMTrace.exe tool to monitor the content of the files in real time, including SetupWizard.log and vmmServer.log.

As discussed in VMM 2012 Architecture, VMM 2012 is a huge product upgrade, and there have been many improvements. This article only covers the VMM upgrade. If you have other System Center family components installed in your environment, make sure you follow the proper upgrade and installation sequence. System Center 2012 R2 has some new components, for which the installation order is also critical. It is critical that you take the steps documented by Microsoft in Upgrade Sequencing for System Center 2012 R2 at http://go.microsoft.com/fwlink/?LinkId=328675 and use the following upgrade order:

1. Service Management Automation
2. Orchestrator
3. Service Manager
4. Data Protection Manager (DPM)
5. Operations Manager
6. Configuration Manager
7. Virtual Machine Manager (VMM)
8. App Controller
9. Service Provider Foundation
10. Windows Azure Pack for Windows Server
11. Service Bus Clouds
12. Windows Azure Pack
13. Service Reporting

Reviewing the upgrade options

This recipe will guide you through the upgrade options for VMM 2012 R2. Keep in mind that there is no direct upgrade path from VMM 2008 R2 to VMM 2012 R2.

How to do it...

Read through the following recommendations in order to upgrade your current VMM installation.

In-place upgrade from VMM 2008 R2 SP1 to VMM 2012

Use this method if your system meets the requirements for a VMM 2012 upgrade and you want to deploy it on the same server. The supported VMM version to upgrade from is VMM 2008 R2 SP1. If you need to upgrade VMM 2008 R2 to VMM 2008 R2 SP1, refer to http://go.microsoft.com/fwlink/?LinkID=197099. In addition, keep in mind that if you are running the SQL Server Express version, you will need to upgrade SQL Server to a fully supported version beforehand, as the Express version is not supported in VMM 2012. Once the system requirements are met and all of the prerequisites are installed, the upgrade process is straightforward. To follow the detailed recipe, refer to the Upgrading to VMM 2012 R2 recipe.

Upgrading from VMM 2008 R2 SP1 to VMM 2012 on a different computer

Sometimes, you may not be able to do an in-place upgrade to VMM 2012, or even to VMM 2012 SP1. In this case, it is recommended that you use the following instructions: uninstall the current VMM, retaining the database, and then restore the database on a supported version of SQL Server. Next, install the VMM 2012 prerequisites on a new server (or on the same server, as long as it meets the hardware and OS requirements).
3. Finally, install VMM 2012, providing the retained database information in the Database configuration dialog; the VMM setup will upgrade the database.
4. When the installation is finished, upgrade the Hyper-V hosts with the latest VMM agents.

The following figure illustrates the upgrade process from VMM 2008 R2 SP1 to VMM 2012:

When performing an upgrade from VMM 2008 R2 SP1 with a local VMM database to a different server, the encrypted data will not be preserved, as the encryption keys are stored locally. The same rule applies when upgrading from VMM 2012 to VMM 2012 SP1, and from VMM 2012 SP1 to VMM 2012 R2, if you are not using Distributed Key Management (DKM) in VMM 2012.

Upgrading from VMM 2012 to VMM 2012 SP1

To upgrade to VMM 2012 SP1, you should already have VMM 2012 up and running. VMM 2012 SP1 requires Windows Server 2012 and Windows ADK 8.0. If planning an in-place upgrade: back up the VMM database; uninstall VMM 2012 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 SP1 and App Controller.

Upgrading from VMM 2012 SP1 to VMM 2012 R2

To upgrade to VMM 2012 R2, you should already have VMM 2012 SP1 up and running. VMM 2012 R2 requires Windows Server 2012 as the minimum OS (Windows Server 2012 R2 is recommended) and Windows ADK 8.1. If planning an in-place upgrade: back up the VMM database; uninstall VMM 2012 SP1 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 R2 and App Controller.

Some more planning considerations are as follows:

Virtual Server 2005 R2: VMM 2012 no longer supports Microsoft Virtual Server 2005 R2. If you have Virtual Server 2005 R2 or an unsupported ESXi version running and have not removed these hosts before the upgrade, they will be removed automatically during the upgrade process.

VMware ESX and vCenter: For VMM 2012, the supported versions of VMware are ESXi 3.5 to ESXi 4.1 and vCenter 4.1. For VMM 2012 SP1/R2, the supported versions are ESXi 4.1 to ESXi 5.1, and vCenter 4.1 to 5.0.

SQL Server Express: This is not supported since VMM 2012. A full version is required.

Performance and Resource Optimization (PRO): The PRO configurations are not retained during an upgrade to VMM 2012. If you have an Operations Manager (SCOM) integration configured, it will be removed during the upgrade process. Once the upgrade process is finished, you can integrate SCOM with VMM again.

Library server: Since VMM 2012, VMM does not support a library server on Windows Server 2003. If you have one running and continue with the upgrade, you will not be able to use it. To use the same library server in VMM 2012, move it to a server running a supported OS before starting the upgrade.

Choosing a service account and DKM settings during an upgrade: During an upgrade to VMM 2012, on the Configure service account and distributed key management page of the setup, you are required to create a VMM service account (preferably a domain account) and choose whether you want to use DKM to store the encryption keys in Active Directory (AD). Make sure to log on with the same account that was used during the VMM 2008 R2 installation: in some situations after the upgrade, the encrypted data (for example, the passwords in the templates) may not be available, depending on the selected VMM service account, and you will be required to re-enter it manually.
For the service account, you can use either the Local System account or a domain account: a domain account is the recommended option, and when deploying a highly available VMM management server, it is the only option available. Note that DKM is not available in versions prior to VMM 2012.

Upgrading to a highly available VMM 2012: If you're thinking of upgrading to a highly available (HA) VMM, consider the following:

Failover Cluster: You must deploy the failover cluster before starting the upgrade.

VMM database: You cannot deploy the SQL Server for the VMM database on highly available VMM management servers. If you plan on upgrading the current VMM server to an HA VMM, you need to first move the database to another server. As a best practice, it is recommended that you keep the SQL Server cluster separate from the VMM cluster.

Library server: In a production or highly available environment, you need all of the VMM components to be highly available as well, not only the VMM management server. After upgrading to an HA VMM management server, it is recommended, as a best practice, that you relocate the VMM library to a clustered file server. In order to keep the custom fields and properties of the saved VMs, deploy those VMs to a host and save them to a new VMM 2012 library.

VMM Self-Service Portal: This is not supported since VMM 2012 SP1. It is recommended that you install System Center App Controller instead.

How it works...

There are two methods to upgrade to VMM 2012 from VMM 2008 R2 SP1: an in-place upgrade, and upgrading to another server. Before starting, review the initial steps and the VMM 2012 prerequisites, and perform a full backup of the VMM database. Uninstall VMM 2008 R2 SP1 (retaining the data) and restore the VMM database to another SQL Server running a supported version. During the installation, point to that database in order to have it upgraded. After the upgrade is finished, upgrade the host agents. VMM will be rolled back automatically in the event of a failure during the upgrade process and reverted to its original installation/configuration.

There's more...

The names of the VMM services changed in VMM 2012. If you have any applications or scripts that refer to these service names, update them accordingly, as shown in the following table:

VMM version | VMM service display name | Service name
2008 R2 SP1 | Virtual Machine Manager | vmmservice
2008 R2 SP1 | Virtual Machine Manager Agent | vmmagent
2012 / 2012 SP1 / 2012 R2 | System Center Virtual Machine Manager | scvmmservice
2012 / 2012 SP1 / 2012 R2 | System Center Virtual Machine Manager Agent | scvmmagent

See also

To move the file-based resources (for example, ISO images, scripts, and VHD/VHDX files), refer to http://technet.microsoft.com/en-us/library/hh406929
To move the virtual machine templates, refer to Exporting and Importing Service Templates in VMM at http://go.microsoft.com/fwlink/p/?LinkID=212431
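If maintenance scripts do reference these service names, a small compatibility shim keeps them working on both sides of the upgrade. The following is a minimal PowerShell sketch of ours, not part of the original recipe; it only uses the service names from the table above:

# Try the old VMM 2008 R2 SP1 service name first, then fall back to the
# VMM 2012 name, so the same script works before and after the upgrade.
$vmm = Get-Service -Name 'vmmservice' -ErrorAction SilentlyContinue
if (-not $vmm) {
    $vmm = Get-Service -Name 'scvmmservice' -ErrorAction SilentlyContinue
}

if ($vmm) {
    Write-Output ("VMM service '{0}' is {1}." -f $vmm.Name, $vmm.Status)
} else {
    Write-Warning 'No VMM service found on this server.'
}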

Deploying App-V 5 in a Virtual Environment

Packt
12 Aug 2015
10 min read
In this article written by James Preston, author of the book Microsoft Application Virtualization Cookbook, we will cover the following topics:

Enabling the App-V shared content store mode
Publishing applications through Microsoft RemoteApp
Pre-caching applications in the local store
Publishing applications through Citrix StoreFront

(For more resources related to this topic, see here.)

App-V 5 is the perfect companion for your virtual session or desktop delivery environment, allowing you to abstract applications from the user and desktop, as shown in the following image, and, in turn, reducing infrastructure requirements through features such as the shared content store mode. In this article, we will cover how to deploy App-V 5 in these environments.

Enabling the App-V shared content store mode

In this recipe, we will cover enabling the App-V shared content store mode, which prevents the caching of App-V files on a client, so that the application is streamed directly from the server hosting it. This feature is ideal for environments where there is ample network bandwidth between remote desktop session hosts (or client virtual machines in a VDI deployment) and where administrators are looking to reduce the overall storage needs of the hosts. While some files are still cached on the local machine (for example, for shortcuts or Shell extensions), the following screenshot shows the amount of storage saved on an Office 2013 deployment where the shared content store mode is turned on (the screenshot on the right).

With the shared content store mode enabled, you can check the amount of storage space used by a package by checking the size of the individual package's folder at the following path on a client where the package is deployed (where Package ID is the GUID assigned to that package): C:\ProgramData\App-V\<Package ID>.

Getting ready

To complete these steps, you will need to deploy a Remote Desktop Services environment (on the server RDS). The server RDS must also have the App-V client and any prerequisites deployed on it.

How to do it…

The following list shows you the high-level tasks involved in this recipe (all of the actions in this recipe will take place on the server DC):

Link the App-V 5 Settings Group Policy Object to the Remote Desktop Server's OU.
Create a Group Policy Object for the server RDS.
Enable the shared content store mode within that policy.

The implementation of the preceding tasks is as follows:

1. On the server DC, load the Group Policy Management console.
2. Expand the tree structure to display the Remote Desktop Server's Organizational Unit and click on Link an Existing GPO…. From the window that appears, select the App-V 5 Settings policy and click on OK.
3. Next, right-click on the OU and select Create a GPO in this domain, and Link it here.... Set the name of the policy as App-V 5 Shared Content Store and click on OK. Let's take a look at the following screenshot:
4. Right-click on the policy you have just created and click on Edit….
5. In the window that appears, right-click on App-V 5 Shared Content Store and click on Properties. Then, tick the Disable User Configuration settings box and click on OK.
6. Next, navigate to Computer Configuration | Policies | Administrative Templates | System | App-V | Streaming and double-click on Shared Content (SCS) mode.
7. Set the policy to Enabled and click on OK.
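The new policy only takes effect on the session host at the next computer policy refresh. Rather than wait, you can force the refresh — a quick sketch, assuming you run it on (or target) the server RDS; Invoke-GPUpdate requires the GroupPolicy module on Windows Server 2012 or later:

# Run locally on the RDS host to apply the new App-V streaming policy now.
gpupdate /target:computer /force

# Or trigger the refresh remotely from the DC ('RDS' is the host name
# used in this recipe).
Invoke-GPUpdate -Computer 'RDS' -Target 'Computer'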
There's more…

To verify that the setting is applied on the server RDS, open a PowerShell session and run the following command:

Get-AppvClientConfiguration

If the SharedContentStoreMode value is 1 and the SetByGroupPolicy value is True, then the policy is correctly applied.

Publishing applications through Microsoft RemoteApp

In this recipe, we will publish the Audacity package to the RDS server, to be accessed by users through Remote Desktop Web Access.

Getting ready

To complete these steps, you will need to deploy a Remote Desktop Services environment (on the server RDS).

How to do it…

The following list shows you the high-level tasks involved in this recipe (all of the actions in this recipe will take place on the server RDS):

Create a Security Group for your remote desktop session hosts.
Publish the Audacity package to that Security Group through the App-V Management console.
Publish the Audacity package through Server Manager.

The implementation of the preceding tasks is as follows:

1. On the server DC, launch the Active Directory Users and Computers console, navigate to demo.org | Domain Groups, and create a new Security Group called RDS Session Hosts.
2. Add the server RDS to the group that you just created.
3. On your Windows 8 client PC, log in to the App-V Management console as Sam Adams, select the Audacity package, and click on the Edit option next to the AD ACCESS option.
4. Under FIND VALID ACTIVE DIRECTORY GROUP AND GRANT ACCESS, enter demo.org\RDS Session Hosts and click on Check. In the drop-down menu that appears, select RDS Session Hosts and click on Grant Access.
5. On the server RDS, wait for the App-V publishing refresh to occur (or force the process manually) for the Audacity shortcut to appear on the desktop.
6. Launch Server Manager, and from the left-hand side bar, select Remote Desktop. From the left-hand side, select QuickSessionCollection (the collection created by default).
7. Under REMOTEAPP PROGRAMS, navigate to Tasks | Publish RemoteApp Programs. In the window that appears, tick the box next to Audacity and click on Next, as shown in the following screenshot. Note that the path to the Audacity application points at the App-V installation root in %SYSTEMDRIVE%\ProgramData\Microsoft\AppV.
8. Review the confirmation window and click on Publish.
9. On your Windows 8 client, open Internet Explorer and browse to https://rds.demo.org/RDWeb, accepting any invalid SSL certificate prompts and allowing the Remote Desktop plugin to run. Log in as Sam Adams and launch the Audacity application.

There's more…

It is possible to limit applications within a remote desktop collection to users in a specific Security Group. To do this, right-click on the application as it appears under REMOTEAPP PROGRAMS and click on Edit Properties. In the window that appears, click on User Assignment and set the radio button to Only specified users and groups. You will now be able to access the Add… button, which brings up an Active Directory search dialog, from where you can add the Audacity Users security group to limit the application to only the users in that group.

Precaching applications in the local store

As an alternative to using the shared content store mode, applications can be forced to be cached in the local store on your RDS session hosts. This is advantageous in scenarios where bandwidth from a central high-speed storage device is more expensive than providing dedicated storage to the RDS session hosts.
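To quantify that tradeoff before choosing, you can total the on-disk size of the local App-V store on a session host. A rough sketch follows, assuming the default package installation root of C:\ProgramData\App-V (adjust the path if yours differs):

# Sum the size of all locally cached App-V packages.
$store = 'C:\ProgramData\App-V'
$bytes = (Get-ChildItem -Path $store -Recurse -File -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum).Sum
'{0:N1} MB used by the local App-V store' -f ($bytes / 1MB)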
Getting ready

To complete these tasks, you will need to deploy a Remote Desktop Services environment (on the server RDS).

How to do it…

The following list shows you the high-level tasks involved in this recipe (all of the actions in this recipe will take place on the server DC):

Create a Group Policy Object for the server RDS.
Enable background application caching within that policy.

The implementation of the preceding tasks is as follows:

1. On the server DC, load the Group Policy Management console.
2. Expand the tree structure to display the Remote Desktop Server's Organizational Unit, right-click on the OU, and select Create a GPO in this domain, and Link it here.... Set the name of the policy to App-V 5 Cache Applications and click on OK.
3. Right-click on the policy you have just created and click on Edit…. In the window that appears, right-click on App-V 5 Cache Applications, click on Properties, tick the Disable User Configuration settings box, and click on OK.
4. Next, navigate to Computer Configuration | Policies | Administrative Templates | System | App-V | Specify what to load in background (aka AutoLoad).
5. Set the policy to Enabled, with Autoload Options set to All, and click on OK.

There's more…

Individual applications can be targeted for caching using the Mount-AppvClientPackage PowerShell command. For example, to mount the package named Audacity 2.0.6 (which has already been published to the Remote Desktop session host), the administrator would run the following command:

Mount-AppvClientPackage -Name "Audacity 2.0.6"

This would generate the following result:

Note that the PercentLoaded value is shown as 100, indicating that the package is completely loaded into the local store.

Publishing applications through Citrix StoreFront

Apart from being a great addition to the Microsoft virtual environment, App-V is also supported by Citrix XenDesktop. In this recipe, we will look at publishing the Audacity package through Citrix StoreFront.

Getting ready

In addition to this environment, the servers XenDesktop and XD-HOST will be used in this recipe. XenDesktop is configured with an installation of XenDesktop 7.6, with a Machine Catalogue containing the server XD-HOST (configured as a Server OS Machine) and a delivery group that has been set up to serve both applications and desktops. The server XD-HOST should have the App-V RDS client installed. Finally, the App-V applications that you wish to deploy through Citrix StoreFront must also be published to the server XD-HOST through the App-V Management console; in this case, Audacity.

How to do it…

The following list shows you the high-level steps involved in this recipe (all of the actions in this recipe will take place on the server XenDesktop):

Set up App-V publishing in Citrix Studio.
Publish applications through the Delivery Group.

The implementation of the preceding tasks is as follows:

1. On the server XenDesktop, launch Citrix Studio.
2. Navigate to Citrix Studio | Configuration, right-click on App-V Publishing, and click on Add App-V Publishing.
3. In the window that appears, enter the details of your App-V Management and Publishing servers, click on Test connection… to confirm that the details are correct, and then click on Save.
4. Navigate to Delivery Groups, right-click on the delivery group you have created, and click on Add Applications.
5. On the introduction page of the wizard that appears, click on Next.
6. On the applications page of the wizard, select Audacity from the list provided (it will be discovered automatically from your server XD-HOST) and click on Next. Note that you can also select multiple applications to publish at the same time.
7. Review the summary screen and click on Finish.

There's more…

As with publishing through the Microsoft Remote Desktop web app, it is possible to limit access to your applications to specific users or security groups. To limit access, right-click on your application in the Applications tab of the Delivery Groups page and click on Properties. In the window that appears, select the Limit Visibility tab and select Limit visibility for this application to the users listed below. Click on the Add users… button to choose users and security groups from Active Directory to be included in the group.

Summary

In this article, we learned about enabling the App-V shared content store mode, which prevents the caching of App-V files on the client system. We also looked at publishing applications through Microsoft RemoteApp, which publishes the Audacity package to the RDS server so that users can access it from Remote Desktop Web Access. Then we learned about precaching applications in the local store, which forces applications to be cached on the RDS session hosts and has certain advantages. Finally, we learned about publishing applications through Citrix StoreFront, where we published the Audacity package.

Resources for Article: Further resources on this subject: Virtualization [article] Customization in Microsoft Dynamics CRM [article] Installing Postgre SQL [article]
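Across all of these recipes, it helps to confirm from PowerShell that a package actually reached a session host. A minimal sketch using the App-V 5 client cmdlets shown above; the package name is the one from this article's examples:

# Force a publishing refresh against every configured publishing server,
# then fully load the Audacity package into the local store and report
# how much of it is cached.
Get-AppvPublishingServer | Sync-AppvPublishingServer

Get-AppvClientPackage -Name 'Audacity 2.0.6' | Mount-AppvClientPackage

(Get-AppvClientPackage -Name 'Audacity 2.0.6').PercentLoaded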

Working with Virtual Machines

Packt
11 Aug 2015
7 min read
In this article by Yohan Rohinton Wadia, the author of Learning VMware vCloud Air, we are going to walk through setting up and accessing virtual machines.

(For more resources related to this topic, see here.)

What is a virtual machine?

Most of you reading this article will be aware of what a virtual machine is, but for the sake of simplicity, let's have a quick look at what it really is. A virtual machine is basically an emulation of a real or physical computer: it runs an operating system and can host your favorite applications as well. Each virtual machine consists of a set of files that govern the way the virtual machine is configured and run. The most important of these files are the virtual drive, which acts just like a physical drive, storing all your data, applications, and operating system; and a configuration file, which tells the virtual machine how many resources are dedicated to it, which networks or storage adapters to use, and so on. The beauty of these files is that you can port them from one virtualization platform to another and manage them more effectively and securely than a physical server. The following diagram shows an overview of how a virtual machine works on a host:

Virtual machine creation in vCloud Air is a simple and straightforward process. vCloud Air provides three mechanisms with which you can create your own virtual machines, briefly summarized as follows:

Wizard driven: vCloud Air provides a simple wizard with which you can deploy virtual machines from pre-configured templates. This option is provided via the vCloud Air web interface itself.

Using vCloud Director: vCloud Air also provides an advanced option for users who want to create their virtual machines from scratch. This is done via the vCloud Director interface and is a bit more complex than the wizard-driven option.

Bring your own media: Because vCloud Air natively runs on the VMware vSphere and vCloud Director platforms, it is relatively easy for you to migrate your own media, templates, and vApps into vCloud Air using a special tool called VMware vCloud Connector.

Creating a virtual machine using a template

As we saw earlier, VMware vCloud Air provides default templates with which you can deploy virtual machines in your public cloud in a matter of seconds. The process is a wizard-driven activity in which you select and configure the virtual machine's resources, such as CPU, memory, and hard disk space, all with a few simple clicks. The following steps will help you create a virtual machine from a template:

1. Log in to vCloud Air (https://vchs.vmware.com/login) using the username and password that we set during the sign-up process.
2. From the Home page, select the VPC on Demand tab. Once there, from the drop-down menu above the tabs, select your region and the corresponding VDC where you would like to deploy your first virtual machine. In this case, I have selected UK-Slough-6 as the region and MyFirstVDC as the default VDC where I will deploy my virtual machines. If you have selected more than one VDC, you will be prompted to select a specific virtual data center before you start the wizard, as a virtual machine cannot span regions or VDCs.
3. From the Virtual Machines tab, select the Create your first virtual machine option. This will bring up the VM launch wizard as shown here.
4. As you can see, there are two tabs provided by default: a VMware Catalog and another section called My Catalog.
My Catalog is empty by default, but this is where all your custom templates and vApps will be shown once you have added them from the vCloud Director portal or purchased them from the Solutions Exchange site.
5. Select a template to get started with. You can choose your virtual machine to be powered by either a 32-bit or a 64-bit operating system. In my case, I have selected a CentOS 6.4 64-bit template for this exercise. Click Continue once done. Templates provided by vCloud Air are either free or paid; the paid ones generally have a $ sign marked next to the OS architecture, indicating that you will be charged once you start using the virtual machine. You can track all your purchases using the vCloud Air billing statement.
6. The next step is to define the basic configuration for your virtual machine. Provide a suitable name for your virtual machine; you can add an optional description as well.
7. Next, select the CPU, memory, and storage for the virtual machine. The CPU and memory resources are linked with each other, so changing the CPU will automatically set the default vRAM for the virtual machine as well; however, you can always increase the vRAM as per your needs. In this case, the virtual machine has 2 CPUs and 4 GB vRAM allocated to it.
8. Select the amount of storage you want to provide to your virtual machine. vCloud Air can allocate a maximum of 2 TB of storage as a single drive to a virtual machine; however, as a best practice, it is better to add more storage by adding multiple drives rather than placing it all on one single drive. You can optionally select your disks to be either standard or SSD-accelerated, both features we will discuss shortly.

Virtual machine configuration

9. Click on Create Virtual Machine once you are satisfied with your changes. Your virtual machine will now be provisioned within a few minutes.

By default, the virtual machine is not powered on after it is created. You can power it on by selecting the virtual machine and clicking on the Power On icon in the toolbar above the virtual machine.

Status of the virtual machine created

There you have it: your very first virtual machine is now ready for use! Once powered on, you can select the virtual machine name to view its details, along with a default password that is auto-generated by vCloud Air.

Accessing virtual machines using the VMRC

Once your virtual machines are created and powered on, you can access and view them easily using the virtual machine remote console (VMRC). There are two ways to invoke the VMRC. One is to select your virtual machine from the vCloud Air dashboard, select the Actions tab, and choose the Open in Console option. The other is to select the virtual machine name: this displays the Settings page for that particular virtual machine, from which you can launch the console by selecting the Open Virtual Machine option.

Make a note of the Guest OS Password from the Guest OS section. This is the default password that will be used to log in to your virtual machine. To log in to the virtual machine, use the following credentials:

Username: root
Password: <Guest_OS_Password>

You will be prompted to change this password on your first login. Provide a strong new password that contains at least one special character as well as an alphanumeric pattern.

Summary

There you have it! Your very own Linux virtual machine on the cloud!
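As a closing note, if you later prefer to script these checks rather than use the web portal, PowerCLI's vCloud cmdlets can connect to the same underlying vCloud Director environment. A minimal sketch, assuming PowerCLI with the vCloud module is installed; the endpoint and organization names are placeholders for your own environment:

# Connect to the vCloud endpoint behind vCloud Air and list the
# virtual machines in the organization, with their power status.
Connect-CIServer -Server 'uk-slough-6.vchs.vmware.com' -Org 'MyOrg'

Get-CIVM | Select-Object Name, Status | Format-Table -AutoSize

Disconnect-CIServer -Confirm:$false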
Resources for Article: Further resources on this subject: vCloud Networks [Article] Creating your first VM using vCloud technology [Article] Securing vCloud Using the vCloud Networking and Security App Firewall [Article]

Deploying the Orchestrator Appliance

Packt
16 Sep 2015
5 min read
This article by Daniel Langenhan, the author of VMware vRealize Orchestrator Essentials, discusses the deployment of the Orchestrator Appliance, and then goes on to explain how to access it using the Orchestrator home page. In the following sections, we will discuss how to deploy Orchestrator in vCenter and with VMware Workstation.

(For more resources related to this topic, see here.)

Deploying the Appliance with vCenter

To make the best use of Orchestrator, it is best to deploy it into your vSphere infrastructure. For this, we deploy it with vCenter:

1. Open your vSphere Web Client and log in.
2. Select a host or cluster that should host the Orchestrator Appliance.
3. Right-click the host or cluster and select Deploy OVF Template.
4. The deploy wizard will start and ask you the typical OVF questions: accept the EULA; choose the VM name and the VM folder where it will be stored; and select the storage and network it should connect to. Make sure that you select a static IP.
5. The Customize template step will now ask you for some more Orchestrator-specific details. You will be asked to provide a new password for the root user; the root user is used to connect to the vRO appliance operating system or the web console. The other password that is needed is for the vRO Configurator interface. The last piece of information needed is the network configuration for the new VM. The following screenshot shows an example of the Customize template step:
6. The last step summarizes all the settings and lets you power on the VM after creation. Click on Finish and wait until the VM is deployed and powered on.

Deploying the Appliance into VMware Workstation

For learning how to use Orchestrator, or for testing purposes, you can deploy Orchestrator using VMware Workstation (Fusion for Mac users). The process is pretty simple:

1. Download the Orchestrator Appliance to your desktop.
2. Double-click on the OVA file.
3. The import wizard asks you for a name and a location in your local file structure for this VM. Choose a location and click on Import.
4. Accept the EULA and wait until the import has finished.
5. Click on Edit virtual machine settings, select Network Adapter, and choose the correct network (Bridged, NAT, or Host-only) for this VM. I typically use Host-only. Click on OK to exit the settings.
6. Power on the VM and watch the boot screen. At some stage, the boot will stop and you will be prompted for the root password. Enter a new password and confirm it. After a moment, you will be asked for the password for the Orchestrator Configurator; enter a new password and confirm it.
7. After this, the boot process should finish, and you should see the Orchestrator Appliance's DHCP IP. If you would like to configure the VM with a fixed IP, access the appliance configuration, as shown on the console screen (see the next section).

After the deployment

If the deployment is successful, the console of the VM should show a screen that looks like the following screenshot. You can now access the Orchestrator Appliance, as shown in the next section.

Accessing Orchestrator

Orchestrator has its own little web server that can be accessed by any web browser.

Accessing the Orchestrator home page

We will now access the Orchestrator home page:

1. Open a web browser such as Mozilla Firefox, IE, or Google Chrome.
2. Enter the IP or FQDN of the Orchestrator Appliance.
3. The Orchestrator home page will open. It looks like the following screenshot:

The home page contains some very useful links, as shown in the preceding screenshot.
Here is an explanation of each numbered item:

1. Click here to start the Orchestrator Java Client. You can also access the client directly at https://[IP or FQDN]:8281/vco/client/client.jnlp.
2. Click here to download and install the Orchestrator Java Client locally.
3. Click here to access the Orchestrator Configurator, which is scheduled to disappear soon, whereupon we won't use it any more. The way forward will be the Orchestrator Control Center.
4. This is a selection of links that can be used to find helpful information and download plugins.
5. These are some additional links to VMware sites.

Starting the Orchestrator Client

Let's open the Orchestrator Client. We will use an internal user to log in until we have hooked up Orchestrator to SSO. For the Orchestrator Client, you need at least Java 7:

1. From the Orchestrator home page, click on Start Orchestrator Client.
2. Your Java environment will start. You may be required to acknowledge that you really want to start this application.
3. You will now be greeted with the Orchestrator login screen.
4. Enter vcoadmin as the username and vcoadmin as the password. This is a preconfigured user that allows you to log in and use Orchestrator directly. Click on Login.
5. The Orchestrator Client will load. After a moment, you will see something that looks like the following screenshot:

You are now logged in to the Orchestrator Client.

Summary

This article guided you through the process of deploying and accessing an Orchestrator Appliance with vCenter and VMware Workstation.

Resources for Article: Further resources on this subject: Working with VMware Infrastructure [article] Upgrading VMware Virtual Infrastructure Setups [article] VMware vRealize Operations Performance and Capacity Management [article]
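Before launching the client on a fresh deployment, it can save time to confirm that the appliance's web endpoints are actually reachable. A minimal PowerShell sketch follows; the hostname is a placeholder, port 8281 is the client/API port given above, and 8283 for the Configurator is an assumption based on common appliance defaults:

# Check that the Orchestrator Appliance answers on its web ports before
# attempting to start the Java Client. Hostname is a placeholder.
$vro = 'vro.lab.local'
foreach ($port in 8281, 8283) {
    Test-NetConnection -ComputerName $vro -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}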

Solving Some Not-so-common vCenter Issues

Packt
05 May 2015
7 min read
In this article by Chuck Mills, author of the book vCenter Troubleshooting, we will review some of the not-so-common vCenter issues that administrators could face while they work with the vSphere environment. The article will cover the following issues and provide the solutions:

The vCenter inventory shows no objects after you log in
You get the VPXD must be stopped to perform this operation message
Removing vCenter plugins when they are no longer needed

(For more resources related to this topic, see here.)

Solving the problem of no objects in vCenter

Suppose you successfully complete the vSphere 5.5 installation (not an upgrade) process with no error messages whatsoever, and then log in to vCenter with the account you used for the installation; in this case, the local administrator account. Surprisingly, you are presented with an inventory of 0. The first thing to do is make sure you have given vCenter enough time to start. Considering that this was the account used to install vCenter, you would assume it had been granted the rights to manage your vCenter Server; yet you can log in and see no objects. You might then try logging in with your domain administrator account, which makes you wonder: what is going on here?

After installing vCenter 5.5 using the Windows option, remember that the administrator@vsphere.local user will have administrator privileges for both the vCenter Single Sign-On server and vCenter Server. Log in using the administrator@vsphere.local account with the password you defined during the installation of the SSO server.

vSphere attaches the permissions, along with the administrator role, to the default account administrator@vsphere.local. These privileges are given for both the vCenter Single Sign-On server and the vCenter Server system, and you must log in with this account after the installation is complete. After logging in with this account, you can configure your domain as an identity source and give your domain administrator access to vCenter Server. Remember, the installation does not assign any administrator rights to the user account that was used to install vCenter. For additional information, review the Prerequisites for Installing vCenter Single Sign-On, Inventory Service, and vCenter Server document found at https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc%2FGUID-C6AF2766-1AD0-41FD-B591-75D37DDB281F.html.

Now that you understand what is going on with the vCenter account, use the following steps to enable the use of your Active Directory account for managing vCenter.

Add or verify your AD domain as an identity source using the following procedure:

1. Log in with administrator@vsphere.local.
2. Select Administration from the menu.
3. Choose Configuration under the Single Sign-On option. You will see the Single Sign-On | Configuration option only when you log in using the administrator@vsphere.local account.
4. Select the Identity Sources tab and verify that the AD domain is listed. If not, choose Active Directory (Integrated Windows Authentication) at the top of the window.
5. Enter your Domain name and click on OK at the bottom of the window.
6. Verify that your domain was added to Identity Sources, as shown in the following screenshot:

Add the permissions for the AD account using the following steps:

1. Click on Home at the top left of the window.
2. Select vCenter from the menu options.
3. Select vCenter Servers and then choose the vCenter Server object.
4. Select the Manage tab and then the Permissions tab in the vCenter object window. Review the screenshot that follows these steps to verify the process.
5. Click on the green + icon to add a permission.
6. Choose the Add button located at the bottom of the window.
7. Select the AD domain from the drop-down option at the top of the window.
8. Choose a user or group you want to assign permissions to (the account named Chuck was selected for this example) and verify that the user or group is selected in the window.
9. Use the drop-down options to choose the level of permissions (verify that Propagate to children is checked).

Now, you should be able to log in to vCenter with your AD account. See the results of the successful login in the following screenshot:

By adding the permissions to the account, you are able to log in to vCenter using your AD credentials. The preceding screenshot shows the result, which is much different from the earlier attempt.

Fixing the VPXD must be stopped to perform this operation message

It has been mentioned several times in this article that the vCenter Server Appliance (VCSA) is the direction VMware is moving in when it comes to managing vCenter. As the number of administrators using it keeps increasing, the number of problems will also increase. One of the components an administrator might have problems with is the vCenter Server service (VPXD). This service should not be running during any changes to the database or the account settings. However, as with most vSphere components, there are times when something happens and you need to stop or start the service in order to fix a problem, and an administrator working within the VCSA appliance may encounter the VPXD must be stopped error.

This service can be stopped using the web console, by performing the following steps:

1. Log in to the console at https://ip-of-vcsa:5480.
2. Enter your username and password.
3. Choose vCenter Server after logging in.
4. Make sure the Summary tab is selected.
5. Click on the Stop button to stop the server.

This should work most of the time, but if you find that using the web console is not working, then you need to log in to the VCSA appliance directly and use the following procedure to stop the server:

1. Connect to the appliance using an SSH client such as PuTTY or mRemote.
2. Type the command chkconfig. This will list all the services and their current status.
3. Verify that vmware-vpxd is on.
4. Stop the service with the service vmware-vpxd stop command.

After completing your work, you can start the server using one of the following methods:

Restart the VCSA appliance
Use the web console by clicking on the Start button on the vCenter Summary page
Type service vmware-vpxd start on the SSH command line

This should fix the issues that occur when you see the VPXD must be stopped to perform this operation message.

Removing unwanted plugins in vSphere

Administrators add and remove tools from their environment based on need and the life of each tool. This is no different for the vSphere environment: as the needs of the administrator change, so does the set of plugins used in vSphere. The following section can be used to remove any unwanted plugins from your current vCenter.
So, if you have plugins that are no longer needed, use the following procedure to remove them:

1. Log in to your vCenter Managed Object Browser at http://<vCenter_name or IP_address>/mob and enter your username and password.
2. Click on the content link under Properties.
3. Click on ExtensionManager, found in the VALUE column.
4. Highlight, right-click, and copy the name of the extension to be removed. Check Knowledge Base article 1025360, found at http://kb.vmware.com/kb/1025360, to get an overview of the plugins and their names.
5. Select UnregisterExtension near the bottom of the page.
6. Right-click on the plugin name and paste it into the Value field.
7. Click on Invoke Method to remove the plugin.

This will give you the Method Invocation Result: void message, which informs you that the selected plugin has been removed. You can repeat this process for each plugin that you want to remove.

Summary

In this article, we covered some of the not-so-common challenges an administrator could encounter in the vSphere environment. It provided the troubleshooting steps along with the solutions to the following issues:

Seeing no objects after logging in to vCenter with the account you used to install it
Getting past the VPXD must be stopped error when performing certain tasks within vCenter
Removing unwanted plugins from vCenter Server

Resources for Article: Further resources on this subject: Availability Management [article] The Design Documentation [article] Design, Install, and Configure [article]
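If you prefer scripting over clicking through the MOB, the same ExtensionManager object is reachable from PowerCLI. The following is a minimal sketch, assuming PowerCLI is installed; the vCenter name and the extension key are placeholders you would substitute with values from the KB article above:

# List all registered vCenter extensions with their keys, then unregister
# one by its key (placeholder shown; pick the real key from the listing).
Connect-VIServer -Server 'vcenter.lab.local'

$extMgr = Get-View ExtensionManager
$extMgr.ExtensionList |
    Select-Object Key, @{Name='Label'; Expression={$_.Description.Label}}

$extMgr.UnregisterExtension('com.example.unwanted.plugin')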