
How-To Tutorials - Virtualization

115 Articles

Upgrading VMware Virtual Infrastructure Setups

Packt
04 Jun 2015
13 min read
In this article by Kunal Kumar and Christian Stankowic, authors of the book VMware vSphere Essentials, you will learn how to correctly upgrade VMware virtual infrastructure setups. (For more resources related to this topic, see here.)

This article will cover the following topics:

- Prerequisites and preparations
- Upgrading vCenter Server
- Upgrading ESXi hosts
- Additional steps after upgrading
- An example scenario

Let's start with a realistic scenario that is often found in data centers these days. I assume that your virtual infrastructure consists of components such as:

- Multiple VMware ESXi hosts
- Shared storage (NFS or Fibre Channel)
- VMware vCenter Server and vSphere Update Manager

In this example, a cluster consisting of two ESXi hosts (esxi1 and esxi2) is running VMware ESXi 5.5. On a virtual machine (vc1), a Microsoft Windows Server system is running vCenter Server and vSphere Update Manager (vUM) 5.5. This article is written as a step-by-step guide to upgrading these particular vSphere components to the most recent version, which is 6.0.

Example scenario consisting of two ESXi hosts with shared storage and vCenter Server

Prerequisites and preparations

Before we start the upgrade, we need to fulfill the following prerequisites:

- Ensure that the hardware vendor supports the new ESXi version
- Ensure that VMware supports the new ESXi version on the hardware in use
- Create a backup of the ESXi images and vCenter Server

First of all, we need to refer to our hardware vendor's support matrix to ensure that our physical hosts running VMware ESXi are supported in the new release. Hardware vendors evaluate their systems before approving upgrades to customers. As an example, Dell offers a comprehensive list for their PowerEdge servers at http://topics-cdn.dell.com/pdf/vmware-esxi-6.x_Reference%20Guide2_en-us.pdf. Here are some additional links for alternative hardware vendors:

- Hewlett-Packard: http://h17007.www1.hp.com/us/en/enterprise/servers/supportmatrix/vmware.aspx
- IBM: http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/vmware.html
- Cisco UCS: http://www.cisco.com/web/techdoc/ucs/interoperability/matrix/matrix.html

When using Fibre-Channel-based storage systems, you might also need to check that vendor's support matrix. Please refer to your vendor's website or contact support for this information. VMware also offers a comprehensive list of tested hardware setups at http://www.vmware.com/resources/compatibility/pdf/vi_systems_guide.pdf. In their Compatibility Guide portal, VMware enables customers to browse for particular server systems; this information might be more recent than the aforementioned PDF file.
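Before diving into the matrices, it helps to capture an inventory of what is actually running. A quick way to gather this, assuming VMware PowerCLI is installed and can connect to vc1, is the following snippet:

Connect-VIServer -Server vc1

# List each host with its hardware model and current ESXi version/build
Get-VMHost | Select-Object Name, Manufacturer, Model, Version, Build

Compare the Model column against your hardware vendor's matrix, and the Version and Build columns against the VMware Compatibility Guide.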
Creating a backup of ESXi

Before upgrading our ESXi hosts, we also need to make sure that we have a valid backup. In case things go wrong, we might need this backup to restore the previous ESXi version. For creating a backup of the hard disk ESXi is installed on, there are plenty of tools on the market that implement image-based backups. One possible solution, which is free, is Clonezilla. Clonezilla is a Linux-based live medium that can easily create backup images of hard disks. To create a backup using Clonezilla, proceed with the following steps:

1. Download the Clonezilla ISO image from their website. Make sure you select the AMD64 architecture and the ISO file format.
2. Enable maintenance mode for the particular ESXi host. Make sure you migrate virtual machines to alternative nodes or power them off.
3. Connect the ISO file to the ESXi host. Also, connect a USB drive to the host; this drive will be used to store the backup.
4. Boot from CD and select Clonezilla live. Wait until the boot process completes.
5. When prompted, select your keyboard layout (for example, en_US.utf8) and select Don't touch keymap.
6. In the Start Clonezilla menu, select Start_Clonezilla and device-image. This mode creates an image of the medium ESXi is running on and stores it in the USB storage.
7. Select local_dev and choose the USB storage connected to the host from the list in the next step.
8. Select a folder for storing the backup (optional).
9. Select Beginner and savedisk to store the entire disk ESXi resides on as an image.
10. Enter a name for the backup.
11. Select the hard disk containing the ESXi installation and proceed. You can also specify whether Clonezilla should check the image after creating it (highly recommended). Afterwards, confirm the backup process. The backup job will start immediately.
12. Once the backup completes, select reboot from the menu to reboot the host.

A running backup job in Clonezilla

To restore a backup using Clonezilla, perform the following steps after booting the Clonezilla media:

1. Complete steps 1 to 8 from the previous guide.
2. Select Beginner and restoredisk to restore the entire disk.
3. Select the image from the USB storage and the hard drive the image should be restored on.
4. Acknowledge the restore process.
5. Once the restoration completes, select reboot from the menu to reboot the host.

For the system running vCenter Server, we can easily create a VM snapshot, or use Clonezilla as well if a physical machine is used instead.
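As vc1 is a virtual machine in our scenario, taking the pre-upgrade snapshot is a PowerCLI one-liner; the snapshot name used here is just an example:

# Create a rollback point for the vCenter Server VM
Get-VM -Name vc1 | New-Snapshot -Name 'Pre-6.0-upgrade' -Description 'Rollback point before the vSphere 6.0 upgrade'

Remember to remove the snapshot once the upgrade has been verified, as long-lived snapshots degrade performance.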
The upgrade path

It is very important to execute the particular upgrade tasks in the following order:

1. Upgrade VMware vCenter Server.
2. Upgrade the particular ESXi hosts.
3. Reformat or upgrade the VMFS data stores (if applicable).
4. Upgrade additional components, such as distributed virtual switches or additional appliances.

The first step is to upgrade vCenter Server. This is necessary to ensure that we are able to manage our ESXi hosts after upgrading them. Newer vCenter Server versions are downward compatible with numerous ESXi versions. To double-check this, we can look up the particular version support by browsing VMware's Product Interoperability Matrix on their website. Click on Solution Interoperability, choose VMware vCenter Server from the drop-down menu, and select the version you want to upgrade to. In our example, we will choose the most recent release, 6.0, and select VMware ESX/ESXi from the Add Platform/Solution drop-down menu.

VMware Product Interoperability Matrix for vCenter Server and ESXi

vCenter Server 6.0 supports management of VMware ESXi 5.0 and higher. We need to ensure the same support agreement for any other products in use, such as these:

- VMware vSphere Update Manager
- VMware vCenter Operations (if applicable)
- VMware vSphere Data Protection

In other words, we need to upgrade all additional vSphere and vCenter Server components to ensure full functionality.

Upgrading vCenter Server

Upgrading vCenter Server is the most crucial step, as this is our central management platform. The upgrade process varies according to the chosen architecture. Upgrading Windows-based vCenter Server installations is quite easy, as the installer supports in-place upgrades. When using the vCenter Server Appliance (vCSA), there is no in-place upgrade; it is necessary to deploy a new vCSA and import the settings from the old installation. This process varies between the particular vCSA versions. For upgrading from vCSA 5.0 or 5.1 to 5.5, VMware offers a comprehensive article at http://kb.vmware.com/kb/2058441.

To upgrade vCenter Server 5.x on Windows to 6.0 using the Easy Install method, proceed with the following steps:

1. Mount the vCenter Server 6.x installation media (VMware-VIMSetup-all-6.0.0-xxx.iso) on the server running vCenter Server.
2. Wait until the installation wizard starts; if it doesn't start, double-click on the CD/DVD icon in Windows Explorer.
3. Select vCenter Server for Windows and click on Install to start the installation utility.
4. Accept the End-User License Agreement (EULA).
5. Enter the current vCenter Single Sign-On password and proceed with the next step. The installation utility begins to execute pre-upgrade checks; this might take some time. If you're running vCenter Server along with Microsoft SQL Server Express Edition, the database will be migrated to VMware vPostgres.
6. Review and, if necessary, change the network ports of your vCenter Server installation.
7. If needed, change the directories for vCenter Server and the embedded Platform Services Controller (PSC).
8. Carefully review the upgrade information displayed in the wizard. Also verify that you have created a backup of your system and the database. Then click on Upgrade to start the upgrade.

After the upgrade, vSphere Web Client can be used to connect to the upgraded vCenter Server system. Also note that the Microsoft SQL Server Express Edition database is not used anymore.

Upgrading ESXi hosts

Upgrading ESXi hosts can be done using two methods:

- Using the installation media from the VMware website
- vSphere Update Manager

If you need to upgrade a large number of ESXi hosts, I recommend that you use vSphere Update Manager to save time, as it can automate the particular steps. For smaller landscapes, using the installation media is easier. For using vUM to upgrade ESXi hosts, VMware offers a guide in their knowledge base at http://kb.vmware.com/kb/1019545.

In order to upgrade an ESXi host using the installation media, perform the following steps:

1. First of all, enable maintenance mode for the particular ESXi host. Make sure you migrate the virtual machines to alternative nodes or power them off.
2. Connect the installation media to the ESXi host and boot from CD.
3. Once the setup utility becomes available, press Enter to start the installation wizard.
4. Accept the End-User License Agreement (EULA) by pressing F11.
5. Select the disk containing the current ESXi installation.
6. In the ESXi found dialog, select Upgrade.
7. Review the installation information and press F11 to start the upgrade.
8. After the installation completes, press Enter to reboot the system.

After the system has rebooted, it will automatically reconnect to vCenter Server. Select the particular ESXi host to see whether the version has changed. In this example, the ESXi host has been successfully upgraded to version 6.0:

Version information of an updated ESXi host running release 6.0

Repeat all of these steps for all the remaining ESXi hosts. Note that running an ESXi cluster with mixed versions should only be a temporary solution. It is not recommended to mix various ESXi releases in production usage, as the various features of ESXi might not perform as expected in mixed clusters.
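Instead of clicking through each host, you can verify the whole cluster at a glance with PowerCLI, assuming a session connected to the upgraded vCenter Server:

# Show the ESXi version and build of every host in the inventory
Get-VMHost | Select-Object Name, Version, Build, ConnectionState

Any host still reporting version 5.5 has not been upgraded yet.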
Additional steps

After upgrading vCenter Server and our ESXi hosts, there are additional steps that can be taken:

- Reformatting or upgrading VMFS data stores
- Upgrading distributed virtual switches
- Upgrading virtual machines' hardware versions

Upgrading VMFS data stores

VMware's VMFS (Virtual Machine File System) is the most widely used filesystem for shared storage. It can be used along with local storage, iSCSI, or Fibre-Channel storage. Particular ESX(i) releases support various versions of VMFS. Let's take a look at the major differences:

- VMFS 2: supported by ESX 2.x and by ESXi 3.x/4.x (read-only); block sizes of 1, 8, 64, or 256 MB; maximum file size of 456 GB (1 MB block size), 2.5 TB (8 MB), 28.5 TB (64 MB), or 64 TB (256 MB); ca. 256 files per volume (no directories supported).
- VMFS 3: supported by ESX(i) 3.x and higher; block sizes of 1, 2, 4, or 8 MB; maximum file size of 256 GB (1 MB block size), 512 GB (2 MB), 1 TB (4 MB), or 2 TB (8 MB); ca. 30,720 files per volume.
- VMFS 5: supported by ESXi 5.x and higher; fixed 1 MB block size; maximum file size of 62 TB; ca. 130,690 files per volume.

When migrating from an ESXi version such as 4.x or older, it is possible to upgrade VMFS data stores to version 5. VMFS 2 cannot be upgraded to VMFS 5 directly; it first needs to be upgraded to VMFS 3. To enable the upgrade, a VMFS 2 volume must not have a block size larger than 8 MB, as VMFS 3 only supports block sizes up to 8 MB. In comparison with older VMFS versions, VMFS 5 supports larger file sizes and more files per volume. I highly recommend that you reformat VMFS data stores instead of upgrading them, as the upgrade does not change the filesystem's block size. Because of this limitation, you won't benefit from all the new VMFS 5 features after an upgrade. To upgrade a VMFS 3 volume to VMFS 5, perform these steps:

1. Log in to vSphere Web Client.
2. Go to the Storage pane.
3. Click on the data store to upgrade and go to Settings under the Manage tab.
4. Click on Upgrade to VMFS5. Then click on OK to start the upgrade.
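The same upgrade can also be performed from the command line. A minimal sketch using the ESXi shell, assuming the data store is labeled datastore1 and that your ESXi build provides the esxcli storage vmfs namespace:

# Upgrade an existing VMFS-3 volume to VMFS-5 in place
esxcli storage vmfs upgrade -l datastore1

As noted above, prefer reformatting over upgrading where possible, since an in-place upgrade keeps the old block size.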
VMware vNetwork Distributed Switch

When using vNetwork Distributed Switches (often called dvSwitches), it is recommended to upgrade them to the latest version. In comparison with vNetwork Standard Switches (also called vSwitches), dvSwitches are created at the vCenter Server level and replicated to all subscribed ESXi hosts. When creating a dvSwitch, the administrator can choose between various dvSwitch versions. After upgrading vCenter Server and the ESXi hosts, additional features can be unlocked by upgrading the dvSwitch. All commonly used dvSwitch versions share a set of common features: Network I/O Control, load-based teaming, traffic shaping, VM port blocking, PVLANs (private VLANs), network vMotion, and port policies. The version-specific additions are:

- vDS 5.0 (compatible with ESXi 5.0 and higher): network resource pools, NetFlow, and port mirroring
- vDS 5.1 (ESXi 5.1 and higher): VDS 5.0 features, plus management network rollback, network health checks, enhanced port mirroring, and LACP (Link Aggregation Control Protocol)
- vDS 5.5 (ESXi 5.5 and higher): VDS 5.1 features, plus traffic filtering and enhanced LACP functionality
- vDS 6.0 (ESXi 6.0): VDS 5.5 features, plus multicast snooping and Network I/O Control version 3 (bandwidth guarantee)

It is also possible to keep using the old version, as vCenter Server is downward compatible with numerous dvSwitch versions. Upgrading a dvSwitch is a task that cannot be undone. During the upgrade, it is possible that virtual machines will lose their network connectivity for a few seconds. After the upgrade, older ESXi hosts will no longer be able to participate in the distributed switch setup. To upgrade a dvSwitch, perform the following steps:

1. Log in to vSphere Web Client.
2. Go to the Networking pane and select the dvSwitch to upgrade.
3. Right-click on the dvSwitch, select the upgrade option, choose the target version, and complete the wizard.

After upgrading the dvSwitch, you will notice that the version has changed:

Version information of a dvSwitch running VDS 6.0

Virtual machine hardware version

Every virtual machine is created with a specified virtual machine hardware version (also called VMHW or vHW). A vHW version defines a set of particular limitations and features, such as controller types or network cards. To benefit from new virtual machine features, you need to upgrade the vHW version. ESXi hosts support a range of vHW versions, but it is always advisable to use the most recent one. Once a vHW version is upgraded, the particular virtual machine can no longer be started on older ESXi versions that don't support that vHW version. Let's take a deeper look at some popular vHW versions:

- vSphere 4.1 (maximum vHW 7): 8 virtual CPUs; 255 GB virtual RAM; 2 TB vDisk size; 4 SCSI adapters with 60 targets; SATA not supported; 3 parallel and 4 serial ports; 1 USB controller with 20 devices per VM (USB 1.x and 2.x)
- vSphere 5.1 (maximum vHW 9): 64 virtual CPUs; 1 TB virtual RAM; 2 TB vDisk size; 4 SCSI adapters with 60 targets; SATA not supported; 3 parallel and 4 serial ports; 1 USB controller with 20 devices per VM (USB 1.x, 2.x, and 3.x)
- vSphere 5.5 (maximum vHW 10): 64 virtual CPUs; 1 TB virtual RAM; 62 TB vDisk size; 4 SCSI adapters with 60 targets; 4 SATA adapters with 30 targets; 3 parallel and 4 serial ports; 1 USB controller with 20 devices per VM (USB 1.x, 2.x, and 3.x)
- vSphere 6.0 (maximum vHW 11): 128 virtual CPUs; 4 TB virtual RAM; 62 TB vDisk size; 4 SCSI adapters with 60 targets; 4 SATA adapters with 30 targets; 3 parallel and 32 serial ports; 1 USB controller with 20 devices per VM (USB 1.x, 2.x, and 3.x)

The upgrade cannot be undone. Also, it might be necessary to update VMware Tools and the drivers of the operating system running in the virtual machine.

Summary

In this article, we learned how to correctly upgrade VMware virtual infrastructure setups. If you want to know more about VMware vSphere and virtual infrastructure setups, go ahead and get your copy of Packt Publishing's book VMware vSphere Essentials.

Resources for Article:

Further resources on this subject:
- Networking [article]
- The Design Documentation [article]
- VMware View 5 Desktop Virtualization [article]


Moving from Windows to Appliance

Packt
12 Oct 2016
7 min read
In this article, Daniel Langenhan, author of VMware vRealize Orchestrator Cookbook, Second Edition, will show you how to move an existing Windows Orchestrator installation to the appliance. With vRO 7, the Windows install of Orchestrator doesn't exist anymore. (For more resources related to this topic, see here.)

Getting ready

We need an Orchestrator installed on Windows. Download the same version of the Orchestrator appliance as you have installed in the Windows version. If needed, upgrade the Windows version to the latest possible one.

How to do it...

There are three ways: using the migration tool, repointing to an external database, or exporting/importing the packages.

Migration tool

There is a migration tool that comes with vRO7 that allows you to pack up your vRO5.5 or 6.x install and deploy it into a vRO7. The migration tool works on Windows and Linux. It collects the configuration, the plug-ins as well as their configuration, certificates, and licensing into a file. Follow these steps to use the migration tool:

1. Deploy a new vRO7 appliance.
2. Log in to your Windows Orchestrator OS.
3. Stop the VMware vCenter Orchestrator Service (Windows services).
4. Open a web browser, log in to your new vRO7 Control Center, and then go to Export/Import Configuration.
5. Select Migrate Configuration and click on the here link. The link points to https://[vRO7]:8283/vco-controlcenter/api/server/migration-tool.
6. Stop the vRO7 Orchestrator service.
7. Unzip migration-tool.zip and copy the subfolder called migration-cli into the Orchestrator directory, resulting in, for example, C:\Program Files\VMware\Infrastructure\Orchestrator\migration-cli\bin.
8. Open a command prompt. If you have a Java install, make sure your path points to it. Try java -version. If that works, continue; if not, set the PATH environment variable to the Java install that comes with Orchestrator: set PATH=%PATH%;C:\Program Files\VMware\Infrastructure\Orchestrator\Uninstall_vCenter Orchestrator\uninstall-jre\bin
9. CD to the directory ..\Orchestrator\migration-cli\bin.
10. Execute the command vro-migrate.bat export. There may be errors showing about SLF4J; you can ignore those.
11. In the main directory (..\Orchestrator), you should now find an orchestrator-config-export-VC55-[date].zip file.
12. Go back to the web browser and upload the ZIP file into Migrate Configuration by clicking on Browse and selecting the file.
13. Click on Import. You now see what can be imported. You can deselect items you don't wish to migrate.
14. Click on Finish Migration.
15. Restart the Orchestrator service.
16. Check the settings.
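Put together, the command-prompt portion of the export (steps 8 to 10) looks like this; the paths are the defaults assumed above and may differ on your install:

rem Use the JRE that ships with Orchestrator if no system Java is present
set PATH=%PATH%;C:\Program Files\VMware\Infrastructure\Orchestrator\Uninstall_vCenter Orchestrator\uninstall-jre\bin
cd /d "C:\Program Files\VMware\Infrastructure\Orchestrator\migration-cli\bin"
vro-migrate.bat export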
External database

If you have an external database, things are pretty easy. If you are using the initial internal database, please see the additional steps in the There's more... section of this recipe.

1. Back up the external database.
2. Connect to the Windows Orchestrator Configurator.
3. Write down all the plugins you have installed, as well as their versions.
4. Shut down the Windows version and deploy the appliance; this way, you can use the same IP and hostname if you want.
5. Log in to the appliance version's Configurator.
6. Stop the Orchestrator service.
7. Install all plugins you had in the Windows version.
8. Attach the external database.
9. Make sure that all trusted SSL certificates are still there, such as vCenter and SSO.
10. Check whether authentication is still working. Use the test login.
11. Check your licensing.
12. Force a plugin reinstall (Troubleshooting | Reinstall the plug-ins when the server starts).
13. Start the Orchestrator service and try to log in.
14. Make a complete sanity check.

Package transfer

This is the method that will only pull your packages across. It is the only easy method to use when you are transitioning between different databases, such as between MS SQL and PostgreSQL.

1. Connect to your Windows version.
2. Create a package of all the workflows, actions, and other items you need.
3. Shut down Windows and deploy the appliance.
4. Configure the appliance with the DB, authentication, and all the plugins you previously had.
5. Import the package.

How it works...

Moving from the Windows version of Orchestrator to the appliance version isn't such a big thing. The worst-case scenario is using the package transfer. The only really important thing is to use the same version of the Windows Orchestrator as the appliance version. You can download a lot of old versions, including 5.5, from http://www.vmware.com/in.html. If you can't find the same version, upgrade your existing vCenter Orchestrator to one you can download. After you have transferred the data to the appliance, you need to make sure that everything works correctly, and then you can upgrade to vRO7.

There's more...

When you just run Orchestrator from your Windows vCenter installation and didn't configure an external database, Orchestrator uses the vCenter database and mixes the Orchestrator tables with the vCenter tables. In order to export only the Orchestrator ones, we will use SQL Server Management Studio (a free download from www.microsoft.com, called Microsoft SQL Server 2008 R2 RTM). To transfer only the Orchestrator database tables from the vCenter MS SQL to an external SQL server, do the following:

1. Stop the VMware vCenter Orchestrator Service (Windows Services) on your Windows Orchestrator.
2. Start SQL Server Management Studio on your external SQL server.
3. Connect to the vCenter DB. For SQL Express, use [vcenter]\VIM_SQLEXP with Windows Authentication.
4. Right-click on your vCenter database (SQL Express: VIM_VCDB) and select Tasks | Export Data.
5. In the wizard, select your source, which should be the correct one already, and click on Next.
6. Choose SQL Server Native Client 10.0 and enter the name of your new SQL server. Click on New to create a new database on that SQL server (or use an empty one you created already). Click on Next.
7. Select Copy data from one or more tables or views and click on Next.
8. Now select every table that starts with VMO_ and then click on Next.
9. Select Run immediately and click on Finish.

Now you have the Orchestrator database extracted as an external database. You still need to configure a user and rights. Then proceed with the External database section in this recipe.
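If you want to check beforehand which Orchestrator tables live inside the vCenter database, a quick query in Management Studio does the trick; VIM_VCDB is the SQL Express default name used above:

USE VIM_VCDB;
-- List all Orchestrator tables (they are prefixed with VMO_)
SELECT name FROM sys.tables WHERE name LIKE 'VMO[_]%' ORDER BY name;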
Orchestrator client and 4K display scaling

This recipe shows a hack to make the Orchestrator client scale on 4K displays.

Getting ready

We need to download the program Resource Tuner (http://www.restuner.com/); the trial version will work, but consider buying it if it works for you. You also need to know the path to your Java install, which should be something like C:\Program Files (x86)\Java\jre1.x.xx\bin.

How to do it...

Before you start, please be careful, as this impacts your whole Java environment. This worked very well for me with Java 1.8.0_91-b14.

1. Download and install Resource Tuner.
2. Run Resource Tuner as administrator.
3. Open the file javaws.exe in your Java directory.
4. Expand manifest and then click on the first entry (the name can change due to localization).
5. Look for the line <dpiAware>true</dpiAware>.
6. Change the true to false.
7. Save and exit.
8. Repeat the same for all the other java*.exe files in the same directory, as well as j2launcher.exe.
9. Start the Client.jnlp (the file that downloads when you start the web application).

How it works...

In Windows 10, you can set the scaling of applications when you are using high-definition (4K) monitors. What you are doing here is telling Java that it is not DPI aware, meaning it will use the Windows 10 default scaler instead of an internal scaler.

There's more...

For any other application, such as Snagit or Photoshop, I found that this solution works quite well: http://www.danantonielli.com/adobe-app-scaling-on-high-dpi-displays-fix/.

Summary

In this article, we discussed moving an existing Windows Orchestrator installation to the appliance, and a hack to make the Orchestrator client scale on 4K displays.

Resources for Article:

Further resources on this subject:
- Working with VMware Infrastructure [article]
- vRealize Automation and the Deconstruction of Components [article]
- FAQ on Virtualization and Microsoft App-V [article]


Configuring organization network services

Packt
22 Aug 2014
9 min read
This article by Lipika Pal, the author of the book VMware vCloud Director Essentials, teaches you to configure organization network services. Edge devices can be used as DNS relay hosts as of the vCloud Networking and Security suite 5.1 release. However, before we jump into how to do it and why you should do it, let us discuss the DNS relay host technology itself. (For more resources related to this topic, see here.)

If your client machines want to send their DNS queries, they contact the DNS relay, which is nothing but a host. The queries are sent by the relay host to the provider's DNS server or any other entity specified using the Edge device settings. The answer received by the Edge device is then sent back to the machines. The Edge device also stores the answer for a short period of time, so any other machine in your network searching for the same address receives the answer directly from the Edge device, without having to ask internet servers again. In other words, the Edge device has a tiny memory called the DNS cache that remembers the queries. The following diagram illustrates one such setup and its workings:

In this example, you see an external interface configured on the Edge to act as a DNS relay interface. On the client side, we configured the Client1 VM such that it uses the internal IP of the Edge device (192.168.1.1) as its DNS server entry. In this setup, Client1 requests DNS resolution (step 1) for the external host, google.com, from the Edge gateway's internal IP. To resolve google.com, the Edge device will query its configured DNS servers (step 2) and return that resolution to Client1 (step 3).

Typical uses of this feature are as follows:

- DMZ environments
- Multi-tenant environments
- Accelerated resolution time
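You can verify the relay behavior from Client1 with a standard lookup tool, pointing it explicitly at the Edge's internal IP from the example:

nslookup google.com 192.168.1.1

The first query is resolved via the provider's DNS servers; repeating it should be answered from the Edge device's DNS cache.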
Configuring DNS relay

To configure DNS relay on a vShield Edge device, perform the following steps. Configure DNS relay when creating an Edge device or when an Edge device is already available. This is an option for an organization gateway, not for a vApp or Org network. Now, let's deploy an Edge gateway in an organization vDC while enabling DNS relay, by executing the following steps:

1. Open the vCloud Director URL in a supported browser, for example, https://serverFQDN/cloud.
2. Log in to the cloud as the administrator. You will be presented with the Home screen.
3. Click on the Organization VDCs link; on the right-hand side, you will see some organization vDCs. Click on any organization vDC. Doing this will take you to the vDC page.
4. Click on the Administration page and double-click on Virtual Datacenter. Then click on the Edge Gateways tab.
5. Click on the green-colored + sign, as shown in the following screenshot:
6. On the Configure Edge Gateway screen, click on the Configure IP Settings section. Use the other default settings and click on Next.
7. On the Configure External Networks screen, select the external network and click on Add. On this same screen you will see the Use default gateway for DNS Relay checkbox. Select it and click on Next, as shown in the following screenshot:
8. Select the default value on the Configure IP Settings page and click on Next.
9. Specify a name for this Edge gateway and click on Next.
10. Review the information and click on Finish.

Let's look at an alternative way to configure this, assuming you already have an Edge gateway and are trying to configure DNS relay. Execute the following steps:

1. Open the vCloud Director URL in a supported browser, for example, https://serverFQDN/cloud.
2. Log in to the cloud as the administrator. You will be presented with the Home screen.
3. On the Home screen, click on Edge Gateways.
4. Select an appropriate Edge gateway, right-click, and select Properties, as shown in the following screenshot:
5. Click on the Configure External Networks tab.
6. Scroll down and select the Use default gateway for DNS Relay checkbox, as shown in the following screenshot:
7. Click on OK.

In this section, we learned to configure DNS relay. In the next section, we discuss the configuration of a DHCP service in vCloud Director.

DHCP services in vCloud Director

vShield Edge devices support IP address pooling using the DHCP service. The vShield Edge DHCP service listens on the vShield Edge internal interface for DHCP discovery. It uses the internal interface's IP address on vShield Edge as the default gateway address for all clients, and the broadcast and subnet mask values of the internal interface for the container network. However, when you translate this to vCloud, not all types of networks support DHCP. That said, the direct connect network does not support DHCP; only routed and isolated networks support the vCNS DHCP service. The following diagram illustrates a routed organization vCD network:

In the preceding diagram, the DHCP service provides IP addresses from the Edge gateway to the Org networks connected to it. The following diagram shows how a vApp connected to a routed external network gets a DHCP service:

The following diagram shows a vApp network with a vApp connected to it, obtaining a DHCP IP address from the vShield Edge device:

Configuring DHCP pools in vCloud Director

The following actions are required to set up Edge DHCP:

- Add DHCP IP pools
- Enable Edge DHCP services

As a prerequisite, you should know which Edge device is connected to which Org vDC network. Execute the following steps to configure a DHCP pool:

1. Open a supported browser.
2. Go to the URL of the vCD server, for example, https://serverFQDN/cloud.
3. Log in to vCD by typing an administrator user ID and password.
4. Click on the Edge Gateways link.
5. Select the appropriate gateway, right-click on it, and select Edge Gateway Services, as shown in the following screenshot:
6. The first service is DHCP, as shown in the following screenshot:
7. Click on Add.
8. From the drop-down combobox, select the network that you want the DHCP service to be applied on.
9. Specify the IP range.
10. Select Enable Pool and click on OK, as shown in the following screenshot:
11. Click on the Enable DHCP checkbox and then on OK.

In this section, we learned about the DHCP pool, its functionality, and how to configure it.

Understanding VPN tunnels in vCloud Director

It's imperative that we first understand the basics of VPN tunnels, then move on to a use case, after which we can learn to configure a VPN tunnel. A VPN tunnel is an encrypted, or more precisely, encapsulated, network path on a public network. This is often used to connect two different corporate sites via the Internet. In vCloud Director, you can connect two organizations through an external network, which can also be used by other organizations. The VPN tunnel prevents users in other organizations from being able to monitor or intercept communications. VPNs must be anchored at both ends by some kind of firewall or VPN device. In vCD, VPNs are facilitated by vShield Edge devices. When two systems are connected by a VPN tunnel, they communicate as if they were on the same network.
Let's have a look at the different types of VPN tunnels you can create in vCloud Director:

- VPN tunnels between two organization networks in the same organization
- VPN tunnels between two organization networks in two different organizations
- VPN tunnels between an organization network and a remote network outside of VMware vCloud

While only a system administrator can create an organization network, organization administrators have the ability to connect organization networks using VPN tunnels. If the VPN tunnel connects two different organizations, then the organization administrator of each organization must enable the connection. A VPN cannot be established between two different organizations without the authorization of either both organization administrators or the system administrator. It is possible to connect VPN tunnels between two different organizations in two different instances of vCloud Director. The following is a diagram of a VPN connection between two different organization networks in a single organization:

The following diagram shows a VPN tunnel between two organizations. The basic principles are exactly the same.

vCloud Director can also connect VPN tunnels to remote devices outside of vCloud. These devices must be IPSec-enabled and can be network switches, routers, firewalls, or individual computer systems. This ability to establish a VPN tunnel to a device outside of vCD can significantly increase the flexibility of vCloud communications. The following diagram illustrates a VPN tunnel to a remote network:

Configuring a virtual private network

To configure an organization-to-organization VPN tunnel in vCloud Director, execute the following steps:

1. Start a browser and enter the URL of the vCD server, for example, https://serverFQDN/cloud.
2. Log in to vCD using the administrator user ID and password.
3. Click on the Manage & Monitor tab.
4. Click on the Edge Gateways link in the panel on the left-hand side.
5. Select an appropriate gateway, right-click, and select Edge Gateway Services.
6. Click on the VPN tab.
7. Click on Configure Public IPs.
8. Specify a public IP and click on OK, as shown in the following screenshot:
9. Click on Add to add the VPN endpoint.
10. Click on Establish VPN to and specify an appropriate VPN type (in this example, it is the first option), as shown in the following screenshot:
11. If this VPN is within the same organization, select the Peer Edge Gateway option from the dropdown. Then select the local and peer networks.
12. Select the local and peer endpoints.
13. Now click on OK.
14. Click on Enable VPN and then on OK.

This section assumes that either the firewall service is disabled or the default rule is set to accept all on both sides. In this section, we learned what a VPN is and how to configure it within a vCloud Director environment. In the next section, we discuss static routing, along with various use cases and implementations.


Storage Ergonomics

Packt
07 Aug 2015
19 min read
In this article by Saurabh Grover, author of the book Designing Hyper-V Solutions, we will be discussing the last of the basics to get you equipped to create and manage a simple Hyper-V structure. No server environment, physical or virtual, is complete without a clear consideration of and consensus over the underlying storage. In this article, you will learn about the details of virtual storage, how to differentiate one type from another, and how to convert one to the other and vice versa. We will also see how Windows Server 2012 R2 removes dependencies on raw device mappings by way of pass-through or iSCSI LUNs, which were previously required for guest clustering. VHDX can now be shared and delivers better results than pass-through disks. There are more merits to VHDX than the former, as it allows you to extend its size even while the virtual machine is running.

Earlier, Windows Server 2012 added a very interesting facet for storage virtualization in Hyper-V when it introduced the virtual SAN, which adds a virtual host bus adapter (HBA) capability to a virtual machine. This allows a VM to directly view the fibre channel SAN, which in turn allows FC LUN accessibility to VMs and provides you with one more alternative for shared storage for guest clustering. Windows Server 2012 also introduced the ability to utilize the SMI-S capability, which was initially tested on System Center VMM 2012. Windows 2012 R2 carries the torch forward with the addition of new capabilities. We will discuss this feature briefly in this article.

In this article, you will cover the following:

- The two types of virtual disks, namely VHD and VHDX
- The merits of using VHDX from Windows 2012 R2 onwards
- Virtual SAN storage
- Implementing guest clustering using shared VHDX
- Getting an insight into SMI-S

(For more resources related to this topic, see here.)

Virtual storage

A virtual machine is a replica of a physical machine in all rights with respect to its building components, and regardless of the fact that it is emulated, it resembles and delivers the same performance as a physical machine. Every computer needs storage for OS and application loading, and this condition applies to virtual machines as well. If VMs are serving as independent servers for roles such as domain controller or file server, where the server needs to maintain additional storage apart from the OS, that storage can be extended for domain user access without any performance degradation.

Virtual machines can benefit from multiple forms of storage, namely VHD/VHDX, which are file-based storage; iSCSI LUNs; pass-through LUNs, which are raw device mappings; and of late, virtual-fibre-channel-assigned LUNs. There have been enhancements to each of these, and all of these options have a straightforward implementation procedure. However, before you make a selection, you should identify the use case according to your design strategy and planned expenditure. In the following section, we will look at the storage choices more closely.

VHD and VHDX

VHD is the old flag bearer for Microsoft virtualization, ever since the days of Virtual PC and Virtual Server. It was enhanced and employed in early Hyper-V releases. However, as a file-based storage that gets mounted as normal storage for a virtual machine, VHD had its limitations. VHDX, a new feature addition to Windows Server 2012, was built upon the limitations of its predecessor and provides greater storage capacity, support for large sector disks, and better protection against corruption.
In the current release of Windows Server 2012 R2, VHDX comes bundled with more ammo. VHDX packed a volley of feature enhancements when it was initially launched, and with Windows Server 2012 R2, Microsoft only made it better. If we compare the older, friendlier VHD with VHDX, we can draw the following inferences:

- Size factor: VHD had an upper size limit of 2 TB, while VHDX gives you a humongous maximum capacity of 64 TB.
- Large disk support: With the storage industry progressing from 512-byte sectors towards 4 KB sector disks, there are two offerings from the disk alignment perspective for applications that may still depend on the older sector format: native 4 KB disks and 512e (512-byte emulation) disks. The operating system, depending on whether it supports native 4 KB disks or not, will either write 4 KB chunks of data or inject 512 bytes of data into a 4 KB sector. The process of injecting 512 bytes into a 4 KB sector is called RMW, or Read-Modify-Write. VHDs are generically supported on 512e disks. Windows Server 2012 and R2 both support native 4 KB disks. However, the VHD driver has a limitation: it cannot open VHD files on physical 4 KB disks. This limitation is worked around by making the VHD 4 KB-aligned and RMW-ready, but if you are migrating from an older Hyper-V platform, you will need to convert it accordingly. VHDX, on the other hand, is the "superkid". It can be used on all disk forms, namely 512, 512e, and native 4 KB disks, without any RMW dependency.
- Data corruption safety: In the event of power outages or failures, the possibility of data corruption is reduced with VHDX. Metadata inside the VHDX is updated via a logging process that ensures that the allocations inside the VHDX are committed successfully.
- Offloaded data transfers (ODX): With Windows Server 2012 Hyper-V supporting this feature, data transfer and the moving and sizing of virtual disks can be achieved at the drop of a hat, without host server intervention. The basic prerequisite for utilizing this feature is to host the virtual machines on ODX-capable hardware; thereafter, Windows Server 2012 self-detects and enables the feature. Another important clause is that virtual disks (VHDX) should be attached to the SCSI controller, not IDE.
- TRIM/UNMAP: Termed by Microsoft in its documentation as efficiency in representing data, this feature works in tandem with thin provisioning. It adds the ability to allow the underlying storage to reclaim space and keep provisioning optimally small.
- Shared VHDX: This is the most interesting feature in the collection released with Windows Server 2012 R2. It makes guest clustering (failover clustering in virtual machines) in Hyper-V a lot simpler. With Windows Server 2012, you could set up a guest cluster using virtual fibre channel or iSCSI LUNs. However, the downside was that the LUN was exposed to the user of the virtual machine. Shared VHDX proves to be the ideal shared storage: it gives you the benefit of storage abstraction, flexibility, and faster deployment of guest clusters, and it can be stored on an SMB share or a cluster-shared volume (CSV).

Now that we know the merits of using VHDX over VHD, it is important to realize that either format can be converted into the other, and both can be used under various types of virtual disks, allowing users to decide on a trade-off between performance and space utilization.
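Converting between the two formats is a single PowerShell cmdlet on Windows Server 2012/R2; the paths here are illustrative:

# Convert a legacy VHD to the VHDX format (a new file is written; the source is left intact)
Convert-VHD -Path 'D:\VHDs\server1.vhd' -DestinationPath 'D:\VHDs\server1.vhdx'

Make sure the volume has room for both copies while the conversion runs.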
Virtual disk types

Beyond the two formats of virtual hard disks, let's talk about the different types of virtual hard disks and their utility as per the virtualization design. There are three types: dynamically expanding, fixed-size, and differencing virtual hard disks.

- Dynamically expanding: Also called a dynamic virtual hard disk, this is the default type. It gets created when you create a new VM or a new VHD/VHDX. This is Hyper-V's take on thin provisioning. The VHD/VHDX file starts off small and gradually grows up to the maximum defined size for the file, as and when chunks of data get appended or created inside the OSE (short for operating system environment) hosted by the virtual disk. This disk type is quite beneficial, as it prevents storage overhead and utilizes only as much as required, rather than committing the entire block. However, due to the nature of the virtual storage, as it grows in size, the actual file gets written in fragments across the Hyper-V CSV or LUN (physical storage). Hence, it affects the performance of the disk I/O operations of the VM.
- Fixed size: As the name indicates, this virtual disk type commits the same block size on the physical storage as its defined size. In other words, if you have specified a fixed size of 1 TB, it will create a 1 TB VHDX file on the storage. The creation of a fixed-size disk takes a considerable amount of time and commits space on the underlying storage that SAN thin provisioning cannot reclaim, somewhat like whitespace in a database. The advantage of using this type is that it delivers amazing read performance, and heavy workloads from SQL and Exchange can benefit from it.
- Differencing: This is the last of the lot, but quite handy as an option when it comes to quick deployment of virtual machines. It is by far an unsuitable option unless employed for VMs with a short lifespan, namely pooled VDI (short for virtual desktop infrastructure) or lab testing. The idea behind the design is to have a generic virtual operating system environment (VOSE) in a shut-down state at a shared location. The VHDX of the VOSE is used as a parent or root, and thereafter, multiple VMs can be spawned with differencing or child virtual disks that use the generalized OS from the parent and append changes or modifications to the child disk. The parent stays unaltered and serves as a generic image. It does not grow in size; on the contrary, the child disk keeps growing as and when data is added to the particular VM. Unless used for short-lived VMs, long-running VMs could enter an outage state or soon become performance-stricken due to the unpredictable growth pattern of a differencing disk. Hence, these should be avoided for server virtual machines without even a second thought.
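Each of the three types can be created with the New-VHD cmdlet mentioned in the next section; a quick sketch with illustrative paths and sizes:

# Dynamically expanding (the default): grows on demand up to 100 GB
New-VHD -Path 'E:\VHDs\dynamic.vhdx' -SizeBytes 100GB -Dynamic

# Fixed size: commits the full 100 GB on the physical storage up front
New-VHD -Path 'E:\VHDs\fixed.vhdx' -SizeBytes 100GB -Fixed

# Differencing: a child disk backed by a generalized parent image
New-VHD -Path 'E:\VHDs\child.vhdx' -Differencing -ParentPath 'E:\VHDs\parent.vhdx'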
Virtual disk operations

Now we will apply all of the knowledge gained about virtual hard disks and check out what actions and customizations we can perform on them.

Creating virtual hard disks

This goal can be achieved in different ways:

- You can create a new VHD when you are creating a new VM, using the New Virtual Machine Wizard. It picks VHDX as the default option.
- You can also launch the New Virtual Hard Disk Wizard from a virtual machine's settings.
- This can be achieved with the New-VHD PowerShell cmdlet as well.
- You may employ the Disk Management snap-in to create a new VHD.

The steps to create a VHD here are pretty simple:

1. In the Disk Management snap-in, select the Action menu and select Create VHD, like this:

Figure 5-1: Disk Management – Create VHD

2. This opens the Create and Attach Virtual Hard Disk applet. Specify the location to save the VHD at, and fill in Virtual hard disk format and Virtual hard disk type, as depicted in figure 5-2:

Figure 5-2: Disk Management – Create and Attach Virtual Hard Disk

The most obvious way to create a new VHD/VHDX for a VM is by launching the New Virtual Hard Disk Wizard from the Actions pane in the Hyper-V Manager console. Click on New and then select the Hard Disk option. It will take you through the following set of screens:

1. On the Before You Begin screen, click on Next, as shown in this screenshot:

Figure 5-3: New Virtual Hard Disk Wizard – Create VHD

2. The next screen is Choose Disk Format, as shown in figure 5-4. Select the relevant virtual hard disk format, namely VHD or VHDX, and click on Next.

Figure 5-4: New Virtual Hard Disk Wizard – Virtual Hard Disk Format

3. On the Choose Disk Type screen, select the relevant virtual hard disk type and click on Next, as shown in the following screenshot:

Figure 5-5: New Virtual Hard Disk Wizard – Virtual Hard Disk Type

4. The next screen, as shown in figure 5-6, is Specify Name and Location. Update the Name and Location fields for storing the virtual hard disk and click on Next.

Figure 5-6: New Virtual Hard Disk Wizard – File Location

5. The Configure Disk screen, shown in figure 5-7, is an interesting one. If need be, you can convert or copy the content of a physical storage device (local, LUN, or something else) to the new virtual hard disk. Similarly, you can copy the content from an older VHD file to the Windows Server 2012 or R2 VHDX format. Then click on Next.

Figure 5-7: New Virtual Hard Disk Wizard – Configure Disk

6. On the Summary screen, as shown in the following screenshot, click on Finish to create the virtual hard disk:

Figure 5-8: New Virtual Hard Disk Wizard – Summary
Editing virtual hard disks

There may be one or more reasons for you to feel the need to modify a previously created virtual hard disk to suit a purpose. There are many available options that you may put to use, given a particular virtual disk type. Before you edit a VHDX or VHD, it's a good practice to inspect it. The Inspect Disk option can be invoked from two locations: from the VM settings under the IDE or SCSI controller, or from the Actions pane of the Hyper-V Manager console. Also, don't forget how to do this via PowerShell:

Get-VHD -Path "E:\Hyper-V\Virtual hard disks\1.vhdx"

You may now proceed with editing a virtual disk. Again, the Edit Disk option can be invoked in exactly the same fashion as Inspect Disk. When you edit a VHDX, you are presented with four options, as shown in figure 5-9. It may sound obvious, but not all of the options apply to all the disk types:

- Compact: This operation is used to reduce or compact the size of a virtual hard disk, though the preset capacity remains the same. A dynamic disk, or differencing disk, grows as data elements are added, though deletion of content does not automatically reclaim the storage capacity. Hence, a manual compact operation becomes imperative to reduce the file size. A PowerShell cmdlet can also do this trick: Optimize-VHD
- Convert: This is an interesting one, and it almost makes you change your faith. As the name indicates, this operation allows you to convert one virtual disk type to another and vice versa. You can also create a new virtual disk of the desired format and type at your preferred location. The PowerShell construct used to achieve the same goal is: Convert-VHD
- Expand: This operation comes in handy, similar to extending a LUN. You end up increasing the size of a virtual hard disk, which happens visibly fast for a dynamic disk and a bit slower for its fixed-size cousins. After this action, you have to perform a follow-up action inside the virtual machine to increase the volume size from disk management. The PowerShell cmdlet is: Resize-VHD
- Merge: This operation applies to one disk type only, namely differencing virtual disks. It allows two different actions. You can either merge the differencing disk with the original parent, or create a new merged VHD out of all the contributing VHDs, namely the parent and the child or differencing disks. The latter is the preferred way of doing it, as in all probability there will be more than one differencing disk attached to a parent. In PowerShell, the alternative cmdlet is: Merge-VHD

Figure 5-9: Edit Virtual Hard Disk Wizard – Choose Action
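As a rough sketch of how the cmdlet equivalents line up (paths and sizes are illustrative):

# Compact a dynamic disk after deleting data inside the guest
Optimize-VHD -Path 'E:\VHDs\dynamic.vhdx' -Mode Full

# Convert a dynamic disk to a fixed-size disk (a new file is written)
Convert-VHD -Path 'E:\VHDs\dynamic.vhdx' -DestinationPath 'E:\VHDs\fixed.vhdx' -VHDType Fixed

# Expand a disk to 200 GB, then extend the volume inside the guest
Resize-VHD -Path 'E:\VHDs\dynamic.vhdx' -SizeBytes 200GB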
Pass-through disks

As the name indicates, these are physical LUNs or hard drives passed on from the Hyper-V hosts, and they can be assigned to a virtual machine as standard disks. A once popular method on older Hyper-V platforms, this allowed the VM to harness the full potential of the raw device, bypassing the Hyper-V host filesystem and also not being restricted by the 2 TB limit of VHDs. A lot has changed over the years, as Hyper-V has matured into a superior virtualization platform and introduced VHDX, which went past the size limitation and, with Windows Server 2012 R2, can be used as shared storage for Hyper-V guest clusters.

There are, however, demerits to this form of virtual storage. When you employ a pass-through disk, the virtual machine configuration file is stored separately; hence, snapshotting is not available for this setup. You are also not able to utilize the dynamic disk's or differencing disk's abilities here. Another challenge of using this form of virtual storage is that when using a VSS-based backup, the VSS writer ignores pass-through and iSCSI LUNs. Hence, a complex backup plan has to be implemented, with separate backups running within the VM and on the virtualization host.

The following steps, along with a few screenshots, show you how to set up a pass-through disk:

1. Present a LUN to the Hyper-V host.
2. Confirm the LUN in Disk Management and ensure that it stays in the Offline state and as Not Initialized.

Figure 5-10: Hyper-V Host Disk Management

3. In Hyper-V Manager, right-click on the VM you wish to assign the pass-through disk to and select Settings.

Figure 5-11: VM Settings – Pass-through disk placement

4. Select SCSI Controller (or IDE in the case of a Gen-1 VM) and then select the Physical hard disk option, as shown in the preceding screenshot. In the drop-down menu, you will see the raw device or LUN you wish to assign. Select the appropriate option and click on OK.
5. Check Disk Management within the virtual machine to confirm that the disk is visible.

Figure 5-12: VM Disk Management – Pass-through Assignment

6. Bring it online and initialize it.

Figure 5-13: VM Disk Management – Pass-through Initialization

As always, the preceding path can be chalked out with the help of a PowerShell cmdlet:

Add-VMHardDiskDrive -VMName VM5 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2 -DiskNumber 3

Virtual fibre channel

Let's move on to the next big offering in Windows Server 2012 and R2 Hyper-V Server. There was pretty much a clamor for direct FC connectivity to virtual machines, as pass-through disks were supported only via iSCSI LUNs (with some major drawbacks, and not with FC). Also, needless to say, FC is faster. Enterprises with high-performance workloads relying on the FC SAN refrained from virtualizing or migrating to the cloud. Windows Server 2012 introduced the virtual fibre channel SAN ability in Hyper-V, which extends HBA (short for host bus adapter) abilities to a virtual machine, granting it a WWN (short for World Wide Name) and allowing access to a fibre channel SAN over a virtual SAN.

The fundamental principle behind the virtual SAN is the same as that of the Hyper-V virtual switch: you create a virtual SAN that hooks up to the SAN fabric over the physical HBA of the Hyper-V host. The virtual machine gets a new piece of synthetic hardware for the last leg, called a virtual host bus adapter, or vHBA, which gets its own set of WWNs, namely a WWNN (node name) and a WWPN (port name). The WWN is to the FC protocol what the MAC address is to Ethernet. Once the WWNs are identified at the fabric and the virtual SAN, the storage admins can set up zoning and present the LUN to the specific virtual machine. The concept is straightforward, but there are prerequisites that you will need to ensure are in place before you can get down to the nitty-gritty of the setup:

- One or more Windows Server 2012 or R2 Hyper-V hosts.
- Hosts should have one or more FC HBAs with the latest drivers, and the HBAs should support the virtual fibre channel and NPIV. NPIV may be disabled at the HBA level (refer to the vendor documentation prior to deployment); it can be enabled using command-line utilities or GUI-based ones such as OneCommand Manager, SANsurfer, and so on.
- NPIV should be enabled on the SAN fabric or the actual ports.
- Storage arrays are transparent to NPIV, but they should support devices that present LUNs.
- Supported guest operating systems for the virtual SAN are Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2.
- The virtual fibre channel does not allow boot from SAN, unlike pass-through disks.

We are now done with the prerequisites! Now, let's look at two important aspects of SAN infrastructure, namely NPIV and MPIO.

N_Port ID virtualization (NPIV)

An ANSI T11 standard extension, this feature allows virtualization of the N_Port (WWPN) of an HBA, allowing multiple FC initiators to share a single HBA port. The concept is popular and is widely accepted and promoted by different vendors. Windows Server 2012 and R2 Hyper-V utilize this feature to the fullest, wherein each virtual machine partaking in the virtual SAN gets assigned a unique WWPN and access to the SAN over a physical HBA spawning its own N_Port. Zoning follows next, wherein the fabric can have the zone directed to the VM WWPN. This attribute leads to a very small footprint, and thereby easier management and reduced operational and capital expenditure.

Summary

It is going to be quite a realization that we have covered almost all the basic attributes and aspects required for a simple Windows Server 2012 R2 Hyper-V infrastructure setup.
If we review the contents, we will notice this: we started off in this article by understanding and defining the purpose of virtual storage, and what the available options are for storage to be used with a virtual machine. We reviewed the various virtual hard disk types and formats, and the associated operations that may be required to customize a particular type or modify it accordingly. We recounted how the VHDX format is superior to its predecessor, VHD, and which features were added in the latest Windows Server releases, namely 2012 and 2012 R2. We discussed shared VHDX and how it can be used as an alternative to the old-school iSCSI or FC LUN as shared storage for Windows guest clustering. Pass-through disks are on their way out, and we all know the reason why. The advent of the virtual fibre channel with Windows Server 2012 has opened the doors for virtualization of high-performance workloads relying heavily on FC connectivity, which until now was a single reason, and enough of a reason, to decline consolidation of these workloads.

Resources for Article:

Further resources on this subject:
- Hyper-V Basics [article]
- Getting Started with Hyper-V Architecture and Components [article]
- Hyper-V building blocks for creating your Microsoft virtualization platform [article]


Creating a sample C#.NET application

Packt
27 Jul 2012
4 min read
First, open C#.NET. Then, go to File | New Project | Windows Forms Application. Type the desired name for our project and click on the OK button.

Adding references

We need to add a reference to the System.Management.Automation.dll assembly. Adding this assembly is tricky; first, we need to copy the assembly file to our application folder using the following command:

copy %windir%\assembly\GAC_MSIL\System.Management.Automation\1.0.0.0__31bf3856ad364e35\System.Management.Automation.dll C:\Code\XA65Sample

where C:\Code\XA65Sample is the folder of our application. Then we need to add the reference to the assembly. In the Project menu, select Add Reference, click on the Browse tab, then search for and select the file System.Management.Automation.dll. After referencing the assembly, we need to add the following directive statements to our code:

using System.Management.Automation;
using System.Management.Automation.Host;
using System.Management.Automation.Runspaces;

Also, adding the following directive statements will make it easier to work with the collections returned from the commands:

using System.Collections.Generic;
using System.Collections.ObjectModel;

Creating and opening a runspace

To use the Microsoft Windows PowerShell and Citrix XenApp Commands from managed code, we must first create and open a runspace. A runspace provides a way for the application to execute pipelines programmatically. Runspaces construct a logical model of execution using pipelines that contain cmdlets, native commands, and language elements. So let's go and create a new function called ShowXAServers for the new runspace:

void ShowXAServers()

Then the following code creates a new instance of a runspace and opens it:

Runspace myRunspace = RunspaceFactory.CreateRunspace();
myRunspace.Open();

The preceding piece of code provides access only to the cmdlets that come with the default Windows PowerShell installation. To use the cmdlets included with XenApp Commands, we must call them using an instance of the RunspaceConfiguration class. The following code creates a runspace that has access to the XenApp Commands:

RunspaceConfiguration rsConfig = RunspaceConfiguration.Create();
PSSnapInException snapInException = null;
PSSnapInInfo info = rsConfig.AddPSSnapIn("Citrix.XenApp.Commands", out snapInException);
Runspace myRunSpace = RunspaceFactory.CreateRunspace(rsConfig);
myRunSpace.Open();

This code specifies that we want to use Windows PowerShell in the XenApp Commands context. This step gives us access to Windows PowerShell cmdlets and Citrix-specific cmdlets.

Running a cmdlet

Next, we need to create an instance of the Command class, using the name of the cmdlet that we want to run. The following code creates an instance of the Command class that will run the Get-XAServer cmdlet, adds the command to the Commands collection of the pipeline, and finally runs the command by calling the Pipeline.Invoke method:

Pipeline pipeLine = myRunSpace.CreatePipeline();
Command myCommand = new Command("Get-XAServer");
pipeLine.Commands.Add(myCommand);
Collection<PSObject> commandResults = pipeLine.Invoke();

Displaying results

Now we run the command Get-XAServer on the shell and get this output:

In the left-hand side column, the properties of the cmdlet are located; in this case, we are looking for the first one, ServerName, so we are going to redirect the output of the ServerName property to a ListBox. The next step will be to add ListBox and Button controls. The ListBox will show the list of XenApp servers when we click the button. Then we need to add the following code at the end of the ShowXAServers function:

foreach (PSObject cmdlet in commandResults)
{
    string cmdletName = cmdlet.Properties["ServerName"].Value.ToString();
    listBox1.Items.Add(cmdletName);
}

The full code of the sample will look like this:
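Assembled from the fragments above, it should look roughly like this; listBox1 is the ListBox control added to the form, and the using directives listed earlier are assumed:

void ShowXAServers()
{
    // Load the Citrix snap-in so the XenApp cmdlets are available
    RunspaceConfiguration rsConfig = RunspaceConfiguration.Create();
    PSSnapInException snapInException = null;
    PSSnapInInfo info = rsConfig.AddPSSnapIn("Citrix.XenApp.Commands", out snapInException);

    Runspace myRunSpace = RunspaceFactory.CreateRunspace(rsConfig);
    myRunSpace.Open();

    // Build and run the Get-XAServer pipeline
    Pipeline pipeLine = myRunSpace.CreatePipeline();
    Command myCommand = new Command("Get-XAServer");
    pipeLine.Commands.Add(myCommand);
    Collection<PSObject> commandResults = pipeLine.Invoke();

    // Push each server name into the ListBox
    foreach (PSObject cmdlet in commandResults)
    {
        string cmdletName = cmdlet.Properties["ServerName"].Value.ToString();
        listBox1.Items.Add(cmdletName);
    }

    myRunSpace.Close();
}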
Passing parameters to cmdlets

We can pass parameters to cmdlets using the Parameters.Add method. We can add multiple parameters; each parameter requires its own line. For example, we can add the ZoneName parameter to filter for the server members of the US-ZONE zone:

Command myCommand = new Command("Get-XAServer");
myCommand.Parameters.Add("ZoneName", "US-ZONE");
pipeLine.Commands.Add(myCommand);

Summary

In this article, we have learned about managing XenApp with Windows PowerShell and developed a sample .NET application in C#. Specifically, we saw:

How to list all XenApp servers by using the Citrix XenApp Commands
How to add a reference to the System.Management.Automation.dll assembly
How to create and open a runspace, which lets us execute pipelines programmatically
How to create an instance of the Command class using the name of a cmdlet
How to pass parameters to cmdlets

Further resources on this subject:

Designing a XenApp 6 Farm [Article]
Getting Started with XenApp 6 [Article]
Microsoft Forefront UAG Building Blocks [Article]

Getting Started with Hyper-V Architecture and Components

Packt
04 Jun 2015
In this article by Vinícius R. Apolinário, author of the book Learning Hyper-V, we will cover the following topics:

Hypervisor architecture
Type 1 and Type 2 Hypervisors
Microkernel and Monolithic Type 1 Hypervisors
Hyper-V requirements and processor features
Memory configuration
Non-Uniform Memory Access (NUMA) architecture

(For more resources related to this topic, see here.)

Hypervisor architecture

If you've used Microsoft Virtual Server or Virtual PC, and then moved to Hyper-V, I'm almost sure that your first impression was: "Wow, this is much faster than Virtual Server". You are right, and there is a reason why Hyper-V performance is much better than Virtual Server or Virtual PC. It's all about the architecture.

There are two types of Hypervisor architectures: Hypervisor Type 1, like Hyper-V and ESXi from VMware, and Hypervisor Type 2, like Virtual Server, Virtual PC, VMware Workstation, and others. The objective of the Hypervisor is to execute, manage, and control the operation of the VM on a given hardware. For that reason, the Hypervisor is also called the Virtual Machine Monitor (VMM). The main difference between these Hypervisor types is the way they operate on the host machine and its operating system. As Hyper-V is a Type 1 Hypervisor, we will cover Type 2 first, so we can detail Type 1 and its benefits later.

Type 1 and Type 2 Hypervisors

Hypervisor Type 2, also known as hosted, is an implementation of the Hypervisor over and above the OS installed on the host machine. The OS imposes some limitations on the Hypervisor, and these limitations are reflected in the performance of the VM. To understand that, let me explain how a process is placed on the processor: the processor has what we call rings, on which processes are placed based on prioritization. The main rings are 0 and 3. Kernel processes are placed on Ring 0, as they are vital to the OS. Application processes are placed on Ring 3 and, as a result, have less priority when compared to Ring 0. The issue with Type 2 Hypervisors is that the Hypervisor is treated as an application and runs on Ring 3, so it has an additional layer to go through to access the hardware.

Now, let's compare it with Hypervisor Type 1. The impact is immediate: a Type 1 Hypervisor has total control of the underlying hardware. In fact, when you enable Virtualization Assistance (hardware-assisted virtualization) in the server BIOS, you are enabling what we call Ring -1, or ring decompression, on the processor, and the Hypervisor will run on this ring.

The question you might have is: "And what about the host OS?" If you install the Hyper-V role on a Windows Server for the first time, you may note that after installation the server will restart. But if you're really paying attention, you will note that the server actually reboots twice. This behavior is expected, and the reason is that the OS is not only installing and enabling the Hyper-V bits, but also changing its architecture to the Type 1 Hypervisor model. In this mode, the host OS operates the same way a VM does, on top of the Hypervisor, but in what we call the parent partition. The parent partition plays a key role as the boot partition and in supporting the child partitions, or guest OSes, where the VMs are running. The main reason for this partition model is the key attribute of a Hypervisor: isolation.
For Microsoft Hyper-V Server you don't have to install the Hyper-V role, as it is installed when you install the OS, so you won't see the server booting twice.

With isolation, you can ensure that a given VM will never have access to another VM. That means that if you have a compromised VM, that VM will never infect another VM or the host OS. The only way a VM can access another VM is through the network, like all other devices in your network. Actually, the same is true for the host OS. This is one of the reasons why you need an antivirus for the host and the VMs, but this will be discussed later. The major difference between Type 1 and Type 2 now is that kernel processes from both the host OS and the VM OS run on Ring 0, while application processes from both the host OS and the VM OS run on Ring 3. However, there is one piece left. The question now is: "What about device drivers?"

Microkernel and Monolithic Type 1 Hypervisors

Have you tried to install Hyper-V on a laptop? What about an all-in-one device? A PC? A server? An x64-based tablet? They all worked, right? And they're supposed to work. As Hyper-V is a Microkernel Type 1 Hypervisor, all the device drivers are hosted in the parent partition.

A Monolithic Type 1 Hypervisor hosts its drivers on the Hypervisor itself. VMware ESXi works this way. That's why you should never use standard ESXi media to install an ESXi host; the hardware manufacturer will provide you with appropriate media containing the correct drivers for the specific hardware. The main advantage of the Monolithic Type 1 Hypervisor is that, as it always has the correct driver installed, you will never have a performance issue due to an incorrect driver. On the other hand, you won't be able to install it on just any device.

The Microkernel Type 1 Hypervisor, on the other hand, hosts its drivers in the parent partition. That means that if you installed the host OS on a device and the drivers are working, the Hypervisor (in this case, Hyper-V) will work just fine. There are other hardware requirements; these will be discussed later in this article. The other side of this is that if you use a generic driver, or a wrong version of it, you may have performance issues, or even driver malfunction. What you have to keep in mind here is that Microsoft does not certify drivers for Hyper-V. Device drivers are always certified for Windows Server; if a driver is certified for Windows Server, it is also certified for Hyper-V. But you always have to ensure the use of the correct driver for a given hardware.

Let's take a better look at how Hyper-V works as a Microkernel Type 1 Hypervisor. There are multiple components that ensure the VM runs properly, but the major component is the Integration Components (IC), also called Integration Services. The IC is a set of tools that you should install or upgrade on the VM, so that the VM OS is able to detect the virtualization stack and run as a regular OS on the given hardware.

To understand this more clearly, let's see how an application accesses the hardware and what processes are behind it. When the application sends a request to the hardware, the kernel is responsible for interpreting this call. As the OS is running in an Enlightened Child Partition (meaning that the IC is installed), the kernel sends this call to the Virtual Service Client (VSC), which operates as a synthetic device driver.
The VSC is responsible for communicating with the Virtual Service Provider (VSP) in the parent partition, through the VMBus, so the VSC can use the hardware resource. The VMBus, a channel-based communication mechanism, is responsible for the communication between the child partitions, the parent partition, and the hardware. To access the hardware, the VMBus communicates with a component on the Hypervisor called hypercalls, and these hypercalls are then redirected to the hardware. However, only the parent partition can actually access the physical processor and memory; the child partitions access a virtual view of these components that is translated between the guest and host partitions.

New processors have a feature called Second Level Address Translation (SLAT), or Nested Paging. This feature is extremely important for high-performance VMs and hosts, as it helps reduce the overhead of virtual-to-physical memory and processor translation. On Windows 8, SLAT is a requirement for Hyper-V.

It is important to note that Enlightened Child Partitions, or partitions with the IC, can run a Windows or Linux OS. If the child partition runs a Linux OS, the component is called Linux Integration Services (LIS), but the operation is the same. Another important fact regarding the ICs is that they are already present on Windows Server 2008 or later. But if you are running a newer version of Hyper-V, you have to upgrade the IC version on the VM OS. For example, if you are running Hyper-V 2012 R2 on the host and the guest OS is Windows Server 2012 R2, you probably don't have to worry about it. But if you are running Hyper-V 2012 R2 on the host and the guest OS is Windows Server 2012, then you have to upgrade the IC on the VM to match the parent partition version. Running a Windows Server 2012 R2 guest OS on a VM on top of Hyper-V 2012 is not recommended.

For Linux guest OSes, the process is the same. Linux kernel version 3 or later already includes LIS. If you are running an older version of Linux, you should verify the correct LIS version for your OS. To confirm the Linux and LIS versions, you can refer to the article at http://technet.microsoft.com/library/dn531030.aspx.

Another situation is when the guest OS does not support the IC or LIS: an Unenlightened Child Partition. In this case, the guest OS and its kernel will not run as an Enlightened Child Partition. As the VMBus is not present, hardware access is performed by emulation and performance is degraded. This only happens with old versions of Windows and Linux, like Windows 2000 Server, Windows NT, and CentOS 5.8 or earlier, or in cases where the guest OS does not have or support the IC.

Now that you understand how the Hyper-V architecture works, you may be thinking: "Okay, so for all of this to work, what are the requirements?"

Hyper-V requirements and processor features

At this point, you can see that there is a lot of effort involved in putting all of this to work. In fact, this architecture is only possible because hardware and software companies worked together in the past. The main goal of both types of companies was to enable virtualization of operating systems without changing them. Intel and AMD created, each one with its own implementation, a processor feature called virtualization assistance so that the Hypervisor could run on Ring -1, as explained before. But this is just the first requirement.
There are other requirements as well, which are as follows:

Virtualization assistance (also known as hardware-assisted virtualization): This feature was created to remove the necessity of changing the OS in order to virtualize it. On Intel processors, it is known as Intel VT-x. All recent processor families support this feature, including Core i3, Core i5, and Core i7. The complete list of processors and features can be found at http://ark.intel.com/Products/VirtualizationTechnology. You can also use Intel's tool to check if your processor meets this requirement, downloadable at https://downloadcenter.intel.com/Detail_Desc.aspx?ProductID=1881&DwnldID=7838. On AMD processors, this technology is known as AMD-V. Like Intel, all recent processor families support this feature. AMD provides a tool to check processor compatibility that can be downloaded at http://www.amd.com/en-us/innovations/software-technologies/server-solution/virtualization.

Data Execution Prevention (DEP): This is a security feature that marks memory pages as either executable or nonexecutable. For Hyper-V to run, this option must be enabled in the system BIOS. On Intel processors, this feature is called the Execute Disable bit (Intel XD bit); on AMD processors, the No Execute bit (AMD NX bit). The configuration varies from one system BIOS to another, so check with your hardware vendor how to enable it.

x64 (64-bit) based processor: This processor feature uses 64-bit memory addresses. Although you may find that all new processors are x64, you might want to check that this is true before starting your implementation. The compatibility checkers above, from Intel and AMD, will show you whether your processor is x64.

Second Level Address Translation (SLAT): As discussed before, SLAT is not a requirement for Hyper-V to work, but this feature provides much better VM performance, as it removes the need to translate physical and virtual pages of memory. It is highly recommended to have the SLAT feature on the processor, as it provides more performance on high-performance systems. As also discussed before, SLAT is a requirement if you want to use Hyper-V on Windows 8 or 8.1. To check if your processor has the SLAT feature, use the Sysinternals tool Coreinfo, which can be downloaded at http://technet.microsoft.com/en-us/sysinternals/cc835722.aspx.
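If you prefer to script these checks before visiting each BIOS, a minimal sketch using WMI is shown below. Note the assumptions: the Win32_Processor properties VMMonitorModeExtensions, VirtualizationFirmwareEnabled, and SecondLevelAddressTranslationExtensions are only populated on Windows 8/Server 2012 and later, and they can be masked once a hypervisor is already running, so treat the vendor tools listed above as authoritative:

// Sketch: query the processor features Hyper-V needs via WMI.
// Requires a reference to System.Management.dll.
using System;
using System.Management;

class HyperVRequirementsCheck
{
    static void Main()
    {
        using (var searcher = new ManagementObjectSearcher(
            "SELECT AddressWidth, VMMonitorModeExtensions, " +
            "VirtualizationFirmwareEnabled, SecondLevelAddressTranslationExtensions " +
            "FROM Win32_Processor"))
        {
            foreach (ManagementObject cpu in searcher.Get())
            {
                Console.WriteLine("x64 processor:                  {0}",
                    (ushort)cpu["AddressWidth"] == 64);
                Console.WriteLine("VT-x/AMD-V present:             {0}",
                    cpu["VMMonitorModeExtensions"]);
                Console.WriteLine("Virtualization enabled in BIOS: {0}",
                    cpu["VirtualizationFirmwareEnabled"]);
                Console.WriteLine("SLAT:                           {0}",
                    cpu["SecondLevelAddressTranslationExtensions"]);
            }
        }

        // DEP is reported by the operating system, not by the processor class
        using (var os = new ManagementObjectSearcher(
            "SELECT DataExecutionPrevention_Available FROM Win32_OperatingSystem"))
        {
            foreach (ManagementObject o in os.Get())
                Console.WriteLine("DEP available:                  {0}",
                    o["DataExecutionPrevention_Available"]);
        }
    }
}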
There are some specific processor features that are not used exclusively for virtualization, but when a VM is started it will use these specific features of the processor. If the VM is started and these features are allocated to the guest OS, you can't simply remove them. This is a problem if you are going to Live Migrate this VM from one host to another: if these specific features are not available on the destination, you won't be able to perform the operation. At this moment, you have to understand that Live Migration moves a powered-on VM from one host to another. If you try to Live Migrate a VM between hosts with different processor types, you may be presented with an error. Live Migration is only permitted between processors of the same vendor: Intel-Intel or AMD-AMD. Intel-AMD Live Migration is not allowed under any circumstance. If the processor is the same on both hosts, Live Migration and Shared Nothing Live Migration will work without problems.

But even within the same vendor, there can be different processor families. In this case, you can remove these specific features from the virtual processor presented to the VM. To do that, open Hyper-V Manager | Settings... | Processor | Processor Compatibility and mark the Migrate to a physical computer with a different processor version option. This option is only available when the VM is powered off. Keep in mind that enabling this option removes processor-specific features from the VM; if you are going to run an application that requires these features, they will not be available and the application may not run.

Now that you have checked all the requirements, you can start planning your server for virtualization with Hyper-V, in the sense that you understand how Hyper-V works and what the requirements are for it to work. But there is another important subject that you should pay attention to when planning your server: memory.

Memory configuration

I believe you have heard this one before: "The application server is underperforming." In the virtualization world, there is an obvious answer to it: give more virtual hardware to the VM. Although this seems to be the logical solution, the real effect can be the total opposite.

During the early days, when servers had just a few sockets, processors, and cores, a single channel handled the communication between logical processors and memory. But server hardware has evolved, and today we have servers with 256 logical processors and 4 TB of RAM. To provide better communication between these components, a new design emerged: modern servers with multiple logical processors and high amounts of memory use a design called Non-Uniform Memory Access (NUMA) architecture.

Non-Uniform Memory Access (NUMA) architecture

NUMA is a memory design that consists of allocating memory to a given node, or a cluster of memory and logical processors. Accessing memory from a processor inside the node is notably faster than accessing memory from another node; if a processor has to access memory from another node, the performance of the process performing the operation will be affected. Basically, to avoid this penalty, you have to ensure that the process inside the guest VM is aware of the NUMA node and is able to use the best available option.

When you create a virtual machine, you decide how many virtual processors and how much virtual RAM it will have. Usually, you assign the amount of RAM that the application will need to run and meet the expected performance. For example, you may ask a software vendor about the application requirements, and the vendor may say that the application will use at least 8 GB of RAM. Suppose you have a server with 16 GB of RAM. What you don't know is that this server has four NUMA nodes. To know how much memory each NUMA node has, you divide the total amount of RAM installed on the server by the number of NUMA nodes on the system; the result is the amount of RAM in each NUMA node. In this case, each NUMA node has a total of 4 GB of RAM.

Following the instructions of the software vendor, you create a VM with 8 GB of RAM. The Hyper-V standard configuration is to allow NUMA spanning, so you will be able to create the VM and start it; Hyper-V will accommodate 4 GB of RAM on each of two NUMA nodes. The NUMA spanning configuration means that a processor can access memory on another NUMA node. As mentioned earlier, this will have an impact on performance if the application is not aware of it. On Hyper-V prior to the 2012 version, the guest OS was not informed about the NUMA configuration.
Basically, in this case, the guest OS would see one NUMA node with 8 GB of RAM, and the allocation of memory would be made without NUMA restrictions, impacting the final performance of the application. Hyper-V 2012 and 2012 R2 address this: the guest OS sees the virtual NUMA (vNUMA) topology presented to the child partition. With this feature, the guest OS and/or the application can make a better choice about where to allocate memory for each process running on the VM.

NUMA is not a virtualization technology. In fact, it has been used for a long time, and applications like SQL Server 2005 already used NUMA to better allocate the memory their processes are using.

Prior to Hyper-V 2012, if you wanted to avoid this behavior, you had two choices:

Create the VM and allocate at most the vRAM of a single NUMA node to it, as Hyper-V will always try to allocate the memory inside a single NUMA node. In the above case, the VM should not have more than 4 GB of vRAM. For this configuration to really work, you should also follow the next choice.

Disable NUMA spanning on Hyper-V. With this configuration disabled, you will not be able to run a VM if its memory configuration exceeds a single NUMA node. To do this, clear the Allow virtual machines to span physical NUMA nodes checkbox under Hyper-V Manager | Hyper-V Settings... | NUMA Spanning. Keep in mind that disabling this option will prevent a VM from running if no single node can satisfy it.

You should also remember that even with Hyper-V 2012, if you create a VM with 8 GB of RAM spanning two NUMA nodes, the application on top of the guest OS (and the guest OS itself) must understand the NUMA topology. If the application and/or guest OS are not NUMA aware, vNUMA will have no effect and the application can still have performance issues.

At this point you are probably asking yourself: "How do I know how many NUMA nodes I have on my server?" This was harder to find in previous versions of Windows Server and Hyper-V Server. In versions prior to 2012, you would open Performance Monitor and check the available counters under Hyper-V VM Vid NUMA Node; the number of instances represents the number of NUMA nodes.

In Hyper-V 2012, you can check the settings of any VM: under the Processor tab, there is a new section for NUMA. In Configuration, you can easily confirm how many NUMA nodes the host running the VM has. In the case above, the server has only one NUMA node, meaning all memory will be allocated close to the processor. Multiple NUMA nodes are usually present on servers with a high number of logical processors and a large amount of memory.

In the NUMA topology section, you can ensure that the VM will always run with the specified configuration. This is available because of a new Hyper-V 2012 feature called Shared Nothing Live Migration, which will be explained in detail later. This feature allows you to move a VM from one host to another without turning the VM off, with no cluster and no shared storage. As you can move the VM while it is powered on, you might want to force the processor and memory configuration, based on the hardware of your least capable server, ensuring that your VM will always meet your performance expectations. The Use Hardware Topology button re-applies the hardware topology in case you moved the VM to another host, or in case you changed the configuration and want to return to the default configuration.
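To put the per-node arithmetic above into code, the following sketch asks Windows for the highest NUMA node number through the kernel32 GetNumaHighestNodeNumber API and divides the total physical RAM by the node count. It is illustrative only; real servers do not always split memory evenly across nodes, so Performance Monitor or the VM settings screen remain the authoritative sources:

// Sketch: count NUMA nodes and estimate per-node memory.
using System;
using System.Management;
using System.Runtime.InteropServices;

class NumaNodes
{
    // Native API that reports the highest NUMA node number (nodes start at 0)
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetNumaHighestNodeNumber(out uint highestNodeNumber);

    static void Main()
    {
        uint highest;
        GetNumaHighestNodeNumber(out highest);
        uint nodeCount = highest + 1;

        // Total physical memory, read via WMI
        ulong totalBytes = 0;
        using (var searcher = new ManagementObjectSearcher(
            "SELECT TotalPhysicalMemory FROM Win32_ComputerSystem"))
        {
            foreach (ManagementObject cs in searcher.Get())
                totalBytes = (ulong)cs["TotalPhysicalMemory"];
        }

        double totalGb = totalBytes / 1024.0 / 1024 / 1024;
        Console.WriteLine("NUMA nodes:           {0}", nodeCount);
        Console.WriteLine("Total RAM:            {0:F1} GB", totalGb);
        Console.WriteLine("Approx. RAM per node: {0:F1} GB", totalGb / nodeCount);
        // A VM sized at or below the per-node figure can usually be
        // satisfied from a single NUMA node, as discussed above.
    }
}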
To summarize: if you want to make sure that your VM will not have performance problems, check how many NUMA nodes your server has and divide the total amount of memory by that number; the result is the total memory on each node. Creating a VM with more memory than a single node will make Hyper-V present a vNUMA topology to the guest OS. Ensuring that the guest OS and applications are NUMA aware is also important, so that they can use this information to allocate memory for a process on the correct node.

NUMA awareness helps you avoid problems caused by host configuration and VM misconfiguration. But in some cases, even with a carefully planned VM size, there will come a moment when the VM's memory is stressed. In these cases, Hyper-V can help with another feature called Dynamic Memory.

Summary

In this article, we learned about the Hypervisor architecture and the different Hypervisor types, and briefly explored Microkernel and Monolithic Type 1 Hypervisors. In addition, this article explained the Hyper-V requirements and processor features, memory configuration, and the NUMA architecture.

Further resources on this subject:

Planning a Compliance Program in Microsoft System Center 2012 [Article]
So, what is Microsoft © Hyper-V server 2008 R2? [Article]
Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [Article]
Designing and Building a vRealize Automation 6.2 Infrastructure

Packt
29 Sep 2015
In this article by J. Powell, the author of the book Mastering vRealize Automation 6.2, we put together a design for vRealize Automation 6.2 and build it out from POC to production. With the knowledge gained from this article, you should feel comfortable installing and configuring vRA. We will be covering the following topics:

Proving the technology
Proof of Concept
Proof of Technology
Pilot
Designing the vRealize Automation architecture

(For more resources related to this topic, see here.)

Proving the technology

In this section, we are going to discuss how to approach a vRealize Automation 6.2 project. This is a necessary step to assure a successful project, and it is particularly necessary with vRA, due to the sheer number of moving parts that make up the software. We are going to focus on the end users, whether they are individuals or business units, such as your company's software development department. These are the people that will be using vRA to provide the speed and agility necessary to deliver results that drive the business and make money. If we take this approach and treat our co-workers as customers, we can give them what they need to perform their jobs, as opposed to what we perceive they need from an IT perspective.

Designing our vRA deployment around user and business requirements first gives us a better plan for implementing the backend infrastructure as well as the service offerings within the vRA web portal. This allows us to build a business case for vRealize Automation and will help determine which of the three editions makes sense to meet these needs. Once we have our business case created, validated, and approved, we can start testing vRealize Automation. There are three common phases in a testing cycle:

Proof of Concept
Proof of Technology
Pilot implementation

We will cover these phases in the following sections and explore whether you need them for your vRealize Automation 6.2 deployment.

Proof of Concept

A POC is typically an abbreviated version of what you hope to achieve in production. It is normally spun up in a lab, using old hardware, with a limited number of test users. Once your POC is set up, one of two things happens. The first possibility is that nothing happens, and the POC gets decommissioned. After all, it's just the IT department getting their hands dirty with new technology. This also occurs when there is no clear business driver providing a reason to have the technology in a production environment. The second possibility is that the technology is proven and moves into a pilot phase. Of course, this is completely up to you. Perhaps a demonstration of the technology will be enough, or testing some limited outcomes in VMware's Hands-on Labs (HOL) for vRealize Automation 6.2 will do the trick. Due to the number of components and features within vRA, it is strongly recommended that you create a POC, documenting the process along the way. This will give you a strong base if you take the project from POC to production.

Proof of Technology

The objective of a POT project is to determine whether the proposed solution or technology will integrate into your existing IT landscape and add value. This is the stage where it is important to document any technical issues you encounter in your individual environment. There is no need to involve pilot users in this process, as it is specifically meant to validate the technical merits of the software.
Pilot implementation

A pilot is a small-scale, targeted rollout of the technology in a production environment. Its scope is limited, typically by the number of users and systems, to allow testing and to make sure the technology works as expected and designed. It also limits the business risk.

A pilot deployment in terms of vRA is also a way to gain feedback from the users who will ultimately use it on a regular basis. vRealize Automation 6.2 is a product that empowers end users to provision everything as a service. How the users feel about the layout of the web portal, the user experience, and the automated feedback from the system directly impacts how well the product will work in a full-blown production scenario. This also gives you time to make any necessary modifications to the vRA environment before providing access to additional users.

When designing the pilot infrastructure, you should use the same hardware that will be used in production. This includes ESXi hosts, storage, Fibre Channel or iSCSI (Internet Small Computer System Interface) connectivity, and vCenter versions. This takes into account any variances between platforms and configurations that could affect performance. Even at this stage, design, attention to detail, and following VMware best practices are key. Often, pilot programs get rolled straight into production, and adhering to these concepts will put you on the right path to a successful deployment. To get a better understanding, let's look at some of the design elements that should be considered:

Size of the deployment: A small deployment will support 10,000 managed machines, 500 catalog items, and 10 concurrent deployments.
Concurrent provisioning: Only two concurrent provisions per endpoint are allowed by default. You may want to increase this limit to suit your requirements.
Hardware sizing: This refers to the number of servers, the CPU, and the memory.
Scale: This refers to whether there will be multiple Identity and vRealize Automation vApps.
Storage: This refers to pools of storage from Storage Area Network (SAN) or Network Attached Storage (NAS) devices, and tiers of storage for performance requirements.
Network: This refers to LANs, load balancing, internal versus external access to web portals, and IP pools for use with the infrastructure provisioned through vRA.
Firewall: This refers to knowing which ports need to be opened between the various components that make up vRA, as well as the other endpoints that may fall under vRA's purview.
Portal layout: This refers to the items you want to provide to the end user and the manner in which you categorize them for future growth.
IT Business Management Suite Standard Edition: If you are going to implement this product, it can scale up to 20,000 VMs across four vCenter servers.
Certificates: Appliances can use self-signed certificates, but it is recommended to use an internal Certificate Authority for vRA components, and an externally signed certificate for the vRA web portal if it is going to be exposed to the public Internet.

VMware has published a Technical White Paper that covers all the details and considerations when deploying vRA. You can download the paper by visiting http://www.vmware.com/files/pdf/products/vCloud/VMware-vCloud-Automation-Center-61-Reference-Architecture.pdf.

VMware provides the following general recommendation when deploying vRealize Automation: keep all vRA components in the same time zone with their clocks synced.
If you plan on using VMware IT Business Management Suite Standard Edition, deploy it in the same LAN as vCenter. You can deploy Worker DEMs and proxy agents over the WAN, but all other components should not communicate over the WAN, so as to prevent performance degradation.

Designing the vRealize Automation architecture

We have discussed the components that comprise vRealize Automation as well as some key design elements. Now, let's look at some deployment scenarios at a high level. Keep in mind that vRA is designed to manage tens of thousands of VMs in an infrastructure; depending on your environment, you may never exceed the limits of what VMware considers a small deployment. A medium deployment can support up to 30,000 managed machines, 1,000 catalog items, and 50 concurrent deployments. Large deployments support 50,000 managed machines, 2,500 catalog items, and 100 concurrent deployments.

Design considerations

Now that we understand the design elements for small, medium, and large infrastructures, let's explore the components of vRA and build an example design based on the small infrastructure requirements from VMware. Since there are so many options and components, we have broken them down into easily digestible pieces.

Naming conventions

It is important to give some thought to naming conventions for the different aspects of the vRA web portal. Your company has probably set a naming convention for servers and environments, and we will have to make sure items provisioned from vRA adhere to those standards. It is also important to name the different components of vRealize Automation in a way that makes sense for your end goal, because it is not easy (and in some cases not possible) to rename the elements of the vRA web portal once you have implemented them.

Compute resources

Compute resources in terms of vRA refers to an object that represents a host, host cluster, virtual data center, or a public Cloud region, such as Amazon, where machines and applications can be provisioned. For example, compute resources can refer to vCenter, Hyper-V, or Amazon AWS. This list grows with each subsequent release of vRA.

Business and Fabric groups

A Business group in the vRA web portal is a set of services and resources assigned to a set of users. Quite simply, it is a way to align a business department or unit with the resources it needs. For example, you may have a Business group named Software Developers, whose members should be able to provision SQL 2012 and 2014 on Windows 2012 R2 servers.

Fabric groups enable IT administrators to provide resources from your infrastructure. You can add users or groups to a Fabric group in order to manage the infrastructure resources you have assigned. For example, if you have a software development cluster in vCenter, you could create a Fabric group that contains the users responsible for the management of this cluster to oversee the cluster resources.

Endpoints and credentials

Endpoints can represent anything from vCenter, to storage, physical servers, and public Cloud offerings, such as Amazon AWS.
The definition of an endpoint includes the platform address (as accessed through a web browser) along with the credentials needed to manage it.

Reservations

Reservations are how we provide a portion of our total infrastructure for consumption by end users. They are a key design element in the vRealize Automation 6.2 infrastructure design. Each reservation created will need to define the disk, memory, networking, and priority. The lower the number, the higher the priority; this resolves conflicts in case there are multiple matching reservations. If the priorities of multiple matching reservations are equal, vRA will choose a reservation in round-robin order.

Picture the total infrastructure as Shared Infrastructure composed of Private Physical and Private Virtual space, as well as a portion of a Public Cloud offering. By creating different reservations, we can ensure that there is enough infrastructure for the business, while providing a dedicated portion of the total infrastructure to our end users.

Reservation policies

A reservation policy is a set of reservations that you can select from a blueprint to restrict provisioning to specific reservations only. Reservation policies are then attached to a reservation. An example use of reservation policies is creating different storage policies: you can create separate Bronze, Silver, and Gold policies to reflect the types of disk available on your SAN (such as SATA, SAS, and SSD).
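To make the selection rules concrete (the lowest priority number wins, and ties rotate round-robin), here is a small illustrative model. This is not vRA code or its API, just a sketch of the behavior described above:

// Illustrative model of reservation selection: lowest priority number
// wins; ties between matching reservations rotate round-robin.
using System;
using System.Collections.Generic;
using System.Linq;

class Reservation
{
    public string Name;
    public int Priority;      // lower number = higher priority
    public int MemoryFreeGb;  // simplified capacity check
}

class ReservationSelector
{
    private int _rrIndex;     // round-robin cursor for tie-breaking

    public Reservation Select(List<Reservation> matching, int requestedGb)
    {
        // Keep only reservations that can satisfy the request
        var candidates = matching
            .Where(r => r.MemoryFreeGb >= requestedGb)
            .ToList();
        if (candidates.Count == 0) return null;

        // Highest priority = smallest priority number
        int best = candidates.Min(r => r.Priority);
        var top = candidates.Where(r => r.Priority == best).ToList();

        // Equal priorities: rotate through them round-robin style
        var chosen = top[_rrIndex % top.Count];
        _rrIndex++;
        chosen.MemoryFreeGb -= requestedGb;
        return chosen;
    }
}

In this model, two reservations with priority 1 alternate requests between them, while a priority 2 reservation is used only when neither priority 1 reservation can satisfy the request.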
Network profiles

By default, vRA will assign an IP address from a DHCP server to all the machines it provisions. However, most production environments do not use DHCP for their servers, so a network profile will need to be created to allocate and assign static IPs to these servers. Network profile options consist of external, private, NAT (short for Network Address Translation), and routed. For the scope of our examples, we will focus on the external option.

Compute resources

Compute resources are tied in with Fabric groups, endpoints, storage reservation policies, and cost profiles. You must have these elements created before you can configure compute resources, although some components, such as storage and cost profiles, are optional. An example of a compute resource is a vCenter cluster; it is created automatically when you add an endpoint to the vRA web portal.

Blueprints

Blueprints are instruction sets for building virtual, physical, and Cloud-based machines, as well as vApps. Blueprints define a machine or a set of application properties, the way it is provisioned, and its policy and management settings. For an end user, a blueprint is listed as an item in the Service Catalog tab. The user can request the item, and vRA will use the blueprint to provision the user's request. Blueprints also provide a way to prompt the requesting user for additional items, such as more compute resources, application or machine names, as well as network information. Of course, this can be automated as well and will probably be the preferred method in your environment.

Blueprints also contain workflow logic. vRealize Automation contains built-in workflows for cloning, snapshots, Kickstart, ISO, SCCM, and WIM deployments. You can define a minimum and maximum for CPU, memory, and storage, which gives end users the option to customize their machines to match their individual needs. It is a best practice to define the minimum for servers with very low resources, such as 1 vCPU and 512 MB of memory. It is easy to hot add these resources if the end user needs more compute after the initial request. However, if you set the minimum resources too high in the blueprint, you cannot lower the value; you will have to create a new blueprint.

You can also define custom properties in blueprints. For example, if you want to provision a VM with a defined MAC address or without a virtual CD-ROM attached, you can do so. VMware has published a detailed guide to the Custom Properties and their values. You can find it at http://pubs.vmware.com/vra-62/topic/com.vmware.ICbase/PDF/vrealize-automation-62-custom-properties.pdf. Custom Properties are case sensitive, and it is recommended to test Custom Properties individually until you are comfortable using them. For example, a blueprint referencing an ISO workflow would fail if you added a Custom Property to remove the CD-ROM.

Users and groups

Users and groups are defined in the Administration section of the vRA web portal. This is where we assign vRA-specific roles to groups. It is worth mentioning that when you log in to the vRA web portal and click on users, the list is blank. This is because of the sheer number of users that could potentially be allowed to access the portal, which would slow the page load time. In our examples, we will focus on users and groups from our Identity Appliance that tie in to Active Directory.

Catalog management

Catalog management consists of services, catalog items, actions, and entitlements. We will discuss them in more detail in the following sections.

Services

Services are another key design element and are defined by the vRA administrators to help group subsets of your environment. For example, you may have a service defined for applications, where you would list items such as SQL and Oracle databases. You could also create a service called OperatingSystems, where you would group catalog items such as Linux and Windows. You can make these services active or inactive, and also define maintenance windows during which catalog items under a category are unavailable for provisioning.

Catalog items

Catalog items are essentially links back to blueprints. These items are tied back to a service that you previously defined and help shape the Service Catalog tab that the end user will use to provision machines and applications. You will also entitle users to use each catalog item.

Entitlements

As mentioned previously, entitlements are how we link business users and groups to services, catalog items, and actions.

Actions

Actions are a list of operations that give a user the ability to perform certain tasks with services and catalog items. There are over 30 out-of-the-box action items that come with vRA, including creating and destroying VMs, changing the lease time, and adding additional compute resources. You also have the option of creating custom actions.

Approval policies

Approval policies are sets of rules that govern the use of catalog items. They can be used in the pre- or post-configuration life cycle of an item. Let's say, as an example, we have a Red Hat Linux VM that a user can provision. We have set the minimum vCPU count to 1 but have defined a maximum of 4. We would want to notify the user's manager and the IT team when a request to provision the VM exceeds the minimum vCPU count we have defined.
We could create an approval policy to perform a pre-check to see if the user is requesting more than one vCPU. If the threshold is exceeded, an e-mail is sent out asking for approval of the additional vCPU resources. Until the request is approved, the VM will not be provisioned.

Advanced services

Advanced services is an area of the vRA web portal where we can tie in customized workflows from vRealize Orchestrator. For example, we may need to check for a file in the VM's operating system once it has been deployed, to make sure that an application has been deployed successfully or that a compliance baseline is in place. We can present vRealize Orchestrator workflows for end users to leverage in almost the same manner as we do IaaS.

Summary

In this article, we covered the design and build principles of vRealize Automation 6.2. We discussed how to prove the technology by performing due-diligence checks with the business users and creating a case to implement a POC. We detailed considerations when rolling out vRA in a pilot program and showed you how to gauge its success. Lastly, we detailed the components that comprise the design and build of vRealize Automation, while introducing additional elements.

Further resources on this subject:

vROps – Introduction and Architecture [Article]
An Overview of Horizon View Architecture and its Components [Article]
Master Virtual Desktop Image Creation [Article]

Citrix XenApp Performance Essentials

Packt
21 Aug 2013
(For more resources related to this topic, see here.)

Optimizing Session Startup

The most frequent complaint that system administrators receive from XenApp users is definitely that applications start slowly. Users rarely consider that, at least the first time they launch an application published by XenApp, an entire logon process takes place. In this article you'll learn:

Which steps form the logon process and which systems are involved
The most common causes of logon delays and how to mitigate them
The use of some advanced XenApp features, like session pre-launch

The logon process

Let's briefly review the logon process that starts when a user launches an application through the Web Interface or through a link created by the Receiver. It consists of the following steps:

Resolution: The user launches an application (A), and the Web Interface queries the Data Collector (B), which returns the least-loaded server for the requested application (C). The Web Interface generates an ICA file and passes it to the client (D).

Connection: The Citrix client running on the user's PC establishes a connection to the session-host server specified in the ICA file. In the handshake process, client and server agree on the security level and capabilities.

Remote Desktop Services (RDS) license: Windows Server validates that an RDS/Terminal Server (TS) license is available.

AD authentication: Windows Server authenticates the user against Active Directory (AD). If the authentication is successful, the server queries account details from AD, including Group Policies (GPOs) and roaming profiles.

Citrix license: XenApp validates that a Citrix license is available.

Session startup: If the user has a roaming profile, Windows downloads it from the specified location (usually a file server). Windows then applies any GPOs, and XenApp applies any Citrix policies. Windows executes applications included in the Startup menu and finally launches the requested application.

Some other steps may be necessary if other Citrix components (for example, the Citrix Access Gateway) are included in your infrastructure.

Analysing the logon process

Users perceive the overall duration of the process as the time from when they click on the icon until the application appears on their desktop. To troubleshoot slowness, a system administrator must know the duration of the individual steps.

Citrix EdgeSight

Citrix EdgeSight is a performance and availability management solution for XenApp and XenDesktop. If you own an Enterprise or Platinum XenApp license, you're entitled to install EdgeSight Basic (for Enterprise licensing) or Advanced (for Platinum licensing). You can also license it as a standalone product.

If you deployed Citrix EdgeSight in your farm, you can run the Session Startup Duration Detail report, which includes information on the duration of both server-side and client-side steps. This report is available only with EdgeSight Advanced. For each session, you can drill down the report to display information about the server-side and client-side startup processes.

The report's columns show the time (in milliseconds) spent by the startup process in the different steps. SSD is the total server-side time, while CSD is the total client-side time.
You can find a full description of the available reports and the meaning of the different acronyms in the EdgeSight Report List at http://community.citrix.com/display/edgesight/EdgeSight+5.4+Report+List.

In the preceding example, most of the time was spent in the Profile Load (PLSD) and Login Script Execution (LSESD) steps on the server, and in the Session Creation (SCCD) step on the client. EdgeSight is a very valuable tool for analyzing your farm. The available reports cover all the critical areas and give detailed information about the "hidden" work of Citrix XenApp. With the Session Startup Duration Detail report you can identify which steps cause a slow session startup. If you want to understand why a server-side step takes so much time, like the Profile Load step in the preceding example that lasted more than 15 seconds, you need a different tool.

Windows Performance Toolkit

Windows Performance Toolkit (WPT) is a tool included in the Windows ADK, freely downloadable from the Microsoft website (http://www.microsoft.com/en-us/download/details.aspx?id=30652). You need an Internet connection to install the Windows ADK. You can run the setup on a client with Internet access and configure it to download all the required components to a folder, then move the folder to your server and perform an offline installation.

WPT has two components:

Windows Performance Recorder, which is used to record all the performance data in an .etl file
Windows Performance Analyzer, a graphical program to analyze the recorded data

Run the following command from the WPT install folder to capture slow logons:

C:\WPT>xperf -on base+latency+dispatcher+NetworkTrace+Registry+FileIO -stackWalk CSwitch+ReadyThread+ThreadCreate+Profile -BufferSize 128 -start UserTrace -on "Microsoft-Windows-Shell-Core+Microsoft-Windows-Wininit+Microsoft-Windows-Folder Redirection+Microsoft-Windows-User Profiles Service+Microsoft-Windows-GroupPolicy+Microsoft-Windows-Winlogon+Microsoft-Windows-Security-Kerberos+Microsoft-Windows-User Profiles General+e5ba83f6-07d0-46b1-8bc7-7e669a1d31dc+63b530f8-29c9-4880-a5b4-b8179096e7b8+2f07e2ee-15db-40f1-90ef-9d7ba282188a" -BufferSize 1024 -MinBuffers 64 -MaxBuffers 128 -MaxFile 1024

After having recorded at least one slow logon, run the following command to stop recording and save the performance data to an .etl file:

C:\WPT>xperf -stop -stop UserTrace -d merged.etl

This command creates a file called merged.etl in the WPT install folder. You can open this file with Windows Performance Analyzer.

Windows Performance Analyzer shows a timeline with the duration of each step; for any point in time you can view the running processes, the usage of CPU and memory, the number of I/O operations, and the bytes sent or received through the network. WPT is a great tool for identifying the reason for delays in Windows; it has, however, no visibility into Citrix processes. This is why EdgeSight is still necessary for complete troubleshooting.

Common causes of logon delays

After having analyzed many logon problems, I found that the slowness is usually caused by one or more of the following:

Authentication issues
Profile issues
GPO and logon script issues

In the next sections, you'll learn how to identify these issues and how to mitigate them.
Even if you can't use the advanced tools (EdgeSight, WPT, and so on) described in the preceding sections, you can follow the guidelines and best practices in the next sections to solve or prevent most of the problems related to the logon process.

Authentication issues

During the logon process, authentication happens at multiple stages: at minimum, when a user logs on to the Web Interface and when the session-host server creates a session for launching the requested application. Citrix XenApp integrates with Active Directory, so the authentication is performed by a Domain Controller (DC) server of your domain. Slowness in the Domain Controller response, caused by an overloaded server, can slow down the entire process. Worse, if the Domain Controller is unavailable, a domain member server may try to connect for 30 seconds before timing out and choosing a different DC.

Domain member servers choose the Domain Controller for authenticating users based on their Active Directory Sites membership. If sites are not correctly configured or don't reflect the real topology of your network, a domain member server may decide to use a remote Domain Controller, across a slow WAN link, instead of a Domain Controller on the same LAN.

Profile issues

Each user has a profile, which is a collection of personal files and settings. Windows offers different types of profiles, with advantages and disadvantages, as shown in the following table:

Type        Description
Local       The profile folder is local to each server.
Roaming     The profile folder is saved on central storage (usually a file server).
Mandatory   A read-only profile is assigned to users; changes are not saved across sessions.

From the administrator's point of view, mandatory profiles are the best option because they are simple to maintain, allow fast logons, and prevent users from modifying Windows or application settings. This option, however, is not often feasible; I could use mandatory profiles only in specific cases, for example, when users have to run only a single application without the need to customize it.

Local profiles are almost never used in a XenApp environment because, even if they offer the fastest logon time, they are not consistent across servers and sessions. Furthermore, you'll end up with all your session-host servers storing local profiles for all your users, which is a waste of disk space. Finally, if you're provisioning your servers with Provisioning Server, this option is not applicable, as local profiles would be saved in the local cache, which is deleted every time the server reboots.

System administrators usually choose roaming profiles for their users. Roaming profiles indeed allow consistency across servers and sessions and preserve user settings. Roaming profiles are, however, the most significant cause of slow logons. Without continuous control, they can rapidly grow to a large size, and even a small profile with a large number of files, for example, a profile with many cookies, can cause delays.

Roaming profiles also suffer from the "last write wins" problem. In a distributed environment like a XenApp farm, it is not unlikely that users are connected to different servers at the same time. Profiles are updated when users log off, so with different sessions on different servers, some settings could be overwritten or, worse, the profile could be corrupted.
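Profile bloat is easy to spot programmatically. The following sketch walks each roaming profile folder on the profile share and reports total size and file count, since both large profiles and profiles containing many small files slow down logons. The share name \\fileserver\profiles$ is an assumption; adjust it to your environment:

// Sketch: report size and file count for each roaming profile folder.
using System;
using System.IO;

class ProfileAudit
{
    static void Main()
    {
        // Assumed share name; replace with your real profile share
        const string profileShare = @"\\fileserver\profiles$";

        foreach (string profile in Directory.GetDirectories(profileShare))
        {
            long bytes = 0;
            int files = 0;
            try
            {
                foreach (var f in new DirectoryInfo(profile)
                    .EnumerateFiles("*", SearchOption.AllDirectories))
                {
                    bytes += f.Length;
                    files++;
                }
            }
            catch (UnauthorizedAccessException)
            {
                // Skip folders the auditing account cannot read
                continue;
            }

            Console.WriteLine("{0,-30} {1,10:F1} MB {2,8} files",
                Path.GetFileName(profile), bytes / 1024.0 / 1024.0, files);
        }
    }
}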
Folder redirection

To reduce the size of roaming profiles, you can redirect most of the user folders to a different location: instead of saving files in the user's profile, you can configure Windows to save them on a file share. The advantages of using folder redirection are:

The data in the redirected folders is not included in the synchronization of the roaming profile, making the user logon and logoff processes faster
Using disk quotas and redirecting folders to different disks, you can limit how much space is taken up by single folders instead of the whole profile
The Windows Offline Files technology allows users to access their files even when no network connection is available
You can redirect some folders (for example, the Start Menu) to a read-only share, giving all your users the same content

Folder redirection is configured through group policies. For each folder, you can choose to redirect it to a fixed location (useful if you want to provide the same content to all your users), to a subfolder (named after the username) under a fixed root path, to the user's home directory, or to the local user profile location. You may also apply different redirections based on group membership and define advanced settings for the Documents folder.

In my experience, folder redirection plays a key role in speeding up the logon process. You should enable it at least for the Desktop and My Documents folders if you're using roaming profiles.

Background upload

With Windows 2008 R2, Microsoft added the ability to perform a periodic upload of the user's profile registry file (NTUSER.DAT) to the file share. Even if this option wasn't added to address the "last write wins" problem, it may help to avoid profile corruption, and Microsoft recommends enabling it through a GPO.

Citrix Profile Management

Citrix developed its own solution for managing profiles, Citrix Profile Management. You're entitled to use Citrix Profile Management if you have an active Subscription Advantage for one of the following products:

XenApp Enterprise and Platinum edition
XenDesktop Advanced, Enterprise, and Platinum edition

You need to install the software on each computer whose user profiles you want to manage; in a XenApp farm, install it on your session-host servers.

Features

Citrix Profile Management was designed specifically to solve some of the problems Windows roaming profiles introduced. Its main features are:

Support for multiple sessions, without the "last write wins" problem
Ability to manage large profiles, without the need to perform a full sync when the user logs on
Support for v1 (Windows XP/2003) and v2 (Windows Vista/Seven/2008) profiles
Ability to define inclusion/exclusion lists
Extended synchronization that can include files and folders external to the profile, to support legacy applications

Configuring

Citrix Profile Management is configured using Windows Group Policy. In the Profile Management package, downloadable from the Citrix website, you can find the administrative template (.admx) and its language file (.adml). Copy the ADMX file to C:\Windows\PolicyDefinitions and the ADML file to the matching language subfolder of C:\Windows\PolicyDefinitions (for example, on English operating systems the language folder is en-US).
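From an elevated command prompt, the copy step might look like the following. The exact file names vary with the Profile Management version, so ctxprofile5.4.1.admx is only a placeholder:

copy ctxprofile5.4.1.admx C:\Windows\PolicyDefinitions\
copy en-US\ctxprofile5.4.1.adml C:\Windows\PolicyDefinitions\en-US\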
Profiles priority order

If you deployed Citrix Profile Management, it takes precedence over any other profile assignment method. The priority order on a XenApp server is the following:

1. Citrix Profile Management
2. Remote Desktop Services profile assigned by a GPO
3. Remote Desktop Services profile assigned by a user property
4. Roaming profile assigned by a GPO
5. Roaming profile assigned by a user property

Troubleshooting

Citrix Profile Management includes logging functionality that you can enable via GPO, as shown in the following screenshot:

Enabling the logging functionality

With the Log settings setting, you can also enable verbose logging for specific events or actions. Logs are usually saved in %SystemRoot%\System32\LogFiles\UserProfileManager, but you can change the path with the Path to log file property. Profile Management's logs are also useful for troubleshooting slow logons: each step is logged with a timestamp, so by analyzing those logs you can find which steps take a long time.

GPO and logon script issues

In a Windows environment, it's common to apply settings and customizations via Group Policy Objects (GPOs) or logon scripts. Numerous GPOs and long-running scripts can significantly slow down the logon process. It's sometimes hard to find which GPOs or scripts are causing delays. A suggestion is to move the XenApp server or a test user account into a new Organizational Unit with no policies applied and blocked inheritance. Comparing the logon time in this scenario with the normal logon time can help you understand how GPOs and scripts affect the logon process. The following are some best practices when working with GPOs and logon scripts:

Reduce the number of GPOs by merging them when possible. The time Windows takes to apply 10 GPOs is much longer than the time to apply a single GPO containing all their settings.
Disable unused GPO sections. It's common to have GPOs with only computer or only user settings; explicitly disabling the unused section speeds up the time required to apply the GPO.
Use GPOs instead of logon scripts. Windows 2008 introduced Group Policy Preferences, which can perform common tasks (mapping network drives, changing registry keys, and so on) previously performed by logon scripts. The following screenshot displays configuring drive maps through GPOs.

Configuring drive maps through GPO
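To put numbers behind these checks, you can read Group Policy processing durations from the Group Policy operational event log on a session-host server. A minimal sketch; the 8000-8007 event ID range used for end-of-processing events is an assumption worth verifying on your OS version:

# List recent Group Policy processing-completion events; the message text
# includes the elapsed processing time
Get-WinEvent -LogName 'Microsoft-Windows-GroupPolicy/Operational' -MaxEvents 200 |
    Where-Object { $_.Id -ge 8000 -and $_.Id -le 8007 } |
    Select-Object TimeCreated, Id, Message |
    Format-Table -AutoSize -Wrap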
Session pre-launch, sharing, and lingering

Setting up a session is the most time-consuming task Citrix and Windows have to perform when a user requests an application. In the latest version of XenApp, Citrix added some features to anticipate the session setup (pre-launch) and to improve the reuse of the same session by different applications (sharing and lingering).

Session pre-launch

Session pre-launch is a new feature of XenApp 6.5. Instead of waiting for the user to launch an application, you can configure XenApp to set up a session as soon as the user logs on to the farm. At the moment, session pre-launch works only if the user logs on using the Receiver, not through the Web Interface. This means that when the user requests an application, a session is already loaded and all the steps of the logon process you've learned about have already taken place; the application can start without any delay. In my experience, this is a feature much appreciated by users, and I use it in production farms. Please note that if you enable session pre-launch, a license is consumed at the time the user logs on.

Configuring

A session pre-launch is based on a published application in your farm. A common mistake is thinking that when you configure a pre-launch application, Citrix actually launches that application when the user logs on. The application is instead used as a template for the session: Citrix uses some of its settings, such as users, servers/worker groups, color depth, and so on. To create a pre-launch session, right-click on the application and choose Other Tasks | Create pre-launch application, as shown in the following screenshot:

Creating pre-launch application

To avoid confusion, I suggest renaming the configured pre-launch session, removing the actual application name; for example, you can name it Pre-launch WGProd. A pre-launched session can be used to run applications that have the same settings as the application you chose when you created the session; for example, it can be used by applications that run on the same servers. If you published different groups of applications, usually assigned to different worker groups, you should create pre-launch sessions by choosing one application from each group, to be sure that all your users benefit from this feature.

Life cycle of a session

If you configured a pre-launch session, a new session is created when the Receiver passes the user credentials to the XenApp server. The server that will host the session is chosen in the usual way by the Data Collector. In Citrix AppCenter, you can identify pre-launched sessions by the value in the Application State column, as shown in the following screenshot:

Pre-launched session

Using Citrix policies, you can set the maximum time a pre-launch session is kept:

Pre-launch Disconnect Timer Interval: the time after which the pre-launch session is put into a disconnected state
Pre-launch Terminate Timer Interval: the time after which the pre-launch session is terminated
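You can also look for pre-launched sessions from PowerShell instead of AppCenter. A hedged sketch, assuming the XenApp 6.5 snap-in is installed on the server; the property that carries the application state is an assumption, so inspect one session object first:

Add-PSSnapin Citrix.XenApp.Commands

# Inspect the properties available on a session object
Get-XASession | Select-Object -First 1 | Format-List *

# Then filter on the property that matches AppCenter's Application State column
# (the name 'ApplicationState' is assumed and may differ in your version)
Get-XASession | Where-Object { $_.ApplicationState -like '*PreLaunch*' } |
    Select-Object AccountName, ServerName, State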
Session sharing

Session sharing occurs when a user has an open session on a server and launches another application that is published on the same server. The launch time for the second application is quicker because Citrix doesn't need to set up a new session for it. Session sharing is enabled by default if you publish your applications in seamless window mode. In this mode, applications appear in their own windows, without a containing ICA session window, as if they were physically installed on the client. Session sharing fails if applications are published with different settings (for example, color depth, encryption, and so on), so publish your applications with the same settings where possible. Session sharing takes precedence over load balancing; the only exception is if the server reports full load. You can force XenApp to override the load check and also use fully loaded servers for session sharing; refer to CTX126839 for the required registry changes. This is, however, not a recommended configuration, as a fully loaded server can lead to poor performance.

Session lingering

If a user closes all the applications running in a session, the session is ended too. Sometimes it is useful to keep the session open to speed up the start of new applications. With XenApp 6.5, you can configure a lingering time: XenApp waits before closing a session even if all the running applications are closed.

Configuring

With Citrix user policies, you can configure two timers for session lingering:

Linger Disconnect Timer Interval: the time after which a session without applications is put into a disconnected state
Linger Terminate Timer Interval: the time after which a session without applications is terminated

If you're running an older version of XenApp, you can keep a session open even if users close all the running applications with the KeepMeLoggedIn tool; refer to CTX128579.

Summary

The optimization of the logon process can greatly improve the user experience. With EdgeSight and the Windows Performance Toolkit, you can perform a deep analysis and detect the causes of delay. If you can't use those tools, you can still implement the guidelines and best practices that will make users' logons faster. Setting up a session is a time-consuming task; with XenApp 6.5, Citrix implemented some new features to improve session management. With session pre-launch and session lingering, you can maximize the reuse of existing sessions when users request an application, speeding up launch time.

Resources for Article:

Further resources on this subject: Managing Citrix Policies [Article], Getting Started with XenApp 6 [Article], Getting Started with the Citrix Access Gateway Product Family [Article]
Design, Install, and Configure

Packt
18 Mar 2014
4 min read
(For more resources related to this topic, see here.) In this article, we will cover the following key subjects:

Horizon Workspace architecture overview
Designing a solution
Sizing guidelines
vApp deployment
Step-by-step configuration
Installing certificates
Setting up Kerberos Single Sign-On (SSO)

Reading this article will give you an introduction to the solution and provide useful reference points throughout the book that will help you install, configure, and manage a Horizon Workspace deployment. A few things are out of scope for this article, such as setting up vSphere, configuring HA, and using certificates; we will assume that the core infrastructure is already in place. We start by looking at the solution architecture and then at how to size a deployment, based on best practice and suited to the requirements of your end users. Next, we will cover the preparation steps in vCenter and then deploy the Horizon Workspace vApp. There are then two steps to installation and configuration: first we will guide you through the initial command-line-based setup and then the web-based Setup Wizard. Each section is described in easy-to-follow steps and shown in detail using actual screenshots of our lab deployment. So let's get started with the architecture overview.

The Horizon Workspace architecture

The following diagram shows a more detailed view of how the architecture fits together:

The Horizon Workspace sizing guide

We have already discussed that Horizon Workspace is made up of five virtual appliances. However, for production environments, you will need to deploy multiple instances to provide high availability, offer load balancing, and support the number of users in your environment. For a Proof of Concept (POC) or pilot deployment, this is of less importance.

Sizing the Horizon Workspace virtual appliances

The following diagram shows the maximum number of users that each appliance can accommodate. Using these maximum values, you can calculate the number of appliances that you need to deploy; a scripted version of this calculation follows below. For example, if you had 6,000 users in your environment, you would need to deploy a single connector-va appliance, three gateway-va appliances, one service-va appliance, seven data-va appliances, and a single configurator-va appliance. Please note that data-va should be sized using N+1; the first data-va appliance should never contain any user data. For high availability, you may want to use two connector-va appliances and two service-va appliances.
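If you size deployments regularly, the arithmetic is easy to script. This sketch infers per-appliance capacities from the 6,000-user example above (roughly 2,000 users per gateway-va and 1,000 users per data-va, plus one spare data-va for the N+1 rule); treat these constants as assumptions and verify them against the sizing guide for your Workspace version:

function Get-WorkspaceApplianceCount {
    param([int]$Users)

    # Capacities inferred from the worked example; verify for your version
    $gatewayCapacity = 2000
    $dataCapacity    = 1000

    [pscustomobject]@{
        Users        = $Users
        Gateway      = [math]::Ceiling($Users / $gatewayCapacity)
        Data         = [math]::Ceiling($Users / $dataCapacity) + 1  # N+1; first data-va holds no user data
        Connector    = 1
        Service      = 1
        Configurator = 1
    }
}

# Reproduces the 6,000-user example: 3 gateway-va and 7 data-va appliances
Get-WorkspaceApplianceCount -Users 6000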
Sizing for Preview services

If you plan to use a Microsoft Preview server, it needs to be sized based on the requirements shown in the following diagram:

If we use our previous example of 6,000 users, then to use Microsoft Preview you would require a total of six Microsoft Preview servers.

The Horizon Workspace database

You have a few options for the database. For a POC or pilot environment, you can use the internal database functionality. In a production deployment, you would use an external database, either VMware PostgreSQL or Oracle 11g; this allows you to have a highly available and protected database. The VMware recommendation is PostgreSQL, and the following diagram details the sizing information for the Horizon Workspace database:

External access and load balancing considerations

In a production environment, high availability, redundancy, and external access are core requirements. These need planning and configuration. For a POC or pilot deployment, this is usually not of high importance, but it is something to be aware of. To achieve high availability and redundancy, a load balancer is required in front of the gateway-va appliances and the connector-va appliances that are used for Kerberos (Windows authentication). If external access is required, you will typically also need a load balancer in the Demilitarized Zone (DMZ). This is detailed in the diagram at the beginning of this article. It is not supported to place gateway-va appliances in the DMZ. For more detailed information about load balancing, please visit the following guide: https://communities.vmware.com/docs/DOC-24577

Summary

In this article, we had an overview of the Horizon Workspace architecture and made sure that all the prerequisites are in place before we deploy the Horizon Workspace vApp. The article covers the basic sizing, configuration, and installation of Horizon Workspace 1.5.

Resources for Article:

Further resources on this subject: An Introduction to VMware Horizon Mirage [Article], Windows 8 with VMware View [Article], Cloning and Snapshots in VMware Workstation [Article]
Publishing applications

Packt
02 Aug 2013
6 min read
(For more resources related to this topic, see here.)

Step 1 – connecting with AppCenter

AppCenter is installed along with the XenApp server; however, if required, it can be installed onto a client machine. Often, AppCenter is run remotely by administrators who connect to the XenApp server's desktop. Use the following path to run AppCenter: Start | Administrative Tools | Citrix | Management Consoles | Citrix AppCenter. The first time the console is run, it is necessary to discover the farm. We can do this by choosing to add the local computer (this is usual if we are running AppCenter on the XenApp server). Ensure that we only discover XenApp, and not Single Sign-on (SSO) servers. If connecting remotely from AppCenter to XenApp, instead of discovering the local computer, we would add the IP address of any XenApp server in the farm and ensure we have connectivity on TCP port 2513.

Step 2 – publishing an application using AppCenter

When users connect to the Web Interface server, they are presented with a list of applications that they may access. If you want an application to appear in the list, it must be "published". To create published applications with AppCenter, navigate to the Application node, use the right-click menu, and select the option Publish Application. As we run through the wizard, for our example, we will choose to publish notepad.exe:

Set the application name.
Set the application type by selecting Application | Accessed from a Server | Installed application. Applications can be installed on or streamed to servers. We can see the application type page of the wizard in the following screenshot:

As the wizard continues, we choose the remaining options as follows:

Set the executable's name and location.
Set the servers that will host the application. This could be a list of server names from the farm, or the assignment could be to a group of servers known as a worker group.
Assign user access to the application by selecting, most usually, domain groups on the users page of the wizard.
Finally, we can choose to add the application to folders either on the web page or in the Start menu, depending on the type of access.

The properties that we set here are the basic properties, and once set, the application can be published immediately.

Step 3 – publishing applications using PowerShell

Besides using the AppCenter GUI, it is also possible to add and configure published applications with PowerShell. The PowerShell modules are installed along with the XenApp server but, as with AppCenter, they can be added to client machines if required. To install the snap-ins independently of XenApp, navigate on the installation DVD to the Administration\Delivery Services Console\setup folder. From the Citrix.XenApp.Commands.Client.Install_x86 folder, we can install the snap-ins for a 32-bit platform, and from Citrix.XenApp.Commands.Client.Install_x64 for the 64-bit architecture. Once we have a PowerShell prompt open, we need to load the required snap-in:

Add-PSSnapin Citrix.XenApp.Commands

With the snap-in loaded, we create the new application:

New-XAApplication -BrowserName Notepad -ApplicationType ServerInstalled -DisplayName Notepad -CommandLineExecutable "notepad.exe" -ServerNames XA1

The application can be associated with users with the following PowerShell command:

Add-XAApplicationAccount Notepad "Example\Domain Users"

Finally, the application needs to be enabled with this last line of code:

Set-XAApplication Notepad -Enabled $true
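To confirm the application was published as intended, you can read it back with the same snap-in. A quick, hedged check; the report properties shown are assumptions that may vary by version:

# Verify the published application, its servers, and its user accounts
Get-XAApplication Notepad
Get-XAApplicationReport Notepad | Format-List DisplayName, Enabled, ServerNames, Accounts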
Step 4 – publishing server desktops

If required, users can access a server desktop. For this, the application type would be set to Server Desktop. By default, only administrators are allowed access to published desktops in XenApp. For other users, this can be enabled using Citrix Group Policies, as shown in the following screenshot from a Citrix User Policy (Group Policies are covered in more detail later):

Step 5 – publishing content

Published content can be web URLs or files on network shares accessible by the client from the XenApp server. The content may be in the form of PDF documentation or access to intranet sites that the users need. We can see from the following screenshot that we can provide shortcuts to web resources, such as the licensing server web console:

Step 6 – prelaunching applications

You will soon note that launching an application for the first time takes a little time, as the client device has to establish a session on the server. This involves running login scripts, evaluating policies, and loading the user's profile. Once the session is established, other applications launch more quickly on the same server, as they share the existing session. The session exists until the user logs out of all applications. When accessing applications using the Web Interface Services Site with the Citrix Receiver, it is possible to configure session pre-launch. This is achieved by creating a pre-launch application from one of our published applications. When a user logs on to the Receiver, a session is then created for them immediately or at a scheduled time. This gives faster access to the applications, as the session pre-exists. A user license is consumed as soon as the user logs on to the Receiver, not just when they launch apps. To create a pre-launch application, right-click on an existing application and, from the context menu, select Other Tasks | Create pre-launch application, as shown in the following screenshot:

Step 7 – accessing published applications

We are now ready to test the connection, and we will connect to the Web Interface Web Site. The Web Interface Services Site is for direct connections from the Citrix Receiver; the Web Interface Web Site is used by web browsers and online plugins. The default web site will be http://<yourwebinterfaceserver>/Citrix/XenApp. You will be prompted for your username and password, and possibly your domain name. Once authenticated, you can see a list of the applications, content, and desktops associated with your login. From the following screenshot, we can see the published application, that is, Notepad:

Summary

We saw how to perform one of the core tasks of XenApp: making applications available to remote users, both with the consoles and from the command line using PowerShell.
Resources for Article:

Further resources on this subject: Getting Started with the Citrix Access Gateway Product Family [Article], Designing a XenApp 6 Farm [Article], Creating a sample C#.NET application [Article]
The XenDesktop architecture

Packt
08 Mar 2013
12 min read
(For more resources related to this topic, see here.) Virtual desktops can reside in two locations:

Hosted at the data center: In this case, you host the desktop/application at the data center and provide users with these desktops through different types of technologies (such as a remote desktop client or Citrix Receiver/XenDesktop).
Hosted at the client-side machine: In this case, users access the virtual desktop or application through a VM that is loaded at the client side through different types of client-side virtualization (level 1, level 2, or streaming).

Hosted desktops can be provided using:

VM or physical PC: In this type, users access the desktop on an OS that is installed on a dedicated VM or physical machine. You typically install the OS on the VM and clone the VM using hypervisor or storage cloning technology, or install the OS on a physical machine using automatic installation technology, enabling users to access the desktop. The problem with this method is that storage consumption is very high, since you have to load each VM/PC with the OS. Additionally, adding, removing, and managing the infrastructure is sometimes complex and time-consuming.
Streamed OS to VM or physical PC: This type is the same as the VM or physical PC type; however, the OS is not installed on the VM or the physical machine but rather streamed to them. Machines (virtual or physical) boot from the network, load the required virtual hard disk (VHD) over the network, and boot from that VHD. This has a lot of benefits, including simplicity and storage optimization, since an OS is not installed; a single VHD can support thousands of users.
Hosted shared desktop: In this model, you provide users with access to a desktop that is hosted on the server and published to users. This method is implemented with Microsoft Terminal Services, or XenApp plus Terminal Services. It provides high scalability in terms of users per server but provides limited security and limited environment isolation for the user.

Now that we have visited the different types of DV and how they are delivered and located, let's see the pros and cons of each type:

Hosted PC (not streamed). Pros: PCs are located at the DC, allowing better control; can be used in a consolidation scenario along with refurbished PCs; superior performance, since physical HW provides its resources to the OS and the OS is preinstalled. Cons: management/deployment is difficult, especially in large-scale deployment scenarios; operating system deployment and maintenance is an overwhelming process; low storage optimization and utilization, since each OS is preinstalled; high electricity consumption, since all of those PCs (normal or blade) consume electricity and HVAC resources; requires users to be connected to the DC to access the VD. Usage: suitable for refurbished PCs and reusing old PCs; suitable for task workers, call center agents, and individuals with similar workloads.

Hosted VM (not streamed). Pros: VMs are located at the DC, allowing better control; superior performance, since the OS is preinstalled, although less than that of physical PCs; a high consolidation ratio using server virtualization, providing green IT for your DC. Cons: management/deployment is difficult, especially in large-scale deployment scenarios;
low storage optimization and utilization, since each OS is preinstalled, thus requiring a storage-side optimization/de-duplication technique that might add costs with some vendors; hypervisor management could add costs with some vendors; requires users to be connected to the DC to access the VD. Usage: suitable for task workers and call center agents; some organizations use this model to give users access to server-level hardware resources, such as rendering and CAD software.

Hosted PC (streamed). Pros: PCs are located at the DC, allowing better control; can be used in a consolidation scenario along with refurbished PCs; superior performance, since physical HW provides its resources to the OS; high storage optimization and utilization, since each OS is streamed on demand. Cons: deployment is difficult, especially in large-scale deployment scenarios; high electricity consumption, since all of those PCs (normal or blade) consume electricity and HVAC resources; requires high storage performance to provide write-cache storage for the OS; requires the deployment of a streaming technology provider within the architecture; PCs must be connected to streaming servers over the network to stream the OS's VHD; requires users to be connected to the DC to access the VD. Usage: suitable for task workers and call center agents; some organizations use this model to give users access to server-level hardware resources, such as rendering and CAD software.

Hosted VM (streamed). Pros: VMs are located at the DC, allowing better control; superior performance, although less than that of physical PCs; a high consolidation ratio using server virtualization, providing green IT for your DC; high storage optimization and utilization, since each OS is streamed on demand. Cons: requires high storage performance to provide write-cache storage for the OS; requires the deployment of a streaming technology provider within the architecture; hypervisor management could add costs with some vendors; VMs must be connected to streaming servers over the network to stream the OS's VHD; requires users to be connected to the DC to access the VD. Usage: suitable for users who need a hosted, isolated environment that is not shared with others but do not need powerful resources.

Hosted shared desktops. Pros: the desktop is located in the DC, allowing better control; a high server-to-user ratio; server consolidation can be achieved by hosting the servers on top of hypervisor technology; easy to manage, deploy, and administer. Cons: different types of users require different applications, HW resources, and security, which cannot be fully achieved in HSD mode; resource isolation for users cannot be fully achieved as in other models; customization of the OS, applications, and profile management can be a little tricky; requires users to be connected to the DC to access the VD. Usage: suitable for most VDs, task workers, call center agents, and PC users.

Level 1 client hypervisors. Pros: since VMs are hosted on the clients, the load is offloaded from the DC and less HW investment is required; users can use their VD without being connected to the network; client HW can be utilized, including graphics cards and sound devices. Cons: VDs are not located in the DC, and thus could be less secure for some organizations; might not fit organizations that allow users to use their own personal computers; could require some HW investment on the client side to allow adequate performance;
the client must connect to central storage for the purpose of backing up the company's data; this type might fit smaller environments, but for larger environments, management could be hard or even impossible. Usage: suitable for mobile users and power users who are not connected to the data center at all times.

Level 2 client hypervisors. Pros: since VMs are hosted on the clients, the load is offloaded from the DC and less HW investment is required; users can use their VD without being connected to the network; client HW can be utilized, including graphics cards and sound devices; allows users to mix the use of virtual and physical desktops. Cons: VDs are not located in the DC, and thus could be less secure for some organizations; might not fit organizations that allow users to use their own personal computers; could require some HW investment on the client side to allow adequate performance; the client must connect to central storage for the purpose of backing up the company's data; this type might fit smaller environments, but for larger environments, management could be hard or even impossible. Usage: suitable for mobile users and power users who are not connected to the data center at all times.

XenDesktop components

Now that we have reviewed the DV delivery methods, let us see the components that are used by XenDesktop to build the DV infrastructure. The XenDesktop infrastructure can be divided into two major blocks, the delivery block and the virtualization block, as shown in the next two sections.

Delivery block

This block is responsible for the VD delivery part, including authenticating users, remote access, redirecting users to the correct servers, and so on. It consists of the following servers:

The Desktop Delivery Controller

Description: The Desktop Delivery Controller (DDC) is the core service within the XenDesktop deployment. Think of it as the glue that holds everything together within the XenDesktop infrastructure. It is responsible for authenticating the user, managing the user's connections, and redirecting the user to the assigned VD.
Workload type: The DDC handles HTTP/HTTPS connections sent from the Web Interface servers to its XML service, passing the user's credentials and querying for the user's assigned VD.
High availability: This is achieved by introducing extra DDCs into the farm and configuring them in the Web Interface, which load balances between them.

Web Interface (WI)

Description: The Web Interface is a very critical part of the XenDesktop infrastructure. It has four very important tasks to handle within the XenDesktop infrastructure:

1. It receives the user credentials, either explicitly, by pass-through, or by other means
2. It passes the credentials to the DDC
3. It passes the list of assigned VDs to the user after communicating with the DDC XML service
4. It passes the connection information to the client inside the ICA file, so the client can establish the connection

Workload type: The WI receives HTTP/HTTPS connections from users initiating connections to the XenDesktop farm, both internally and externally.
High availability: This is achieved by installing multiple WI servers and load balancing the traffic between them, using either Windows NLB or external hardware load balancer devices, such as Citrix NetScaler.
Licensing server

Description: The licensing server is responsible for supplying the DDC with the required license when the Virtual Desktop Agent tries to acquire one from the DDC.
Workload type: When you install the DDC, a license server must be configured to provide licenses to the DDC. When you install the license server, you import the licenses you purchased from Citrix into it.
High availability: This can be achieved in two ways: by installing the licensing server on a Windows cluster, or by installing another offline licensing server that can be brought online if the first server fails.

Database server

Description: The database server is responsible for holding the static and dynamic data, such as configuration and session information. XenDesktop 5 no longer uses the IMA data store as the central database in which the configuration information is stored; instead, a Microsoft SQL Server database is used as the data store for both configuration and session information.
Workload type: The database server receives DB connections from the DDCs and Provisioning Servers (PVS), querying for farm/site configuration. It is important to note that XenDesktop creates a database that is separate from the Provisioning Services farm database; both databases can be located on the same server, in the same instance or different instances, or on different servers.
High availability: This can be achieved by using Windows/SQL clustering or SQL replication.

Virtualization block

The virtualization block provides the VD service to the users through the delivery block. I often see the virtualization block referred to as a hypervisor stack but, as you saw previously, it can also be built using blade PCs or PCs located in the DC, so don't miss that. The virtualization block consists of:

Desktop hosting infrastructure

Description: This is the infrastructure that hosts the virtual desktops that are accessed by users. It could be either a virtual machine infrastructure (XenServer, Hyper-V, or VMware vSphere ESXi servers) or a physical machine infrastructure (regular or blade PCs).
Workload type: Hosts the OSs that provide the desktops to the users accessing them from the delivery network. The OS can be either installed or streamed, and can run on either physical or virtual machines.
High availability: This depends on the type of DHI used, but can be achieved by using clustering for hypervisors or by adding extra HW in the case of physical machines.

Provisioning servers infrastructure

Description: Provisioning servers are usually coupled with a XenDesktop deployment. Most of the organizations I worked with never deployed preinstalled OSs to VMs or physical machines, but rather streamed the OSs to physical or virtual machines, since this provides huge storage optimization and utilization. Provisioning servers are the servers that actually stream the OS, in the form of a VHD, over the network to the machines requesting it.
Workload type: Provisioning servers stream the OS as VHD files over the network to the machines, which can be either physical or virtual.
High availability: This can be achieved by adding extra PVS servers to the provisioning servers' farm and configuring them in the DHCP servers or in the bootstrap file.
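Once the delivery block is running, you can verify the site and its controllers from the XenDesktop 5 PowerShell SDK on a DDC. A minimal sketch, run in an elevated PowerShell session on a controller; the exact output properties vary by version:

# Load the XenDesktop broker snap-in
Add-PSSnapin Citrix.Broker.Admin.V1

# Show the site and its database connection details
Get-BrokerSite

# List the controllers and their state
Get-BrokerController | Select-Object DNSName, State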
The detailed requirements for these components can be reviewed on the Citrix XenDesktop requirements website: http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-sys-reqs-wrapper-rho.html

Summary

In this article, we saw what the XenDesktop architecture consists of and reviewed its various components.

Resources for Article:

Further resources on this subject: Application Packaging in VMware ThinApp 4.7 Essentials [Article], Designing a XenApp 6 Farm [Article], Managing Citrix Policies [Article]
Windows 8 with VMware View

Packt
10 Sep 2013
3 min read
(For more resources related to this topic, see here.)

Deploying VMware View on Windows 8 (Advanced)

If you want hands-on experience with Windows 8 on VMware View and want to get ready for future View deployments, this guide walks you through the deployment.

Getting ready

Let's keep the following requirements ready:

You should have VMware vSphere 5.1 deployed
You should have a VMware View 5.1 Connection Server deployed
You should have the Windows 8 Release Preview installer and license keys for the 32-bit version

How to do it...

Let's list the steps required to complete the task. To create a Windows 8 virtual machine, perform the following steps:

Create a standard virtual hardware version 9 VM with Windows 8 as the guest operating system. As this is a testing phase, keep the memory and disk size optimal.
Edit the settings of the VM and, under Video card, select the Enable 3D support checkbox (this step makes sure graphics and Adobe Flash content work with Windows 8 using the VMware driver).
Mount the ISO image of Windows 8 in the virtual machine and proceed with the Windows 8 installation. Enter the Windows license keys you have available.
Install VMware Tools in the VM; shut down and restart the VM.
Install the VMware View 5.1 agent in the VM; uncheck Persona Management during agent installation.
Power on the VM, set the network option to DHCP, and disable Windows Defender.
Create a snapshot of the VM and power the VM down. We are now ready with the Windows 8 parent virtual machine with a snapshot.

To create a pool for Windows 8 in the View Admin console, perform the following steps:

Launch the Connection Server Admin console and navigate to the Pool Creation wizard.
Select the Automated pool type (you can use either Dedicated or Floating).
Choose a View Composer linked-clone-based pool.
Navigate through the rest of the wizard accepting all defaults, choosing the snapshot of your Windows 8 VM with the View Agent installed.
Use QuickPrep for instant customization. You may need to restart the VM manually if QuickPrep doesn't start by itself once the VM boots.
Allow provisioning to proceed. Make sure you set Allow users to choose protocol: to No, else 3D rendering gets disabled automatically. If you want to set Allow users to choose protocol: to Yes, make sure you stick to the RDP protocol, and not PCoIP, in the Default display protocol: field, else you will end up with a black screen.

To install View Client, perform the following step:

Install View Client 5.1 on any device with iOS/Android/Linux/Windows. To view a list of supported clients, visit http://www.vmware.com/support/viewclients/doc/viewclients_pubs.html.

How it works...

Once all the preceding steps are performed, you should have Windows 8 with VMware View 5.x ready. You should be able to see the VM in ready status under Desktop Resources in the VMware View Admin console and be able to launch Windows 8 with View Client. Please note that you have to entitle users to the respective pools before they can access the VM.

More information

You can refer to http://kb.vmware.com/kb/2033640 to learn more about installing Windows 8 in a VM.
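If you prefer to script step 2 (enabling 3D support on the video card) instead of clicking through the vSphere Client, here is a hedged PowerCLI sketch; the vCenter and VM names are placeholders, so test it on a lab VM first:

# Enable 3D support on the VM's video card via the vSphere API
Connect-VIServer vcenter.contoso.local
$vm = Get-VM -Name 'win8-master'
$video = $vm.ExtensionData.Config.Hardware.Device |
    Where-Object { $_ -is [VMware.Vim.VirtualMachineVideoCard] }
$video.Enable3DSupport = $true

$devChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
$devChange.Operation = 'edit'
$devChange.Device = $video
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.DeviceChange = @($devChange)
$vm.ExtensionData.ReconfigVM($spec)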
Resources for Article:

Further resources on this subject: Cloning and Snapshots in VMware Workstation [Article], Creating an Image Profile by cloning an existing one [Article], Building a bar graph cityscape [Article]
An Introduction to Microsoft Remote Desktop Services and VDI

Packt
09 Jul 2014
10 min read
(For more resources related to this topic, see here.)

Remote Desktop Services and VDI

What Terminal Services did was provide simultaneous access to the desktop running on a server or group of servers. The name was changed to Remote Desktop Services (RDS) with the release of Windows Server 2008 R2, but RDS actually encompasses both VDI and what was Terminal Services, now referred to as Session Virtualization. The latter is still available alongside VDI in Windows Server 2012 R2; each user who connects to the server is granted a Remote Desktop session (RD Session) on an RD Session Host and shares this same server-based desktop. Microsoft refers to any kind of remote desktop as a Virtual Desktop, and these are grouped into Collections, made available to specific groups of users and managed as one; a Session Collection is a group of virtual desktops based on Session Virtualization. It's important to note here that what users see with Session Virtualization is the desktop interface delivered with Windows Server, which is similar to, but not the same as, the Windows client interface; for example, it does have a modern UI by default. We can add or remove user interface features in Windows Server to change the way it looks. One of these, the Desktop Experience option, is there specifically to make Windows Server look more like the Windows client, and in Windows Server 2012 R2, if you add this feature, you'll get the Windows Store just as you do in Windows 8.1.

VDI also provides remote desktops to our users over RDP, but does this in a completely different way. In VDI, each user accesses a VM running the Windows client OS, so the user experience looks exactly the same as it would on a laptop or physical desktop. In Windows VDI, these Collections of VMs run on Hyper-V, and our users connect to them with RDP, just as they can connect to the RD Sessions described above. Other parts of RDS are common to both, as we'll see shortly, but what is important for now is that RDS manages the VDI VMs for us and organizes which users are connected to which desktops, and so on. So we don't directly create VMs in a Collection or directly set up security on them. Instead, a Collection is created from a template, which is a special VM that is never turned on, as its Guest OS is sysprepped and turning it on would instantly negate that. The VMs in a Collection inherit this master copy of the Guest OS and any installed applications as well. The settings of the template VM are also inherited by the virtual desktops in a Collection (CPU, memory, graphics, networking, and so on) and, as we'll see, there are a lot of VM settings that apply specifically to VDI rather than to VMs running a server OS as part of our infrastructure.

To reiterate, in Windows Server 2012 R2, VDI is one option in an RDS deployment, and it's even possible to use VDI alongside RD Sessions for our users. For example, we might decide to give RD Sessions to our call center staff and use VDI for our remote workforce. Traditionally, the RD Session Hosts have been set up on physical servers, as older versions of Hyper-V and VMware weren't capable of supporting heavy workloads like this. However, we can now put up to 4 TB of RAM and 64 logical processors into one VM (physical hardware permitting) and run large RD Session deployments virtually. Our users connect to our Virtual Desktop Collections of whatever kind with a Remote Desktop client, which connects to the RDS servers using the Remote Desktop Protocol (RDP).
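For a quick connectivity check, the built-in Remote Desktop client can be driven from the command line; the hostname below is a placeholder:

# Full-screen RDP connection to an RD Session Host
mstsc /v:rds-host01.contoso.com /f

# Administrative session, available even before RDS is fully deployed
mstsc /v:rds-host01.contoso.com /admin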
When we connect to any server with the Microsoft Terminal Services Client (MSTSC.exe), we are using RDP, but without setting up RDS, there are only two administrative sessions available per server. Many of the advantages and disadvantages of running any kind of remote desktop apply to both solutions.

Advantages of Remote Desktops

Given that the desktop computing for our users is now going to be done in the data center, we only need to deploy powerful desktops and laptops to those users who are going to have difficulty connecting to our RDS infrastructure. Everyone else could either be equipped with thin client devices, or given access from devices they already have when working remotely, such as tablets or their home PCs and laptops. Thin client computing has evolved in line with advances in remote desktop computing, and the latest devices from 10Zig, Dell, and HP, among others, support multiple high-resolution monitors, smart cards, and webcams for unified communications, which are also enabled in the latest version of RDP (8.1). Using remote desktops can also reduce the overall power consumption of IT, as an efficiently cooled data center will consume less power than the sum of thin clients and servers in an RDS deployment; in one case, I saw this saving result in a green charity cutting 90 percent of its IT power bill.

Broadly speaking, managing remote desktops ought to be easier than managing their physical equivalents. For a start, they'll be running on standard hardware, so installing and updating drivers won't be so much of an issue. RDS has specific tooling for this, to create a largely automatic process for keeping our collections of remote desktops in a desired state. VDI doesn't exist in a vacuum, and there are other Microsoft technologies that make desktop management easier with other types of virtualization:

User profiles: They have long been a problem for desktop specialists. There are dedicated technologies built into Session Virtualization and VDI to allow users' settings and data to be stored away from the VHD containing the Guest OS. Techniques such as folder redirection can also help here, and new for Windows Server 2012 is User Environment Virtualization (UE-V), which provides a much richer set of tools that work across VDI, RD Sessions, and physical desktops to ensure the user has the same experience no matter what they are using for a Windows client desktop.
Application Virtualization (App-V): This allows us to deploy applications to the users who need them, when and where they need them, so we don't need to create desktops for the different types of users who need special applications; we just need a few desktop types, carrying only generic applications plus those that can't make use of App-V.

Even if App-V is not deployed, VDI administrators have total control over remote desktops, as we have any number of security techniques at our disposal and, if the worst comes to the worst, we can remove any installed applications every time a user logs off! The simple fact that the desktops and applications we are providing to our users are now running on servers under our direct control also increases our IT security. Patching is now easy, particularly for our remote workforce, as whether they are on site or working remotely, their desktop is still in the data center. RDS, in all its forms, is then an ideal way of allowing a Bring Your Own Device (BYOD) policy. Users can bring whatever device they wish into work, or work at home on their own device (WHOYD is my own acronym for this!)
by using an RD Client on that device and securely connecting with it. There are then no concerns about partially or completely wiping users' own devices, or not being able to because they aren't in a connected state when they are lost or stolen.

VDI versus Session Virtualization

So why are there these two ways of providing remote desktops in Windows, and what are the advantages and disadvantages of each? First and foremost, Session Virtualization is always going to be much more efficient than VDI, as it provides more desktops on less hardware. This makes sense if we look at what's going on in each scenario if we have a hundred remote desktop users to provide for:

In Session Virtualization, we are running and sharing one operating system, and this will comfortably sit on 25 GB of disk. For memory, we need roughly 2 GB per CPU; we can then allocate 100 MB of memory to each RD Session. So, on a quad-CPU server, for a hundred RD Sessions we'll need 8 GB + (100 x 100 MB), roughly 18 GB of RAM.
If we want to support that same hundred users on VDI, we need to provision a hundred VMs, each with its own OS. To do this, we need 2.5 TB of disk and, if we give each VM 1 GB of RAM, then 100 GB of RAM is needed. This is a little unfair on VDI in Windows Server 2012 R2, as we can cut down on the disk space needed and be much more efficient with memory, but even with these new technologies we would need 70 GB of RAM and say 400 GB of disk.
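The arithmetic is simple enough to script if you want to compare the two models for other user counts. A sketch using the figures from the text; the per-user constants are taken straight from the example above and will vary in real deployments:

function Compare-RemoteDesktopSizing {
    param([int]$Users, [int]$HostCpus = 4)

    # Constants from the worked example; adjust for your own workloads
    $session = [pscustomobject]@{
        Model  = 'Session Virtualization'
        DiskGB = 25
        RamGB  = [math]::Round((2 * $HostCpus) + ($Users * 100 / 1024), 1)
    }
    $vdi = [pscustomobject]@{
        Model  = 'VDI'
        DiskGB = $Users * 25
        RamGB  = $Users * 1
    }
    $session, $vdi
}

# Reproduces the 100-user example: roughly 18 GB versus 100 GB of RAM
Compare-RemoteDesktopSizing -Users 100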
Remember that with RDS, our users are going to be running the desktop that comes with Windows Server 2012 R2, not Windows 8.1. Our RDS users are sharing the one desktop, so they cannot be given administrative rights to it. This may not be that important to them, but it can also affect which applications can be put onto the server desktop, and some applications just won't run on a server OS anyway. Users' sessions can't be moved from one Session Host to another while the user is logged in, so planned maintenance has to be carefully thought through and may mean working out of hours to patch servers and update applications if we want to avoid interrupting our users' work. VDI, on the other hand, means that our users are using Windows 8.1, which they will be happier with, and this may well be the deciding factor regardless of cost, as our users are paying for this and their needs should take precedence. VDI can also be easier to manage without interrupting our users, as we can move running VMs around our physical hosts without stopping them.

Remote applications in RDS

Another part of the RDS story is the ability to provide remote access to individual applications rather than serving up a whole desktop. This is called RD RemoteApp, and it simply provides a shortcut to an individual application running on either a virtual or session-based remote desktop. I have seen this used for legacy applications that won't run on the latest versions of Windows, and to provide access to secure applications, as it's a simple matter to prevent cut and paste or any sharing of data between a remote application and the local device it's executing on. RD RemoteApp works by publishing only the specified applications installed on our RD Session Hosts or VDI VMs.

Summary

This article discussed the advantages of remote desktops. We also learned how they can be provided in Windows, and compared VDI to Session Virtualization.

Resources for Article:

Further resources on this subject: Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box [article], Installing Virtual Desktop Agent – server OS and desktop OS [article], VMware View 5 Desktop Virtualization [article]
Storage Policy-based Management

Packt
07 Sep 2015
10 min read
In this article by Jeffery Taylor, the author of the book VMware Virtual SAN Cookbook, we have a functional VSAN cluster, so we can leverage the power of Storage Policy-Based Management (SPBM) to control how we deploy our virtual machines (VMs). We will discuss the following topics, with a recipe for each:

Creating storage policies
Applying storage policies to a new VM or a VM deployed from a template
Applying storage policies to an existing VM migrating to VSAN
Viewing a VM's storage policies and object distribution
Changing storage policies on a VM already residing in VSAN
Modifying existing storage policies

(For more resources related to this topic, see here.)

Introduction

SPBM is where the administrative power of converged infrastructure becomes apparent. You can define VM thick-provisioning on a sliding scale, define how fault-tolerant the VM's storage should be, make distribution and performance decisions, and more. RAID-type decisions for VMs resident on VSAN are also driven through the use of SPBM. VSAN can provide RAID-1 (mirrored) and RAID-0 (striped) configurations, or a combination of the two in the form of RAID-10 (a mirrored set of stripes). All of this is done on a per-VM basis. As the storage and compute infrastructures are now converged, you can define how you want a VM to run in the most logical place: at the level of the VM or its disks. The need for datastore-centric configuration, storage tiering, and so on is obviated through the power of SPBM.

Technically, the configuration of storage policies is optional. If you choose not to define any storage policies, VSAN will create VMs and disks according to its default cluster-wide storage policy. While this provides generic levels of fault tolerance and performance, it is strongly recommended to create and apply storage policies according to your administrative needs. Much of the power given to you through a converged infrastructure and VSAN is in the policy-driven and VM-centric nature of policy-based management. While some of these options will be discussed throughout the following recipes, it is strongly recommended that you review the storage-policy appendix to familiarize yourself with all the storage-policy options before continuing.

Creating VM storage policies

Before a storage policy can be applied, it must be created. Once created, a storage policy can be applied to any part of any VM resident on VSAN-connected storage. You will probably want to create a number of storage policies to suit your production needs. Once created, all storage policies are tracked by vCenter and enforced/maintained by VSAN itself. Because of this, your policy selections remain valid and production continues even in the event of a vCenter outage. In the example policy that we will create in this recipe, the VM policy will be defined as tolerating the failure of a single VSAN host. The VM will not be required to stripe across multiple disks, and it will be 30 percent thick-provisioned.

Getting ready

Your VSAN should be deployed and functional as per the previous article. You should be logged in to the vSphere Web Client as an administrator or as a user with rights to create, modify, apply, and delete storage policies.

How to do it...

From the vSphere 5.5 Web Client, navigate to Home | VM Storage Policies. From the vSphere 6.0 Web Client, navigate to Home | Policies and Profiles | VM Storage Policies. Initially, there will be no storage policies defined unless you have already created storage policies for other solutions.
This is normal. In VSAN 6.0, you will have the VSAN default policy defined here prior to the creation of your own policies. Click the Create a new VM storage policy button. A wizard will launch to guide you through the process:

If you have multiple vCenter Server systems in Linked Mode, ensure that you have selected the appropriate vCenter Server system from the drop-down menu.
Give your storage policy a name that will be useful to you and a description of what the policy does. Then, click Next.
The next page describes the concept of rule sets and requires no interaction; click Next to proceed.
When creating the rule set, ensure that you select VSAN from the Rules based on vendor-specific capabilities drop-down menu. This will expose the <Add capability> button. Select Number of failures to tolerate from the drop-down menu and specify a value of 1.
Add other capabilities as desired. For this example, we will specify a single stripe with 30% space reservation. Once all required policy definitions have been applied, click Next.
The next page will tell you which datastores are compatible with the storage policy you have created. As this storage policy is based on specific capabilities exposed by VSAN, only your VSAN datastore will appear as a matching resource. Verify that the VSAN datastore appears, and then click Next.
Review the summary page and ensure that the policy is being created according to your specifications. When finished, click Finish. The policy will be created. Depending on the speed of your system, this operation should be nearly instantaneous but may take several seconds to finish.

How it works...

The VSAN-specific policy definitions are presented through the VMware Profile-Driven Storage service, which runs with vCenter Server. The Profile-Driven Storage service determines which policy definitions are available by communicating with the VSAN-enabled ESXi hosts. Once VSAN is enabled, each host activates a VASA provider daemon, which is responsible for communicating policy options to, and receiving policy instructions from, the Profile-Driven Storage service.

There's more...

The storage-policy definitions enabled by VSAN are additive in nature. No policy option mutually excludes any other, and they can be combined in any way that your policy requirements demand. For example, specifying a number of failures to tolerate will not preclude specifying a cache reservation.

See also

For a full explanation of all policy options and when you might want to use them, see the storage-policy appendix.
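The same policy can also be created from PowerCLI (version 5.8 and later ship the SPBM cmdlets). A minimal sketch; the VSAN.* capability names follow the VSAN VASA provider's namespace but should be verified with Get-SpbmCapability on your own system, and the vCenter name is a placeholder:

Connect-VIServer vcenter.contoso.local

# List the capabilities the VSAN provider advertises
Get-SpbmCapability | Where-Object { $_.Name -like 'VSAN.*' }

# Build the rule set: tolerate one host failure, one stripe, 30% space reservation
$ruleSet = New-SpbmRuleSet -AllOfRules @(
    (New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.hostFailuresToTolerate') -Value 1),
    (New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.stripeWidth') -Value 1),
    (New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.proportionalCapacity') -Value 30)
)

New-SpbmStoragePolicy -Name 'FTT1-Stripe1-30pc' -Description 'Tolerate one host failure' -AnyOfRuleSets $ruleSet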
Applying storage policies to a new VM or a VM deployed from a template

When creating a new VM on VSAN, you will want to apply a storage policy to that VM according to your administrative needs. As VSAN is fully integrated into vSphere and vCenter, this is a straightforward option in the normal VM deployment wizard. The workflow described in this recipe is for creating a new VM on VSAN; if deploying from a template, the wizard is functionally identical from step 4 of the How to do it... section in this recipe.

Getting ready

You should be logged in to the vSphere Web Client as an administrator or a user authorized to create virtual machines. You should have at least one storage policy defined (see the previous recipe).

How to do it...

Navigate to Home | Hosts and Clusters | Datacenter | Cluster.
Right-click the cluster, and then select New Virtual Machine....
In the subsequent screen, select Create a new virtual machine.
Proceed through the wizard to step 2b. For the compute resource, ensure that you select your VSAN cluster or one of its hosts.
In the next step, select one of the VM storage policies that you created in the previous recipe. Once you select a VSAN storage policy, only the VSAN datastore will appear as compatible; any other datastores that you have present will be ineligible for selection.
Complete the rest of the VM-deployment wizard as you normally would to select the guest OS, resources, and so on. Once completed, the VM will deploy and populate in the inventory tree on the left side. The VM summary will reflect that the VM resides on VSAN storage.

How it works...

All VMs resident on VSAN storage will have a storage policy applied. Selecting the appropriate policy during VM creation means that the VM will behave how you want it to from the beginning of its life. While policies can be changed later, this could involve a reconfiguration of the object, which can take time to complete and can result in increased disk and network traffic once it is initiated. Careful decision-making during deployment can help you save time later.
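You can also confirm policy/datastore compatibility from PowerCLI before deploying, rather than relying on the wizard. A short, hedged sketch; the policy name comes from the earlier example:

# For a VSAN policy, this should return only the VSAN datastore
$policy = Get-SpbmStoragePolicy -Name 'FTT1-Stripe1-30pc'
Get-SpbmCompatibleStorage -StoragePolicy $policy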
In the example used in this recipe, the VM moved from NFS Datastore to vsanDatastore:

How it works…

Much like the new-VM workflow, we select the storage policy that we want to use during the migration of the VM to VSAN. Unlike the deploy-from-template or VM-creation workflows, however, this process requires none of the VM configuration steps. We only have to select the storage policy, and SPBM then instructs VSAN how to place and distribute the objects. All object-distribution activities are completely transparent and automatic.

This process can also be used to change the storage policy of a VM already resident in the VSAN cluster, but it is more cumbersome than modifying the policy by other means.

Summary

In this article, we learned that storage policies give you granular control over how the data for any given VM or VM disk is handled. Storage policies allow you to define how many mirrors (RAID-1) and how many stripes (RAID-0) are associated with any given VM or VM disk. For readers who prefer the API, a short programmatic sketch of the policy-creation flow follows.
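The following sketch, modeled on the connection pattern used in the open source pyvmomi-community-samples project, shows how a policy equivalent to the one built in the wizard (number of failures to tolerate = 1, a single stripe, and 30% space reservation) might be created against vCenter's Profile-Driven Storage (PBM) endpoint. The host name and credentials are placeholders, and the exact binding names (for example, RetrieveContent and the pbm.version string) are assumptions to verify against your pyVmomi version:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import pbm, VmomiSupport, SoapStubAdapter

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=context)

# Reuse the authenticated vCenter session cookie for the PBM endpoint.
session_cookie = si._stub.cookie.split('"')[1]
VmomiSupport.GetRequestContext()["vcSessionCookie"] = session_cookie
pbm_stub = SoapStubAdapter(host="vcenter.example.com", path="/pbm/sdk",
                           version="pbm.version.version1", sslContext=context)
pbm_si = pbm.ServiceInstance("ServiceInstance", pbm_stub)
profile_manager = pbm_si.RetrieveContent().profileManager

def vsan_rule(cap_id, value):
    # Build one VSAN capability rule, for example hostFailuresToTolerate = 1.
    prop = pbm.capability.PropertyInstance(id=cap_id, value=value)
    constraint = pbm.capability.ConstraintInstance(propertyInstance=[prop])
    unique_id = pbm.capability.CapabilityMetadata.UniqueId(namespace="VSAN", id=cap_id)
    return pbm.capability.CapabilityInstance(id=unique_id, constraint=[constraint])

rules = [
    vsan_rule("hostFailuresToTolerate", 1),  # number of failures to tolerate
    vsan_rule("stripeWidth", 1),             # number of disk stripes per object
    vsan_rule("proportionalCapacity", 30),   # object space reservation, in percent
]

spec = pbm.profile.CapabilityBasedProfileCreateSpec(
    name="VSAN-FTT1-Stripe1",
    description="FTT=1, one stripe, 30% space reservation",
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    constraints=pbm.profile.SubProfileCapabilityConstraints(
        subProfiles=[pbm.profile.SubProfileCapabilityConstraints.SubProfile(
            name="VSAN rule-set", capability=rules)]))

profile_id = profile_manager.PbmCreate(createSpec=spec)
print("Created storage policy with ID: %s" % profile_id.uniqueId)
Disconnect(si)

Once created this way, the policy appears in the Web Client exactly as if it had been built through the wizard.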

Microsoft application virtualization: managing Dynamic Suite Composition

Packt
12 Jan 2011
6 min read
With Dynamic Suite Composition (DSC), you get the chance to avoid oversized packages and redundant sets of components virtualized separately in different applications. You can virtualize one application normally, such as Microsoft Office, and in a different package a Microsoft Office plugin that will only be streamed down to the clients whenever it is required.

Having separate environments not only reduces the chance of application components being captured more than once in different packages, but also gives you control over these dependencies and lets you distribute applications to users with more accuracy, achieving a one-to-many strategy for applications; for example, having several web application packages but only one Java Runtime Environment (JRE) used by all of them.

The dependencies used in DSC are nothing but parameters in the virtual environment of each application. Manually editing the OSD file is the main task in DSC preparation, but, of course, the risks increase: editing some of these parameters incorrectly can cause erratic behavior in the App-V packages. Fortunately, Microsoft provides the Dynamic Suite Composition tool, with which you can easily establish the necessary communication channels between applications.

How Dynamic Suite Composition works

Dynamic Suite Composition is the way in which administrators can define dependencies between App-V packages and guarantee end users transparent operability of applications. In normal use of an operating system, you will find several applications that depend on other applications. Probably the best example is web applications, which constantly interact, from a browser of course, with the Java Runtime Environment, Silverlight, and other applications such as a PDF reader. DSC is also suitable for any plugin scenario involving other large applications, such as Microsoft Office.

Dynamic Suite Composition always identifies two types of applications:

- Primary application: This is the main application and is usually the full software product. It should be identified as the application users execute first, before needing a second application.
- Secondary application: This is usually a plugin attached to the primary application. It can be streamed in the normal way and does not need to be fully cached to run.

An important note is that a primary application can have more than one secondary application, but only one level of dependency is supported. You cannot define a secondary package as dependent on another secondary package. The App-V Sequencing Guide by Microsoft contains an example showing the "many-to-one" and "one-to-many" relationships in Dynamic Suite Composition.

These application dependencies are customizable in the OSD file, the App-V configuration file for virtual applications, where you use the <DEPENDENCIES> tag in the primary application, adding the identifiers of the secondary application(s). Every time the main application needs a secondary application (such as a plugin), it can find the right path and execute it without any additional user intervention.

In some environments, a secondary application may be a hard requirement for normal use of a primary application. For these cases, DSC lets you add the variable MANDATORY=TRUE at the end of the secondary application reference in the primary application's OSD file.
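To make the OSD edit concrete, the following is an illustrative fragment of a primary application's OSD file declaring a mandatory dependency. The server name, GUID, and path are placeholders for this sketch; in practice, the CODEBASE details are copied from the secondary package's own OSD file, so verify the exact element layout against Microsoft's DSC documentation for your App-V version:

<DEPENDENCIES>
    <!-- Reference to the secondary package (for example, Adobe Reader 9).
         HREF, GUID, and SYSGUARDFILE come from the secondary app's OSD. -->
    <CODEBASE HREF="RTSP://appv-server:554/AdobeReader9/AdobeReader9.sft"
              GUID="01234567-89AB-CDEF-0123-456789ABCDEF"
              SYSGUARDFILE="AdobeReader9\osguard.cp"
              MANDATORY="TRUE" />
</DEPENDENCIES>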
DSC does not control the interaction

When you discuss implementing Dynamic Suite Composition in your organization, you must always remember that DSC does not control the interaction between the two applications; it is only in charge of sharing the virtual environment between the App-V packages.

The SystemGuard, the virtual environment created for each App-V application, is stored in one file, OSGuard.cp (you can find it in the OSD file description under the name SYSGUARDFILE). Once the application is distributed, every change made by the client operating system and/or the user is stored in Settings.cp. DSC works by sharing the Settings.cp file between the primary and secondary applications, while each package always maintains its own OSGuard.cp. This guarantees interaction between the two applications, but Dynamic Suite Composition does not control how that interaction occurs or which components are involved.

This matters in complex DSC scenarios, where a primary application has several secondary applications that all use the same shared environment: conflicts may appear in the virtual environment when secondary applications override DLLs or registry keys already being used by another secondary application in the same environment. If a conflict occurs, the last application to load wins. The user data, for example, which is saved in PKG files, will be kept within the secondary application package.

Dynamic Suite Composition was designed for simple dependency situations, not for making full, large software packages interact as secondary packages. Microsoft Office is a good example of software that may be used as a "primary application" but never as a "secondary application".

Configuring DSC manually

Now let us take a closer look at configuring Dynamic Suite Composition. When working with DSC, using virtual machine snapshots is highly recommended, as both applications, the primary and the secondary, must be captured separately.

This example uses an environment familiar to most, integrating an Internet browser with another program: Mozilla Firefox 3.6 with Adobe Reader 9. The scenario is well known to most users, who regularly encounter a PDF file that needs to be opened from within the browser while surfing (or when receiving an attachment via web mail). If a PDF reader is not a requirement on the client machines, you would otherwise be obligated to capture and deliver one to all possible users, even though they only use it when a PDF file appears in their browsing session. Using Dynamic Suite Composition, you can easily configure the browser, Mozilla Firefox, with a secondary application, Adobe Reader, which will only be streamed down to clients if the browser accesses a PDF file.

These are the steps to follow:

1. Log on to the sequencer machine using a clean image, then install and capture the primary application.
2. Import the primary application in the App-V Management Server.
3. Set the permissions needed for the selected users.
4. Restore the sequencer operating system to the clean base image.
5. Install the primary application locally again; do not capture this installation.
6. Install and capture the secondary application with the App-V Sequencer.
7. Import the secondary application in the App-V Management Server.
8. Set the permissions needed for the selected users.
9. Modify the dependencies in the OSD file of the primary application (as illustrated in the OSD fragment shown earlier).

Here is a detailed look at this process.
Install and capture the primary application, Mozilla Firefox. This example uses an already captured application; you can review the process of sequencing Mozilla Firefox in the previous chapter. Here's a quick look at the procedure:

1. Start the App-V Sequencer application and begin the capture.
2. Start the Mozilla Firefox installation (if using Windows 7, you may need to use compatibility mode for the installer).
3. Select the installation folder on Q:, using an 8.3 name for the folder.
4. Complete the installation.
5. Stop the capturing process and launch the application. It is recommended to disable automatic updates in the Mozilla Firefox options.
6. Complete the package customization and save the App-V project.