
How-To Tutorials - Virtualization

Networking

Packt
20 Mar 2014
8 min read
Working with vSphere Distributed Switches

A vSphere Distributed Switch (vDS) is similar to a standard switch, but a vDS spans multiple hosts instead of existing as an individual switch on each host. The vDS is created at the vCenter level, and its configuration is stored in the vCenter database. A cached copy of the vDS configuration is also stored on each host in case of a vCenter outage.

Getting ready

Log in to the vCenter Server using the vSphere Web Client.

How to do it...

In this section, you will learn how to create a vDS and a distributed port group (dvportgroup), and how to manage ESXi hosts using the vDS.

First, we will create a vSphere Distributed Switch. The steps involved in creating a vDS are as follows:

1. Select the datacenter on which the vDS has to be created.
2. Navigate to Actions | New Distributed Switch....
3. Enter the Name and location for the vDS and click on Next.
4. Select the version for the vDS and click on Next.
5. In the Edit settings page, provide the following details, and click on Next when finished:
   - Number of uplinks: This specifies the number of physical NICs per host that will be part of the vDS.
   - Network I/O Control: This option controls the network input/output and can be set to either Enabled or Disabled.
   - Default port group: This option lets you create a default port group. To create one, enable the checkbox and provide the Port group name.
6. In the Ready to complete screen, review the settings and click on Finish.

The next step after creating a vDS is to create a port group, if one was not created as part of the vDS. The following steps will create a new distributed port group:

1. Select the vDS and click on Actions | New Distributed Port Group.
2. Provide the name, select the location for the port group, and click on Next.
3. In the Configure settings screen, set the following general properties for the port group:
   - Port binding: This provides three options, namely Static, Dynamic, and Ephemeral (no binding).
     Static binding: A port is assigned and reserved for a VM when it is connected to the port group; the port is freed only when the VM is deleted.
     Ephemeral binding: A port is created and assigned to the VM by the host when the VM is powered on, and deleted when the VM is powered off.
     Dynamic binding: This is deprecated in ESXi 5.x and no longer used, but the option is still available in the vSphere Client.
   - Port allocation: This can be set to either Elastic or Fixed.
     Elastic: The default number of ports is 8; when all ports are used, a new set of ports is created automatically.
     Fixed: The number of ports is fixed at 8, and no additional ports are created when all ports are used up.
   - Number of ports: This option is set to 8 by default.
   - Network resource pool: This option is enabled only if a user-defined network resource pool exists; it can be set even after creating the port group.
   - VLAN type: The available options are None, VLAN, VLAN trunking, and Private VLAN.
     None: No VLAN is used.
     VLAN: A VLAN is used and its ID has to be specified.
     VLAN trunking: A group of VLANs is trunked and their respective IDs have to be specified.
     Private VLAN: This menu is empty if no private VLAN exists.
4. In the Ready to complete screen, review the settings and click on Finish.
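If you prefer to script this workflow rather than click through the Web Client, the following is a minimal pyVmomi sketch of the same two tasks: creating a vDS and adding a statically bound port group. This is a sketch under assumptions, not vendor-supplied code; the vCenter hostname, credentials, and object names are placeholders, error handling is minimal, and it simply picks the first datacenter in the inventory.

import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def wait_for(task):
    # Poll a vCenter task until it finishes; return its result or raise its fault.
    while task.info.state not in ("success", "error"):
        time.sleep(1)
    if task.info.state == "error":
        raise task.info.error
    return task.info.result

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    dc = si.RetrieveContent().rootFolder.childEntity[0]  # assumes the first child is the datacenter
    # Create the distributed switch with two uplinks per host.
    spec = vim.DistributedVirtualSwitch.CreateSpec()
    spec.configSpec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configSpec.name = "dvSwitch01"
    spec.configSpec.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["uplink1", "uplink2"])
    dvs = wait_for(dc.networkFolder.CreateDVS_Task(spec))
    # Add a distributed port group with static (early) binding and the default 8 ports;
    # "ephemeral" is the other commonly used binding type.
    pg = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    pg.name = "dvPortGroup01"
    pg.type = "earlyBinding"
    pg.numPorts = 8
    wait_for(dvs.AddDVPortgroup_Task([pg]))
finally:
    Disconnect(si)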
The next step after creating a distributed port group is to add the ESXi host to the vDS. While the host is being added, it is possible to migrate the VMkernel interfaces and VM port groups from the vSS to the vDS, or this can be done later. Now, let's see the steps involved:

1. Select the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | Add and Manage Hosts.
3. In the Select task screen, select Add hosts and click on Next.
4. Click on the + icon to select the hosts to be added and click on OK. Click on Next in the Select new hosts screen.
5. Select the physical network adapters that will be used as uplinks for the vDS and click on Next.
6. In the Select virtual network adapters screen, you have the option to migrate the VMkernel interfaces to the vDS port group; select the appropriate option and click on Next.
7. Review any dependencies on the validation page and click on Next.
8. Optionally, migrate the VM network to the vDS port group in the Select VM network adapters screen by selecting the appropriate option and clicking on Next.
9. In the Ready to complete screen, review the settings and click on Finish.

An ESXi host can be removed from the vDS only if no VM on it is still connected to the vDS. Make sure the VMs are migrated either to a standard switch or to another vDS. The following steps will remove an ESXi host from the Distributed Switch:

1. Browse to the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | Add and Manage Hosts.
3. In the Select task screen, select Remove hosts and click on Next.
4. Click on the + icon to select the hosts to be removed and click on OK. Click on Next in the Select hosts screen.
5. In the Ready to complete screen, review the settings and click on Finish.

Once the hosts have been added to the vDS, you can start migrating resources from the vSS to the vDS. The following steps will help you migrate VMs from a standard to a Distributed Switch:

1. Select the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | Migrate VM to Another Network.
3. In the Select source and destination networks screen, you have the option to browse to a specific network or no network for the source network. These options are described as follows:
   - Specific network: This option allows you to select the VMs residing on a particular port group.
   - No network: VMs that are not connected to any network will be selected for migration.
4. In the Destination network option, browse and select the distributed port group for the VM network and click on Next.
5. Select the VMs to migrate and click on Next.
6. In the Ready to complete screen, review the settings and click on Finish.

How it works...

vSphere Distributed Switches extend the capabilities of virtual networking. A vDS can be broken into two logical sections, the data plane and the management plane:

- Data plane: Also called the I/O plane, this takes care of the actual packet switching, filtering, tagging, and all other networking-related activities.
- Management plane: Also known as the control plane, this is the centralized control used to manage and configure the data plane functionality.

There's more...

It is possible to preserve the vSphere Distributed Switch configuration in a file. You can use these configurations for other deployments and also as a backup, and you can restore the port group configuration in case of any misconfiguration. The following steps will export the vSphere Distributed Switch configuration:

1. Select the vSphere Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | All vCenter Actions | Export Configurations.
3. In Configuration to export, select one of the following two options:
   - Distributed Switch and all port groups
   - Distributed Switch only
4. Click on OK. The export will begin, and once it is done, you will be asked to save the configuration. Click on Yes and provide the path where the file should be stored.

The import function can be used to create a copy of the exported vDS from the existing configuration file. The following steps will import the vSphere Distributed Switch configuration file:

1. Select the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | All vCenter Actions | Import Distributed Port Group.
3. In the Import Port Group Configuration option, browse to the backup file and click on Next.
4. Review the import settings and click on Finish.

The following steps will restore a vSphere distributed port group configuration:

1. Select the distributed port group in the vSphere Web Client.
2. Navigate to Actions | All vCenter Actions | Restore Configuration.
3. Select one of the following options and click on OK:
   - Restore to a previous configuration: This restores the port group configuration to a previous snapshot.
   - Restore configuration from a file: This restores the configuration from a file saved on your local system.
4. In the Ready to complete screen, review the settings and click on Finish.

Summary

In this article, we looked at vSphere networking concepts and how to work with vSphere Distributed Switches. We also discussed some of the more advanced networking configurations available in the Distributed Switch.

Introduction to Veeam® Backup & Replication for VMware

Packt
16 Apr 2014
9 min read
Veeam Backup & Replication v7 for VMware is a modern solution for data protection and disaster recovery for virtualized VMware vSphere environments of any size. It supports VMware vSphere and VMware Infrastructure 3 (VI3), including the latest versions, VMware vSphere 5.5 and Microsoft Windows Server 2012 R2, as the management server(s). Its modular approach and scalability make it an obvious choice regardless of the environment's size or complexity. As your datacenter grows, Veeam Backup & Replication grows with it to provide complete protection for your environment. Remember, your backups aren't really that important, but your restore is!

Backup strategies

A common train of thought when dealing with backups is to follow the 3-2-1 rule:

- 3: Keep three copies of your data: one primary and two backups.
- 2: Store the data on two different media types.
- 1: Store at least one copy offsite.

This simple approach ensures that, no matter what happens, you will have a recoverable copy of your data. Veeam Backup & Replication lets you accomplish this goal by utilizing backup copy jobs: back up your production environment once, then use backup copy jobs to copy the backed-up data to a secondary location, utilizing the built-in WAN acceleration features, and to tape for long-term archival. You can even "daisy-chain" these jobs to each other, which ensures that as soon as the backup job finishes, the copy jobs fire automatically. This allows you to satisfy the 3-2-1 rule without complex configurations that are hard to manage.

Combining this with a Grandfather-Father-Son (GFS) backup media rotation scheme for tape-based archiving ensures that you always have recoverable media available. In such a scheme, there are three or more backup cycles: daily, weekly, and monthly. The following table shows how you might create a GFS rotation schedule:

Monday   Tuesday   Wednesday   Thursday   Friday
MON      TUE       WED         THU        WEEK 1
MON      TUE       WED         THU        WEEK 2
MON      TUE       WED         THU        WEEK 3
MON      TUE       WED         THU        WEEK 4
MON      TUE       WED         THU        MONTH 1

"Grandfather" tapes are kept for a year, "Father" tapes for a month, and "Son" tapes for a week. In addition, quarterly, half-yearly, and/or annual backups could also be separately retained if required.
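To make the rotation concrete, here is a small sketch (an illustration of the scheme above, not Veeam functionality; the month-end rule for the Grandfather tape is my assumption) that labels each backup date with the tape it would use: daily Son tapes Monday through Thursday, weekly Father tapes on Fridays, and a monthly Grandfather tape on the last Friday of the month.

import calendar
from datetime import date

def gfs_label(d: date) -> str:
    if d.weekday() > 4:
        return "NO-BACKUP"                    # weekends are not scheduled in this scheme
    if d.weekday() < 4:                       # Mon-Thu: daily Son tape, reused weekly
        return "SON-" + d.strftime("%a").upper()
    days_in_month = calendar.monthrange(d.year, d.month)[1]
    last_friday = max(day for day in range(1, days_in_month + 1)
                      if date(d.year, d.month, day).weekday() == 4)
    if d.day == last_friday:                  # last Friday: monthly Grandfather tape
        return f"GRANDFATHER-{d.year}-{d.month:02d}"
    return f"FATHER-WEEK-{(d.day - 1) // 7 + 1}"   # other Fridays: weekly Father tape

print(gfs_label(date(2014, 4, 25)))  # last Friday of April 2014 -> GRANDFATHER-2014-04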
Recovery point objective and recovery time objective

Both these terms come into play when defining your backup strategy. The recovery point objective (RPO) defines how much data you can afford to lose. If you run backups every 24 hours, you have in effect decided that you can afford to lose up to a day's worth of data for a given application or infrastructure. If that is not the case, you need to look at how often you back up that particular application. The recovery time objective (RTO) is a measure of how long it should take to restore your data and return the application to a steady state. How long can your business afford to be without a given application? Two hours? Twenty-four hours? A week? It all depends, and it is very important that you, as a backup administrator, have a clear understanding of the business you are supporting in order to evaluate these important parameters.

Basically, it boils down to this: if there is a disaster, how much downtime can your business afford? If you don't know, talk to the people in your organization who do. Gather information from the various business units to determine what they consider acceptable. Odds are that your views as an IT professional will not coincide with the views of the business units; determine their RPO and RTO values, and base your backup strategy on that.

Native tape support

By popular demand, native tape support was introduced in Veeam Backup & Replication v7. While the most effective backup medium might be disk, lots of customers still want to make use of their existing investment in tape technology. Standalone drives, tape libraries, and Virtual Tape Libraries (VTL) are all supported, making it possible to use tape-based solutions for long-term archival of backup data. Basically, any tape device recognized by the Microsoft Windows server on which Backup & Replication is installed is also supported by Veeam: if Microsoft Windows recognizes the tape device, so will Backup & Replication. Customers should check the user guide and the Veeam Forums (http://forums.veeam.com) for more information on native tape support.

Veeam Backup & Replication architecture

Veeam Backup & Replication consists of several components that together make up the complete architecture required to protect your environment. This distributed backup architecture leaves you in full control of the deployment, and the licensing options make it easy to scale the solution to fit your needs. Since it works at the VM layer, it uses advanced technologies such as VMware vSphere Changed Block Tracking (CBT) to ensure that only the data blocks that have changed since the last run are backed up. This ensures that each backup is performed as quickly as possible and that the least amount of data needs to be transferred each time.

By talking directly to the VMware vStorage APIs for Data Protection (VADP), Veeam Backup & Replication can back up VMs without installing agents or otherwise touching the VMs directly. It simply tells the vSphere environment that it wants to take a backup of a given VM; vSphere then creates a snapshot of the VM, and the VM is read from the snapshot to create the backup. Once the backup is finished, the snapshot is removed, and changes that happened to the VM while it was being backed up are rolled back into the production VM. By integrating with VMware Tools and Microsoft Windows VSS, application-consistent backups are provided where available in the VMs being backed up. For Linux-based VMs, VMware Tools is required and its native quiescence option is used.

Not only does it let you back up your VMs and restore them when required, but you can also use it to replicate your production environment to a secondary location. If your secondary location has a different network topology, it helps you remap and re-IP your VMs in case you need to fail over a specific VM or even an entire datacenter. Of course, failback is also available once the reason for the failover is rectified and normal operations can resume.
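As a rough mental model of what changed block tracking buys you, consider hashing a disk image in fixed-size blocks and transferring only the blocks whose hashes differ from the previous run. This is a toy sketch, not Veeam's or VMware's implementation; real CBT is reported by the hypervisor, so nothing has to be re-read and re-hashed.

import hashlib

BLOCK = 1024 * 1024  # track changes at 1 MiB granularity

def block_hashes(path):
    """Hash a disk image block by block."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_blocks(previous, current):
    """Indexes of blocks that must go into the incremental backup."""
    return [i for i, h in enumerate(current)
            if i >= len(previous) or previous[i] != h]

# Usage: a full backup stores block_hashes("vm.vmdk"); the next run transfers
# only changed_blocks(stored_hashes, block_hashes("vm.vmdk")).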
Veeam Backup & Replication components

The Veeam Backup & Replication suite consists of several components which, in combination, make up the backup and replication architecture.

Veeam backup server: This is installed on a physical or virtual Microsoft Windows server. The Veeam backup server is the core component of an implementation, and it acts as the configuration and control center that coordinates backup, replication, recovery verification, and restore tasks. It also controls job scheduling and resource allocation, and it is the main entry point for configuring the global settings of the backup infrastructure. The backup server uses the following services and components:

- Veeam Backup Service: This is the main component that coordinates all operations, such as backup, replication, recovery verification, and restore tasks.
- Veeam Backup Shell: This is the application user interface.
- Veeam Backup SQL Database: This is used by the other components to store data about the backup infrastructure, backup and restore jobs, and component configuration. This database instance can be installed locally or on a remote server.
- Veeam Backup PowerShell Snap-in: These are extensions to Microsoft Windows PowerShell that add a set of cmdlets for managing backup, replication, and recovery tasks through automation.

Backup proxy

Backup proxies are used to offload the Veeam backup server and are essential as you scale your environment. Backup proxies can be seen as data movers, physical or virtual, that run a subset of the components required on the Veeam backup server. These components, which include the Veeam transport service, can be installed in a matter of seconds, fully automated from the Veeam backup server. You can deploy and remove proxy servers as you see fit, and Veeam Backup & Replication will distribute the backup workload between the available backup proxies, reducing the load on the backup server itself and increasing the number of simultaneous backup jobs that can be performed.

Backup repository

A backup repository is simply a location where Veeam Backup & Replication can store backup files, copies of VMs, and metadata; it is nothing more than a folder on the assigned disk-based backup storage. Just as you can offload the backup server with multiple proxies, you can add multiple repositories to your infrastructure and direct backup jobs to them to balance the load. The following repository types are supported:

- Microsoft Windows or Linux server with local or directly attached storage: Any storage that is seen as local or directly attached storage on a Microsoft Windows or Linux server can be used as a repository. This means there is great flexibility when it comes to selecting repository storage: it can be locally installed storage, iSCSI/FC SAN LUNs, or even locally attached USB drives. When a server is added as a repository, Veeam Backup & Replication deploys and starts the Veeam transport service, which takes care of the communication between the source-side transport service on the Veeam backup server (or proxy) and the repository. This ensures efficient data transfer over both LAN and WAN connections.
- Common Internet File System (CIFS) shares: CIFS (also known as Server Message Block (SMB)) shares are a bit different, as Veeam cannot deploy transport services to a network share directly. To work around this, the transport service installed on a Microsoft Windows proxy server handles the connection between the repository and the CIFS share.

Summary

In this article, we learned about various backup strategies and went through some components of Veeam® Backup & Replication.

Use of ISO Image for Installation of Windows 8 Virtual Machine

Packt
29 Oct 2013
5 min read
In the past, the only way a Windows consumer could acquire the Windows OS was to purchase the installation media on a CD-ROM, floppy disk, or other physical medium, which had to be ordered online or bought from a local brick-and-mortar store. Now, with the recent release of Windows 8, Microsoft is continuing to extend its installation platform to digital media on a large scale. Windows 8 simplifies the process with a web platform installer called the Windows 8 Upgrade Assistant, which makes it easier to download the installation media, burn physical copies, and create a backup copy. This form of digital distribution allows Microsoft to deploy products to the market faster and increases its capacity to meet consumer demand.

Getting ready

To proceed with this recipe you will need to download Windows 8, so go to http://www.windows.com/buy.

How to do it...

In the first part of this section we will download the Windows 8 ISO file. Skip these steps if you have already downloaded the Windows 8 ISO file:

1. Visit the Microsoft website to purchase Windows 8, and then select the option to download the Windows 8 Upgrade Assistant file.
2. Launch the Windows 8 Upgrade Assistant file and proceed with purchasing Windows 8.
3. After completing the transaction, wait while the Windows 8 setup files are downloaded. The estimated download time varies based on your Internet connection speed. You also have the option to pause the download and resume it later.
4. Once the download is complete, the Windows 8 Upgrade Assistant will verify the integrity of the download by checking for file corruption and missing files.
5. Wait while the Windows 8 setup gets the files ready to begin the installation. You will see a prompt that says Install Windows 8. Select the Install by creating media radio button.
6. Select ISO file, then click on the Save button. When prompted to select a location to save the ISO file, choose a folder location and type Windows 8 as the filename. Then click on the Save button.
7. When the product key is revealed, write it down and store it somewhere secure. Then click on the Finish button.

The following instructions explain how to install Windows 8 on a newly created virtual machine using VMware Player. These steps are similar to the installation procedure for Windows 8 on a physical computer:

1. Open the VMware application by going to Start | All Programs | VMware, and then click on the VMware Player menu item.
2. If you are opening VMware Player for the first time, the VMware Player License Agreement prompt will be displayed. Read the terms and conditions, and select Yes, I accept the terms in the license agreement to proceed. If you select No, I do not accept the terms in the license agreement, you will not be permitted to continue.
3. The Welcome to the New Virtual Machine Wizard screen will be visible. Click on Create a New Virtual Machine on the right side of the window.
4. Select the Installer disc image file (iso): radio button, then click on the Browse button. Browse to the directory of the Windows 8 ISO image and click on Open. You will see an information icon that says Windows 8 detected. This operating system will use Easy Install. Click on the Next button to continue.
5. Under Easy Install Information, type in the Windows product information and click on Next to continue.
   You now have the options to:
   - Enter the Windows product key
   - Select the version of Windows to install (Windows 8 or Windows 8 Pro)
   - Enter the full name of the computer
   - Enter a password (optional)
   If you do not enter a product key, you will receive a message saying that it can be entered manually later.
6. Enter a new virtual machine name and directory location for the virtual machine; for example, type Windows 8 VM. Then click on the Next button to continue.
7. Enter 16 as the Maximum disk size (GB) and store the virtual disk as a single file. Remember that Windows 8 requires a minimum of 16 GB of free hard disk space for the 32-bit installation and 20 GB for the 64-bit installation.
8. At the Ready to Create Virtual Machine prompt, review the settings and click on the Finish button. The Windows 8 virtual machine will power on automatically for the first time.
9. VMware will prompt you to install VMware Tools for Windows 2000 and later; click on the Remind Me Later button.
10. The virtual machine will automatically boot into the Windows 8 setup wizard. Wait until the Windows installation is complete. The virtual machine will reboot several times during this process, and you will see various Windows 8 pictorials during the installation; please be patient.
11. Once the installation is complete, your virtual machine will be taken straight to the Windows 8 home screen.

Summary

This article introduced you to downloading the Windows 8 operating system as an ISO, creating a new virtual machine, and installing Windows 8 as a virtual machine.


Application Packaging in VMware ThinApp 4.7 Essentials

Packt
15 Jan 2013
19 min read
The capture and build environment

You cannot write a book about a packaging format without discussing the environment used to create the packages. The environment you use to capture an installation is of great importance. ThinApp uses a snapshotting method when capturing an application installation. This means you create a snapshot (the pre-installation snapshot) of the current state of the machine; after modifying the environment, you create another snapshot, the post-installation snapshot. The differences between the two snapshots represent the changes made by the installer, and this should be all the information you need in order to run the application.

Many packaging products use snapshotting to capture changes. The alternative is to hook into the installer itself. Both methods have their pros and cons. Using snapshots is much more flexible: you don't even have to run an installer. You can copy files and create registry keys manually, and it will all be captured. However, your starting point decides the outcome. If your machine already contains the Java Runtime Environment (JRE) and the application you are capturing requires Java, then you will not be able to capture the JRE. Since it was already there when you took the pre-installation snapshot, it will not be part of the captured differences. This means your package would require Java to be installed locally or it would fail to run; the package would not be self-contained. The other method, monitoring the installer, is more independent of the capturing environment, but it does not support all installer formats and does not allow manual tweaking during capture.

Nothing is black or white. Snapshotting can be made a little more independent of the capture environment. When an installer discovers components already installed, it can register itself to the same components. ThinApp will recognize this, investigate which files are related to a component, and mark them as needing to be included in the package. But this is not a bulletproof method. So the general rule is to make sure your environment allows ThinApp to capture all required dependencies of the application.
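To make the snapshot idea concrete, here is a toy sketch of diffing a pre-installation and a post-installation filesystem snapshot. It is an illustration, not ThinApp's actual mechanism: registry scanning, exclusions, and permission handling are all omitted.

import hashlib
from pathlib import Path

def snapshot(root):
    """Map every file under root to a hash of its contents."""
    state = {}
    for p in Path(root).rglob("*"):
        if p.is_file():
            state[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return state

def diff(pre, post):
    """Files an installer added or modified between the two snapshots."""
    added = sorted(p for p in post if p not in pre)
    modified = sorted(p for p in post if p in pre and post[p] != pre[p])
    return added, modified

# Usage: pre = snapshot("C:/"); run the installer; post = snapshot("C:/")
# The added and modified files are what would end up in the package.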
ThinApp packages are able to support multiple operating systems with one single package. This is a great feature and really helps lower the overall administration of maintaining an application. The possibility of running the same package on your Windows XP clients, Windows 7 machines, and your XenApp servers is unique; most other packaging formats require you to maintain one package per environment. The easiest method is to capture an application on the platform where it will run. Normally you can achieve an out-of-the-box success rate of 60-80 percent, meaning you have not tweaked the project in any way. The package might not be ready for production, but it will run on a clean machine that does not have the application locally installed.

If you want to support multiple operating systems, you should capture on the lowest platform you need to support. Most of the time this will be Windows XP. From ThinApp's point of view, Windows XP and Windows Server 2003 are of the same generation, and Windows 7 and Windows Server 2008 R2 are of the same generation. Most installers are environment-aware: they inspect the target platform, and if they discover a Windows 7 operating system, they know that some files are already present in the same or a newer version than required. Installing on a Windows XP machine with no service pack forces those required files to be installed locally, and therefore captured by the capturing process. Having these files captured from an installation made on Windows XP rarely conflicts with running the application on Windows 7, and it helps you achieve multiple-OS support.

Creating a package for one single operating system is of course the easiest task. Creating a package supporting multiple operating systems, all 32-bit, is a little harder; how hard depends on the application. Creating a package supporting many different operating systems, in both 32-bit and 64-bit versions, is the hardest, but it is doable. It just requires a little extra packaging effort. Some applications cannot run on a 64-bit OS, but most applications offer some kind of workaround. If the application contains 16-bit code, then it is impossible to make it run in a 64-bit environment: 64-bit environments cannot handle 16-bit code. Your only workaround in those scenarios is whole-machine virtualization. VMware Workstation, VMware View, Citrix XenDesktop, Microsoft MED-V, and many others offer you the capability to access a virtualized 32-bit operating system on your 64-bit machine.

In general, you should use an environment that is as clean as possible. This guarantees that your packages include as many dependencies as possible, making them portable and robust. But it's not written in stone. If you are capturing an add-on to Microsoft Office, then Microsoft Office has to be locally installed in your capturing environment, or the add-on installer would fail to run. You must design your capture environment to allow flexibility. Sometimes you capture on Windows XP; the next application might be captured on Windows 7 64-bit. The next day you'll capture on a machine with the JRE installed, or Microsoft Office. The use of virtual machines is a must. Physical machines are supported, but the hours spent reverting to a clean state to start the capture of the next application make them virtually useless.

My capture environment is my MacBook Pro running VMware Fusion and several virtual machines: Windows XP, Windows Vista, Windows 7, Windows Server 2003, and of course Windows Server 2008. All VMs have several snapshots (states of the virtual machine), so I can easily jump back and forth between clean states, Microsoft Office-installed states, and different service packs and browsers. Yes, it requires some serious disk space. I'm always low on free disk space: no matter how big the disks you buy are, your project folders and virtual machines will eat it all. I have two disks in my MacBook, one SSD, where I keep most of my virtual machines, and one traditional hard disk, where I keep all my project folders.

The best capture environments I've ever seen have been hosted on VMware vSphere and ESX. Using server hardware to run client operating systems makes them fast as lightning. Snapshotting your VMs takes seconds, as does reverting snapshots. Access to the virtual machines hosted on VMware ESX can be gained using a console within the vSphere Client or basic RDP. The only downside I can see to using an ESX environment is that you cannot do packaging offline while traveling.

The next logical question is whether my capture machine should be a domain member or standalone. It depends; I always prefer to capture on standalone machines. This way I know that group policies will not interfere with my capture process.
No restrictions will block me from doing what I need to do. But again, sometimes you simply cannot capture an application without access to a backend infrastructure. Then your capture machine must be on the corporate network, and most of the time that means it has to be a domain member. If possible, try putting the machine in a special Organizational Unit (OU) where you limit the number of restrictions.

If at all possible, make sure you don't have antivirus installed in your capturing environment. I know that some enterprises have strict policies forcing even packaging machines to be protected by antivirus, but be careful: there is no way of telling what your antivirus may decide to do to your application's installation and the whole capture process. Most installer manuals clearly state that you should disable any antivirus during installation; they do that for a reason. Antivirus scanning logs and everything that follows will also pollute your project folder. It will probably not break your package, but I am a strong believer in delivering clean and optimized packages, so having an antivirus means you will have to spend some time cleaning up your project folders. Alternatively, you can add the areas where the antivirus changes content to snapshot.ini, the Setup Capture exclusion list.

Entry points and the data container

An entry point is the doorway into the virtual environment for the end users. An entry point specifies what will be launched within the virtual environment. Usually an entry point points to an executable, for example, winword.exe. But an entry point doesn't have to point to an executable: you can point an entry point to whatever kind of file you want, as long as the file type has a file association. Whatever is associated with the file type will be launched within the virtual environment. If no file type association exists, you will get the standard operating system dialog box asking you which application to open the file with. The name of the entry point must use an .exe extension: when the user double-clicks on an entry point, we are asking the operating system to execute the ThinApp runtime. Entry points are managed in Package.ini; you'll find them at the end of the Package.ini file.

The data container is the file where ThinApp stores the whole virtual environment and the ThinApp runtime. There can be only one data container per project. The content in the data container is an exact copy of the representation of the virtual environment found in your project folder. The data in the data container is read-only; it is the packager who changes the content, by rebuilding the project. An end user cannot change the content of the data container.

An entry point can be a data container. Setup Capture will recommend not using an entry point as the data container if it believes that the package will be large (200 MB-300 MB). The reason for this is that the icon of the entry point may take up to 20 minutes to be displayed. This is a feature of the operating system, and there is nothing you can do about it. It is therefore better to store the data container in a separate file and keep your entry points small, making sure the icons are displayed quickly. Setup Capture will force you to use a separate data container when the size is calculated to be larger than 1.5 GB, because Windows has a size limitation for executable files: Windows will refuse to execute a .exe file larger than 2 GB. The name of the data container can be anything.
You do not have to name it with a .dat extension; it doesn't have to have a file extension at all. If you are using a separate data container, you must keep the data container in the same folder as your entry points.

Let's take a closer look at the data container and entry point section of Package.ini. You'll find the data container and entry points at the end of the Package.ini file. The following is an example Package.ini from a virtualized Mozilla Firefox:

[Mozilla Firefox.exe]
Source=%ProgramFilesDir%\Mozilla Firefox\firefox.exe
;Change ReadOnlyData to bin\Package.ro.tvr to build with old versions (4.6.0 or earlier) of tools
ReadOnlyData=Package.ro.tvr
WorkingDirectory=%ProgramFilesDir%\Mozilla Firefox
FileTypes=.htm.html.shtml.xht.xhtml
Protocols=FirefoxURL;ftp;http;https
Shortcuts=%Desktop%;%Programs%\Mozilla Firefox;%AppData%\Microsoft\Internet Explorer\Quick Launch

[Mozilla Firefox (Safe Mode).exe]
Disabled=1
Source=%ProgramFilesDir%\Mozilla Firefox\firefox.exe
Shortcut=Mozilla Firefox.exe
WorkingDirectory=%ProgramFilesDir%\Mozilla Firefox
CommandLine="%ProgramFilesDir%\Mozilla Firefox\firefox.exe" -safe-mode
Shortcuts=%Programs%\Mozilla Firefox

A step-by-step explanation of the parameters follows:

[Mozilla Firefox.exe]

Within the square brackets is the name of the entry point. This is the name the end user will see. Make sure to use .exe as the file extension.

Source=%ProgramFilesDir%\Mozilla Firefox\firefox.exe

The Source parameter points to the target of the entry point, that is, what will be launched when the user clicks on the entry point. The source can be either a virtualized or a physical file; the target will be launched within the virtual environment no matter where it lives.

ReadOnlyData=Package.ro.tvr

ReadOnlyData indicates that this entry point is in fact a data container as well.

WorkingDirectory=%ProgramFilesDir%\Mozilla Firefox

This specifies the working directory for the launched executable, which is often a very important parameter. If you do not specify a working directory, the active working directory will be the location of your package. A lot of software depends on having the working directory set to the application's own folder in the program files directory.

FileTypes=.htm.html.shtml.xht.xhtml

This is used when registering the entry point. It specifies which file extensions should be associated with this entry point. The previous example registers .htm, .html, and so on to the virtualized Mozilla Firefox.

Protocols=FirefoxURL;ftp;http;https

This is used when registering the entry point. It specifies which protocols should be associated with this entry point. The previous example registers http, https, and so on to the virtualized Mozilla Firefox.

Shortcuts=%Desktop%;%Programs%\Mozilla Firefox

The Shortcuts parameter is also used when registering your entry points; it decides where shortcuts will be created. The previous example creates shortcuts to the virtualized Mozilla Firefox in a folder called Mozilla Firefox on the Start menu, as well as a shortcut on the user's desktop.

[Mozilla Firefox (Safe Mode).exe]
Disabled=1

Disabled means this entry point will not be created during the build of your project.

Source=%ProgramFilesDir%\Mozilla Firefox\firefox.exe
Shortcut=Mozilla Firefox.exe

Shortcut tells this entry point what its data container is named. If you change the data container's name, you will have to change the Shortcut parameter on all entry points using that data container.
WorkingDirectory=%ProgramFilesDir%\Mozilla Firefox
CommandLine="%ProgramFilesDir%\Mozilla Firefox\firefox.exe" -safe-mode

CommandLine allows you to specify hardcoded parameters for the executable; these are the native parameters supported by the virtualized application.

Shortcuts=%Programs%\Mozilla Firefox

There are many more parameters related to entry points. The following are some more examples, with descriptions, taken from a virtualized Microsoft Office:

[Microsoft Office Enterprise 2007.dat]
Source=%ProgramFilesDir%\Microsoft Office\Office12\OSA.EXE
;Change ReadOnlyData to bin\Package.ro.tvr to build with old versions (4.6.0 or earlier) of tools
ReadOnlyData=Package.ro.tvr
MetaDataContainerOnly=1

MetaDataContainerOnly indicates that this is a separate data container.

[Microsoft Office Excel 2007.exe]
Source=%ProgramFilesDir%\Microsoft Office\Office12\EXCEL.EXE
Shortcut=Microsoft Office Enterprise 2007.dat
FileTypes=.csv.dqy.iqy.slk.xla.xlam.xlk.xll.xlm.xls.xlsb.xlshtml.xlsm.xlsx.xlt.xlthtml.xltm.xltx.xlw
Comment=Perform calculations, analyze information, and visualize data in spreadsheets by using Microsoft Office Excel.

Comment allows you to specify text to be displayed when hovering your mouse over the shortcut to the entry point.

ObjectTypes=Excel.Addin;Excel.AddInMacroEnabled;Excel.Application;Excel.Application.12;Excel.Backup;Excel.Chart;Excel.Chart.8;Excel.CSV;Excel.Macrosheet;Excel.Sheet;Excel.Sheet.12;Excel.Sheet.8;Excel.SheetBinaryMacroEnabled;Excel.SheetBinaryMacroEnabled.12;Excel.SheetMacroEnabled;Excel.SheetMacroEnabled.12;Excel.SLK;Excel.Template;Excel.Template.8;Excel.TemplateMacroEnabled;Excel.Workspace;Excel.XLL

This specifies the object types that will be registered to the entry point when it is registered.

Shortcuts=%Programs%\Microsoft Office
StatusBarDisplayName=WordProcessor

StatusBarDisplayName changes the name displayed in the ThinApp splash screen. In this example, WordProcessor will be displayed as the title.

Icon=%ProgramFilesDir%\Microsoft Office\Office12\EXCEL.ico

Icon allows you to specify an icon for your entry point. Most of the time, ThinApp will display the correct icon without this parameter. You can also point to an executable to use its built-in icons, and you can select a different icon from the set by appending an index such as 1 or 2 to the icon path, for example:

Icon=%ProgramFilesDir%\Microsoft Office\Office12\EXCEL.EXE,1

The most common entry points are probably cmd.exe and regedit.exe. You'll find them in all Package.ini files, but they are disabled by default. Since cmd.exe and regedit.exe most likely weren't modified during Setup Capture, they are not part of the virtual environment, so their source will be the native cmd.exe and regedit.exe. These two entry points are the packager's best friends: using them allows a packager to investigate the environment known to the virtualized application. What you can see using cmd.exe or regedit.exe is what the application sees, which is a great help when troubleshooting.

If you package an add-on to a natively installed application (the typical example is packaging the JRE when you want the local Internet Explorer to be able to use it), creating an entry point within your Java package that uses the native Internet Explorer as its source is a perfect way of dealing with it. Now you can offer a separate shortcut, allowing users to choose when to use native Java and when to use virtualized Java. ThinApp's isolation allows one Java version to be virtualized on a machine with another version natively installed. The only problem with this approach is how you educate your users on when to use which shortcut; ThinDirect, discussed later in this article in the Virtualizing Internet Explorer 6 section, allows you to automatically point the user to the right browser.
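As a quick aid when auditing projects, here is a hedged Python sketch that lists the enabled entry points and their sources from a Package.ini. It is simplified: real Package.ini files may contain constructs (duplicate keys, unusual sections) that this minimal parser does not handle.

import configparser

def entry_points(path="Package.ini"):
    """Return {entry point name: Source} for enabled .exe entry points."""
    cp = configparser.ConfigParser(interpolation=None, strict=False)
    cp.read(path)
    result = {}
    for section in cp.sections():
        if not section.lower().endswith(".exe"):
            continue  # skip data containers such as [... .dat] and build options
        if cp.get(section, "Disabled", fallback="0") == "1":
            continue  # not built, like the Safe Mode example above
        result[section] = cp.get(section, "Source", fallback="")
    return result

for name, source in entry_points().items():
    print(f"{name} -> {source}")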
There are many use cases for launching something natively within a virtualized environment. You may face troublesome Excel add-ons: virtualizing them protects against conflicts, but you must launch the native Excel within the environment of the add-on for it to work. Here you can use the fact that many Excel add-ons use .xla files as the typical entry point to the add-on. Create your entry point using the .xla file as the source, and you will be able to launch whichever Excel version is natively installed. When you use a non-executable as your entry point source, remember that the name of your entry point must still end in .exe. The following is an example of an entry point using a text file as its source:

[ReadMe.exe]
Source=%Drive_C%\Temp\readme.txt
ReadOnlyData=Package.ro.tvr

Running ReadMe.exe will launch whatever is associated with .txt files, and the application will run within the virtualized environment of the package.

The project folder

The project folder is where the packager spends most of his or her time. The capturing process is just a means to create the project folder; you could just as easily create your own project folder from scratch. I admit, manually creating a project folder representing a Microsoft Office installation would be far from easy, but in theory it can be done.

There is some default content in all project folders. Let's capture nothing and investigate what that content is. During Setup Capture, to speed things up, disable the majority of the search locations; this way, the pre- and post-scans will take close to no time at all:

1. Run Setup Capture. In the Ready to Prescan step, click on Advanced Scan Locations... and exclude all but one location from the scanning. Since we want to capture nothing, there is no point in scanning all locations. (Normally you don't have to modify the advanced scan locations.)
2. After pressing Prescan, wait for Postscan to become available and click on it when possible, without modifying anything in your capturing environment.
3. Accept CMD.EXE as your entry point and accept all defaults throughout the wizard.

The project folder of a capture bearing no changes will still contain folder macros and default isolation modes. Let's explore the defaults prepopulated by the Setup Capture wizard; this is the minimum project folder content that Setup Capture will ever generate. As a packager, you are expected to clean unnecessary folders out of the project folder, so your final project folder may very well contain a smaller number of folder macros.

Folder macros are ThinApp's variables. %ProgramFilesDir% translates to C:\Program Files on an English Windows installation, but for the same package running on a Swedish OS, %ProgramFilesDir% points to C:\Program. Folder macros are the key to ThinApp packages' portability.

If we explore the filesystem part of the project folder, we see the default isolation modes prepopulated by Setup Capture. These are applied as defaults no matter which default filesystem isolation mode you choose during the Setup Capture wizard. This confuses some people; I'm often told that a certain package is using WriteCopy or Merged as the isolation mode.
Well, that is just the default used when no other isolation mode is specified. A proper project folder should have isolation modes specified on all locations of importance, making the default isolation mode essentially irrelevant. The prepopulated isolation modes are there to make sure most applications run out of the box when ThinApped; you are expected to change them to suit your application and environment.

Let's look at some examples of default isolation modes. %AppData%, the location where most applications store user settings, uses WriteCopy by default. This makes sure that you sandbox all user settings. %SystemRoot% and %SystemSystem% have WriteCopy as their default isolation modes, allowing a virtualized application to see the operating system files without allowing it to modify C:\Windows and C:\Windows\System32. %SystemSystem%\spool, representing C:\Windows\System32\Spool, has Merged as its default; this way, print jobs are spooled to the native location, allowing the printer to pick up the print job. %Desktop% (the user's desktop folder) and %Personal% (the user's documents folder) have Merged by default.

When ThinApp generates the project folder, it uses the following logic to decide which isolation mode to prepopulate other locations with (the same logic is used within the registry as well):

- Modified locations get WriteCopy as their isolation mode.
- New locations get Full as their isolation mode.

Troubleshooting and Gotchas in Oracle VM Manager 2.1.2

Packt
08 Oct 2009
4 min read
As more and more users start to explore and use Oracle VM Manager, more troubleshooting tips and tweaks will come up. This is by no means an exhaustive list, and it is not intended to be. Please participate as much as possible in the forums and share your tips and tricks with the community.

Oracle VM Manager login takes too much time

I have faced this issue very often, and if you are unlucky you may get this type of error while installing. Note that the error message says nothing about a memory issue:

Failed at "Could not get DeploymentManager".
This is typically the result of an invalid deployer URI format being supplied,
the target server not being in a started state or incorrect authentication
details being supplied.
More information is available by enabling logging -- please see the Oracle
Containers for J2EE Configuration and Administration Guide for details.
Failed
Please see /var/log/ovm-manager/ovm-manager.log for more information.
Deploying application failed.
Please check your environment, and re-run the script:
/bin/sh scripts/deployApp.sh
Aborting installation. Please check the environment and rerun runInstaller.sh.

But when you upgrade your VM Manager OS with more memory, you will be able to continue with the installation. Sometimes you may also get other kinds of errors, such as the following one, which clearly points to a memory issue:

Internal Exception: java.lang.OutOfMemoryError: Java heap space

This suggests that your OC4J may need more memory. Run the following command to check the log:

cat /var/log/ovm-manager/oc4j.log | grep "heap"

If your OC4J ran out of memory, you would typically see that heap space error. To fix this, go back to the console and examine the value of the OC4J_JVM_ARGS variable in the /opt/oc4j/bin/oc4j configuration file:

OC4J_JVM_ARGS="-XX:PermSize=256m -XX:MaxPermSize=512m"

Edit this variable to give more memory to the OC4J, save the file, and quit. Then restart the OC4J service:

service oc4j stop
service oc4j start

HVM guest creation fails

Many actions and functionalities within Oracle VM Manager require the host to be truly HVM-aware, which means that (preferably 64-bit) Oracle VM Servers must be running with hardware virtualization support at the chipset level. Both Intel and AMD support it, and it is highly unlikely that you will come across new machines that do not. However, always check compatibility within a specific CPU family, and check whether the support is turned on or off. Nonetheless, you could be reusing older hardware that may or may not support hardware-assisted virtualization. If you are confronted with the following message:

Error: There is no server supporting hardware virtualization in the selected server pool.

then you have a reason to check your hardware. Run the following command on the VM Server that does not allow you to create an HVM guest (the Intel flag is vmx; the AMD flag is svm):

cat /proc/cpuinfo | grep -E 'vmx|svm'

If your hardware is HVM-aware, the matching CPU flags will be echoed back; if you get no response, you might have a problem. Also ensure that virtualization support is enabled at the hardware level in the BIOS.
Then check whether the operating system reports HVM support. For instance, we logged into a VM Server that we knew did not support HVM (172.22.202.111) and got no HVM entries back, whereas the x64 version with built-in, BIOS-enabled HVM support returns the capabilities in the xen_caps value:

xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64

So if your CPU does not support HVM, use the PVM (paravirtualized) method to create your VM.
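If you want to script that check across hosts, here is a small helper (my own sketch, not part of Oracle VM) that looks for the Intel vmx or AMD svm flag in /proc/cpuinfo:

def hvm_capable(cpuinfo_path="/proc/cpuinfo"):
    """True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("HVM capable" if hvm_capable()
          else "No hardware virtualization support; use PVM")

Remember that a positive result still requires the feature to be enabled in the BIOS for the hypervisor to expose HVM capabilities.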

Understanding Citrix® Provisioning Services 7.0

Packt
27 Jan 2014
5 min read
The basic Provisioning Services infrastructure consists of the following components, which might appear within the datacenter post installation and implementation.

Provisioning Services License Server

The License Server should either be installed within the shared infrastructure, or an existing Citrix License Server can be selected; in the latter case, ensure that the Provisioning Services license is configured on your existing Citrix Enterprise License Server. A License Server can be selected when the Provisioning Services Configuration Wizard is run on a planned server. All Provisioning Servers within the farm must be able to communicate with the License Server.

Provisioning Services database server

The database stores all the system configuration settings that exist within a farm. Only one database can exist within a Provisioning Services farm. You can choose an existing SQL Server database or install SQL Server in a cluster for high availability, from a redundancy and business continuity perspective. The database server can be selected when the Provisioning Services Configuration Wizard is run on a planned server. All Provisioning Servers within the farm must be able to communicate with the database server.

Provisioning Services Admin Console

The Citrix Provisioning Services Admin Console is the tool used to control your Provisioning Services implementation. After logging on to the console, you select the farm that you want to connect to. Your role determines what you can view in the console and what operations you can perform in the Provisioning Services farm.

Shared storage service

Citrix Provisioning Services requires shared storage for vDisks so that they are accessible to all users in a network. The shared storage is intended for file storage and allows simultaneous access by multiple users without the need to replicate files to their machines' vDisks. The supported shared storage types are SAN, NAS, iSCSI, and CIFS.

Active Directory server

Citrix Provisioning Services requires Microsoft Active Directory. It provides authentication and authorization mechanisms, as well as a framework within which other related services can be deployed. Microsoft Active Directory is an LDAP-compliant database that contains objects; the most commonly used objects are users, computers, and groups.

Network services

Dynamic Host Configuration Protocol (DHCP) is used to assign IP addresses to servers and systems. Trivial File Transfer Protocol (TFTP) is used for the automated transfer of boot configuration files between servers and systems on a network. Preboot Execution Environment (PXE) is a standard client/server interface that allows networked computers to boot remotely over the network before a local operating system is loaded.
System requirements

Citrix Provisioning Services can be installed on systems meeting the following requirements.

Citrix Provisioning Server:

- Operating system: Windows Server 2012: Standard, Essential, and Datacenter editions; Windows Server 2008 R2 and 2008 R2 SP1: Standard, Enterprise, and Datacenter editions; all editions of Windows Server 2008 (32- or 64-bit).
- Processor: Intel or AMD x86 or x64 compatible; 2 GHz / 3 GHz (preferred) / 3.5 GHz Dual Core/HT or equivalent, to allow for growing capacity.
- Memory: 2 GB RAM; 4 GB for more than 250 vDisks.
- Hard disk: To determine the IOPS needed along with the RAID level, plan your sizing based on the following formulas:
  Total Raw IOPS = Disk Speed IOPS x Number of Disks
  Functional IOPS = ((Total Raw IOPS x Write %) / RAID Penalty) + (Total Raw IOPS x Read %)
  For more detail, refer to http://support.citrix.com/servlet/KbServlet/download/24559-102-647931/
- Network adapter: IP assignment to servers should be static. 1 GB is recommended for fewer than 250 target devices; for more than 250 devices, dual 1 GB is recommended. For high availability, use two NICs for redundancy.
- Prerequisite software: Microsoft .NET 4.0 and Microsoft PowerShell 3.0 loaded on a fresh OS.

The required infrastructure components are described as follows:

- Supported database: Microsoft SQL 2008, SQL 2008 R2, and SQL 2012 Server (32- or 64-bit editions) databases can be used for Provisioning Services. For database sizing, refer to http://msdn.microsoft.com/en-us/library/ms187445.aspx; for HA planning, refer to http://support.citrix.com/proddocs/topic/provisioning-7/pvs-installtask1-plan-6-0.html.
- Supported hypervisor: XenServer 6.0; Microsoft SCVMM 2012 SP1 with Hyper-V 3.0; SCVMM 2012 with Hyper-V 2.0; VMware ESX 4.1, ESX 5, or ESX 5 Update 1; vSphere 5.0, 5.1, and 5.1 Update 1; along with physical devices for 3D Pro graphics (blade servers, Windows Server OS machines, and Windows desktop OS machines with the XenDesktop VDA installed).
- Provisioning Console: Hardware: 2 GHz processor, 2 GB memory, 500 MB hard disk. Supported operating systems: all editions of Windows Server 2008 (32- or 64-bit); Windows Server 2008 R2: Standard, Datacenter, and Enterprise editions; Windows Server 2012: Standard, Essential, and Datacenter editions; Windows 7 (32- or 64-bit); Windows XP Professional (32- or 64-bit); Windows Vista (32- or 64-bit): Business, Enterprise, and Ultimate (retail licensing); all editions of Windows 8 (32- or 64-bit). Prerequisite software: MMC 3.0, Microsoft .NET 4.0, and Windows PowerShell 2.0. If we are using Provisioning Services with XenDesktop, we also require .NET 3.5 SP1, and if we are using it with SCVMM 2012 SP1, we require PowerShell 3.0.
- Supported ESD: Applies only if vDisk Update Management is used; ESD supports WSUS Server 3.0 SP2 and Microsoft System Center Configuration Manager 2007 SP2, 2012, and 2012 SP1.
- Supported target devices: all editions of Windows 8 (32- or 64-bit); Windows 7 SP1 (32- or 64-bit): Enterprise, Professional, and Ultimate (supported only in Private mode); Windows XP Professional SP3 (32-bit) and Windows XP Professional SP2 (64-bit); Windows Server 2008 R2 SP1: Standard, Datacenter, and Enterprise editions; Windows Server 2012: Standard, Essential, and Datacenter editions.
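As a worked example of the hard disk sizing formula quoted above (the disk speed, disk count, write ratio, and RAID penalty below are illustrative assumptions, not Citrix recommendations):

def functional_iops(disk_speed_iops, disk_count, write_pct, raid_penalty):
    """Apply the sizing formula: write IOPS pay the RAID penalty, reads do not."""
    total_raw = disk_speed_iops * disk_count
    read_pct = 1.0 - write_pct
    return (total_raw * write_pct) / raid_penalty + total_raw * read_pct

# Example: eight 15k SAS disks (~175 IOPS each), 40% writes, RAID 10 penalty of 2.
print(functional_iops(175, 8, 0.40, 2))  # -> 1120.0 functional IOPS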
Summary
This article covered the components that make up a Citrix Provisioning Services farm and the system requirements that must be met to run the software.

Resources for Article:

Further resources on this subject:
- Introduction to XenConvert [article]
- Citrix XenApp Performance Essentials [article]
- Getting Started with XenApp 6 [article]

FAQ on Virtualization and Microsoft App-V

Packt
24 Jan 2011
8 min read
Getting Started with Microsoft Application Virtualization 4.6

Q: What is the need for virtualization?
A: With virtual environments, we rapidly gain the agility, scalability, cost savings, and security that almost any business today requires. The following are some advantages:
- Reduces infrastructure downtime by several hours
- Saves time and resources spent deploying/provisioning operating systems to users
- Saves troubleshooting time for application installations

Q: How do cloud service models assist virtualization?
A: The cloud service model is all around us and presents several new ways of thinking about technology:
- Software as a Service (SaaS or S+S): Delivering applications over the network without any local installation or maintenance.
- Platform as a Service (PaaS): Providing solutions, such as an Active Directory solution, as a service, avoiding the deployment tasks.
- Infrastructure as a Service (IaaS): Supplying computing infrastructure as a service. Instead of companies having to buy new hardware, with the maintenance costs that implies, the infrastructure is provided (typically as virtual machines) as they need it.

Q: Where do we stand today with regard to virtualization?
A: Fortunately, today's demand for virtualization is incredibly high, which is why the possibilities and offerings are even greater. We can virtualize servers, appliances, desktops, and applications, and achieve presentation and profile virtualization; you name it, and there is probably already a set of products and technologies you can use to virtualize it. Application virtualization is still an emerging platform, but it is growing rapidly in the IT world. More and more of the dynamic aspects of isolating and scaling application deployments are being implemented, and Microsoft's App-V represents one of the strongest technologies we can rely on.

Q: How does virtualization achieve faster and dynamic deployments?
A: Handling server or desktop deployments is always a painful thing to do, requiring hours of deployment, tuning, and troubleshooting; all of these aspects are inherent in any operating system lifecycle. Having virtual machines as baselines reduces OS deployment from several hours to a few minutes. The desktop virtualization concept provides the end user with the same environment as a local desktop computer while working with remote computing resources. Taking this strategy enhances the provisioning of desktop environments: more resources can be added on demand, and the deployment no longer depends on specific hardware. Ready-to-go virtual machine templates, and self-service portals to provision virtual machines for power users whenever they need a virtual environment to test an application, are some of the other features a virtualization platform can include.

Q: How does virtualization achieve cost savings?
A: Listed below are the two major factors that achieve cost savings:
- Lower power consumption: Large datacenters also mean large electricity consumption; removing the physical layer from your servers translates yearly into a nicely reduced number on electricity bills. This is no small matter; most capacity and cost planning for implementing virtualization also includes the "power consumption" variable. It won't be long until "green datacenters" and "green IT" are a requirement for every mid-size and large business.
- Hardware cost savings: Before thinking about the expensive servers you might need to host your entire infrastructure, let me ask you this: did you know that average hardware resource usage is around 5% to 7%? That means we are currently wasting around 90% of the money invested in that hardware. Virtualization optimizes and protects your investment; we can guarantee that the consolidation of your servers will be not only effective but also efficient.

Q: How does virtualization improve efficiency?
A: There is a common scenario in several organizations where some users depend only on, for example, the Office suite for their work, but the cost of applying a different hardware baseline that fits their needs exactly is extremely high. This is why efficiency is also an important variable in your desktop strategy: using desktop virtualization, you can be certain that you are not over- or under-resourcing any end-user workstation. You can easily provide all the necessary resources to a user for as long as they need them.

Q: How does it achieve scalable and easy-to-manage platforms?
A: A new contingency layer for every machine, taking a snapshot of an operating system, is a concept that didn't exist before virtualization. Whenever you introduce a change in your platform (such as a new service pack release), there is always a risk that things won't be just as fine as they were before. Having a quick, immediate, and safe restore point for a server/desktop can represent a cost-saving solution. Virtual machines and snapshot capabilities give you the features needed to manage and easily maintain labs for testing updates or environment changes, and even the facility to add/remove memory, CPUs, hard drives, and other devices on a machine in just a few seconds.

Q: How does virtualization enhance backup and recovery?
A: Virtual environments let you redesign your disaster recovery plan and minimize any disruption to the services you provide. The possibilities around virtual machine hot backups and straightforward recoveries give you the chance to arrange and define different service level agreements (SLAs) with your customers and company. The virtualization model offers you the possibility of removing the hardware dependencies of your roles, services, and applications; a hardware failure becomes only a minor issue in the continuity of your business, handled simply by moving the virtual machines to different physical servers without major disruption.

Q: How is the application deployment incompatibility issue addressed?
A: Inserting a virtualized environment into our application deployments reduces the time invested in maintaining and troubleshooting operating system and application incompatibilities. Allowing applications to run in a virtualized and isolated environment every time they are deployed removes possible conflicts with other applications. It is also common for organizations to face incompatibility issues with their business applications whenever there is a change: a new operating system, new hardware, or even problems in the development of the application that start generating issues in particular environments. You can say goodbye to those problems, facilitating real-time and secure deployment of applications that are decoupled from tons of requirements.

Q: What is Application Virtualization?
A: Just as virtual machines abstract the hardware layer from physical servers, application virtualization abstracts the application and its dependencies from the operating system, effectively isolating the application from the OS and other applications. Application virtualization, in general terms, represents a set of components and tools that remove the complexity of deploying and maintaining applications for desktop users, preserving only a small footprint on the operating system. More specifically, application virtualization is a process for packaging (or virtualizing) an application and the environment in which the application works, and distributing this package to end users. The use of this package (which can contain more than one application) is completely decoupled from the common requirements (such as the installation and uninstallation processes) attached to applications. The Technical Overview of Application Virtualization offered by Microsoft gives a fine graphical explanation of how normal applications interact with the operating system and its components, and how virtualized applications do the same. Take a look at http://www.microsoft.com/systemcenter/appv/techoverview.mspx.

Q: What are the drawbacks of a normal business application scenario?
A: The three major aspects are:
- Special configuration is required every time the application is deployed.
- Files have to be customized, or special values set, within the application's configuration environment. The application is also interconnected with other components (for example, a Java Runtime Environment, a local database engine, or some other particular requirement).
- It demands several hours every week to support end-user deployments and troubleshoot configurations.

Application virtualization offers us the possibility to guarantee that end users always have the same configuration deployed, no matter when or where, as you only need to configure it once and then wrap the entire set of applications into one package.

Q: How does Application Virtualization differ from running normal applications?
A: In standard OS environments, applications install their settings onto the host operating system, hard-coding the entire system to fit that application's needs. Other applications' settings can be overwritten, possibly causing them to malfunction or break. A common example is two applications co-existing in the same operating system: if those applications share some registry values, the usability of the applications (or even the operating system) can be compromised. With application virtualization, each application brings down its own set of configurations on demand and executes in a way such that it only sees its own settings. Each virtual application can read and write information in its application profile, and can access operating system settings in the registry or DLLs, but cannot change them.

VMware View 5 Desktop Virtualization

Packt
15 Jun 2012
8 min read
Core components of VMware View
This book assumes a familiarity with server virtualization, more specifically VMware vSphere (sometimes referred to as ESX by industry graybeards). Therefore, this article will focus on:
- The VMware vCenter Server
- The types of View Connection Server
- Agent and client software

vCenter Server
VMware vCenter is a required component of a VMware View solution. This is because the View Connection Server interacts with the underlying Virtual Infrastructure (VI) through the vCenter Web Service (typically over port 443). vCenter is also responsible for the complementary components of a VMware View solution provided by the underlying VMware vSphere, including vMotion and DRS (used to balance the virtual desktop load on the physical hosts). When an end customer purchases VMware View bundles, VMware vCenter is automatically included and does not need to be purchased via a separate Stock Keeping Unit (SKU). In environments leveraging vSphere for server virtualization, a vCenter Server is likely to already exist. To level-set on the capabilities that VMware vCenter Server provides, the key terms are listed as follows:
- vMotion: The ability to live migrate a running virtual machine from one physical server to another with no downtime.
- DRS: The vCenter Server capability that balances virtual machines across the physical servers participating in the same vCenter Server cluster.
- Cluster: A collection of physical servers that have access to the same networks and shared storage. The physical servers participating in a vCenter cluster have their resources (for example, CPU, memory, and so on) logically pooled for virtual machine consumption.
- HA: The vCenter Server capability that protects against the failure of a physical server. HA powers up the virtual machines that resided on the failed physical server on available physical servers in the same cluster.
- Folder: A logical grouping of virtual machines, displayed within the vSphere Client.
- vSphere Client: The client-side software used to connect to vCenter Servers (or physical servers running vSphere) for management, monitoring, configuration, and other related tasks.
- Resource pool: A logical pool of resources (for example, CPU, memory, and so on). The virtual machines (or groups of virtual machines) residing in the same resource pool share a predetermined amount of resources.

Designing a VMware View solution often touches on typical server virtualization design concepts, such as proper cluster design. Owing to this overlap in design concepts between server virtualization and VDI, many server virtualization engineers apply exactly the same principles from one solution to the other. The first misstep a VDI architect can make is treating VDI as server virtualization; it is not, and should not be treated as such. Server virtualization is the virtualization of server operating systems. While it is true that VDI does use some server virtualization (for the connection infrastructure, for example), there are many concepts that are new and critical to understand for success. The second misstep a VDI architect can make is underestimating the pure scale of some VDI solutions. The average server virtualization administrator with no VDI in their environment may be tasked with managing a dozen physical servers with a few hundred virtual machines.
The authors of this book have been involved in VDI solutions spanning tens of thousands of vDesktops, well beyond the limits of a traditional VMware vSphere design. VDI is often performed on a different scale. The concepts of architectural scaling are covered later in this book, but many of the scaling concepts revolve around the limits of VMware vCenter Server. It should be noted that VMware vCenter Server was originally designed to be the central management point for enterprise server virtualization environments. While VMware continues to work on its ability to scale, designing around VMware vCenter Server will be important. So why does the VDI architect need VMware vCenter in the first place? VMware vCenter is the gateway for all virtual machine tasks in a VMware View solution. This includes the following tasks:
- The creation of virtual machine folders to organize vDesktops
- The creation of resource pools to segregate physical resources for different groups of vDesktops
- The creation of vDesktops
- The creation of snapshots

VMware vCenter is not used to broker the connection of an end device to a vDesktop. Therefore, an outage of VMware vCenter should not impact inbound connections to already-provisioned vDesktops, although it will prevent additional vDesktops from being built, refreshed, or deleted. Because of vCenter Server's importance in a VDI solution, additional steps are often taken to ensure its availability, even beyond the considerations made in a typical server virtualization solution. Later in this book, there is a question of whether an incumbent vCenter Server should be used for an organization's VDI, or whether a secondary vCenter Server infrastructure should be built.

View Connection Server
View Connection Server is the primary component of a VMware View solution; if VMware vCenter Server is the gateway for management communication to the virtual infrastructure and the underlying physical servers, the VMware View Connection Server is the gateway that end users pass through to connect to their vDesktop. In classic VDI terms, it is VMware's broker that connects end users with workspaces (physical or virtual). View Connection Server is the central point of management for the VDI solution and is used to manage almost the entire solution infrastructure. However, there will be times when the architect needs to make considerations for vCenter cluster configurations, as discussed later in this book. In addition, there may be times when the VMware View administrator needs access to the vCenter Server.

The types of VMware View Connection Servers
There are several options available when installing the View Connection Server. Therefore, it is important to understand the different types of View Connection Servers and the role they play in a given VDI solution. The following are the three configurations in which View Connection Server can be installed:
- Full: This option installs all the components of View Connection Server, including a fresh Lightweight Directory Access Protocol (LDAP) instance.
- Security: This option installs only the components necessary for the View Connection portal. View Security Servers do not need to belong to an Active Directory domain (unlike the View Connection Server), as they do not access any authentication components (for example, Active Directory).
- Replica: This option creates a replica of an existing View Connection Server instance for load balancing or high availability purposes.
For a Replica installation, the authentication/LDAP configuration is copied from the existing View Connection Server. Our goal is to design solutions that are highly available for our end customers; therefore, all the designs will leverage two or more View Connection Servers (for example, one Full and one Replica). The following services are installed during a Full installation of View Connection Server:
- VMware View Connection Server
- VMware View Framework Component
- VMware View Message Bus Component
- VMware View Script Host
- VMware View Security Gateway Component
- VMware View Web Component
- VMware VDMDS

VMware VDMDS provides the LDAP directory services.

View Agent
View Agent is a component installed on the target desktop, whether physical (seldom) or virtual (almost always). View Agent allows the View Connection Server to establish a connection to the desktop. View Agent also provides the following capabilities:
- USB redirection: Making a locally connected USB device appear to be connected to the vDesktop
- Single Sign-On (SSO): Intelligent credential handling that requires only one secured and successful authentication login request, as opposed to logging in multiple times (for example, at the connection server, the vDesktop, and so on)
- Virtual printing via ThinPrint technology: The ability to streamline printer driver management through the use of ThinPrint (OEM)
- PCoIP connectivity: The purpose-built VDI protocol made by Teradici and used by VMware in their VMware View solution
- Persona management: The ability to manage a user profile across an entire desktop landscape; the technology came to VMware via its acquisition of RTO
- View Composer support: The ability to use linked clones and thin provisioning to drastically reduce the operational effort of managing a mid-to-large-scale VMware View environment

View Client
View Client is a component installed on the end device (for example, the user's laptop). View Client allows the device to connect to a View Connection Server, which then directs the device to an available desktop resource. The following are the two types of View Clients:
- View Client
- View Client with Local Mode

These separate versions have their own unique installation bits (only one may be installed at a time). View Client provides all of the functionality needed for an online and connected worker. If Local Mode will be leveraged in the solution, View Client with Local Mode should be installed. VMware View Local Mode is the ability to securely check out a vDesktop to a local device for use in disconnected scenarios (for example, in the middle of the jungle). There is roughly an 80 MB difference between the installed packages (View Client with Local Mode being larger). For most scenarios, 80 MB of disk space will not make or break the solution, as even flash drives are well beyond an 80 MB threshold. In addition to providing the ability to connect to a desktop, View Client talks to View Agent to perform tasks such as USB redirection and Single Sign-On.
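Since vCenter is the gateway for the VDI management tasks listed earlier (folders, resource pools, and so on), those tasks can also be scripted. The following is a minimal PowerCLI sketch, assuming VMware PowerCLI is installed; the server, folder, cluster, and pool names are placeholder assumptions, not values from this article:

```powershell
# Minimal PowerCLI sketch: create a VM folder and a resource pool
# for one group of vDesktops. All names below are placeholders.
Connect-VIServer -Server 'vcenter.example.local'

# Create a folder under the datacenter's VM root to organize vDesktops.
$vmRoot = Get-Folder -Name 'vm' -Type VM
New-Folder -Name 'vDesktops-Finance' -Location $vmRoot

# Create a resource pool to segregate physical resources for that group.
$cluster = Get-Cluster -Name 'VDI-Cluster01'
New-ResourcePool -Name 'RP-vDesktops-Finance' -Location $cluster
```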

Your first step towards Hyper-V Replica

Packt
11 Oct 2013
12 min read
(For more resources related to this topic, see here.)

The Server Message Block protocol
When an enterprise starts to build a modern datacenter, the first thing that should be done is to set up the storage. With the introduction of Windows Server 2012, a new, improved version of the Server Message Block (SMB) file-sharing protocol was introduced. This new version is 3.0 and is designed for modern datacenters. It allows administrators to create file shares and deploy critical systems on them. This is really good, because now administrators deal with file shares and security permissions instead of complex connections to storage arrays. The idea is to set up one central SMB file server and attach the underlying storage to it. This SMB server initiates the connection to the underlying storage, and the logical disks created on the storage are attached to it. Different file shares with different access permissions are then created on it. These file shares can be used by different systems, such as Hyper-V storage for virtual machine files, MS SQL Server database files, Exchange Server database files, and so on. This is an advantage, because all of the data is stored in one location, which means easier administration of the data files. It is important to note that this is a new concept and is only available with Windows Server 2012. It comes with no performance degradation on critical systems, because SMB 3.0 was designed for this type of data traffic.

Setting up security permissions on SMB file shares
Because SMB file shares contain sensitive data files, whether virtual machine or SQL Server database files, proper security permissions need to be applied to them in order to ensure that only authorized users and machines have access. For this reason, the SMB file server has to be connected to the LAN part of the infrastructure as well; security permissions are read from an Active Directory server. For example, if Hyper-V hosts have to read and write on a share, then only the computer accounts of those hosts need permissions on that share, and no one else. Similarly, if the share holds MS SQL Server database files, then only the SQL Server computer accounts and the SQL Server service account need permissions on that share.
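As a minimal sketch of this principle, the share below grants full access only to the Hyper-V hosts' computer accounts; the share name, path, domain, and host names are placeholder assumptions:

```powershell
# Create a folder and share it for Hyper-V virtual machine files, granting
# full access only to the hosts' computer accounts (note the trailing $).
New-Item -Path 'D:\VMStore' -ItemType Directory

New-SmbShare -Name 'VMStore' -Path 'D:\VMStore' `
    -FullAccess 'CONTOSO\HV01$', 'CONTOSO\HV02$'

# The NTFS permissions on D:\VMStore must grant the same accounts access
# as well; the share-level permissions alone are not sufficient.
```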
Migration of virtual machines
Virtual machine high availability is the reason why failover clusters are deployed. High availability means that there is no system downtime, or only a minimal accepted system downtime. This is different from system uptime: a system can be up and running but not available. Hyper-V hosts in modern datacenters run many virtual machines, depending on the underlying hardware resources, and each of these systems is very important to the consumer. Let's say that a Hyper-V host malfunctions at a bank, and that this host runs several critical systems, one of them being the ATM system. If this happens, users won't be able to use the ATMs. This is where virtual machine high availability comes into the picture. It is achieved through the implementation of a failover cluster. A failover cluster ensures that when a node of the cluster becomes unavailable, all of the virtual machines on that node are safely migrated to another node of the same cluster. You can even set rules to specify the host to which the virtual machines should fail over. Migration is also useful when maintenance tasks need to be done on some of the nodes of the cluster: the node can safely be shut down, and all of the virtual machines, or at least the most critical ones, will be migrated to another host.

Configuring Hyper-V Replica
Enterprises tend to increase their system availability and deliver end user services. There are various ways this can be done, such as making your virtual machines highly available, disaster recovery methods, and backup of critical systems. In case of system malfunction or disasters, the IT department needs to react fast in order to minimize system downtime. Disaster recovery methods are valuable to the enterprise, which is why it is imperative that the IT department implements them. When these methods are built into the existing platform that the enterprise uses, and are easy to configure and maintain, you have a winning combination. This is a suitable scenario for Hyper-V Replica to step up. It is easy to configure and maintain, and it is integrated with Hyper-V 3.0, which comes with Windows Server 2012. This is why Hyper-V Replica is becoming more attractive to IT departments when it comes to disaster recovery methods. In this article, we will learn what the Hyper-V Replica prerequisites are, and the configuration steps for Hyper-V Replica in different deployment scenarios. Because Hyper-V Replica can be used with failover clusters, we will learn how to configure a failover cluster with Windows Server 2012, and we will introduce a new concept for virtual machine file storage called SMB.

Hyper-V Replica requirements
Before we can start with the implementation of Hyper-V Replica, we have to be sure we have met all the prerequisites. In order to implement Hyper-V Replica, we have to install Windows Server 2012 on our physical machines. Windows Server 2012 is a must, because Hyper-V Replica is functionality available only with that version of Windows Server. Next, you have to install Hyper-V on each of the physical machines; Hyper-V Replica is a built-in feature of Hyper-V 3.0, which comes with Windows Server 2012. If you plan to deploy Hyper-V on non-domain servers, you don't require an Active Directory domain. If you want to implement a failover cluster on your premises, then you must have an Active Directory domain. In addition, if you want your replication traffic to be encrypted, you can use self-signed certificates from the local servers, or import a certificate generated by a Certificate Authority (CA). A CA is a server running Active Directory Certificate Services, a Windows Server role that should be installed on a separate server. Certificates from such CAs are imported to the Hyper-V Replica-enabled hosts and associated with Hyper-V Replica to encrypt the traffic generated from the primary site to the replica site. A primary site is the production site of your company; a replica site is a site that is not part of the production site and is where all the replication data is stored. If we have checked and cleared all of these prerequisites, then we are ready to start with the deployment of Hyper-V Replica.

Virtual machine replication in a Failover Cluster environment
Hyper-V Replica can be used with failover clusters, whether they reside in the primary or the replica site. You can have the following deployment scenarios:
- Hyper-V host to a Failover Cluster
- Failover Cluster to a Failover Cluster
- Failover Cluster to a Hyper-V node

Hyper-V Replica configuration when failover clusters are used is done with the Failover Cluster Management console.
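On standalone (non-clustered) hosts, the same replication settings can also be scripted with the Hyper-V PowerShell module. The following is a minimal sketch, assuming Kerberos authentication over port 80 and placeholder server, path, and VM names:

```powershell
# On the replica server: accept inbound replication (Kerberos, port 80).
# The matching inbound Hyper-V Replica firewall rule must also be enabled.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\ReplicaStorage'

# On the primary server: enable replication for one virtual machine and
# start the initial replication over the network.
Enable-VMReplication -VMName 'SRV-APP01' `
    -ReplicaServerName 'hv-replica.contoso.local' `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'SRV-APP01'
```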
For replication to take place, the Hyper-V Replica Broker role must be installed on the failover clusters, whether they are in the primary or replica sites. The Hyper-V Replica Broker role is installed like any other failover cluster role.

Failover scenarios
In Hyper-V Replica, there are three failover scenarios:
- Test failover
- Planned failover
- Unplanned failover

Test failover
As the name says, this is used only for testing purposes, such as health validation and checking Hyper-V Replica functionality. When a test failover is performed, there is no downtime for the systems in the production environment. Test failover is done at the replica site. When a test failover is in progress, a new virtual machine is created, which is a copy of the virtual machine for which you are performing the test failover. It is easily distinguished because the new virtual machine has "Test" appended to its name. It is safe for the test virtual machine to be started because it has no network adapter, so no one can access it; it serves only for testing purposes. You can log in to it and check application consistency. When you have finished testing, right-click on the virtual machine and choose Stop Test Failover, and the test virtual machine is deleted.

Planned failover
Planned failover is the safest type and the only one that should normally be performed. It is usually done when Hyper-V hosts have to be shut down for various reasons, such as transport or maintenance. This is similar to Live Migration: you perform a planned failover so that you don't lose virtual machine availability. The first thing you have to do is check whether the replication process for the virtual machine is healthy. To do this, start the Hyper-V Management console at the primary site, choose the virtual machine, and then click on the Replication tab at the bottom. If the replication health status is Healthy, it is fine to do the planned failover. If the health status doesn't show Healthy, you need to do some maintenance until it does.

Unplanned failover
Unplanned failover is used only as a last resort. It always results in data loss, because any data that has not been replicated is lost during the failover. While planned failover is done at the primary site, unplanned failover is done at the replica site. When performing an unplanned failover, the replica virtual machine is started. At that moment, Hyper-V checks whether the primary virtual machine is on. If it is on, the failover process is stopped; if it is off, the failover process continues and the replica virtual machine becomes the primary virtual machine.

What is virtualization?
Virtualization is a concept in IT that has its roots back in the 1960s, when mainframes were used. In recent years, virtualization has become more accessible because different user-friendly tools, such as Microsoft Hyper-V, were introduced to customers. These tools allow the administrator to configure and administer a virtualized environment easily. Virtualization is a concept where a hypervisor, which is a type of middleware, is deployed on a physical device. This hypervisor allows the administrator to deploy many virtual servers that execute their workloads on that same physical machine. In other words, you get many virtual servers on one physical device. This concept gives better utilization of resources and is thus cost-effective.
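The test failover described above can also be triggered from PowerShell on the replica server. A minimal sketch, with a placeholder VM name:

```powershell
# Start a test failover on the replica server: this creates the isolated
# "SRV-APP01 - Test" virtual machine from the chosen recovery point.
Start-VMFailover -VMName 'SRV-APP01' -AsTest

# ...validate the test virtual machine, then discard it:
Stop-VMFailover -VMName 'SRV-APP01'
```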
Hyper-V 3.0 features
With the introduction of Windows Server 2008 R2, two new concepts regarding virtual machine high availability were introduced. Virtual machine high availability is a concept that allows the virtual machine to execute its workload with minimum downtime. The idea is to have a mechanism that transfers the execution of the virtual machine to another physical server in case of node malfunction. In Windows Server 2008 R2, a virtual machine can be live migrated to another Hyper-V host; there is also quick migration, which allows multiple migrations from one host to another. In Windows Server 2012, there are new features regarding virtual machine mobility: not only can you live migrate a virtual machine, but you can also migrate all of its associated files, including the virtual machine disks, to another location. Both mechanisms improve high availability. Live migration is functionality that allows you to transfer the execution of a virtual machine to another server with no downtime. Previous versions of Windows Server lacked disaster recovery mechanisms. A disaster recovery mechanism is any tool that allows the user to configure a policy that will minimize the downtime of systems in case of disasters. That is why, with the introduction of Windows Server 2012, Hyper-V Replica is installed together with Hyper-V and can be used in both clustered and non-clustered environments. Windows Failover Clustering is a Windows feature that is installed from the Add Roles and Features Wizard in Server Manager; it makes the server ready to be joined to a failover cluster. Hyper-V Replica gives enterprises great value, because it is an easy-to-implement and easy-to-configure Business Continuity and Disaster Recovery (BCDR) solution. It is suitable for Hyper-V virtualized environments because it is built into the Hyper-V role of Windows Server 2012. The outcome is that virtual machines running at one site, called the primary site, can easily be replicated to another backup site, called the replica site, in case of disasters. The replication between the sites is done over an IP network, so it can be done in LAN environments or across a WAN link. This BCDR solution provides efficient and periodic replication. In case of disaster, it allows the production servers to be failed over to a replica server. This is very important for critical systems, because it reduces their downtime. It also allows the Hyper-V administrator to restore virtual machines to a specific point in time from the recovery history of a given virtual machine.

Security considerations
Restricting access to Hyper-V is very important; you want only authorized users to have access to the Hyper-V management console. When Hyper-V is installed, a local security group named Hyper-V Administrators is created on the server. Every user who is a member of this group can access and configure Hyper-V settings. Another way to increase the security of Hyper-V is to change the default port numbers used for Hyper-V Replica authentication: by default, Kerberos uses port 80, and certificate-based authentication uses port 443. Certificates also encrypt the traffic generated from the primary to the replica site. Finally, you can create a list of authorized servers from which replication traffic will be accepted.

Summary
There are new concepts and useful features that make IT administrators' lives easier. Windows Server 2012 is designed for enterprises that want to deploy modern datacenters with state-of-the-art capabilities.
The new user interface, the simplified configuration, and all of the built-in features are what make Windows Server 2012 appealing to IT administrators.

Resources for Article:

Further resources on this subject:
- Dynamically enable a control (Become an expert) [Article]
- Choosing the right flavor of Debian (Simple) [Article]
- So, what is Microsoft © Hyper-V server 2008 R2? [Article]

Hyper-V building blocks for creating your Microsoft virtualization platform

Packt
05 Feb 2015
6 min read
In this article by Peter De Tender, the author of Mastering Hyper-V, we will talk about the building blocks for creating your virtualization platform through Hyper-V. We need to clearly define a detailed list of required server hardware, storage hardware, physical and virtual machine operating systems, and anything else we need to be able to build our future virtualization platform. These components are known as the Hyper-V building blocks, and we describe each one of them in the following sections.

(For more resources related to this topic, see here.)

Physical server hardware
One of the first important components when building a virtualization platform is the physical server hardware. One of the key elements to check is the Microsoft certified hardware and software supportability and compatibility list. This list gives a detailed overview of all tested and certified server brands, server types, and their corresponding configuration components. While it is not a requirement to use this kind of machine, we can only recommend it, based on our own experience. Imagine you have a performance issue with one of your applications running inside a VM, hosted on non-supported hardware, using non-supported physical NICs, and you're not getting decent support from your IT partner or Microsoft on that specific platform, as the hardware is not supported. The landing page for this compatibility list is http://www.windowsservercatalog.com. After checking the compatibility of the server hardware and software, you need to find out which system resources are available for Hyper-V. The following table shows the maximum scaling possibilities for different components of the Hyper-V platform (the original source is the Microsoft TechNet Library article at http://technet.microsoft.com/en-us/library/jj680093.aspx):

System resource (maximums)                  | Windows 2008 R2 | Windows Server 2012 (R2)
Host: logical processors on hardware        | 64              | 320
Host: physical memory                       | 1 TB            | 4 TB
Host: virtual processors per host           | 512             | 1,024
Virtual machine: virtual processors per VM  | 4               | 64
Virtual machine: memory per VM              | 64 GB           | 1 TB
Virtual machine: active virtual machines    | 384             | 1,024
Virtual machine: virtual disk size          | 2 TB            | 64 TB
Cluster: nodes                              | 16              | 64
Cluster: virtual machines                   | 1,000           | 4,000

Physical storage hardware
Next to the physical server component, another vital part of the virtualization environment is the storage hardware. The Hyper-V platform supports multiple kinds of storage, namely DAS, NAS, and/or SAN:
- Direct Attached Storage (DAS): Storage directly connected to the server (think of disks located inside the server chassis).
- Network Attached Storage (NAS): Storage provided via the network and presented to the Hyper-V server or virtual machines as file shares; access is file-based. Windows Server 2012 and 2012 R2 use SMB 3.0 as the file-sharing protocol, which allows plain file shares to be used as virtual machine storage locations.
- Storage Area Network (SAN): Also network-based storage, but relying on block-based access; the volumes are presented as local disks to the host. Popular protocols within SAN environments are iSCSI and Fibre Channel.

The key point of consideration when sizing your disk infrastructure is providing enough storage, at the best performance available, and preferably with high availability as well.
Depending on the virtual machines' required resources, the disk subsystem can be based on high-performance/expensive SSDs (solid-state drives), performant/medium-priced SAS (Serial Attached SCSI) disks, or slower but cheaper SATA (Serial ATA) disks, or even a combination of all these types. Although slightly outside Hyper-V as such, one technology that is often configured and used in combination with Hyper-V Server 2012 R2 is Storage Spaces. Storage Spaces is new as of Server 2012 and can be considered a storage virtualization subsystem. Storage Spaces are disk volumes built on top of physical storage pools, which are in fact just a bunch of physical disks (JBOD). A very important point to note is that the aforementioned network-based SAN and NAS storage solutions cannot be part of Storage Spaces, as it is configurable only for DAS storage.

Physical network devices
It's easy to understand that your virtual platform depends on your physical network devices, such as the physical (core) switches and the physical NICs in the Hyper-V hosts. When configuring Hyper-V, there are a few configurations to take into consideration.

NIC Teaming
NIC Teaming is the configuration of multiple physical network interface cards into a single team, mainly used for high availability or higher bandwidth. NIC Teaming as such is not a Hyper-V technology, but Hyper-V can make good use of this operating system feature. When configuring a NIC team, the physical network cards are bundled and presented to the host OS as one or more virtual network adapters. Within Hyper-V, two basic sets of algorithms exist that you can choose from during the configuration of Hyper-V networking:
- Switch-independent mode: In this configuration, the teaming is configured regardless of the switches to which the host is connected. The main advantage of this configuration is that the team can span multiple switches (for example, two NICs in the host connected to switch 1 and two NICs connected to switch 2).
- Switch-dependent mode: In this configuration, the underlying switch is part of the teaming configuration; this automatically requires all NICs in the team to be connected to the same switch.

NIC Teaming is managed through the Server Manager / NIC Teaming interface or by using PowerShell cmdlets (a scripted example follows at the end of this section). Depending on your server hardware and brand, the vendor might provide specific configuration software to achieve the same; for example, the HP ProLiant series of servers allows for HP Team configuration, which is managed using a specific HP Team tool.

Network virtualization
Within Hyper-V 2012 R2, network virtualization not only refers to the virtual networking connections used by the virtual machines, but also to the technology that allows for true network isolation between the different networks in which virtual machines operate. This feature set is very important for hosting providers, who run virtual machines for different customers in isolated networks. You have to make sure that no connection is possible between the virtual machines of customer A and the virtual machines of customer B; that is exactly the main purpose of network virtualization. Another possible way of configuring network segmentation is by using VLANs. However, VLANs also require configuration on the physical switches, whereas the network virtualization described here runs entirely inside the Hyper-V virtual network switch.
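As referenced above, here is a minimal PowerShell sketch of a switch-independent team bound to a Hyper-V external virtual switch; the adapter, team, and switch names are placeholder assumptions:

```powershell
# Create a switch-independent NIC team from two physical adapters.
New-NetLbfoTeam -Name 'HV-Team' `
    -TeamMembers 'Ethernet 1', 'Ethernet 2' `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm HyperVPort

# Bind a Hyper-V external virtual switch to the teamed adapter, keeping
# it dedicated to virtual machine traffic (no management OS vNIC).
New-VMSwitch -Name 'External-vSwitch' `
    -NetAdapterName 'HV-Team' `
    -AllowManagementOS $false
```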
Server editions and licensing
The last component that comprises the Hyper-V building blocks is the server editions and licensing of the physical and virtual machine operating systems.

Summary
In this article, we looked at the various building blocks for building a virtualization platform using Hyper-V.

Installing the App-V Sequencer, Client and Streaming Server

Packt
10 Jan 2011
3 min read
Getting Started with Microsoft Application Virtualization 4.6
Virtualize your application infrastructure efficiently using Microsoft App-V:
- Publish, deploy, and manage your virtual applications with App-V
- Understand how Microsoft App-V can fit into your company
- Guidelines for planning and designing an App-V environment
- Step-by-step explanations to plan and implement the virtualization of your application infrastructure

Installing the App-V Sequencer
After reviewing the requirements and recommendations, you can see that the App-V Sequencer installation is pretty straightforward. The procedure is as follows:
1. Once you run the installation file, you should get a notification for the missing requirement, in this case the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86). Click on Install.
2. On the first wizard page, click on Next.
3. Accept the License Terms and click on Next.
4. Select the installation path for the App-V Sequencer binaries. Click on Next.
5. Click on Install and the installation process will start.
6. After the installation completes, you can automatically launch the application, where you can see the new and refreshing interface.

Installing the App-V Client
The installation of the App-V Client component is also very simple and intuitive. The only consideration before starting the installation is that you should already have the proper cache size defined.
1. Once you start the installation, a few prerequisites will be installed. On the first page of the wizard, click on Next.
2. Accept the License Terms and click on Next.
3. Select the Custom setup type and click on Next.
4. Accept or modify the installation path for the App-V Desktop Client. Verify that the data locations used by the App-V Desktop Client, including the drive letter that will be used, are the same as the ones selected for the App-V Sequencer. Click on Next.
5. Now you can select the cache size used by the client to store the loaded applications. The default is the maximum size of 6 GB (6,144 MB), or you can use the Use free disk space threshold option, where you set the value for the minimum hard disk space to keep available. Click on Next.
6. On this page, you can set the behavior of the runtime package. The only option recommended to change from the default selection is marking On Publishing Refresh under Automatically Load Application. The Application Source Root option, left blank here (the default), is used when you want to override the streaming location of the .sft files (this location is set in the .osd file of the App-V package). If you set a path in the Application Source Root, the applications will look for the SFT in that location instead of the one they receive in the OSD. This option is another alternative when you are using slow links, to avoid transmitting large amounts of data. Also note that you can use the auto-load options; in this example, Automatically load previously used applications has been selected.
7. On the next page, you can configure the server you are receiving the packages from and the communication method used. In this case, the server's name is appv-server and the type of communication is Application Virtualization Server, using the RTSP protocol on port 554. Click on Next.
8. On the last page, just click on Install.

After the wizard completes, you can use the App-V Client Management Console to verify the Publishing Servers options.
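For larger rollouts, the wizard-driven client installation shown above is usually automated. The following one-liner is a minimal sketch using only generic Windows Installer switches; the MSI filename is a placeholder, and any App-V-specific MSI properties (cache size, publishing server, and so on) would need to be taken from the product documentation rather than from this sketch:

```powershell
# Hypothetical silent installation of the App-V Desktop Client with
# verbose logging, using generic msiexec switches only.
msiexec /i "appv_client_setup.msi" /qn /l*v "C:\Temp\appv_client_install.log"
```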

Designing and Building a Horizon View 6.0 Infrastructure

Packt
22 Oct 2014
18 min read
This article is written by Peter von Oven, the author of VMware Horizon View Essentials. In this article, we will start by taking a closer look at the design process. We will then look at the reference architecture and how we start to put together a design, building out the infrastructure for a production deployment.

Proving the technology – from PoC to production
In this section, we are going to discuss how to approach a VDI project. This is a key and very important piece of work that needs to be completed in the very early stages, and it is somewhat different from how you would typically approach an IT project. Our starting point is to focus on the end users rather than the IT department. After all, these are the people who will be using the solution on a daily basis and who know what tools they need to get their jobs done. Rather than giving them what you think they need, let's ask them what they actually need and then, within reason, deliver it. It's the old saying of not trying to fit a square peg into a round hole: no matter how hard you try, it's just never going to fit. First and foremost, we need to design the technology around the user requirements, rather than building a backend infrastructure only to find that it doesn't deliver what the users require.

Assessment
Once you have built your business case and validated it against your EUC strategy, and there is a requirement for delivering a VDI solution, the next stage is to run an assessment. It's quite fitting that this book is entitled "Essentials", as this stage of the project is exactly that: essential for a successful outcome. We need to build up a picture of what the current environment looks like, ranging from what applications are being used to the types of access devices. This goes back to the earlier point about giving the users what they need, and the only way to find that out is to conduct an assessment. By doing this, we create a baseline. Then, as we move into defining the success criteria and proving the technology, we have the baseline as a reference point to demonstrate how we have improved on current working practice and delivered on the business case and strategy. There are a number of tools that can be used in the assessment phase to gather the required information, for example, Liquidware Labs Stratusphere FIT or SysTrack from Lakeside Software. Don't forget to actually talk to the users as well, so you are armed with the user's perspective in addition to the hard-and-fast facts from the assessment.

Defining the success criteria
The key objective in defining the success criteria is to document what a "good" solution looks like for the project to succeed and become production-ready. We need to clearly define the elements that must function correctly in order to move from proof of concept to proof of technology, and then into a pilot phase, before deploying into production. You need to fully document what these elements are and get the end users or other project stakeholders to sign up to them; it's almost like creating a statement of work with a clearly defined list of tasks. Another important factor is to ensure that, during this phase of the project, the criteria don't start to grow beyond the original scope. By that, we mean additional items should not get added to the success criteria, or at least not without discussion first. It may well transpire that something key was missed; however, if you have conducted your assessment thoroughly, this shouldn't happen.
Another thing that works well at this stage is to involve the end users. Set up a steering committee or advisory panel by selecting people from different departments to act as sponsors within their areas of business. Actively involve them in the testing phases, but also get them on board early to get their input in shaping the solution. Too many projects fail when an end user tries something that didn't work, even though the thing they tried is not actually a relevant use case or a critical line-of-business application, and therefore shouldn't derail the project. If we have a set of success criteria defined up front that the end users have signed up to, anything outside those criteria is not in scope; if it's not defined in the document, it should be disregarded as not being part of what success should look like.

Proving the technology
Once the previous steps have been discussed and documented, we should be able to build a picture of what's driving the project. We will understand what you are trying to achieve and deliver and, based on the hard-and-fast facts from the assessment phase, be able to work out what success should look like. From there, we can then move into testing some form of the technology, should that be a requirement. There are three key stages within the testing cycle to consider, and it might be the case that you don't need all of them. The three stages are as follows:
- Proof of concept (PoC)
- Proof of technology (PoT)
- Pilot

In the next sections, we will briefly cover what each of these stages means and why you might or might not need them.

Proof of concept
A proof of concept typically refers to a partial solution, often built on whatever old hardware is available, that involves a relatively small number of users, usually within the confines of the IT department acting in business roles, to establish whether the system satisfies some aspect of the purpose it was designed for. Once proven, one of two things happens. The first is that nothing happens, because it was just the IT department playing with technology and there wasn't a real business driver in the first place; this is usually down to the previous steps not having been defined. In a similar way, without any defined success criteria the PoC will also fail, as you don't know exactly what you set out to prove. The second outcome is that the project moves into a pilot phase, which we will discuss in a later section. You could also consider moving directly into the pilot phase and bypassing the PoC altogether; maybe a demonstration of the technology would suffice, and using a demo environment over a longer period would show you how the technology works.

Proof of technology
In contrast to the PoC, the objective of a proof of technology is to determine whether or not the proposed solution or technology will integrate into your existing environment, and therefore to demonstrate compatibility. The objective is to highlight any technical problems specific to your environment, such as how your bespoke systems might integrate. As with the PoC, a PoT is typically run by the IT department, and no business users are involved; a PoT is purely a technical validation exercise.

Pilot
A pilot refers to what is almost a small-scale rollout of the solution in a production-style environment, targeting a limited scope of the intended final solution.
The scope may be limited by the number of users who can access the pilot system, the business processes affected, or the business partners involved. The purpose of a pilot is to test, often in a production-like environment, whether the system works as designed, while limiting business exposure and risk. It also touches real users, so as to gauge feedback on what will ultimately become the live, production solution. This is a critical step in achieving success, as the users are the ones who have to interact with the system on a daily basis, and it is the reason why you should set up some form of working group to gather their feedback. That also mitigates the risk of the project failing: the solution may deliver everything the IT department could ever wish for, but if it goes live and the first user to log on reports a bad experience or poor performance, you may as well not have bothered. The pilot should be carefully scoped, sized, and implemented, as we will discuss in the next section.

The pilot phase
In this section, we are going to discuss the pilot phase in a bit more detail and break it down into distinct stages. These are important, as the output from the pilot will ultimately shape the design of your production environment.

Phase 1 – pilot design
The pilot infrastructure should be designed on the same hardware platforms on which the production solution is going to be deployed, for example, the same servers and storage. This takes into account any anomalies between platforms and configuration differences that could affect scalability or, more importantly, performance. Even at the pilot stage, the design is absolutely key, and you should make sure you take the production design into account. Why? Basically because many pilot solutions end up going straight into production, and more and more users get added beyond those scoped for the pilot. It's great going live with the solution and not having to go back and rebuild it, but when you start to scale by adding more users and applications, you might have issues due to the pilot sizing. It may sound obvious, but often with a successful pilot the users just keep on using it and additional users get added. If it's only ever going to be a pilot, that's fine, but keep this in mind and ask the question: if you are planning on taking the pilot straight into production, design it for production. It is always useful to work from a prerequisites document to understand the different elements that need consideration in the design. Key design elements include:
- Hardware sizing (servers: CPU, memory, and consolidation ratios)
- Pool design (based on user segmentation)
- Storage design (local SSD, SAN, and acceleration technologies)
- Image creation (rebuild from scratch and optimize for VDI)
- Network design (load balancing and external access)
- Antivirus considerations
- Application delivery (delivering virtually versus installing in the core image)
- User profile management
- Floating or dedicated desktop assignments
- Persistent or non-persistent desktop builds (linked clone or full clone)

Once you have all this information, you can start to deploy the pilot.

Phase 2 – pilot deployment
In the deployment phase of the pilot, we start building out the infrastructure, deploying the test users, building the OS images, and then begin testing.
Phase 3 – pilot test During the testing phase, the key thing is to work closely with the end users and your sponsors, showing them the solution and how it works, closely monitoring the users, and assessing the solution as it's being used. This allows you to keep in contact with the users and give them the opportunity to continually provide real-time feedback. It also allows you to answer questions and make adjustments and enhancements on the fly, rather than waiting until the end of the project only to be told it didn't work or that the users simply didn't understand something. This then leads us onto the last section, the review. Phase 4 – pilot review This final stage sometimes tends to get forgotten. We have deployed the solution, the users have been testing it, and then it ends there for whatever reason. However, there is one very important last thing to do to enable the customer to move to production. We need to measure the user experience or the IT department's experience against the success criteria we set out at the start of this process. We need to get customer sign-off and agreement that we have successfully met all the objectives and requirements. If this is not the case, we need to understand the reasons why. Have we missed something in the use case, have the user requirements changed, or is it simply a perception issue? Whatever the case, we need to cycle round the process again. Go back to the use case, understand and reevaluate the user requirements (what it is that is seemingly failing or not behaving as expected), and then tweak the design or make the required changes and get them to test the solution again. We need to continue this process until we get acceptance and sign-off; otherwise, we will not get to the final solution deployment phase. When the project has been signed off after a successful pilot test, there is no reason why you cannot deploy the technology in production. Now that we have talked about how to prove the technology and successfully demonstrated that it delivers against both our business case and user requirements, in the next sections, we are going to start looking at the design for our production environment. Designing a Horizon 6.0 architecture We are going to start this section by looking at the VMware reference architecture for Horizon View 6.0 before we go into more detail around the design considerations, best practices, and then sizing guidelines. The pod and block reference architecture VMware has produced a reference architecture model for deploying Horizon View, with the approach being to make it easy to scale the environment by adding set component pieces of infrastructure, known as View blocks. To scale the number of users, you add View blocks up to the maximum configuration of five blocks. This maximum configuration of five View blocks is called a View pod. The important numbers to remember are that each View block supports up to a maximum of 2,000 users, and a View pod is made up of up to five View blocks, therefore supporting a maximum of 10,000 users. The View block contains all the infrastructure required to host the virtual desktop machines, so appropriately sized ESXi hosts, a vCenter Server, and the associated networking and storage requirements. We will cover the sizing aspects later on in this article. The following diagram shows an individual View block:
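The block and pod limits above reduce to simple arithmetic when you are working out how much infrastructure a given user count implies. Here is a minimal PowerShell sketch of that calculation; the 12,000-user target is a hypothetical example, not a figure from the reference architecture:

# Pod and block limits from the Horizon View reference architecture
$usersPerBlock = 2000
$blocksPerPod  = 5

$targetUsers = 12000   # hypothetical target user count
$blocks = [math]::Ceiling($targetUsers / $usersPerBlock)   # 6 View blocks
$pods   = [math]::Ceiling($blocks / $blocksPerPod)         # 2 View pods

Write-Output "$targetUsers users => $blocks View blocks across $pods View pod(s)"

At 12,000 users, you tip over the 10,000-user pod ceiling, which is exactly the situation the Cloud Pod Architecture covered later in this article is designed to address.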
Apart from having a View block that supports the virtual desktop machines, there is also a management block for the supporting infrastructure components. The management block contains the management elements of Horizon View, such as the connection servers and security servers. These will also be virtual machines hosted on the vSphere platform, but using separate ESXi hosts and vCenter Servers from those being used to host the desktops. The following diagram shows a typical View management block: The management block contains the key Horizon View components to support the maximum configuration of 10,000 users, or a View pod. In terms of connection servers, the management block consists of a maximum of seven connection servers. This is often written as 5 + 2, which can be misleading, but what it means is that you can have five connection servers and two that serve as backups to replace a failed server. Each connection server supports one of the five blocks, with the two spares in reserve in the event of a failure. As we discussed previously, each View Security Server is paired with one of the connection servers in order to provide external access to the users. In our example diagram, we have drawn three security servers, meaning that the three connection servers they pair with are configured for external access, while the others serve the internal users only. In this scenario, the View Connection Servers and View Security Servers are deployed as virtual machines, and are therefore controlled and managed by vCenter. The vCenter Server can run on a virtual machine, or you can use the vCenter Virtual Appliance. It can also run on a physical Windows Server, as it's just a Windows application. The entire infrastructure is hosted on a vSphere cluster that's separate from the one being used to host the virtual desktop machines. There are a couple of other components that are not shown in the diagram, and those are the databases required by View, such as the events database and the View Composer database. If we now look at the entire Horizon View pod and block architecture for up to 10,000 users, the architecture design would look something like the following diagram: One thing to note is that although a pod is limited to 10,000 users, you can deploy more than one pod should you need an environment that exceeds 10,000 users. Bear in mind, though, that the pods do not communicate with each other and will effectively be completely separate deployments. This is potentially a limitation to scalability, but more so for disaster recovery purposes, where you need to have two pods across two sites. For this reason, Horizon View 6.0 includes a feature that allows you to deploy pods across sites. This is called the Cloud Pod Architecture (CPA), and we will cover this in the next section. The Cloud Pod Architecture The Cloud Pod Architecture, also referred to as linked-mode View (LMV) or multidatacenter View (MDCV), allows you to link up to four View pods together across two sites, with a maximum of 20,000 supported users.
There are four key features available by deploying Horizon View using this architecture: Scalability: This hosts more than 10,000 users on a single site Multidatacenter support: This supports View across more than one data center Geo roaming: This supports roaming desktops for users moving across sites DR: This delivers resilience in the event of a data center failure Let's take a look at the Cloud Pod Architecture in the following diagram to explain the features and how it builds on the pod and block architecture we discussed previously: With the Cloud Pod Architecture, user information is replicated globally, and the pods are linked using the View interpod API (VIPA); the setup for this is command-line-based. For scalability, with the Cloud Pod Architecture model, you have the ability to entitle users across pools on both different pods and sites. This means that, if you have already scaled beyond a single pod, you can link the pods together to allow you to go beyond the 10,000-user limit and also administer your users from a single location. The pods can, apart from being located on the same site, also be on two different sites to deliver a multidatacenter configuration running as active/active. This also introduces DR capabilities. In the event of one of the data centers failing or losing connectivity, users will still be able to connect to a virtual desktop machine. Users don't need to worry about which View Connection Server they need to use to connect to their virtual desktop machine. The Cloud Pod Architecture supports a single namespace with access via a global URL. As users can now connect from anywhere, there are some configuration options that you need to consider as to how they access their virtual desktop machine and from where it gets delivered. There are three options that form part of the global user entitlement feature, illustrated in the sketch at the end of this section: Any: This is delivered from any pod as part of the global entitlement Site: This is delivered from any pod on the same site the user is connecting from Local: This is delivered only from the local pod that the user is connected to It's not just the users that get the global experience; the administrators can also be segregated in this way so that you can deliver delegated management. Administration of pods can be delegated to local IT teams on a per-region basis, with some operations, such as provisioning and patching, performed on the local pods, or so that local language support can be delivered. Only global policy is managed globally, typically from an organization's global HQ.
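As noted above, the Cloud Pod Architecture setup is command-line-based, using the lmvutil tool on a View Connection Server. The following is a rough sketch of the workflow; treat it as illustrative only, as the domain, entitlement, and pool names are hypothetical, and you should verify the exact flags against lmvutil --help for your Horizon View 6.0 build:

# Run on a View Connection Server; names and credentials below are hypothetical
# Initialize the Cloud Pod Architecture feature on the first pod ("*" prompts for the password)
lmvutil --authAs admin --authDomain CORP --authPassword "*" --initialize

# Create a global entitlement; --scope takes ANY, SITE, or LOCAL
lmvutil --authAs admin --authDomain CORP --authPassword "*" --createGlobalEntitlement --entitlementName "Sales-Desktops" --scope ANY --isFloating

# Associate a local desktop pool with the global entitlement
lmvutil --authAs admin --authDomain CORP --authPassword "*" --addPoolAssociation --entitlementName "Sales-Desktops" --poolId "Sales-Pool-01"

The --scope flag maps directly onto the Any, Site, and Local delivery options described in the list above.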
Now that we have covered some of the high-level architecture options, you should be able to start looking at your overall design, factoring in locations and the number of users. In the next section, we will start to look at how to size some of these components. Sizing the infrastructure In this section, we are going to discuss the sizing of the components previously described in the architecture section. We will start by looking at the management blocks containing the connection servers and security servers, then the servers that host the desktops, before finishing off with the desktops themselves. The management block and the block hosting the virtual desktop machines should be run on separate infrastructure (ESXi hosts and vCenter Servers), the reason being the different workload patterns between servers and desktops and the need to avoid performance issues. It's also easier to manage, as you can clearly distinguish the desktop workloads from the server workloads, but more importantly, it's also the way in which the products are licensed. The vSphere for Desktop license that comes with Horizon View only entitles you to run workloads that host and manage the virtual desktop infrastructure. Summary In this article, you learned how to design a Horizon 6.0 architecture. Resources for Article: Further resources on this subject: Backups in the VMware View Infrastructure [Article] Setting up of Software Infrastructure on the Cloud [Article] Introduction to Veeam® Backup & Replication for VMware [Article]

Integration with System Center Operations Manager 2012 SP1

Packt
17 May 2013
9 min read
(For more resources related to this topic, see here.) This article provides tips and techniques to allow administrators to integrate Operations Manager 2012 with Virtual Machine Manager 2012 to monitor the health and performance of virtual machine hosts and their virtual machines, as well as to use the Operations Manager reporting functionality. In a hybrid hypervisor environment (for example, Hyper-V and VMware), using Operations Manager management packs (MPs) (for example, the Veeam MP), you can monitor the Hyper-V hosts and the VMware hosts, which allows you to use only the System Center Console to manage and monitor the hybrid hypervisor environment. You can also monitor the health and availability of the VMM infrastructure, management, database, and library servers. The following screenshot shows the diagram views of the virtualized environment through Operations Manager: Installing System Center Operations Manager 2012 SP1 This recipe will guide you through the process of installing System Center Operations Manager for integration with VMM. Operations Manager has integrated product and company knowledge for proactive tuning. It also monitors the OS, applications, and services, provides out-of-the-box network monitoring and reporting, and offers many more features and extensibility through management packs, thus providing cross-platform visibility. The deployment used in this recipe assumes a small environment with all components being installed on the same server. For datacenters and enterprise deployments, it is recommended to distribute the features and services across multiple servers to allow for scalability. For a complete design reference and complex implementation of SCOM 2012, follow the Microsoft Operations Manager deployment guide available at http://go.microsoft.com/fwlink/?LinkId=246682. When planning, use Operations Guide for System Center 2012 – Operations Manager (http://go.microsoft.com/fwlink/p/?LinkID=207751) to determine the hardware requirements. Getting ready Before starting, check out the system requirements and design planning for System Center Operations Manager 2012 SP1 at http://technet.microsoft.com/en-us/library/jj656654.aspx. My recommendation is to deploy on Windows Server 2012 with SQL Server 2012 SP1. How to do it... Carry out the following steps to install Operations Manager 2012 SP1: Browse to the SCOM installation folder and click on Setup. Click on Install. On the Select the features to install page, select the components that apply to your environment, and then click on Next as shown in the following screenshot: The recommendation is to have a dedicated server, but it all depends on the size of the deployment. You can select all of the components to be installed on the same server for a small deployment. Type in the location where you'd like to install Operations Manager 2012 SP1, or accept the default location, and click on Next. The installation will check whether your system has passed all of the requirements. A screen showing the issues will be displayed if any of the requirements are not met, and you will be asked to fix and verify them again before continuing with the installation, as shown in the following screenshot: If all of the prerequisites are met, click on Next to proceed with the setup. On the Specify an installation option page, if this is the first Operations Manager, select the Create the first Management Server in a new management group option and provide a value in the Management group name field.
Otherwise, select the Add a management server to an existing management group option, as shown in the following screenshot: Click on Next to continue, accept the EULA, and click on Next. On the Configure the operational database page, type in the server name, the instance name, and the SQL Server port number. It is recommended to keep the default values in the Database name, Database size (MB), Data file folder, and Log file folder boxes. Click on Next. The installation account needs DB owner rights on the database. On the SQL Server instance for Reporting Services page, select the instance where you want to host the Reporting Services (SSRS). Make sure the SQL Server has the SQL Server Full-Text Search and Analysis server component installed. On the Configure Operations Manager accounts page, provide the domain account credentials (for example, lab\svc-scom) for the Operations Manager services. You can use a single domain account. For account requirements, see the Microsoft Operations Manager deployment guide at http://go.microsoft.com/fwlink/?LinkId=246682. On the Help improve System Center 2012 – Operations Manager page, select the desired options and click on Next. On the Installation Summary page, review the options, click on Install, and then click on Close. The Operations Manager console will open. How it works... When deploying SCOM 2012, it is important to consider the placement of the components. Work on the SCOM design before implementing it. See the OpsMgr 2012 Design Guide available at http://blogs.technet.com/b/momteam/archive/2012/04/13/opsmgr-2012-design-guide.aspx. On the Configure Operational Database page, if you are installing the first management server, a new operational database will be created. If you are installing additional management servers, an existing database will be used. On the SQL Server instance for Reporting Services page, make sure you have previously configured the Reporting Services at SQL setup using the Reporting Services Configuration Manager tool, and that the SQL Server Agent is running. During the OpsMgr setup, you will be required to provide the Management Server Action Account credentials, as well as the System Center Configuration service and System Center Data Access service account credentials. The recommendation is to use a domain account so that you can use the same account for both services. The setup will automatically assign the local computer Administrators group to the Operations Manager administrator's role. The single-server scenario combines all roles onto a single instance and supports the following services: monitoring and alerting, reporting, audit collection, agentless exception management, and data. If you are planning to monitor the network, it is recommended to move the SQL Server tempdb database to a separate disk that has multiple spindles. There's more... To confirm the health of the management server, carry out the following steps: In the OpsMgr console, click on the Administration workspace. In Device Management, select Management Servers to confirm that the installed server has a green check mark in the Health State column. See also The Deploying System Center 2012 – Operations Manager article available at http://technet.microsoft.com/en-us/library/hh278852.aspx Installing management packs After installing Operations Manager, you need to install some management packs and agents on the Hyper-V servers and on the VMM server.
This recipe will guide you through the installation, but first make sure you have installed the Operations Manager Operations console on the VMM management server. You need to import the following management packs for the VMM 2012 SP1 integration: Windows Server operating system Windows Server 2008 operating system (Discovery) Internet Information Services 2003 Internet Information Services 7 Internet Information Services library SQL Server Core Library Getting ready Before you begin, make sure the correct version of PowerShell is installed, that is, PowerShell v2 for SC 2012 and PowerShell v3 for SC 2012 SP1. How to do it... Carry out the following steps to install the required MPs in order to integrate with VMM 2012 SP1: In the OpsMgr console, click on the Administration workspace on the bottom-left pane. On the left pane, right-click on Management Packs and click on Import Management Packs. In the Import Management Packs wizard, click on Add, and then click on Add from catalog. In the Select Management Packs from Catalog dialog box, for each of the following management packs, repeat steps 5 to 7: Windows Server operating system Windows Server 2008 operating system (Discovery) Internet Information Services 2003 Internet Information Services 7 Internet Information Services library SQL Server Core Library There are numerous management packs for Operations Manager. You can use this recipe to install other OpsMgr MPs from the catalog web service. You can also download the MPs from the Microsoft System Center Marketplace, which contains the MPs and documentation from Microsoft and some non-Microsoft companies. Save them to a shared folder and then import them. See http://systemcenter.pinpoint.microsoft.com/en-US/home. In the Find field, type in the name of the management pack to search for in the online catalog and click on Search. The Management packs in the catalog list will show all of the packs that match the search criterion. To import, select the management pack, click on Select, and then click on Add, as shown in the following screenshot: In the View section, you can refine the search by selecting, for example, to show only those management packs released within the last three months. The default view lists all of the management packs in the catalog. Click on OK after adding the required management packs. On the Select Management Packs page, the MPs will be listed with either a green icon, a yellow icon, or a red icon. The green icon indicates that the MP can be imported. The yellow information icon means that it is dependent on other MPs that are available in the catalog, and you can fix the dependency by clicking on Resolve. The red error icon indicates that it is dependent on other MPs, but the dependent MPs are not available in the catalog. Click on Import if all management packs have their icon statuses as green. On the Import Management Packs page, the progress for each management pack will be displayed. Click on Close when the process is finished. How it works...
You can import the management packs available for Operations Manager using the following: The OpsMgr console: You can perform the following actions in the Management Packs menu of the Administration workspace: Import directly from Microsoft's online catalog Import from disk/share Download the management pack from the online catalog to import at a later time The Internet browser: You can download the management pack from the online catalog to import at a later time, or to install on an OpsMgr server that is not connected to the Internet While using the OpsMgr console, verify that all management packs show a green status. Any MP displaying the yellow information icon or the red error icon in the import list will not be imported. If there is no Internet connection on the OpsMgr server, use an Internet browser to locate and download the management pack to a folder/share. Then copy the management pack to the OpsMgr server and use the option to import from disk/share. See also The Installing System Center Operations Manager 2012 SP1 recipe Visit the Microsoft System Center Marketplace available at http://go.microsoft.com/fwlink/?LinkId=82105
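As a closing note on this recipe, management packs that have already been downloaded to disk can also be imported with the OperationsManager PowerShell module instead of the console wizard. The following is a minimal sketch, assuming the MP files sit in a hypothetical C:\MPs folder and the management server name is a placeholder:

# Load the OpsMgr 2012 module and connect to the management group
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "scom01.lab.local"   # hypothetical server

# Import every management pack file found in the download folder
Get-ChildItem -Path "C:\MPs" -Include *.mp, *.xml -Recurse |
    ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }

# Verify the most recently imported packs
Get-SCOMManagementPack | Sort-Object TimeCreated -Descending | Select-Object -First 10 -Property Name, Version

As with the console import, any pack with unresolved dependencies will fail to import, so bring its dependent MPs along in the same folder.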

Creating an Image Profile by cloning an existing one

Packt
19 Jul 2013
5 min read
(For more resources related to this topic, see here.) How to do it The following procedure will guide you through the steps required to clone a predefined ESXi Image Profile available from an ESXi Offline Bundle. It is a four-step process: Verifying the existence of a Software Depot in the current session. Adding a Software Depot. Listing available Image Profiles. Cloning an Image Profile to form a new one. Verifying the existence of a Software Depot in the current session To verify whether there are any existing Software Depots defined in the current PowerCLI session, issue the following command: $DefaultSoftwareDepots Note that the command has not returned any values, meaning there are no Software Depots defined in the current session. If the needed Software Depot was already added, then the command output will list the depot. In that case, you can skip step 2, Adding a Software Depot, and start with step 3, Listing available Image Profiles. Adding a Software Depot Before you add a Software Depot, make sure that you have the Offline Bundle saved on to your local disk. The Offline Bundle can be downloaded from VMware's website or from the OEM's website. The bundle can either be an ESXi Image or a device driver bundle. We already have the Offline Bundle downloaded to the C:\AutoDeploy-VIBS directory. Now, let's add this to the current PowerCLI session. To add the downloaded Software Depot, issue the following command: Add-EsxSoftwareDepot -DepotUrl C:\AutoDeploy-VIBS\ESXi500-201111001.zip Once the Software Depot has been successfully added to the PowerCLI session, the command $DefaultSoftwareDepots should list the newly added Software Depot. You could also just issue the command Get-EsxSoftwareDepot to list all the added depots (Offline Bundles). Listing available Image Profiles Once the Software Depot has been added, the next step will be to list all the currently available Image Profiles from the depot by issuing the following command: Get-EsxImageProfile We see that there are two Image Profiles that the ESXi Offline Bundle offers. One is an ESXi image with no VMware Tools ISOs bundled with it, and the other is the standard image with the VMware Tools ISOs bundled with it. Cloning an Image Profile to form a new one Now that we know there are two Image Profiles available, the next step will be to clone the needed Image Profile to form a new one. This is done by using the New-EsxImageProfile cmdlet. The cmdlet can be supplied with the name of the Image Profile as an argument. However, in most cases, remembering the names of the available Image Profiles would be difficult. So the best way to work around this difficulty is to define an array variable to hold the names of the Image Profiles; the array elements (Image Profile names) can then be easily and individually addressed in the command. In this example, we will be using a user-defined array variable $profiles to hold the output of the command Get-EsxImageProfile. The following expression will save the output of the Get-EsxImageProfile command to a variable $profiles: $profiles = Get-EsxImageProfile The $profiles variable now holds the two Image Profile names as array elements [0] and [1] sequentially. The following command can be issued to clone the array element [1], ESXi-5.1.0-799733-standard, to form a new Image Profile with a user-defined name, Profile001.
New-EsxImageProfile -CloneProfile $profiles[1] -Name "Profile001" -Vendor VMware Once the command has been successfully executed, you can issue the Get-EsxImageProfile command to list the newly created Image Profile. How it works The PowerCLI session will have a list of Image Profiles available from the added Offline Bundle. During the process of creating a new Image Profile, you verify whether a Software Depot is already added to the PowerCLI session by inspecting the $DefaultSoftwareDepots variable. If there are no Software Depots added, then nothing is returned and you are silently dropped back to the PowerCLI prompt. If there are Software Depots added, then the added depots will be listed, showing the path to each depot's XML file. This is referred to as a depot URL. The process of adding the Software Depot is pretty straightforward. First you need to make sure that you have downloaded the needed Offline Bundles to the server where you have PowerCLI installed. In this case, it was downloaded and saved to the C:\AutoDeploy-VIBS folder. Once the Offline Bundle is downloaded and saved to an accessible location, you can then issue the command Add-EsxSoftwareDepot to add the Offline Bundle as a depot to the PowerCLI session. Once the depot has been added, you can then list all the Image Profiles available from the Offline Bundle. The chosen Image Profile is then cloned to form a new Image Profile, which can be customized by adding/removing VIBs. It can then be published as an Offline Bundle or an ISO, as sketched at the end of this article. Summary We saw that all the predefined Image Profiles available from an Offline Bundle are read-only. To customize such Image Profiles, you need to clone them to form new Image Profiles. We learned how to create a new Image Profile by cloning an existing one. Resources for Article: Further resources on this subject: Supporting hypervisors by OpenNebula [Article] Integration with System Center Operations Manager 2012 SP1 [Article] VMware View 5 Desktop Virtualization [Article]
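To illustrate the customize-and-publish step mentioned in the How it works section, the following is a minimal PowerCLI sketch. It assumes the Profile001 clone from this recipe exists in the current session; the package name net-driver-example and the output paths are hypothetical placeholders (list the real package names with Get-EsxSoftwarePackage):

# Add a VIB (software package) from an added depot to the cloned profile
# 'net-driver-example' is a hypothetical package name
Add-EsxSoftwarePackage -ImageProfile "Profile001" -SoftwarePackage "net-driver-example"

# Publish the customized profile as a bootable ISO
Export-EsxImageProfile -ImageProfile "Profile001" -ExportToIso -FilePath "C:\AutoDeploy-VIBS\Profile001.iso"

# Or publish it as an Offline Bundle (ZIP) for reuse as a Software Depot
Export-EsxImageProfile -ImageProfile "Profile001" -ExportToBundle -FilePath "C:\AutoDeploy-VIBS\Profile001.zip"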

VMware vRealize Operations Performance and Capacity Management

Packt
08 May 2015
4 min read
Virtualization is what allows companies like Dropbox and Spotify to operate internationally with ever-growing customer bases. From virtualizing desktops, applications, and operating systems to creating highly available platforms that enable developers to quickly host operating systems and entire content delivery networks, this book centers on the tools, techniques, and platforms that administrators and developers use to decouple and utilize hardware and infrastructure resources to power applications and web services. Key pointers vCenter, vSphere, VMware, VM, Virtualization, SDDC Counters, key counters, metric groups, vRealize, ESXi Cluster, Datastore, Datastore Cluster, Datacenter CPU, Network, Disk, Storage, Contention, Utilization, Memory vSwitch, vMotion, Capacity Management, Performance Management, Dashboards, vC Ops What the book covers Content-wise, the book is split into two main parts. The first part provides the foundation and theory. The second part provides the solutions and sample use cases. The book aims to clear up the misunderstandings that customers have about SDDC. It explains why a VM is radically different from a physical server, and hence why a virtual data center is fundamentally different from a physical data center. It then covers the aspects of management that are affected. The second part covers the practical aspects, showing how sample solutions are implemented. The chapters in the book cover both performance management and capacity management. How the book differs Virtualization is one of the biggest shifts in IT history. Almost all large enterprises are embarking on a journey to transform the IT department into a service provider. VMware vRealize Operations Management is a suite of products that automates operations management using patented analytics and an integrated approach to performance, capacity, and configuration management. vCenter Operations Manager is the most important component of this suite; it helps administrators maintain and troubleshoot both their VMware environment and their physical environment. Written in a light and easy-to-follow style, the book stands out by covering the complex topic of managing performance and capacity when the data center is software-defined. It sets the foundation by demystifying deep-rooted misunderstandings about virtualization and virtual machines. How will the book help you Master the not-so-obvious differences between a physical server and a virtual machine that customers struggle with during management of a virtual datacenter Educate and convince your peers on why and how performance and capacity management change in a virtual datacenter Correct many misperceptions about virtualization Know how your peers operationalize their vRealize Operations Master all the key metrics in vSphere and vRealize Operations Be confident in performance troubleshooting with vSphere and vRealize Operations See real-life examples of how super metrics and advanced dashboards make management easier Develop rich, custom dashboards with interaction and super metrics Unlearn the knowledge that makes performance and capacity management difficult in SDDC Master the counters in vCenter and vRealize Operations by knowing what they mean and their interdependencies Build rich dashboards using a practical and easy-to-follow approach supported with real-life examples Summary This book teaches you how to get the best out of vCenter Operations in managing performance and capacity in a Software-Defined datacenter.
The book starts by explaining the difference between a Software-Defined datacenter and a classic physical datacenter, and how it impacts both architecture and operations. From this strategic view, the book then zooms into the most common challenge, which is performance management. The book then covers all the key counters in both vSphere and vRealize Operations, explains their dependencies, and provides practical guidance on the values you should expect in a healthy environment. At the end, the book puts the theory together and provides real-life examples created together with customers. This book is an invaluable resource for those embarking on a journey to master virtualization. Resources for Article: Further resources on this subject: Backups in the VMware View Infrastructure [Article] VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article] An Introduction to VMware Horizon Mirage [Article]