
How-To Tutorials - Virtualization

115 Articles

Windows Azure Mobile Services - implementing push notifications

Packt
13 Jan 2014
8 min read
Understanding Push Notification Service flow

The following procedure illustrates the Push Notification Service (PNS) flow, from establishing a channel to receiving a notification:

1. The mobile device establishes a channel with the PNS and retrieves its handle (URI).
2. The device registers its handle with a backend service (in our case, a table in our Mobile Service).
3. A notification request can be made by another service, an admin system, and so on, which calls the backend service (in our case, an API).
4. The service makes a request to the correct PNS for every device handle.
5. The PNS notifies the device.

Setting up Windows Store apps

Visual Studio 2013 has a new wizard that associates the app with the store in order to obtain a push notifications URI. Code is added to the app to interact with the service, which is updated to have a Channels table. This table has an Insert script that inserts the channel and pings back a toast notification upon insert. The following procedure takes us through using the wizard to add a push channel to our app:

1. Right-click on the project, and then navigate to Add | Push Notification.
2. Follow the wizard and sign in to your store account (if you haven't got one, you will need to create one).
3. Reserve an app name and select it. Then, continue by clicking on Next.
4. Click on Import Subscriptions..., and the Import Windows Azure Subscriptions dialog box will appear.
5. Click on Download subscription file. Your default browser will be launched and the subscription file will be downloaded automatically. If you are logged into the portal, this happens automatically; otherwise, you'll be prompted to log in.
6. Once the subscription file is downloaded, browse to the downloaded file in the Import Windows Azure Subscriptions dialog box and click on Import.
7. Select the subscription you wish to use, click on Next, and then click on Finish in the final dialog box.

In the Output window in Visual Studio, you should see something like the following:

    Attempting to install 'WindowsAzure.MobileServices'
    Successfully installed NuGet Package 'WindowsAzure.MobileServices'
    Successfully added 'push.register.cs' to the project
    Added field to the App class successfully
    Initialization code was added successfully
    Updated ToastCapable in the app manifest
    Client Secret and Package SID were updated successfully on the Windows Azure Mobile Services portal
    The 'channels' table and 'insert.js' script file were created successfully
    Successfully updated application redirect domain
    Done

We will now see that a few things have been done to our project and service:

- The Package.StoreAssociation.xml file is added to link the project with the app on the store.
- Package.appxmanifest is updated with the store application identity.
- A push.register.cs class is added under services\mobile services\[Your Service Name], which creates a push notifications channel and sends the details to our service.
- Server Explorer launches and shows us our service with a newly created table named channels, with an Insert method that inserts or updates (if changed) our channel URI. It then sends us a toast notification to test that everything is working.

Run the app and check that the URI is inserted into the table. You will get a toast notification. Once you've done this, remove the sendNotifications(item.channelUri); call and function from the Insert method. You can do this in Visual Studio via the Server Explorer console.
I've modified the script further to make sure the item is always updated, so when we send push notifications, we can send them to URIs that have been updated recently, targeting users who are actually using the application (channels expire after 30 days anyway, so pushing to stale ones would be a waste of time). The following code details these modifications:

    function insert(item, user, request) {
        var ct = tables.getTable("channels");
        ct.where({ installationId: item.installationId }).read({
            success: function (results) {
                if (results.length > 0) {
                    // always update so we get the updated date
                    var existingItem = results[0];
                    existingItem.channelUri = item.channelUri;
                    ct.update(existingItem, {
                        success: function () {
                            request.respond(200, existingItem);
                        }
                    });
                } else {
                    // no matching installation, insert the record
                    request.execute();
                }
            }
        });
    }

I've also modified the UploadChannel method in the app so that it uses a Channel model with a Platform property; we can then work out which PNS provider to use when multiple platforms use the service. The UploadChannel method also uses a new InsertChannel method in our DataService class (you can see the full code in the sample app). The following code details these modifications:

    public async static void UploadChannel()
    {
        var channel = await Windows.Networking.PushNotifications
            .PushNotificationChannelManager
            .CreatePushNotificationChannelForApplicationAsync();

        var token = Windows.System.Profile.HardwareIdentification
            .GetPackageSpecificToken(null);
        string installationId = Windows.Security.Cryptography
            .CryptographicBuffer.EncodeToBase64String(token.Id);

        try
        {
            var service = new DataService();
            await service.InsertChannel(new Channel()
            {
                ChannelUri = channel.Uri,
                InstallationId = installationId,
                Platform = "win8"
            });
        }
        catch (Exception ex)
        {
            System.Diagnostics.Debug.WriteLine(ex.ToString());
        }
    }

Setting up tiles

To implement wide or large square tiles, we need to create the necessary assets and define them in the Visual Assets tab of the Package.appxmanifest editor.

Setting up badges

Windows Store apps support badge notifications as well as tiles and toasts; however, badges require a slightly different configuration. To implement badge notifications, we perform the following steps:

1. Create a 24 x 24 pixel PNG badge that can have opacity, but must use only the color white.
2. Define the badge in the Badge Logo section of the Visual Assets tab of the Package.appxmanifest editor.
3. Add a Background Tasks declaration in the Declarations tab of the Package.appxmanifest editor, select Push notification, and enter a Start page.
4. Finally, in the Notifications tab of the Package.appxmanifest editor, set Lock screen notifications to Badge.

To see the badge notification working, you also need to add the app to one of the lock screen badge slots under Change PC Settings | Lock Screen | Lock Screen Applications.

Setting up Windows Phone 8 apps

Visual Studio 2012 Express for Windows Phone doesn't have a fancy wizard like Visual Studio 2013 Express for Windows Store, so we need to configure the channel and register it with the service manually.
The following procedure sets up the notifications in the app by using the table that we created in the preceding Setting up Windows Store apps section:

1. Edit the WMAppManifest.xml file to enable ID_CAP_IDENTITY_DEVICE, which allows us to get a unique device ID for registering in the Channels table, and ID_CAP_PUSH_NOTIFICATION, which allows push notifications in the app. These options are available in the Capabilities tab.
2. To enable wide tiles, we need to check Support for large Tiles (you can't see the tick unless you hover over it, as there is apparently a theming issue in Visual Studio!) and pick the path of the wide tile we want to use (by default, there is one named FlipCycleTileLarge.png under Tiles in the Assets folder).
3. Next, we need to add some code to get the push channel URI and send it to the service:

    using Microsoft.Phone.Info;
    using Microsoft.Phone.Notification;
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Text;
    using System.Threading.Tasks;
    using TileTapper.DataServices;
    using TileTapper.Models;

    namespace TileTapper.Helpers
    {
        public class ChannelHelper
        {
            // Singleton instance
            public static readonly ChannelHelper Default = new ChannelHelper();

            // Holds the push channel that is created or found
            private HttpNotificationChannel _pushChannel;

            // The name of our push channel
            private readonly string CHANNEL_NAME = "TileTapperPushChannel";

            private ChannelHelper() { }

            public void SetupChannel()
            {
                try
                {
                    // Try to find the push channel
                    this._pushChannel = HttpNotificationChannel.Find(CHANNEL_NAME);

                    // If the channel was not found, then create a new
                    // connection to the push service
                    if (this._pushChannel == null)
                    {
                        this._pushChannel = new HttpNotificationChannel(CHANNEL_NAME);
                        this.AttachEvents();
                        this._pushChannel.Open();

                        // Bind channel for Tile events
                        this._pushChannel.BindToShellTile();

                        // Bind channel for Toast events
                        this._pushChannel.BindToShellToast();
                    }
                    else
                        this.AttachEvents();
                }
                catch (Exception ex)
                {
                    System.Diagnostics.Debug.WriteLine(ex.ToString());
                }
            }

            private void AttachEvents()
            {
                // Register for all the events before attempting to
                // open the channel
                this._pushChannel.ChannelUriUpdated += async (s, e) =>
                {
                    // Register URI with service
                    await this.Register();
                };

                this._pushChannel.ErrorOccurred += (s, e) =>
                {
                    System.Diagnostics.Debug.WriteLine(e.ToString());
                };
            }

            private async Task Register()
            {
                try
                {
                    var service = new DataService();
                    await service.InsertChannel(new Channel()
                    {
                        ChannelUri = this._pushChannel.ChannelUri.AbsoluteUri,
                        InstallationId = this.GetDeviceUniqueName(),
                        Platform = "wp8"
                    });
                }
                catch (Exception ex)
                {
                    System.Diagnostics.Debug.WriteLine(ex.ToString());
                }
            }

            // Note: getting a result requires ID_CAP_IDENTITY_DEVICE
            // to be added to the capabilities in WMAppManifest;
            // this will then warn users in the marketplace
            private byte[] GetDeviceUniqueID()
            {
                byte[] result = null;
                object uniqueId;
                if (DeviceExtendedProperties.TryGetValue("DeviceUniqueId", out uniqueId))
                    result = (byte[])uniqueId;
                return result;
            }

            private string GetDeviceUniqueName()
            {
                byte[] id = this.GetDeviceUniqueID();
                string idEnc = Encoding.Unicode.GetString(id, 0, id.Length);
                string deviceID = HttpUtility.UrlEncode(idEnc);
                return deviceID;
            }
        }
    }

This is a singleton class that holds an instance of the HttpNotificationChannel object, so that channel URI changes can be captured and sent up to our service.
The two methods at the end of the code snippet, GetDeviceUniqueID and GetDeviceUniqueName, give us a unique device identifier for the channels table. Now that we have the code to manage the channel, we need to call the SetupChannel method in the App.xaml.cs launching method, as shown in the following code snippet:

    private void Application_Launching(object sender, LaunchingEventArgs e)
    {
        TileTapper.Helpers.ChannelHelper.Default.SetupChannel();
    }
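To make step 4 of the PNS flow concrete for the Windows Phone case, here is a minimal sketch of the raw HTTP request our service ends up making against MPNS, expressed with curl from a shell. The channel URI is a hypothetical placeholder for a value read from our channels table, and this assumes the unauthenticated (non-certificate) push mode:

    # Hypothetical channel URI, as stored in the channels table for a wp8 device
    CHANNEL_URI="http://db3.notify.live.net/throttledthirdparty/01.00/EXAMPLE"

    # A toast is a plain XML POST; X-NotificationClass: 2 asks for immediate delivery
    curl -X POST "$CHANNEL_URI" \
      -H "Content-Type: text/xml" \
      -H "X-WindowsPhone-Target: toast" \
      -H "X-NotificationClass: 2" \
      --data '<?xml version="1.0" encoding="utf-8"?>
    <wp:Notification xmlns:wp="WPNotification">
      <wp:Toast>
        <wp:Text1>TileTapper</wp:Text1>
        <wp:Text2>Hello from our service!</wp:Text2>
      </wp:Toast>
    </wp:Notification>'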


Creating Your First Virtual Machine: Ubuntu Linux (Part 2)

Packt
15 Apr 2010
6 min read
Running your Ubuntu Linux VM

This is going to be the most entertaining section of the article: you'll get to play with your brand-new Ubuntu Linux virtual machine! If you haven't used Linux before, I'd definitely recommend that you browse through the Ubuntu documentation at https://help.ubuntu.com/9.10/index.html.

Time for action – running Ubuntu Linux

The best way to test your new virtual machine is experimenting, so let's get on with it!

1. Open VirtualBox (in case you closed it after the last section's exercise), select your UbuntuVB virtual machine, and click on Start to turn it on. Ubuntu will start to boot in your virtual machine. Eventually, the Ubuntu logo will show up along with the progress bar and, after a few seconds (or minutes, depending on your hardware), the Ubuntu login screen will appear.
2. Click inside the virtual machine screen to capture the mouse and keyboard, type the username you assigned in the installation process, and hit Enter to continue. Now type the password for your username, and hit Enter again. Ubuntu will start to load. When finished, you'll see the Ubuntu GNOME desktop.
3. One of the first things you'll notice is the Update Manager dialog, which shows up when your Ubuntu system needs software updates. Click on Install Updates to start the updating process.
4. Normally, the Update Manager will ask for your administrator password. Type it, press Enter or click on OK, and then wait for the Update Manager to finish its job so you can work with a fully updated Ubuntu system.
5. If the Update Manager asks you to restart your Ubuntu system after updating, click on the Restart Now button, and wait for your Ubuntu virtual machine to reboot.

What just happened?

Isn't it cool to have a little Ubuntu system running inside your real PC? Just like a pregnant mother feeling her baby's first movements! Well, not as touching, but you get the point, right? Ubuntu is one of the friendliest Linux distributions available, which is why I decided to use it for this article's exercises. Now let's go and test the Internet connection on your new Ubuntu virtual machine!

Web browsing with Mozilla Firefox

One of the best things about the Ubuntu Desktop edition is that you can use Mozilla Firefox out of the box. And the Ubuntu Update Manager keeps it updated automatically for you!

Time for action – web browsing in your Ubuntu VM

You have your virtual machine installed. What's next? Let's surf the web! After all, what could be more important than that?

1. Open the Applications menu on your Ubuntu virtual machine, and select Internet | Firefox Web Browser from the menu. The Mozilla Firefox window will show the Ubuntu Start Page.
2. Type virtualbox.org in the address bar and press Enter. The VirtualBox homepage should appear as an indication that you have Internet access in your virtual machine.
3. You can close Mozilla Firefox now.

If you cannot connect to the Internet from your virtual machine, check your host's network settings. If you can connect from your host, try using another virtual network adapter type in your virtual machine to see if the problem disappears.

What just happened?

Well, this exercise is not really hard, right? But it is a quick way to test whether your new virtual machine has Internet access enabled by default. Later on, we'll talk about the different settings related to virtual network interfaces and VirtualBox.
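If you'd rather test from a terminal than through Firefox, a quick check like the following also works inside the guest (a minimal sketch; ping ships with Ubuntu by default, and virtualbox.org is simply a convenient well-known host):

    # Open Applications | Accessories | Terminal inside the guest, then:
    ping -c 4 virtualbox.org    # 0% packet loss means DNS and routing both work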
You can also tell whether your virtual machine can connect to the Internet through the Ubuntu Update Manager, because it will issue a warning if it cannot access the Ubuntu software sources. For now, it's good to know we can surf the web! Now let's see how you can do some real work inside your Ubuntu VM...

Using OpenOffice.org in your virtual machine

OK, we have Internet enabled on our Ubuntu virtual machine; what else could we ask for? How about some word processing, a spreadsheet, and some presentations, for starters? I know it's boring, but some of us also use VirtualBox to work!

Time for action – using OpenOffice.org

Ubuntu comes with OpenOffice.org, the open source productivity suite that has proven to be an effective alternative to MS Office for Linux users. Now let's try it out on your new Ubuntu virtual machine...

1. Open the Applications menu on your Ubuntu virtual machine, and select Office | OpenOffice.org Word Processor from the menu. The Untitled 1 – OpenOffice.org Writer window will appear. You can use OpenOffice.org Writer as if you were on a real machine.
2. Now go to the Applications menu again, and this time select the Office | OpenOffice.org Spreadsheet option. The Untitled 2 – OpenOffice.org Calc window will show up, overlapping the Writer window. You can also work with it as on a real PC.
3. And now, go back to the Applications menu, and select the Office | OpenOffice.org Presentation option. The Presentation Wizard screen will show up. Select the Empty Presentation option, click on Next twice, and then click on Create to continue. The Untitled 3 – OpenOffice.org Impress window will show up, overlapping the other two windows.
4. Now you can close all the application windows inside your virtual machine.

What just happened?

How about that? A complete office productivity suite inside your main PC! And Internet access too! So, if you always wanted to learn about Linux or any other operating system but were afraid of messing up your main PC, VirtualBox has come to your rescue! Now let's see how to turn off your virtual machine...

Have a go hero – trying out Ubuntu One: your personal cloud

Now that you have an Ubuntu virtual machine, you would likely benefit from trying out the Ubuntu One service, where you can back up, store, sync, and share your data with other Ubuntu One users. And best of all, it's free! To open an account, select Applications | Internet | Ubuntu One, and follow the instructions on screen.

Have a go hero – sharing information between your VM and your host PC

Use your Ubuntu One account to transfer some files between your virtual machine and your host PC. If you're using Windows, you can work with the Ubuntu One web interface at http://one.ubuntu.com.

Shutting down your virtual machine

I know you're thinking, "Geez, I can't believe this guy! He's actually going to spend an entire subsection of this article just to show us how to shut down a virtual machine! Aw, come on!" Now it's my turn: remember we're talking about a virtual machine here, not a real PC! You need to consider several things before shutting this baby down!
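As a preview of what's involved, the VM's power state can also be driven from the host's command line with VBoxManage, the CLI that ships with VirtualBox. A minimal sketch, using the UbuntuVB name from earlier:

    # Ask the guest OS to shut down cleanly (sends an ACPI power-button event)
    VBoxManage controlvm UbuntuVB acpipowerbutton

    # Or freeze the VM and save its state to disk, like closing a laptop's lid
    VBoxManage controlvm UbuntuVB savestate

    # Power it back on later without touching the GUI
    VBoxManage startvm UbuntuVB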


Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion

Packt
22 Oct 2009
7 min read
Base Appliance Image

We will use an Ubuntu Feisty domain image as the base image for creating these appliances. This image should be made as sparse and small as possible, and free of any cruft. A completely stripped-down version of Linux with only the bare necessities is a great start. In this case, we will not need any graphical desktop environment, so we can completely eliminate software packages such as X11 and any window manager like GNOME or KDE. Once we have a base image, we can back it up and then start using it for creating Xen appliances.

Once this domain image is ready, we are going to update it and clean it up a little so it can be our base (a shell sketch of these steps follows after the list):

1. Edit the sources list for apt and add the other repositories we will need to get software packages when creating these appliances.
2. Update your list of software. This will connect to the apt repositories and get the latest list of packages. We will do the actual update in the next step.
3. Upgrade the distribution to ensure that you have the latest versions of all the packages.
4. Automatically clean the image so all unused packages are removed. This will ensure that the image stays free of cruft.

Now that we have the base appliance image ready, we will use it to create some Xen appliances. You can make a backup of the original base image, and every time you create an appliance, use a copy as the starting point or template. The images are nothing but domU images that are customized for running only specific applications. You start them up and run them like any other Xen guest domains.
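Gathering steps 2 to 4 above, here is a minimal shell sketch, run as root inside the base domU; the exact repository entries you add in step 1 depend on which appliances you plan to build, and command behavior may differ slightly on a release as old as Feisty:

    # Refresh the package lists from the repositories in /etc/apt/sources.list
    apt-get update

    # Upgrade all installed packages to the latest available versions
    apt-get dist-upgrade

    # Remove automatically installed packages that are no longer needed,
    # then clear out the downloaded package cache
    apt-get autoremove
    apt-get autoclean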
MySQL Database Server

MySQL is one of the most popular open source databases in the world. It is a key component of the LAMP architecture (Linux, Apache, MySQL, and PHP). It is also very easy to get started with MySQL, which is one of the key factors driving its adoption across the enterprise. In this section, we will create a Xen appliance that runs a MySQL database server and also provides the ability to automatically back up the database on a given schedule.

Time for Action – Create our first Xen appliance

We will use our base Ubuntu Feisty domain image and add MySQL and other needed software to it. Please ensure that you have updated your base image to the latest versions of the repositories and software packages before creating this appliance.

1. Install mysql-server using apt. Once it is installed, Ubuntu will automatically start the database server, so before we make our other changes, stop MySQL.
2. Edit /etc/mysql/my.cnf and comment out the line for the bind-address parameter. This ensures that MySQL will accept connections from external machines and not just localhost.
3. Start a mysql console session to test that everything is installed and working correctly.
4. Next, we will install the utility for doing the automated backups. In order to do that, we first need to install the wget utility for transferring files; it is not part of the base Ubuntu Feisty installation.
5. Download the automysqlbackup script from the website. Copy this script to wherever you like, maybe /opt, and create a link to this location so it's easy to do future updates:

    # cp automysqlbackup.sh.2.5 /opt
    # ln -s automysqlbackup.sh.2.5 automysqlbackup.sh

6. Edit the script and modify the parameters at the top of the script to match your environment. Here are the changes to be made in our case:

    # Username to access the MySQL server e.g. dbuser
    USERNAME=pchaganti
    # Password to access the MySQL server e.g. password
    PASSWORD=password
    # Host name (or IP address) of MySQL server e.g localhost
    DBHOST=localhost
    # List of DBNAMES for Daily/Weekly Backup e.g. "DB1 DB2 DB3"
    DBNAMES="all"
    # Backup directory location e.g /backups
    BACKUPDIR="/var/backup/mysql"
    # Mail setup
    MAILCONTENT="quiet"

7. Schedule this backup script to run daily by creating a crontab entry for it, in the following format:

    45 5 * * * root /opt/automysqlbackup.sh >/dev/null 2>&1

Now we have a MySQL database server with automatic daily backups as a nice, reusable Xen appliance.

What just happened?

We created our first Xen appliance! It is running the open source MySQL database server along with an automated backup of the database on the given schedule. This image is essentially a domU image; it can be uploaded along with its configuration file to a repository somewhere and used by anyone in the enterprise or elsewhere with their Xen server. You can either start the domain manually as and when you need it, or set it up to boot automatically when your xend server starts.

Ruby on Rails Appliance

Ruby on Rails is one of the hottest web development frameworks around. It is simple to use, and you can draw on all the expressive power of the Ruby language. It provides a great feature set and has really put the Ruby language on the map. Ruby on Rails is gaining rapid adoption across the IT landscape for a wide variety of web applications. In this section, we are going to create a Rails appliance that contains Ruby, Rails, and a Mongrel cluster for serving the Rails application, with the nginx web server for static content. This appliance gives you a great starting point for your explorations into the world of Ruby on Rails and can be an excellent learning resource.

Time for Action – Rails on Xen

We will use our base Ubuntu Feisty domain image and add Rails and other needed software to it. Please ensure that you have updated your base image to the latest versions of the repositories and software packages before creating this appliance (a consolidated shell sketch of these steps follows at the end of this section):

1. Install the packages required for compiling software on an Ubuntu system. This is required as we will be compiling some native extensions. Once the image is done, you can always remove these packages if you want to save space.
2. Install Ruby and the other packages that are needed for it.
3. Download the RubyGems package from RubyForge. We will use this to install any Ruby libraries or packages that we need, including Rails.
4. Now install Rails. The first time you run this command on a clean Ubuntu Feisty system, you will get an error; ignore it and just run the command once again, and it will work fine. This installs Rails and all of its dependencies.
5. Create a new Rails application. This will create everything needed in a directory named xenbook:

    $ rails xenbook

6. Change into the directory of the application that we created in the previous step and start the server up. This will start Ruby's built-in web server, WEBrick, by default.
7. Launch a web browser and navigate to the web page for our xenbook application.

We now have everything working for a simple Rails install. However, we are using WEBrick, which is a bit slow, so let's install the Mongrel server and use it with Rails. We will actually install mongrel_cluster, which lets us use a cluster of Mongrel processes for serving up our Rails application.
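The following is a minimal shell sketch of the Rails appliance steps above; the package list and the RubyGems tarball version are illustrative Feisty-era names, so substitute whatever the archives and RubyForge currently offer:

    # Steps 1-2: build tools, Ruby, and its development headers
    apt-get install build-essential ruby ruby1.8-dev irb rdoc

    # Step 3: unpack and install RubyGems (tarball downloaded from RubyForge)
    tar xzf rubygems-0.9.4.tgz
    cd rubygems-0.9.4 && ruby setup.rb && cd ..

    # Step 4: install Rails; if the first run errors out, simply run it again
    gem install rails --include-dependencies

    # Steps 5-6: create the application and start the built-in WEBrick server
    rails xenbook
    cd xenbook && ruby script/server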

User and Group Management: Oracle VM Manager 2.1.2

Packt
05 Oct 2009
3 min read
This function is only available to administrators, so use this role prudently. During the installation of Oracle VM Manager, a default admin account is created, and with this admin account we can go about managing users and groups.

Managing Users

Here it is possible to create new users, delete old or unwanted ones, assign different roles to users, reset user passwords, and so on. Let's break it up into a few topics and have a look at each:

- Creating a User
- Viewing or editing details
- Changing a role
- Deleting a User

Creating a User

To create a User, perform the following:

1. On the Administrator's page, click the User tab and then click on the Create button.
2. Enter the necessary information, such as:
   - Username (avoid using user, manager, or administrator as the username)
   - Password
   - Retype Password
   - First Name
   - Last Name
   - Valid Email address
3. Select the account status; it can be either Locked or Unlocked, and the account is only accessible when it's unlocked. We can lock an account for security reasons by using the Locked status.
4. Grant one of the following roles to the newly created user: User, Manager, or Administrator.
5. Then select the Server Pools for this user, and also select the group to which this user should belong.
6. Click on the Confirm button to confirm the information.

As we can see, this is a plain user with no groups or servers assigned to it. However, it is unlocked and has been granted the User role.

Viewing or editing a User

Now let's view the User we just created. Click on the User tab on the Administrator page, then click on the Show link to view the Server Pools that the user is allowed to use. We can now edit account details, for example:

- Changing the User's email address
- Modifying the account status to either Locked or Unlocked
- Changing the role
- Adding the User to a Server Pool
- Removing the User from groups or Server Pools

Changing a User's role

Let's change a regular User's role to Administrator:

1. On the Administrator's page, select the newly created User and click on the Edit button.
2. Select the role and click Apply to effectively assign the role to the User.

Deleting a User

To delete a User, we need to do the following on the Administrator's page: carry out a search and select the User that we want to delete, then click on the Delete button and confirm the deletion.


Availability Management

Packt
28 Oct 2013
29 min read
(For more resources related to this topic, see here.)

Reducing planned and unplanned downtime

Whether we are talking about a highly available, critically productive environment or not, any planned or unexpected downtime means financial losses. Historically, solutions that could provide high availability and redundancy were costly and complex. With the virtualization technologies available today, it becomes easier to provide higher levels of availability for the environments that need them. With VMware products, and vSphere in particular, it's possible to do the following:

- Achieve high availability that is independent of hardware, operating systems, or applications
- Choose a planned downtime for many maintenance tasks, and shorten or eliminate it
- Provide automatic recovery in case of failure

Planned downtime

Planned downtime usually happens during hardware maintenance, firmware or operating system updates, and server migrations. To reduce the impact of planned downtime, IT administrators are forced to schedule small maintenance windows outside working hours. vSphere makes it possible to dramatically reduce planned downtime: IT administrators can perform many maintenance tasks at any point in time, as vSphere eliminates downtime for many common maintenance operations. This is possible mainly because workloads in vSphere can be dynamically moved between different physical servers and storage resources without any service interruption.

The main availability capabilities built into vSphere, which allow the use of HA and redundancy features, are as follows:

- Shared storage: Storage resources such as Fibre Channel, iSCSI, Storage Area Network (SAN), or Network Attached Storage (NAS) help eliminate single points of failure. SAN mirroring and replication features can be used to keep fresh copies of the virtual disk at disaster recovery sites.
- Network interface teaming: This feature provides tolerance of individual network card failures.
- Storage multipathing: This helps to tolerate storage path failures.

The vSphere vMotion and Storage vMotion functionalities allow the migration of VMs between ESXi hosts and their underlying storage without service interruption. In other words, vMotion is the live migration of VMs between ESXi hosts, and Storage vMotion is the live migration of VMs between storage LUNs. In both cases, the VM retains its network and disk connections. With vSphere 5.1 and later versions, it's possible to combine vMotion with Storage vMotion into a single migration, which simplifies administration. The entire process takes less than two seconds on a gigabit network.

vMotion keeps track of ongoing memory transactions while memory and system states are copied to the target host. Once copying is done, vMotion suspends the source VM, copies the transactions that happened during the process to the target host, and resumes the VM on the target host. This way, vMotion ensures transaction integrity.
vSphere requirements for vMotion

The requirements for vMotion are as follows. All the hosts must:

- Be correctly licensed for vMotion
- Have access to the shared storage
- Use a gigabit Ethernet adapter for vMotion, preferably a dedicated one
- Have the VMkernel port group configured for vMotion with the same name (the name is case sensitive)
- Have access to the same subnets
- Be members of all the vSphere distributed switches that VMs use for networking
- Use jumbo frames for best vMotion performance

All the virtual machines that need to be vMotioned must:

- Not use raw disks if migration between storage LUNs is needed
- Not use devices that are unavailable on the destination host (for example, a CD drive or USB devices not enabled for vMotion)
- Be located on a shared storage resource
- Not use devices connected from the client computer

Migration with vMotion

Migration with vMotion happens in three stages:

1. The vCenter server verifies that the existing VM is in a stable state and that the CPU on the target host is compatible with the CPU this VM is currently using.
2. vCenter migrates VM state information, such as memory, registers, and network connections, to the target host.
3. The virtual machine resumes its activities on the new host.

VMs with snapshots can be vMotioned regardless of their power state, as long as their files stay on the same storage. Obviously, this storage has to be accessible to both the source and destination hosts. If migration involves moving configuration files or virtual disks, the following additional requirements apply:

- Both the source and destination hosts must be of ESX or ESXi version 3.5 or later
- All the VM files should be kept in a single directory on a shared storage resource

To vMotion a VM in vCenter, right-click on the VM and choose Migrate.... This opens a migration wizard where you can select whether the migration is between hosts, between datastores, or both. The Change host option is the standard vMotion, and Change datastore is Storage vMotion. The Change both host and datastore option is not available while the VM is running. As mentioned earlier, vSphere 5.1 and later support vMotion and Storage vMotion in one transaction. In the next steps, you are able to choose the destination as well as the priority for this migration. Multiple VMs can be migrated at the same time if you make multiple selections in the Virtual Machines tab for the host or the cluster.

vMotion is widely used to perform host maintenance such as upgrading the ESX operating system, memory, or any other configuration changes. When maintenance is needed on a host, all the VMs can be migrated to other hosts and the host can be switched into maintenance mode. This can be accomplished by right-clicking on the host and selecting Enter Maintenance Mode.

Unplanned downtime

Environments, especially critical ones, need to be protected from unplanned downtime caused by possible hardware or application failures. vSphere has important capabilities that can address this challenge and help to eliminate unplanned downtime. These vSphere capabilities are transparent to the guest operating system and any applications running inside the VMs; they are also a part of the virtual infrastructure. The following features can be configured for VMs in order to reduce the cost and complexity of high availability.
More detail on these features is given in the following sections of this article.

High availability (HA)

vSphere HA is a feature that allows a group of hosts connected together to provide high levels of availability for the VMs running on those hosts. It protects VMs and their applications in the following ways:

- In case of ESX server failure, it restarts VMs on the other hosts that are members of the cluster
- In case of guest OS failure, it resets the VM
- If application failure is detected, it can reset the VM

With vSphere HA, there is no need to install any additional software in a VM. After vSphere HA is configured, all new VMs are protected automatically. The HA option can be combined with vSphere DRS to protect against failures and to provide load balancing across the hosts within a cluster. The advantages of HA over traditional failover solutions are listed in the VMware article at http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.avail.doc%2FGUID-CB46CEC4-87CD-4704-A9AC-058281CFD8F8.html.

Creating a vSphere HA cluster

Before HA can be enabled, a cluster itself needs to be created. To create a new cluster, right-click on the datacenter object in the Hosts and Clusters view and select New Cluster.... The following prerequisites have to be considered before setting up a HA cluster:

- All the hosts must be licensed for vSphere HA.
- ESX/ESXi 3.5 hosts are supported for vSphere HA with the following patches installed; these fix an issue involving file locks:
  - ESX 3.5: patch ESX350-201012401-SG and prerequisites
  - ESXi 3.5: patch ESXe350-201012401-I-BG and prerequisites
- At least two hosts must exist in the cluster.
- All the hosts' IP addresses need to be assigned statically or configured via DHCP with static reservations to ensure address consistency across host reboots.
- At least one network should exist that is shared by all the hosts, that is, a management network. It is best practice to have at least two.
- To ensure VMs can run on any host, all the hosts should also share the same datastores and virtual networks.
- All the VMs must be stored on shared, not local, storage.
- VMware tools must be installed for VM monitoring to work.
- Host certificate checking should be enabled.

Once all of the requirements have been met, vSphere HA can be enabled in vCenter under the cluster settings dialog (PRD-CLUSTER Settings in this example). Once HA is enabled, all the cluster hosts that are running and are not in maintenance mode become a part of HA.

HA settings

The following HA settings can also be changed at the same time:

- Host monitoring status is enabled by default
- Admission control is enabled by default
- Virtual machine options (restart priority is Medium by default, and isolation response is set to Leave powered on by default)
- VM monitoring is disabled by default
- Datastore heartbeating is selected by vCenter by default

More details on each of these settings can be found in the following sections of this article.

Host monitoring status

When a HA cluster is created, an agent is uploaded to all the hosts and configured to communicate with the other agents within the cluster. One of the hosts becomes the master host, and the rest become slave hosts. There is an election process to choose the master host, and the host that mounts more datastores has an advantage in this election. In case of a tie, the host with the lexically highest Managed Object ID (MOID) is chosen.
MOID, also called MoRef ID, is a value generated by vCenter for each object: host, datastore, VM, and so on. It is guaranteed to be unique across the infrastructure managed by that particular vCenter server. Because the comparison in the master election is lexical rather than numeric, a host with ID 99 has higher priority than a host with ID 100.

If a master host fails or becomes unavailable, a new election process is initiated. Slave hosts monitor whether their VMs are running locally and report to the master host. In its turn, the master host communicates with vCenter and monitors the other hosts for failures. Its main responsibilities are as follows:

- Monitoring the state of the slave hosts and, in case of failure, identifying which VMs must be restarted
- Monitoring the state of all the protected VMs and restarting them in case of failure
- Managing the list of hosts and protected VMs
- Communicating with vCenter and reporting the cluster's health state

Host availability monitoring is done through a network heartbeat exchange, which happens every second by default. If network heartbeats from a host are lost, before declaring it failed, the master host checks whether the host is still communicating with any of the existing datastores (using datastore heartbeats) and whether it responds to pings sent to its management IP address.

The master host detects the following types of host failure:

    Type of failure                         Network heartbeats   ICMP ping   Datastore heartbeats
    Lost connectivity to the master host    -                    +           +
    Network isolation                       -                    -           +
    Failure                                 -                    -           -

If host failure is detected, the host's VMs are restarted on other hosts. Host network isolation happens when a host is running but doesn't see any traffic from the vSphere HA agents, which means that it's disconnected from the management network. Isolation is handled as a special case of failure in VMware HA. If a host becomes network isolated, the master host continues to monitor this host and the VMs running on it. Depending on the isolation settings chosen for individual VMs, some of them may be restarted on another host.

The master host has to communicate with vCenter; therefore, it can't be in isolation mode. Once that happens, a new master host is elected. When network isolation happens, certain hosts are not able to communicate with vCenter, which may result in configuration changes not taking effect on certain parts of the infrastructure. If the network infrastructure is configured correctly and has redundant network paths, isolation should happen rarely.

Datastore heartbeating

Datastore heartbeating was introduced in vSphere 5. In previous versions of vSphere, once a host became unreachable through the management network, HA always initiated VM restarts, even if the VMs were still running. This, of course, created unnecessary downtime and additional stress on the hosts. Datastore heartbeating allows HA to distinguish between hosts that are isolated or partitioned and hosts that have failed, which adds more stability to the way HA works.

The vCenter server selects a list of datastores for heartbeat verification so as to maximize the number of hosts that can be verified. It uses a selection algorithm designed to select datastores that are connected to the highest number of hosts. This algorithm attempts to choose datastores hosted on different storage arrays or NFS servers, and it prefers VMFS-formatted LUNs over NFS-hosted datastores.
vCenter selects datastores for heartbeating in the following scenarios:

- When HA is enabled
- If a new datastore is added
- If the accessibility of a datastore changes

By default, two datastores are selected; this is the minimum number of datastores needed. It can be raised to up to five datastores using the das.heartbeatDsPerHost parameter under Advanced Settings. The cluster settings dialog (PRD-CLUSTER Settings in this example) can be used to verify or change the datastores selected for heartbeating. It is recommended, however, to let vCenter choose the datastores. Only the datastores that are mounted to more than one host are available in the list.

Datastore heartbeating leverages the existing VMFS filesystem locking mechanism. There is a so-called heartbeat region on each datastore, which is updated as long as the lock on a file exists. A host updates a datastore's heartbeat region if it has at least one file open on that volume; HA creates a file for datastore heartbeating purposes just to make sure there is at least one file open on the volume. Each host creates its own file, and to determine whether an unresponsive host still has a connection to a datastore, HA simply checks whether the heartbeat region has been updated.

In vSphere 5, an isolation response is triggered after 5 seconds if the host was the master, and after approximately 30 seconds if the host was a slave. This time difference occurs because a slave host first needs to go through the election process to determine whether any other hosts exist, or whether the master host is simply down. This election starts within 10 seconds after the slave host has lost its heartbeats; if there is no response for 15 seconds, the HA agent on this host elects itself as the master. The isolation response time can be increased using the das.config.fdm.isolationPolicyDelaySec parameter under Advanced Settings. This is, however, not recommended, as it increases the downtime when a problem occurs.

If a host becomes a master in a cluster with more than one host and has no slaves, it continuously checks whether it's in isolation mode. It keeps doing so until it becomes a master with slaves or connects to a master as a slave. At this point, the host pings its isolation address to determine whether the management network is available again. By default, the isolation address is the gateway configured for the management network. This can be changed using the das.isolationaddress[X] parameter under Advanced Settings; [X] takes values from 1 to 10 and allows the configuration of multiple isolation addresses. Additionally, the das.usedefaultisolationaddress parameter can be used to indicate whether the default gateway address should be used as an isolation address. This parameter should be set to False if the default gateway is not configured to respond to ICMP ping packets. Generally, it's recommended to have one isolation address for each management network. If this network uses redundant paths, the isolation address should always be available under normal circumstances.

In certain cases, a host may be isolated, that is, not accessible via the management network, but still able to receive election traffic. Such a host is called partitioned. When multiple hosts are isolated but can still communicate with each other, it's called a network partition.
This can happen for various reasons; one of them is when a cluster spans multiple sites over a metropolitan area network, the so-called stretched cluster configuration. When a cluster partition occurs, one subset of hosts is able to communicate with the master while the other is not. Depending on the isolation response selected for the VMs, they may be left running or restarted.

When a network partition happens, the master election process is initiated within the subset of hosts that loses its connection to the master. This is done to make sure that a host failure or isolation results in appropriate action on the VMs. Therefore, a cluster will have multiple masters, one in each partition, for as long as the partition exists. Once the partition is resolved, the masters are able to communicate with each other and discover the multiplicity of master hosts; each time this happens, one of them becomes a slave.

The hosts' HA state is reported by vCenter through the Summary tab for each host, and under the Hosts tab for cluster objects. Running (Master) indicates that HA is enabled and the host is a master host; Connected (Slave) means that HA is enabled and the host is a slave host.

Only running VMs are protected by HA. The master host monitors each VM's state, and once it changes from powered off to powered on, the master adds the VM to the list of protected machines.

Virtual machine options

Each VM's HA behavior can be adjusted under vSphere HA settings, in the Virtual Machine Options section of the cluster settings page.

Restart priority

The restart priority setting determines which VMs are restarted first after a host failure. The default setting is Medium. Depending on the applications running on a VM, it may need to be restarted before other VMs, for example, if it's a database, a DNS, or a DHCP server; it may be restarted after others if it's not a critical VM. If you select Disabled, the VM will never be restarted after a host failure; in other words, HA will be disabled for that VM.

Isolation response

The isolation response setting defines HA's actions for a VM whose host loses its connection to the management network but is still running. The default setting is Leave powered on.

To understand why this setting is important, imagine a situation where a host loses its connection to the management network and, at the same time or shortly afterwards, to the storage network as well: a so-called split-brain situation. In vSphere, only one host can have access to a VM at a time. For this purpose, the .vmdk file is locked, and there is an additional .lck file present in the same folder where the .vmdk file is stored. As HA is enabled, the VMs fail over to another host; however, their original instances keep running on the old host. Once this host comes out of isolation, we end up with two running copies of each VM. The isolated host will not have access to the .vmdk file, as it's locked; in vCenter, however, such a VM looks as if it is flipping between two hosts. With the default settings, the original host is not able to reacquire the disk locks and will query the VM; HA sends a reply instead, which allows the host to power off the second running copy.

If the Power off option is selected for a VM under the isolation response settings, the VM is immediately stopped when isolation occurs.
This can cause inconsistency in the filesystem on the virtual drive. The advantage, however, is that the VM restart on another host happens more quickly, reducing the downtime.

The Shut down option attempts to shut down the VM gracefully. By default, HA waits 5 minutes for this to happen; when this time is over, if the VM is not off yet, it is powered off. This timeout is controlled by the das.isolationshutdowntimeout parameter under Advanced Settings. A VM must have VMware tools installed to be able to shut down gracefully; otherwise, the Shut down option is equivalent to Power off.

VM monitoring

Under VM Monitoring, the monitoring settings of individual VMs and applications can be adjusted. The default setting is Disabled. However, VM and Application Monitoring can be enabled so that if the VM heartbeat (the VMware tools heartbeat) or its application heartbeat is lost, the VM is restarted. To avoid false positives, the VM monitoring service also monitors the VM's I/O activity. If a heartbeat is lost and there was no I/O activity (by default, during the last 2 minutes), the VM is considered unresponsive. This feature allows you to power cycle nonresponsive VMs. The I/O interval can be changed under the advanced attribute settings (for more details, check the HA advanced attributes referenced later in this section).

Monitoring sensitivity can be adjusted as well. Sensitivity means the time interval between the loss of heartbeats and the restart of the VM. The available options are listed in the table in the VMware documentation article available at http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.avail.doc_50%2FGUID-62B80D7A-C764-40CB-AE59-752DA6AD78E7.html. To avoid repeated VM resets, by default, VMs are restarted only three times during the reset period. This can be changed in the Custom mode.

In order to monitor applications within a VM, they need to support VMware application monitoring. Alternatively, you can download the appropriate SDK and set up customized heartbeats for the application that needs to be monitored.

Under Advanced Options, a number of vSphere HA behaviors can be set; some of them have already been mentioned in earlier sections of this article. The vSphere HA advanced options are listed in the VMware documentation article available at http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.avail.doc_50%2FGUID-E0161CB5-BD3F-425F-A7E0-BF83B005FECA.html. This article also lists the options that are not supported in vCenter 5; you will get an error message if you try to add one of them, and such options are deleted after an upgrade from a previous version.

Admission control

Admission control ensures there are sufficient resources available to provide failover protection while VM resource reservations are kept. Admission control is available for the following:

- Hosts
- Resource pools
- vSphere HA

Admission control can only be disabled for vSphere HA.
The option to disable it is found in the cluster settings dialog (PRD-CLUSTER Settings in this example). Examples of actions that may not be permitted because of insufficient resources are as follows:

- Powering on a VM
- Migrating a VM to another host, cluster, or resource pool
- Increasing the CPU or memory reservation of a VM

Admission control policies

There are three types of admission control policies available for HA configuration:

Host failures cluster tolerates: When this option is chosen, HA ensures that a specified number of hosts can fail while sufficient resources still remain to accommodate all the VMs from those hosts. The decision to allow or deny an operation is based on the following calculations:

Slot size: A hypothetical VM sized by the largest amount of memory and the largest amount of CPU assigned to any existing VM in the environment. For example, for the following VMs, the slot size will be 4 GHz and 6 GB:

    VM     CPU      RAM
    VM1    4 GHz    2 GB
    VM2    2 GHz    4 GB
    VM3    1 GHz    6 GB

Host capacity: The number of slots each host can hold, based on the resources available for VMs, not the total host memory and CPU. For example, for the previous slot size, the host capacities will be as given in the following table:

    Host     CPU       RAM       Slots
    Host1    4 GHz     128 GB    1
    Host2    24 GHz    6 GB      1
    Host3    8 GHz     14 GB     2

Cluster failover capacity: The number of hosts that can fail before there aren't enough slots left to accommodate all the VMs. For the previous hosts with a one-host-failure policy, the failover capacity is 2 slots: in case of a Host3 failure (the host with the largest capacity), the cluster is left with only two slots.

If the current failover capacity is less than the allowed limit, admission control disallows the operation. For example, if we are running two VMs and need to power on a third one, the operation will be denied, as the cluster capacity is two slots and it may not be able to accommodate three VMs.

This option is probably not the best one for an environment that has some VMs with significantly more resources assigned than the rest. The Host failures cluster tolerates option works best when all cluster hosts are sized more or less equally; otherwise, excessive capacity is reserved so that the cluster can tolerate the largest host's failure. When this option is used, VM reservations should be kept similar across the cluster as well: because vCenter uses the slot size model to calculate capacity, and the slot size is based on the largest reservation, having VMs with a large reservation again results in additional, unnecessary capacity being reserved.

Percentage of cluster resources: With this policy enabled, HA ensures that a specified percentage of resources is reserved for failover across all the hosts. It also checks that there are at least two hosts available. The calculation happens as follows:

1. The total resource requirement for all the running VMs is calculated. For example, for the three VMs in the previous table, the total requirement is 7 GHz and 12 GB.
2. The total available host resources are calculated. For the previous example, the total is 34 GHz and 148 GB.
3. The current CPU and memory failover capacities for the cluster are calculated as follows:
   CPU: (1 - 7/34) * 100% = 79%
   RAM: (1 - 12/148) * 100% = 92%
4. If the current CPU or memory capacity is less than allowed, the operation is denied.
With hosts as different as those in the example, the CPU and RAM capacities should be configured carefully to avoid a situation where, for example, the host with the largest amount of RAM fails and the other hosts are not able to accommodate all the VMs because of memory resources. Here, RAM should be configured at 87 percent, based on the two smallest hosts (Host2 and Host3), and not at 30 percent based on the number of hosts in the environment:

    [1 - (6 + 14)/148] * 100% = 87%

In other words, if the host with 128 GB fails, we need to make sure that the total resources needed by the VMs are less than the sum of 6 GB and 14 GB, which is only 13 percent of the total cluster's 148 GB. Therefore, we need to make sure that at all times, the VMs use only 13 percent of the RAM, or that the cluster keeps 87 percent of its RAM free.

Specified failover hosts: With this policy enabled, HA keeps the chosen failover hosts reserved, doesn't allow powering on or migrating any VMs to these hosts, and restarts VMs on them only when a failure occurs. If, for some reason, it's not possible to use a designated failover host to restart the VMs, HA restarts them on the other available hosts.

It is recommended to use the Percentage of cluster resources reserved option in most cases, as it offers more flexibility in terms of host and VM sizing than the other options.

HA security and logging

The vSphere HA configuration files for each host are stored on the host's local storage and are protected by filesystem permissions. These files are only available to the root user. For security reasons, ESXi 5 hosts log HA activity only to syslog; logs are therefore placed wherever syslog is configured to keep them. Log entries related to HA are prepended with fdm, which stands for fault domain manager; this is what the vSphere HA ESX service is called. Older versions of ESXi write HA activity to fdm logfiles in /var/log/vmware/fdm on the local disk, with an option to enable syslog logging as well. Older ESX hosts can save HA activity only in the local fdm logfile in /var/log/vmware/.

The HA agent logging configuration also depends on the ESX host version. For ESXi 5 hosts, the logging options that can be configured via the Advanced Options tab under HA are listed in the logging section of the article available at http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2033250. The das.config.log.maxFileNum option causes ESXi 5 hosts to maintain two copies of the logfiles: one created by the version 5 logging mechanism, and the other maintained by the pre-5.0 logging mechanism. After any of these options are changed, HA needs to be reconfigured.

The following table provides VMware's log capacity recommendations for environments of different sizes, based on the requirement to keep one week of history:

    Size                                        Minimum log capacity per host (MB)
    40 VMs in total with 8 VMs per host         4
    375 VMs in total with 25 VMs per host       35
    1,280 VMs in total with 40 VMs per host     120
    3,000 VMs in total with 512 VMs per host    300

These are just recommendations; additional capacity may be needed depending on the environment. Increasing the log capacity involves specifying the number of rotations together with the file size, as well as making sure there is enough space on the storage resource where the logfiles are kept.
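On an ESXi 5 host, for example, the rotation count and per-file size for syslog-managed logs can be raised from the ESXi shell; a minimal sketch (the values are illustrative, and the esxcli option names below are from the 5.x syslog namespace, so verify them against your build):

    # Keep 20 rotations of up to 10 MB (10240 KiB) per logfile
    esxcli system syslog config set --default-rotate=20 --default-size=10240

    # Reload the syslog daemon so the new settings take effect
    esxcli system syslog reload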
When HA is enabled for the first time, vCenter creates this account with a random password and makes sure the password is changed periodically. The time period for password changes is controlled by the VirtualCenter.VimPasswordExpirationInDays parameter, which can be set under the Advanced Settings option in vCenter.

All communication between vCenter and the HA agents, as well as agent-to-agent traffic, is secured with SSL. Therefore, vSphere HA requires that each host has verified SSL certificates. New certificates require HA to be reconfigured; HA is also reconfigured automatically if a host was disconnected before its certificate was replaced. SSL certificates are also used to verify election messages, so a rogue agent can only affect the host it's running on. This issue, if it occurs, is reported to the administrator. HA uses ports TCP/8182 and UDP/8182 for communication between agents. These ports are opened and closed automatically by the host's firewall, which ensures they are open only when needed.

Using HA with DRS

When vSphere HA restarts VMs on a different host after a failure, the main priority is the immediate availability of the VMs. Based on CPU and memory reservations and the available capacity of each host, HA determines which host to power the VMs on. It's quite possible that after all the VMs have been restarted, some hosts end up highly loaded while others are relatively lightly loaded. DRS is the load balancing solution that can be enabled in vCenter for better host resource management. vSphere HA together with DRS delivers automatic failover combined with load balancing, which should result in a more balanced cluster. However, there are a few things to consider when using both features together.

In a cluster with DRS, HA, and admission control enabled, VMs may not be automatically evacuated from a host entering maintenance mode. This occurs because of the resources reserved for VMs that may need to be restarted; in this case, the administrator needs to migrate these VMs manually.

Some VMs may also fail to fail over because of resource constraints. This can happen in one of the following cases:

HA admission control is disabled and DPM is enabled, which may leave insufficient capacity for failover because some hosts are in standby mode and fewer hosts are available.
VM-to-host affinity rules limit the hosts on which certain VMs can be placed.
Total resources are sufficient but fragmented across multiple hosts, in which case they can't be used by the VMs for failover.
DPM is in manual mode, which requires an administrator's confirmation before a host can be powered on from standby mode.
DRS is in manual mode, and an administrator's confirmation may be needed before the migration of VMs can start.

What to expect when HA is enabled

HA only restarts a VM if there is a host failure. In other words, it powers on all the VMs that were running on a failed host on other members of the cluster. Therefore, even with HA enabled, there will still be a short downtime for the VMs that were running on the faulty host. In fast environments, however, the VM reboot happens quickly, so if you are using some kind of monitoring system, it may not even trigger an alarm.
Therefore, if a group of VMs has been rebooted unexpectedly, you know there was an issue with one of the hosts and can review the logs to find out what it was. Of course, if you have set up vCenter notifications, you should get an alert. If you need VMs to stay up even when a host goes down, there is another feature that can be enabled: Fault Tolerance.

Installing Virtual Desktop Agent – server OS and desktop OS

You need to allow your Windows master image to communicate with your XenDesktop infrastructure. You can accomplish this task by installing the Virtual Desktop Agent. In this latest release of the Citrix platform, the VDA is available in three different versions: one for desktop operating systems, one for server operating systems, and Remote PC, a way to link an existing physical or virtual machine to your XenDesktop infrastructure.

Getting ready

You need to install and configure the described software with domain administrative credentials on both the desktop and server operating systems.

How to do it...

In the following section, we are going to explain how to install and configure the Citrix Virtual Desktop Agent.

Installing VDA for a server OS machine

1. Connect to the server OS master image with domain administrative credentials.
2. Mount the Citrix XenDesktop 7.0 ISO on the server OS machine by right-clicking on it and selecting the Mount option.
3. Browse the mounted Citrix XenDesktop 7.0 DVD-ROM and double-click on the AutoSelect.exe executable file.
4. On the Welcome screen, click on the Start button to continue.
5. On the XenDesktop 7.0 menu, click on the Virtual Delivery Agent for Windows Server OS link in the Prepare Machines and Images section.
6. In the Environment section, select Create a master image if you want to create a master image for the VDI architecture (MCS/PVS), or enable a direct connection to a physical or virtual server. After completing this step, click on Next.
7. In the Core Components section, select a valid location to install the agent, flag the Citrix Receiver component, and click on the Next button.
8. In the Delivery Controller section, select Do it manually from the drop-down list in order to configure Delivery Controller manually, type a valid controller FQDN, and click on the Add button. To verify that you have entered a valid address, click on the Test connection... button. To continue with the installation, click on Next.
9. In the Features section, flag the optimization options that you want to enable, and then click on Next to continue.
10. In the Firewall section, select the correct radio button to open the required firewall ports automatically if you're using the Windows Firewall, or manually if you've got a different firewall on board. After completing this action, click on the Next button.
11. If the options in the Summary screen are correct, click on the Install button to complete the installation procedure. In order to complete the procedure, you'll need to restart the server OS machine several times.
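Before running the installer, it can save a reboot cycle to confirm that the controller FQDN you plan to type in the Delivery Controller section is actually reachable. Below is a minimal Python sketch of such a check; the FQDN is a placeholder for your own controller, and port 80 is assumed here as the default VDA registration port, so adjust it if your site is configured differently.

```python
# Quick reachability check for a Delivery Controller, similar in spirit
# to the wizard's "Test connection..." button. The FQDN below is a
# hypothetical example; port 80 is assumed as the registration port.
import socket

controller = "ddc01.company.local"  # placeholder: your controller FQDN
port = 80

try:
    with socket.create_connection((controller, port), timeout=5):
        print("Reachable: %s:%d" % (controller, port))
except OSError as err:
    print("Cannot reach %s:%d - %s" % (controller, port, err))
```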
Extending Oracle VM Management

The following topics were covered in the first part of this article series (Oracle VM Management):

Getting started with the Oracle VM Manager
Managing Servers and Server Pools

Let's continue from where we left off in the previous part of the article.

Oracle VM Management: Managing VM Servers and Repositories

There must be at least one physical server in the Server Pool that we have created. There are many things you can do with the VM Servers in the Server Pool, such as changing the configuration, role, or function of a server, restarting it, shutting it down, monitoring its performance, or even deleting it.

The Server Pools are elastic and can adapt flexibly to an increase or decrease in workload demand. It is possible to expand the pool with Oracle VM Servers, and also to transfer workloads or VMs to the VM Servers that are most capable of handling them by dedicating the available resources, such as CPU, RAM, storage, and network capacity, to the VMs. There is also the possibility of adding more Utility Servers to strengthen the capacity of the Server Pool, letting the Server Pool Master handle the workload by assigning an available server to carry out each task. There can only be one Server Pool Master.

However, there are basic tasks to perform before we can add extra servers to the resource pool, such as identifying them by their IP address and checking whether they are available to fulfill tasks as an Oracle VM Server or Server Pool Master. We will also need the Oracle VM Agent password to add them to the farm. Let's move on and start managing the servers. In this section, we will cover the following:

How to add a Server
Editing Server information
Restarting, shutting down, and deleting Servers

How to add a Server

In order to add Utility Servers or Oracle VM Servers to the Oracle VM environment, we need to carry out the following actions:

1. Click on the Add Server link on the Server page.
2. Search and select a Server Pool, and then click Next.
3. Enter the necessary information for the Oracle VM parameters.
4. Confirm the information (after testing the connection) and you are done. Ensure that the Oracle VM Servers are unique when registering, in order to avoid any duplication of IP accounts.

Editing Server information

In order to update information on an existing Oracle VM Server, click on Edit. We can alternatively click on the General Information tab. To monitor the performance of the Oracle VM Server, we can click on the Monitor tab, where we get real-time access to CPU, memory, and storage usage.
Understanding View storage-related features

View Storage Accelerator

View Storage Accelerator enables View to use the vSphere Content-Based Read Cache (CBRC) feature first introduced with vSphere 5.0. CBRC uses up to 2 GB of RAM on the vSphere host as a read-only cache for View desktop data. CBRC can be enabled for both full clone and linked clone desktop pools, with linked clone desktops having the additional option of caching only the operating system (OS) disk, or both the OS disk and the persistent data disk.

When the View desktops are deployed, and at configured intervals after that, CBRC analyzes the View desktop VMDK and generates a digest file that contains a hash value for each block. When the View desktop performs a read operation, the CBRC filter on the vSphere host reviews the hash table and requests the smallest block required to complete the read request. This block and its associated hash key chunk are then placed in the CBRC cache on the vSphere host. The CBRC filter sits between the host's CBRC cache on one side and the View desktop digest and VMDK files on the other.

Since the desktop VMDK contents are hashed at the block level, and View desktops are typically based on similar master images, CBRC can reuse cached blocks for subsequent read requests for data with the same hash value. This makes CBRC effectively a deduplicated cache.

Since desktop VMDK contents change over time, View regenerates the digest file of each desktop on a regular schedule: every 7 days by default, a value that can be changed as needed in the View Manager Admin console. Digest generation can be I/O intensive, so this operation should not be performed during periods of heavy desktop use.

View Storage Accelerator provides the most benefit during storm scenarios, such as desktop boot storms, user logon storms, or any other read-heavy desktop I/O operation initiated by a large number of desktops. As such, it is unlikely that View Storage Accelerator will actually reduce primary storage needs; instead, it ensures that desktop performance is maintained during these I/O-intensive events. Additional information is available in the VMware document View Storage Accelerator in VMware View 5.1 (http://www.vmware.com/files/pdf/techpaper/vmware-view-storage-accelerator-host-cachingcontent-based-read-cache.pdf). The information in the referenced document is still current, even if the version of View it references is not.
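To illustrate why a block cache keyed on content hashes behaves as a deduplicated cache, here is a toy Python sketch. This is purely illustrative and not VMware code: identical blocks read by different desktops cloned from the same master image collapse into a single cache entry.

```python
# Toy content-addressed read cache: blocks are keyed by their hash, so
# duplicate blocks from similar master images are stored only once.
import hashlib

cache = {}  # content hash -> block data

def read_block(block: bytes) -> bytes:
    """Serve a block, caching it under its content hash."""
    key = hashlib.sha1(block).hexdigest()
    if key not in cache:
        cache[key] = block      # first reader populates the cache
    return cache[key]

# Two "desktops" reading the same OS block from a common master image:
os_block = b"identical OS data" * 256
read_block(os_block)   # desktop 1: cache miss, block stored
read_block(os_block)   # desktop 2: served from the single cached copy
print("Cache entries:", len(cache))  # 1, despite two readers
```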
Tiered storage for View linked clones

To enable more granular control over the storage architecture of linked clone desktops, View allows us to specify dedicated datastores for each of the following disks:

User persistent data disk
OS disk (which includes the disposable data disk, if configured)
Replica disk

It is not necessary to separate each of these disks, but in the following two sections we will outline why we might consider doing so.

User persistent data disk

The optional linked clone persistent data disk contains user personal data, and its contents are maintained even if the desktop is refreshed or recomposed. Additionally, the disk is associated with an individual user within View, and can be attached to a new View desktop if ever required. As such, an organization that does not back up its linked clone desktops may at the very least consider backing up the user persistent data disks.

Due to the potential importance of the persistent data disks, organizations may wish to apply more protection to them than to the rest of the View desktop. View storage tiering is one way to accomplish this, as we could place these disks on storage that has additional protections, such as replication to a secondary location or regular storage snapshots. These are just a sampling of the reasons an organization may want to separate the user persistent data disks. Data replication or snapshots are typically not required for linked clone OS disks or replica disks, as View does not support the manual recreation of linked clone desktops in the event of a disaster; only the user persistent data disks can be reused if the desktop needs to be recreated from scratch.

Replica disks

One of the primary reasons an organization would want to separate linked clone replica disks onto dedicated datastores has to do with the architecture of View itself. When deploying a linked clone desktop pool, if we do not specify a dedicated datastore for the replica disk, View creates a replica disk on every linked clone datastore in the pool.

The reason we may not want a replica disk on every linked clone datastore has to do with the storage architecture. Since replica disks are shared between desktops, their contents are often among the first to be promoted into any cache tiers that exist, particularly those within the storage infrastructure. If we specify a single datastore for the replica, meaning that only one replica is created, the storage platform only needs to cache data from that one disk. If our storage array cache is not capable of deduplication and we have multiple replica disks, the same array is now required to cache the content of several View replica disks. Given that the amount of cache on most storage arrays is limited, caching more replica disk data than necessary may exhaust the cache and decrease the array's performance.

Using View linked clone tiering, we can reduce the number of replica disks we need, which may reduce the overall utilization of the storage array cache, freeing it up to cache other critical View desktop data. As each storage array architecture is different, we should consult vendor resources to determine the optimal configuration for the environment. As mentioned previously, if the array cache is capable of deduplication, this change may not be necessary. VMware currently supports up to 1,000 desktops per replica disk, although View does not enforce this limitation when creating desktop pools.

Summary

In this article, we have discussed the native features of View that impact storage design, and how they are typically used.
So, what is Microsoft Hyper-V Server 2008 R2?

Welcome to the world of virtualization. In the next pages we will explain in simple terms what virtualization is, where it comes from, and why this technology is amazing. So let's start.

The concept of virtualization is not really new; as a matter of fact, it is in some ways an inheritance from the mainframe world. For those of you who don't know what a mainframe is, here is a short explanation: a mainframe is a huge computer that can have from several dozen up to hundreds of processors, tons of RAM, and enormous storage space. Think of the supercomputers that international banks, car manufacturers, or even aerospace entities use. These monster computers have a "core" operating system (OS), which creates logical partitions of the resources and assigns them to smaller OSes. In other words, the full hardware power is divided into smaller chunks, each with a specific purpose. As you can imagine, not many companies can afford this kind of equipment, which is one of the reasons why small servers became so popular. You can learn more about mainframes on the Wikipedia page at http://en.wikipedia.org/wiki/Mainframe_computer.

Starting in the 80s, small servers (mainly based on Intel© and/or AMD© processors) became quite popular, and almost anybody could buy a simple server; mid-sized companies, in turn, began to increase the number of servers they ran. In later years the power provided by new servers was enough to fulfill the most demanding applications and, guess what, even to support virtualization.

But you may be wondering: what is virtualization? The idea, even if a bit bizarre, is that a program works like a normal application on the host OS, asking for CPU, memory, disk, and network (to name the main four subsystems), but the application creates hardware, virtualized hardware of course, on which a brand new OS can be installed.

Picture a physical server, including CPU, RAM, disk, and network. This server needs an OS on top, and from there you can install and execute programs such as Internet browsers, databases, spreadsheets, and, of course, virtualization software. This virtualization software behaves the same way as any other application: it sends requests to the OS for a file stored on disk, access to a web page, or more CPU time; to the host OS, it is a standard application that demands resources. But within the virtualization application (also known as the hypervisor), virtual hardware is created; in other words, some fake hardware is presented at the top end of the program. At this point we can start the OS setup on this virtual hardware, and the OS recognizes the hardware and uses it as if it were real.

So, coming back to the original idea, virtualization is a technique, based on software, to execute several servers and their corresponding OSes on the same physical hardware. Virtualization can be implemented on many architectures, such as IBM© mainframes, many distributions of Unix© and Linux, Windows©, Apple©, and so on.

We already mentioned that virtualization is based on software, but there are two main kinds of software you can use to virtualize your servers. The first type behaves like any other application installed on the server and is also known as workstation or software-based virtualization. The second is part of the kernel of the host OS and is enabled as a service.
This type of software is also called hardware virtualization, and it uses special CPU characteristics (such as Data Execution Prevention or virtualization support), which we will discuss in the installation section. The main difference between the two types is performance. With software/workstation virtualization, a request for hardware resources has to go from the application down through the OS into the kernel in order to reach the resource. In a hardware-virtualization solution, the hypervisor layer is built into the kernel and makes extensive use of the CPU's virtualization capabilities, so resource requests are served faster and more reliably, as in Microsoft Hyper-V Server 2008 R2.

Reliability and fault tolerance

By placing all the eggs in the same basket, we want to be sure that the basket is protected. Now think that instead of eggs we have virtual machines, and instead of the basket we have a Hyper-V server. We require this server to be up and running most of the time, which translates into reliable virtual machines that can run for a long time. For that reason we need a fault tolerant system, that is to say, a whole system that is capable of running normally even if a fault or a failure arises.

How can this be achieved? Well, just use more than one Hyper-V server. If a single Hyper-V server fails, all the VMs running on it will fail; but if we have a couple of Hyper-V servers running hand in hand, when the first one becomes unavailable its twin brother will take care of the load. Simple, isn't it? It is, if it is correctly dimensioned and configured. This is achieved with a failover cluster, which also enables Live Migration for planned moves of running VMs between hosts.

In a previous section we discussed how to migrate a VM from one Hyper-V server to another, but that import/export technique causes some downtime for our VMs. You can imagine how much time it would take to move all our machines if a host server fails; even worse, if the host server is dead, you can't export your machines at all. This is one of the reasons we should create a Cluster.

As we already stated, a fault tolerant solution basically duplicates everything in the given solution. If a single hard disk may fail, then we configure additional disks (RAID 1 or RAID 5, for example); if a NIC is prone to failure, then teaming two NICs may solve the problem. Of course, if a single server may fail (dragging with it all the VMs on it), the solution is to add another server; but here we face the problem of storage space. Each disk can only be physically connected to one single data bus (consider this the cable, for simplicity), and the server must have its own disk in order to operate correctly. This can be solved by using shared storage, which may be directly connected SCSI storage, a SAN (Storage Area Network, connected by optical fiber), or the very popular NAS (Network Attached Storage), connected through NICs.

Consider two servers, each a node within the cluster. When you connect to this infrastructure, you don't even see the number of servers, because a cluster exposes shared resources such as the server name, IP address, and so on. You connect to the first available physical server, and in the event of a failure, your session is automatically transferred to the next available physical server. Exactly the same happens at the server's backend: we can assign certain resources to the cluster as shared resources, and the cluster administers which physical server uses them.
For example, there can be several iSCSI targets (Internet SCSI targets) defined on the NAS, and the cluster accesses them according to the active physical node of the cluster, thus making your service (in this case, your configured virtual machines) highly available. You can see the iSCSI FAQ on the Microsoft web site (http://go.microsoft.com/fwlink/?LinkId=61375).

In order to use a failover cluster solution, the hardware must be marked as Certified for Windows Server 2008 R2 and it has to be identical on all nodes (in some cases the solution may work with dissimilar hardware, but maintenance, operation, and capacity planning, to name a few, become more expensive and more difficult). The full solution also has to successfully pass the Hardware Configuration Wizard when creating the cluster. The storage solution must be certified as well and has to be Windows Cluster compliant (mainly supporting the SCSI-3 Persistent Reservations specification), and it is strongly recommended that you implement an isolated LAN exclusively for storage purposes. Remember that in a fault tolerant solution all infrastructure devices have to be duplicated, even networks. The configuration wizard will let us configure our cluster even if the network is not redundant, but it will display a warning on this point.

OK, let's get to business. To configure a fault tolerant Hyper-V cluster, we need to use Cluster Shared Volumes, which, in simple terms, lets Hyper-V run as a clustered service. As we are using a NAS, we have to configure both ends: the iSCSI initiator (on the host server) and the iSCSI target (on the NAS). You can see the Microsoft TechNet video at http://technet.microsoft.com/en-us/video/how-to-setup-iscsi-on-windows-server-2008-11-mins.aspx or read the Microsoft article on configuring iSCSI initiators at http://technet.microsoft.com/en-us/library/ee338480(v=ws.10).aspx. To configure the iSCSI target on the NAS, please refer to the NAS manufacturer's documentation.

Apart from the iSCSI disks we configure for our virtual machines, we need to provide a witness disk (known in the past as the quorum disk). This disk (1 GB will do the trick) is used to orchestrate and synchronize the cluster.

Once we have our iSCSI disk configured and visible on one of our servers (you can check this by opening the Computer Management console and selecting Disk Management), we can proceed to configure our cluster. To install the Failover Clustering feature, open the Server Manager console, select the Features node on the left, then select Add Features, and finally select the Failover Clustering feature (this is very similar to the procedure we used when we installed the Hyper-V role in the Requirements and Installation section). We have to repeat this step for every node participating in the cluster.

At this point we should have both the Failover Clustering feature and the Hyper-V role set up on the servers, so we can open the Failover Cluster Manager console from Administrative Tools and validate our configuration. Check that Failover Cluster Manager is selected and, in the center pane, select Validate a Configuration (a right-click will do the trick as well). Follow all the instructions and run all of the tests until no errors are shown. When this step is completed, we can proceed to create our cluster.
In the same Failover Cluster Manager console, in the center pane, select Create a Cluster (a right-click will do the trick as well). This wizard will ask you for the following:

All servers that will participate in the cluster (a maximum of 16 nodes and a minimum of 1, which is useless, so better go for two servers)
The name of the cluster (this name is how you will access the cluster, not the individual server names)
The IP configuration for the cluster

We still need to enable Cluster Shared Volumes. To do so, right-click the failover cluster, and then click Enable Cluster Shared Volumes. The Enable Cluster Shared Volumes dialog opens; read and accept the terms and restrictions, and click OK. Then select Cluster Shared Volumes and, under Actions (to the left), select Add Storage and select the iSCSI disks we configured previously.

Now the only thing left is to make a VM highly available, for example the VM we created in the Quick start – creating a virtual machine in 8 steps section (or any other VM that you have created or want to create; be imaginative!). The OS in the virtual machine can then fail over to another node with almost no interruption. Note that the virtual machine must not be running when you make it highly available through the wizard.

1. In the Failover Cluster Manager console, expand the tree of the cluster we just created.
2. Select Services and Applications. In the Action pane, select Configure a Service or Application.
3. In the Select Service or Application page, click Virtual Machine and then click Next.
4. In the Select Virtual Machine page, check the name of the virtual machine that you want to make highly available, and then click Next.
5. Confirm your selection and then click Next again. The wizard will show a summary and the ability to check the report.
6. Finally, under Services and Applications, right-click the virtual machine and then click Bring this service or application online. This action will bring the virtual machine online and start it.
Tips and Tricks on Microsoft Application Virtualization 4.6

Getting Started with Microsoft Application Virtualization 4.6

Virtualize your application infrastructure efficiently using Microsoft App-V:

Publish, deploy, and manage your virtual applications with App-V
Understand how Microsoft App-V can fit into your company
Guidelines for planning and designing an App-V environment
Step-by-step explanations to plan and implement the virtualization of your application infrastructure

Advantages of the sequencing process

Sequencing is the process in which the App-V Sequencer monitors and captures the files and environment changes (such as registry modifications) created by an application installation. Once the capturing process is complete, the sequencing process ends by building the App-V package, ready to be delivered to clients by a streaming method or simply via an MSI. To achieve this, the sequencing process creates a virtual environment that is isolated from the operating system, avoiding most conflicts with other applications or components existing on the client's operating system.

Application Virtualization quick facts

Here are some facts about Application Virtualization:

Applications are not installed on clients; they are published.
With Application Virtualization we can achieve the co-existence of otherwise incompatible applications, such as Microsoft Office 2007 and Microsoft Office 2010.
Applications are installed only once, on a reference computer, where the package is captured and prepared.
You can capture a set of interconnected applications into a single package.
The capturing process is in most cases transparent; it identifies the environment that the application requires to work, such as files and registry keys.
Application Virtualization offers centralized management: one point from which we handle virtualized applications and their distribution behavior in our environment.
Even though you can create a package of almost any software, not all applications can be virtualized. Applications that require deep operating system integration can generate known issues.

App-V Management Server sizing

An App-V Management Server can maintain 12,000 publishing refreshes per minute. If your requirements are higher, you need to set up additional Management Servers, where you can either manually separate the applications to be distributed (remember, multiple App-V Management Servers can use the same database) or deploy your servers with load-balancing features (hardware or software load balancing).

Implementing Dynamic Suite Composition (DSC)

Dynamic Suite Composition gives us the possibility of "one-to-many" scenarios, where you have one primary application with several secondaries. Dynamic Suite Composition is not, however, in charge of managing and controlling the interaction between all these applications; that's why you must be careful about which applications you select as secondary, as not all are suited for this category. When you discuss implementing DSC in your organization, always remember that DSC is only in charge of sharing the virtual environment between the App-V packages.

Level of dependency supported in DSC

In Microsoft Application Virtualization, an important thing to note while using Dynamic Suite Composition is that a primary application can have more than one secondary application, but only one level of dependency is supported. You cannot define a secondary package as dependent on another secondary package.
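As a sketch of how that one-level rule could be checked, consider the following illustrative Python function. It is not an App-V API, and the package names are hypothetical; it simply encodes the constraint that a secondary package must never have dependents of its own.

```python
# Illustrative check of the DSC rule: one primary may have several
# secondaries, but a secondary cannot itself have secondaries.
def validate_dsc(dependencies: dict) -> None:
    """dependencies maps each primary package to its list of secondaries."""
    secondaries = {s for secs in dependencies.values() for s in secs}
    nested = [s for s in secondaries if dependencies.get(s)]
    if nested:
        raise ValueError(
            "Only one level of dependency is supported; "
            "these secondaries have dependents: %s" % ", ".join(nested))

# Valid: one primary with two secondaries (hypothetical names)
validate_dsc({"OfficePlugin": ["JavaRuntime", "PDFViewer"]})

# Invalid: JavaRuntime is a secondary but has a secondary of its own
try:
    validate_dsc({"OfficePlugin": ["JavaRuntime"], "JavaRuntime": ["VCRedist"]})
except ValueError as err:
    print(err)
```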
Deploying 16-bit applications to 64-bit clients

Microsoft App-V 4.6 includes, among several other improvements and changes, the possibility of virtualizing 64-bit applications and supporting 64-bit operating system clients. But there's one disclaimer: sequencing and deploying 16-bit applications to 64-bit clients is not supported. This is a restriction of 64-bit operating systems, not only of virtual applications.

Sequencing and deploying applications on different operating systems

Even though Microsoft officially requires the same operating system for sequencing and deployment, you can find several examples of applications that work normally across different operating systems.

SQL database size

The size of the App-V database depends principally on application launches and retained reporting information. Microsoft provides a small equation to calculate the approximate growth of the database:

(560 bytes per launch and shutdown) x (number of launches per day) x (user population) = daily database growth

For example, 10,000 users who launch and shut down one application per hour, every hour of the day, translate to approximately 128 MB per day.
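Here is that equation worked through in a short Python snippet, using the 10,000-user example from above:

```python
# Microsoft's sizing equation from the section above:
# 560 bytes per launch/shutdown pair x launches per day x users.
BYTES_PER_LAUNCH = 560

def daily_growth_mb(users: int, launches_per_day: int) -> float:
    """Approximate daily database growth in MB."""
    return BYTES_PER_LAUNCH * launches_per_day * users / 1024 / 1024

# 10,000 users launching one application per hour, all day (24 launches):
print("%.0f MB per day" % daily_growth_mb(10000, 24))  # ~128 MB
```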
Streaming Servers and network bandwidth

RTSP/S does not include tools to limit the use of network bandwidth. This is why it is highly recommended that you only stream applications between networks with a high-speed link. Even though, for Streaming Servers, the process of delivering applications does not translate into high processor or memory usage, using secure communications with RTSPS or HTTPS introduces a minimal overhead you should consider.

App-V Client cache

The client cache is another option you can combine with the selected streaming strategy. Having a large cache on each client translates into lower network usage. You should also evaluate this when you start sequencing applications: the size of the App-V packages will let you estimate the proper amount of cache needed.

Software licenses

Application Virtualization touches a significant matter in many organizations: application licensing. Application Virtualization can maintain a central point for software licenses, allowing you to keep track of the current licensing situation of all your applications. Using named licenses on each App-V package, you can guarantee that only users who have the appropriate license can run the application. And if we are using concurrent licenses for an application, App-V license management will only let the application run the number of concurrent instances that is permitted. But you must also be cautious with the acquired licenses: not all applications support virtualization. For example, some applications depend on, and are attached to, particular hardware components, like a MAC address.

Virtualization support by the application vendor

Not all applications are suitable for virtualization. Each App-V package generates its own virtual environment, but some applications require a high degree of integration with the operating system, making the virtualized application unstable or incapable of working. A good example is antivirus software.

Installing the App-V Management Console on a different machine

Installing the App-V Management Console on a different machine is possible but not simple. The App-V team created a configuration guide to achieve this, which you can access at the official Microsoft App-V blog: http://blogs.technet.com/b/appv/archive/2009/04/21/app-v-4-5-remote-consoleconfiguration-guide.aspx.

An online assessment tool to achieve Dynamic IT

Once we run this wizard-like tool, we receive a complete report on how to optimize our infrastructure in areas like Identity and Access; Desktop, Device, and Server Management; Security and Networking; Data Protection; and IT Process. Access the online tool at http://www.microsoft.com/infrastructure/about/assessment-start.aspx.

IT Compliance Management Series

Guidelines oriented to IT governance, risk, and compliance requirements. Download the series from the Microsoft Download Center at http://www.microsoft.com/downloads/en/default.aspx.

Windows Optimized Desktop Scenarios Solution Accelerator

This is a guideline for properly planning and designing applications and operating systems in your organization. This accelerator will be useful when we start thinking about App-V. More information is available at http://www.microsoft.com/infrastructure/resources/desktop-accelerators.aspx.

Infrastructure Planning and Design Guides for Virtualization

Complete references for designing a virtualization strategy; you will find specialist guides for App-V, Remote Desktop Services (formerly known as Terminal Services), System Center Virtual Machine Manager, Windows Server Virtualization, Desktop Virtualization, and so on. More information is available at http://technet.microsoft.com/en-us/solutionaccelerators/ee395429.aspx.