
How-To Tutorials - Virtualization

115 Articles

Getting started with Vagrant

Timothy Messier
25 Sep 2014
6 min read
As developers, some of the most frustrating bugs are those that only happen in production. Continuous integration servers have gone a long way toward preventing these configuration bugs, but wouldn't it be nice to avoid them altogether? Developers' machines tend to be a mess of software. Multiple versions of languages, random services with manually tweaked configs, and debugging tools all contribute to a cluttered and unpredictable environment.

Sandboxes

Many developers are familiar with sandboxing development. Tools like virtualenv and rvm were created to install packages in isolated environments, allowing multiple versions of packages to be installed on the same machine. Newer platforms like Node.js install packages in a sandbox by default. These are very useful tools for managing package versions in both development and production, but they also lead to some of the aforementioned clutter. Additionally, these tools are specific to particular languages. They do not allow for easily installing multiple versions of services like databases. Luckily, there is Vagrant to address all these issues (and a few more).

Getting started

Vagrant at its core is simply a wrapper around virtual machines. One of Vagrant's strengths is its ease of use. With just a few commands, virtual machines can be created, provisioned, and destroyed.

First grab the latest download for your OS from Vagrant's download page (https://www.vagrantup.com/downloads.html).

NOTE: For Linux users, although your distro may have a version of Vagrant via its package manager, it is most likely outdated. You probably want to use the version from the link above to get the latest features.

Also install VirtualBox (https://www.virtualbox.org/wiki/Downloads) using your preferred method. Vagrant supports other virtualization providers as well as Docker, but VirtualBox is the easiest to get started with.

When most Vagrant commands are run, they look for a file named Vagrantfile (or vagrantfile) in the current directory. All configuration for Vagrant is done in this file, and it is isolated to a particular Vagrant instance. Create a new Vagrantfile:

```
$ vagrant init hashicorp/precise64
```

This creates a Vagrantfile using the base virtual image hashicorp/precise64, which is a stock Ubuntu 12.04 image provided by HashiCorp. This file contains many useful comments about the various configurations that can be done. If this command is run with the --minimal flag, it will create a file without comments, like the following:

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
end
```

Now create the virtual machine:

```
$ vagrant up
```

This isn't too difficult. You now have a nice clean machine in just two commands. What did vagrant up do? First, if you did not already have the base box, hashicorp/precise64, it was downloaded from Vagrant Cloud. Then, Vagrant made a new machine based on the current directory name and a unique number. Once the machine was booted, it created a shared folder between the host's current directory and the guest's /vagrant. To access the new machine, run:

```
$ vagrant ssh
```

Provisioning

At this point, additional software needs to be installed. While a clean Ubuntu install is nice, it does not have the software to develop or run many applications. While it may be tempting to just start installing services and libraries with apt-get, that would just start to cause the same old clutter. Instead, use Vagrant's provisioning infrastructure. Vagrant has support for all of the major provisioning tools like Salt (http://www.saltstack.com/), Chef (http://www.getchef.com/chef/), or Ansible (http://www.ansible.com/home). It also supports calling shell commands. To keep this post focused on Vagrant, an example using the shell provisioner will be used. However, to unleash the full power of Vagrant, use the same provisioner used for production systems. This will enable the virtual machine to be configured using the same provisioning steps as production, thus ensuring that the virtual machine mirrors the production environment.

To continue this example, add a provision section to the Vagrantfile:

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.provision "shell", path: "provision.sh"
end
```

This tells Vagrant to run the script provision.sh. Create this file with some install commands:

```bash
#!/bin/bash

# Install nginx
apt-get -y install nginx
```

Now tell Vagrant to provision the machine:

```
$ vagrant provision
```

NOTE: When bringing a machine up for the first time, provisioning is run automatically. To force provisioning on a subsequent vagrant up, call vagrant up --provision.

The virtual machine should now have nginx installed with the default page being served. To access this page from the host, port forwarding can be set in the Vagrantfile:

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", path: "provision.sh"
end
```

For the forwarding to take effect, the virtual machine must be restarted:

```
$ vagrant halt && vagrant up
```

Now go to http://localhost:8080 in a browser to see the nginx welcome page.

Next steps

This simple example shows how easy Vagrant is to use, and how quickly machines can be created and configured. However, to truly battle issues like "it works on my machine", Vagrant needs to leverage the same provisioning tools that are used for production servers. If those provisioning scripts are cleanly implemented, Vagrant can easily leverage them and make creating a pristine production-like environment easy. Since all configuration is contained in a single file, it becomes simple to include a Vagrant configuration along with code in a repository. This allows developers to easily create identical environments for testing code.

Final notes

When evaluating any open source tool, at some point you need to examine the health of the community supporting it. In the case of Vagrant, the community seems very healthy. The primary developer continues to be responsive about bugs and improvements, despite having launched a new company in the wake of Vagrant's success. New features continue to roll in, all the while keeping a very stable product. All of this is good news, since no other tool seems to make creating clean sandbox environments as effortless as Vagrant does.
About the author

Timothy Messier is involved in many open source devops projects, including Vagrant and Salt, and can be contacted at tim.messier@gmail.com.


Snapshots

Packt
25 Feb 2014
8 min read
(For more resources related to this topic, see here.)

Much ado about snapshots (Intermediate)

Snapshots are a fantastic feature of VMware Fusion because they allow you to roll the VM back in time to a previously saved state. Using snapshots is easy, but understanding how they work is important.

Now, first things first. A snapshot is not a backup, but rather a way to either safely roll back in time or to keep multiple configurations of an OS that share the same basic configuration. The latter is very handy when building websites. For example, you can have one snapshot with IE7, another with IE8, another with IE9, another with IE10, and so on. A backup is a separate copy of the entire VM and/or its contents ("Your VM" and "Your Data") on a different disk or backup service. A snapshot is about rolling back in time on the same machine. If you took a snapshot when we finished installing Windows 7 but before upgrading to Windows 8, you can easily switch back and forth between Windows 7 and 8 by simply restoring the saved state. Let's see how.

Getting ready

Firstly, the VM doesn't have to be running, but it can be. The snapshot feature is powerful enough to work even when the VM is still running, but it goes much faster if the virtual machine is powered off or suspended. We can use the snapshot we took when we finished installing Windows 7. If you didn't take a snapshot at that time, you can go ahead and take a new snapshot now by clicking on the Take button in the snapshot window.

How to do it...

Snapshots are best taken when a VM is powered off. It doesn't have to be, but your computer will complete the "Take Snapshot" operation much faster if the VM is powered off or suspended. Both the fully powered-off and suspended cases are much faster because the VM isn't in motion when the snapshot is taken, allowing the operation to finish in a single pass. Otherwise, when a VM is running, the snapshot mechanism has to gather the data that changed while the snapshot operation was in progress. So, if it took five minutes to take a snapshot, it then has to catch up on those five minutes. That might take a minute. After that, it has to go back again and gather that last minute. If that minute takes 20 seconds, it then has to gather those 20 seconds as well. This is made worse the more you are doing within the virtual machine. So, get it done in one motion by suspending or powering off the VM first.

Launching the snapshot window and examining the tree

The following sequence of steps is used to initiate the snapshot process:

1. Click on the Snapshots button in the VM window and have a look at the snapshot interface. In my example, the following was the view of my "tree" right after we finished installing Windows 7.
2. When I finished upgrading to Windows 8, I took another snapshot. This allows me to go back in time to both a fresh Windows 7 and a fresh Windows 8 installation, as shown in the following screenshot.

Restoring a snapshot

Having a TARDIS or DeLorean might be more fun, but for the rest of us, we can go back in time by restoring a snapshot. Let's go back to Windows 7 from our Windows 8 VM. Follow these steps:

1. In the Snapshot Manager window, simply double-click on the base disk at the top of the tree to restore it. It will ask about saving the current state. Choose Save when prompted, as shown in the following screenshot. You can rename the snapshot at any time from this window by right-clicking on the name and clicking on Get Info.
2. After a few seconds, depending on the speed of your Mac, the older version should now show as the Current State, as shown in the following screenshot. If the VM was running, it should now show up as Windows 7. If you see a spinning wheel in the upper-right corner, that's the "disk cleanup" activity working in the background. You can use the VM while it's doing this; however, disk access might be a bit slow while it's cleaning up the disks. If the VM is suspended or powered off when restoring, the operation is much faster because the VM isn't changing/running.

With this technique, you can switch between Windows 7 and Windows 8 with ease.

How it works...

In Fusion, all of the VM's files are stored under Documents | Virtual Machines by default. Your C: drive in Windows is actually a series of files on the Mac, named in sequence with a .vmdk extension, inside the Virtual Machines folder, as shown in the following screenshot. You can view the files by right-clicking on the VM in the Virtual Machines folder in the Finder and clicking on Show Package Contents.

When you create a VM, it starts with one virtual disk (called the base disk). This virtual disk, or VMDK, is broken up into 2 GB "chunks" by default, but it can be one big chunk if desired. So, for a 20 GB disk, you end up with about 10 or 11 .vmdk files. This is for easy transport with drives that don't support large files (such as MS-DOS/FAT32-formatted drives), and you may also see a performance benefit in certain cases.

When you take a snapshot, the currently active VMDK goes into read-only mode, and a new VMDK is created. All writes go to the new VMDK, and reads happen from the original VMDK when the bits are there. Fusion is smart enough to keep track of which files hold what; so, when the VM is running, Fusion is reading all of the snapshots in the current state's chain.

A .vmdk file is thus named <disk_name>-<snapshot_number>-<slice>.vmdk. So, in my example, Virtual Disk is my disk name. (I could have customized and specified something different by performing a Custom Virtual Machine operation at the beginning.) I have three disks: Virtual Disk, Virtual Disk-000001, and Virtual Disk-000003. This means I have two snapshots and a base disk. (I took one snapshot and deleted it, which is why there's no Virtual Disk-000002.) Each of those disks is of 60 GB capacity, so there are 31 slices (s001 to s031). Each file starts at around 300 KB and can grow to just over 2 GB.

You can see where things can start to get confusing now. It gets even better when you have snapshots that are based on snapshots. You can have multiple snapshots with a common parent, which introduces a new concept in Fusion, that is, snapshot trees.

Snapshots are also a great way to make sure something new isn't going to destroy your VM. So, if you are about to install some software that might be risky, take a snapshot. It's easy to roll back if something goes wrong.

There's more...

Snapshots are complicated, but there's great material out there by the gurus behind Fusion itself about how they work on a more technical level.

More information

You can read more about snapshots from Eric Tung, one of the original developers of Fusion (the blog is a bit old, but still completely accurate with respect to how snapshots work), at http://blogs.vmware.com/teamfusion/2008/11/vmware-fusion-3.html. A great article by Eric to dispel some of the confusion around snapshots and how to use them is available at http://blogs.vmware.com/teamfusion/2008/11/bonustip-snaps.html.

One thing to note is that the more snapshots you have, the more effort the Mac has to make to "glue" them all together when you're running your virtual machine. As a rule of thumb, don't take snapshots and keep them around forever if you don't intend to roll back to them regularly. Also, each snapshot can grow to be the size of the entire C: drive in Windows. Use them when necessary, but be aware of their performance and disk-usage costs.

Summary

In this article, we studied snapshots and their usage, and saw how to use them to keep both Windows 7 and Windows 8 in one virtual machine.

Resources for Article:

Further resources on this subject:
An Introduction to VMware Horizon Mirage [Article]
Windows 8 with VMware View [Article]
Securing vCloud Using the vCloud Networking and Security App Firewall [Article]


Oracle VM Management

Packt
16 Oct 2009
6 min read
Before we get to managing the VMs in Oracle VM Manager, let's take a quick look at Oracle VM Manager by logging into it.

Getting started with Oracle VM Manager

In this article, we will perform the following actions while exploring the Oracle VM Manager:

- Registering an account
- Logging in to Oracle VM Manager
- Creating a Server Pool

After we are done with the Oracle VM Manager installation, we will use one of the following links to log on to the Oracle VM Manager:

- Within the local machine: http://127.0.0.1:8888/OVS
- Logging in remotely: http://vmmgr:8888/OVS

Here, vmmgr refers to the host name or IP address of your Oracle VM Manager host.

How to register an account

Registering an account can be done in several ways. If, during the installation of Oracle VM Manager, we chose to configure the default admin account "admin", then we can use this account directly to log on to Oracle's IntraCloud portal, which we call Oracle VM Manager. We will explain later, in detail, user accounts and why we would need separate accounts with separate roles for fine-grained access control; something that is crucial for security purposes. So let's have a quick look at the three available options:

- Default installation: This option applies if we have performed the default installation ourselves and have gone ahead to create the account ourselves. Here we have the default administrator role.
- Request for account creation: Contacting the administrator of Oracle VM Manager is another way to attain an account with privileges such as administrator, manager, and user.
- Create yourself: If we need to conduct the basic functions of a common user with an operator's role, such as creating and using virtual machines or importing resources, we can create a new account ourselves. However, we will need the administrator to assign server pools and groups to our account before we can get started. Here, by default, we are granted a user role. We will talk more about roles later in this article.

Now let's go about registering a new account with Oracle VM Manager. Once on the Oracle VM Manager Login page, click on the Register link. We are presented with the following screen. We must enter a Username of our choice and a hard-to-crack password twice. Also, we have to fill in our First Name and Last Name and complete the registration with a valid email address. Click Next.

Next, we need to confirm our account details by clicking on the Confirm button. Our account will now be created and a confirmation message is displayed on the Oracle VM Manager Login screen. It should be noted that we will need some Server Pools and groups before we can get started. We will have to ask the administrator to assign us access to those pools and groups. It's time now to log in to our newly created account.

Logging in to Oracle VM Manager

Again, we will need to access the URL either locally by typing http://127.0.0.1:8888/OVS or remotely by typing http://hostname:8888/OVS. If we are accessing the Oracle VM Manager Portal remotely, replace "hostname" with either the FQDN (Fully Qualified Domain Name), if the machine is registered in our DNS, or just the hostname of the VM Manager machine. We can log in to the portal by simply typing in the Username and Password that we just created. Depending on the role and the server pools that we have been assigned, we will be shown the tabs listed in the following table. To change the role, we will need to contact our enterprise domain administrator. Only administrators are allowed to change the roles of accounts. If we forget our password, we can click on Forgot Password and, on submitting our account name, the password will be sent to the registered email address that we provided when we registered the account.

The following table lists the tabs that are displayed for each of the Oracle VM Manager roles:

Role           Grants
User           Virtual Machines, Resources
Administrator  Virtual Machines, Resources, Servers, Server Pools, Administration
Manager        Virtual Machines, Resources, Servers, Server Pools

We can obviously change the roles by editing the Profile (in the upper-right section of the portal). As can be seen in the following screenshot, we have access to the Virtual Machines pane and the Resources pane. We will continue to add Servers to the pool when logged in as admin.

Oracle VM management: Managing Server Pools

A Server Pool is logically an autonomous region that contains one or more physical servers, and the dynamic nature of such pools, and pools of pools, makes up what we call an infinite Cloud infrastructure. Currently, Oracle has its Cloud portal with Amazon, but it is very much viable to have an IntraCloud portal or private Cloud where we can run all sorts of Linux and Windows flavors on our Cloud backbone. It eventually rests on the array of SAN, NAS, or other next-generation storage substrate on which the VMs reside. We must ensure that we have the following prerequisites properly checked before creating the Virtual Machines on our IntraCloud Oracle VM:

- Oracle VM Servers: These are available to deploy as Utility Master, Server Master pool, and Virtual Machine Servers.
- Repositories: Used for Live Migration or Hot Migration of the VMs and for local storage on the Oracle VM Servers.
- FQDN/IP address of Oracle VM Servers: It is better to have the Oracle VM Servers known as OracleVM01.AVASTU.COM and OracleVM02.AVASTU.COM. This way you don't have to bother about IP changes or infrastructural relocation of the IntraCloud to another location.
- Oracle VM Agent passwords: Needed to access the Oracle VM Servers.

Let's now go about exploring the designing process of the Oracle VM. Then we will do the following systematically:

- Creating the Server Pool
- Editing Server Pool information
- Search and retrieval within a Server Pool
- Restoring a Server Pool
- Enabling HA
- Deleting a Server Pool

However, we can carry out these actions only as a Manager or an Administrator. But first, let's take a look at the decisions on what type of Server Pools will suit us best and what the architectural considerations could be around building your Oracle VM farm.


Virtual Machine Design

Packt
21 May 2014
8 min read
(For more resources related to this topic, see here.)

Causes of virtual machine performance problems

In a perfect virtual infrastructure, you will never experience any performance problems and everything will work well within the budget that you allocated. But should something go wrong in this perfect utopian datacenter you've designed, hopefully this section will help you identify and resolve the problems more easily.

CPU performance issues

The following is a summary of some of the common CPU performance issues you may experience in your virtual infrastructure. While this is not an exhaustive list of every possible problem you can experience with CPUs, it can help guide you in the right direction to solve CPU-related performance issues:

- High ready time: When your ready time is above 10 percent, this could indicate CPU contention and could be impacting the performance of any CPU-intensive applications. This is not a guarantee of a problem; applications that are not as sensitive can report high values and still perform well within guidelines. CPU ready time is reported in milliseconds; to convert it to a percentage of the sample interval, see KB 2002181 (and the sketch after this list).
- High costop time: The costop time will often correlate to contention in multi-vCPU virtual machines. Costop time exceeding 10 percent could cause challenges when vSphere tries to schedule all vCPUs of your multi-vCPU servers.
- CPU limits: As discussed earlier, you will often experience performance problems if your virtual machine tries to use more resources than have been configured in your limits.
- Host CPU saturation: When the vSphere host utilization runs above 80 percent, you may experience host saturation issues. This can introduce performance problems across the host as the CPU scheduler tries to assign resources to virtual machines.
- Guest CPU saturation: This is experienced on high utilization of vCPU resources within the operating system of your virtual machines. This can be mitigated, if required, by adding additional vCPUs to improve the performance of the application.
- Misconfigured affinity: Affinity is enabled by default in vSphere; however, if it is manually configured to tie a VM to a specific physical CPU, problems can be encountered. This is often seen when creating a VM with affinity settings and then cloning the VM. VMware advises against manually configuring affinity.
- Oversizing vCPUs: When assigning multiple vCPUs to a virtual machine, you want to ensure that the operating system is able to take advantage of the CPUs and threads, and that your applications can support them. The overhead associated with unused vCPUs can impact other applications and resource scheduling within the vSphere host.
- Low guest usage: Sometimes poor performance combined with low CPU utilization helps identify the problem as I/O or memory. An underused CPU is often a good indicator that the cause lies with other resources or with configuration.
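The ready-time guideline above is easier to apply once the millisecond counter is turned into a percentage. The following is a minimal sketch of that conversion, not from the article: the 20,000 ms interval is the real-time chart's default 20-second sample described in KB 2002181, and the example numbers are invented.

```c
/*
 * Illustrative arithmetic only: convert a CPU ready summation value
 * (milliseconds, as shown in the vSphere performance charts) into a
 * percentage of the chart's sample interval (KB 2002181).
 */
#include <stdio.h>

static double ready_percent(double ready_ms, double interval_ms)
{
    /* Fraction of the sample interval spent in the ready state. */
    return (ready_ms / interval_ms) * 100.0;
}

int main(void)
{
    /* Example: 2,000 ms of ready time in a 20 s real-time sample = 10%. */
    printf("CPU ready: %.1f%%\n", ready_percent(2000.0, 20000.0));
    return 0;
}
```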
While VMware vSphere incorporates some creative mechanisms to leverage and maximize the amount of available memory through features such as page sharing, memory management, and resource-allocation controls, there are several memory features that will only take effect when the host is under stress. Transparent page sharing: This is the method by which redundant copies of pages are eliminated. TPS, enabled by default, will break up regular pages into 4 KB chunks for better performance. When virtual machines have large physical pages (2 MB instead of 4 KB), vSphere will not attempt to enable TPS for these as the likelihood of multiple 2 MB chunks being similar is less than 4 KB. This can cause a system to experience memory overcommit and performance problems may be experienced; if memory stress is then experienced, vSphere may break these 2 MB chunks into 4 KB chunks to allow TPS to consolidate the pages. Host memory consumed: When measuring utilization for capacity planning, the value of host memory consumed can often be deceiving as it does not always reflect the actual memory utilization. Instead, the active memory or memory demand should be used as a better guide of actual memory utilized as features such as TPS can reflect a more accurate picture of memory utilization. Memory over-allocation: Memory over-allocation will usually be fine for most applications in most environments. It is typically safe to have over 20 percent memory allocation especially with similar applications and operating systems. The more similarity you have between your applications and environment, the higher you can take that number. Swap to disk: If you over-allocate your memory too high, you may start to experience memory swapping to disk, which can result in performance problems if not caught early enough. It is best, in those circumstances, to evaluate which guests are swapping to disk to help correct either the application or the infrastructure as appropriate. For additional details on vSphere Memory management and monitoring, see KB 2017642. Storage performance issues When it comes to storage performance issues within your virtual machine infrastructure, there are a few areas you will want to pay particular attention to. Although most storage-related problems you are likely to experience will be more reliant upon your backend infrastructure, the following are a few that you can look at when identifying if it is the VM's storage or the SAN itself: Storage latency: Latency experienced at the storage level is usually expressed as a combination of the latency of the storage stack, guest operating system, VMkernel virtualization layer, and the physical hardware. Typically, if you experience slowness and are noticing high latencies, one or more aspects of your storage could be the cause. Three layers of latency: ESXi and vCenter typically report on three primary latencies. These are Guest Average Latency (GAVG), Device Average Latency (DAVG), and Kernel Average Latency (KAVG). Guest Average Latency (GAVG): This value is the total amount of latency that ESXi is able to detect. This is not to say that it is the total amount of latency being experienced but is just the figure of what ESXi is reporting against. So if you're experiencing a 5 ms latency with GAVG and a performance application such as Perfmon is identifying a storage latency of 50 ms, something within the guest operating system is incurring a penalty of 45 ms latency. 
Storage performance issues

When it comes to storage performance issues within your virtual machine infrastructure, there are a few areas you will want to pay particular attention to. Although most storage-related problems you are likely to experience depend more on your backend infrastructure, the following are a few things to look at when determining whether the problem lies with the VM's storage or with the SAN itself:

- Storage latency: Latency experienced at the storage level is usually expressed as a combination of the latency of the storage stack, guest operating system, VMkernel virtualization layer, and the physical hardware. Typically, if you experience slowness and are noticing high latencies, one or more aspects of your storage could be the cause.
- Three layers of latency: ESXi and vCenter typically report on three primary latencies: Guest Average Latency (GAVG), Device Average Latency (DAVG), and Kernel Average Latency (KAVG).
- Guest Average Latency (GAVG): This value is the total amount of latency that ESXi is able to detect. This is not to say that it is the total amount of latency being experienced, but rather the figure that ESXi reports against. So if ESXi reports a GAVG of 5 ms and a tool inside the guest such as Perfmon is identifying a storage latency of 50 ms, something within the guest operating system is incurring a penalty of 45 ms of latency. In circumstances such as these, you should investigate the VM and its operating system to troubleshoot.
- Device Average Latency (DAVG): Device Average Latency tends to focus on the more physical side of things; for instance, whether the storage adapters, HBA, or interface is adding latency in communicating with the backend storage array. Problems experienced here tend to fall more on the storage itself and are less easily troubleshooted within ESXi; some exceptions are firmware or adapter drivers, which may introduce problems, and the queue depth of your HBA. More details on queue depth can be found in KB 1267.
- Kernel Average Latency (KAVG): Kernel Average Latency is not a directly measured number; it is calculated as "Total Latency - DAVG = KAVG", so when using this metric you should be wary of a few values. The typical value of KAVG should be zero; anything greater may be I/O moving through the kernel queue and can generally be dismissed. When KAVG is 2 ms or consistently greater, this may indicate a storage performance issue, and your VMs, adapters, and queues should be reviewed for bottlenecks or problems.

The following are some KB articles that can help you further troubleshoot virtual machine storage:

- Using esxtop to identify storage performance issues (KB 1008205)
- Troubleshooting ESX/ESXi virtual machine performance issues (KB 2001003)
- Testing virtual machine storage I/O performance for VMware ESX and ESXi (KB 1006821)

Network performance issues

Lastly, when it comes to addressing network performance issues, there are a few areas you will want to consider. As with storage performance issues, a lot of these are often addressed by the backend networking infrastructure. However, there are a few items you will want to investigate within the virtual machines to ensure network reliability:

- Networking error, IP already assigned to another adapter: This is a common problem experienced after V2V or P2V migrations, which results in ghosted network adapters. VMware KB 1179 guides you through the steps to remove these ghosted network adapters.
- Speed or duplex mismatch within the OS: Left at defaults, the virtual machine will use auto-negotiation to get maximum network performance; if configured down from that speed, this can introduce virtual machine limitations.
- Choose the correct network adapter for your VM: Newer operating systems should support the VMXNET3 adapter, while some virtual machines, either legacy or upgraded from previous versions, may run older network adapter types. See KB 1001805 to help decide which adapters are correct for your usage.

The following are some KB articles that can help you further troubleshoot virtual machine networking:

- Troubleshooting virtual machine network connection issues (KB 1003893)
- Troubleshooting network performance issues in a vSphere environment (KB 1004097)

Summary

With this article, you should be able to inspect existing VMs while following design principles that lead to correctly sized and deployed virtual machines. You should also have a better understanding of when your configuration is meeting your needs, and how to go about identifying performance problems associated with your VMs.

Resources for Article:

Further resources on this subject:
Introduction to vSphere Distributed switches [Article]
Network Virtualization and vSphere [Article]
Networking Performance Design [Article]


Symmetric Messages and Asynchronous Messages (Part 1)

Packt
05 May 2015
31 min read
In this article by Kingston Smiler. S, author of the book OpenFlow Cookbook, we describe the steps involved in sending and processing symmetric messages and asynchronous messages in the switch, covering the following recipes:

- Sending and processing a hello message
- Sending and processing an echo request and a reply message
- Sending and processing an error message
- Sending and processing an experimenter message
- Handling a Get Asynchronous Configuration message from the controller, which is used to fetch a list of asynchronous events that will be sent from the switch
- Sending a packet-in message to the controller
- Sending a flow-removed message to the controller
- Sending a port-status message to the controller
- Sending a controller role-status message to the controller
- Sending a table-status message to the controller
- Sending a request-forward message to the controller
- Handling a packet-out message from the controller
- Handling a barrier message from the controller

(For more resources related to this topic, see here.)

Symmetric messages can be sent from both the controller and the switch without any solicitation between them. The OpenFlow switch should be able to send and process the following symmetric messages to or from the controller, but error messages will not be processed by the switch:

- Hello message
- Echo request and echo reply message
- Error message
- Experimenter message

Asynchronous messages are sent by both the controller and the switch when there is any state change in the system. Like symmetric messages, asynchronous messages also should be sent without any solicitation between the switch and the controller. The switch should be able to send the following asynchronous messages to the controller:

- Packet-in message
- Flow-removed message
- Port-status message
- Table-status message
- Controller role-status message
- Request-forward message

Similarly, the switch should be able to receive, or process, the following controller-to-switch messages:

- Packet-out message
- Barrier message

The controller can program or instruct the switch to send a subset of interested asynchronous messages using an asynchronous configuration message. Based on this configuration, the switch should send only that subset of asynchronous messages via the communication channel. The switch should replicate and send asynchronous messages to all the controllers based on the information present in the asynchronous configuration message sent from each controller. The switch should maintain asynchronous configuration information on a per-communication-channel basis.

Sending and processing a hello message

The OFPT_HELLO message is used by both the switch and the controller to identify and negotiate the OpenFlow version supported by both devices. Hello messages should be sent from the switch once the TCP/TLS connection is established and are considered part of the communication channel establishment procedure. The switch should send a hello message to the controller immediately after establishing the TCP/TLS connection with the controller.

How to do it...

As hello messages are transmitted by both the switch and the controller, the switch should be able to send, receive, and process the hello message. The following sections explain these procedures in detail.

Sending the OFPT_HELLO message

The message format to be used to send the hello message from the switch is as follows. This message includes the OpenFlow header along with zero or more elements that have variable size:
```c
/* OFPT_HELLO. This message includes zero or more
 * hello elements having variable size. */
struct ofp_hello {
    struct ofp_header header;
    /* Hello element list */
    struct ofp_hello_elem_header elements[0]; /* List of elements */
};
```

The version field in the ofp_header should be set with the highest OpenFlow protocol version supported by the switch. The elements field is an optional field and might contain the element definition, which takes the following TLV format:

```c
/* Version bitmap Hello Element */
struct ofp_hello_elem_versionbitmap {
    uint16_t type;   /* OFPHET_VERSIONBITMAP. */
    uint16_t length; /* Length in bytes of this element. */
    /* Followed by:
     * - Exactly (length - 4) bytes containing the bitmaps, then
     * - Exactly (length + 7)/8*8 - (length) (between 0 and 7)
     *   bytes of all-zero bytes */
    uint32_t bitmaps[0]; /* List of bitmaps - supported versions */
};
```

The type field should be set to OFPHET_VERSIONBITMAP. The length field should be set to the length of this element. The bitmaps field should be set with the list of the OpenFlow versions the switch supports. The number of bitmaps included in the field depends on the highest version number supported by the switch: ofp_versions 0 to 31 should be encoded in the first bitmap, ofp_versions 32 to 63 in the second bitmap, and so on. For example, if the switch supports only version 1.0 (ofp_version = 0x01) and version 1.3 (ofp_version = 0x04), then the first bitmap should be set to 0x00000012.

Refer to the send_hello_message() function in the of/openflow.c file for the procedure to build and send the OFPT_HELLO message.

Receiving the OFPT_HELLO message

The switch should be able to receive and process the OFPT_HELLO messages sent from the controller. The controller uses the same message format, structures, and enumerations as defined in the previous section of this recipe. Once the switch receives the hello message, it should calculate the protocol version to be used for messages exchanged with the controller. The procedure required to calculate the protocol version to be used is as follows:

- If the hello message received from the controller contains an optional OFPHET_VERSIONBITMAP element and the bitmap field contains a valid value, then the negotiated version should be the highest common version between the protocol versions supported by the switch and the bitmap field in the OFPHET_VERSIONBITMAP element.
- If the hello message doesn't contain any OFPHET_VERSIONBITMAP element, then the negotiated version should be the smaller of the switch-supported protocol version and the version field set in the OpenFlow header of the received hello message.
- If the negotiated version is supported by the switch, then the OpenFlow connection between the controller and the switch continues. Otherwise, the switch should send an OFPT_ERROR message with the type field set as OFPET_HELLO_FAILED, the code field set as OFPHFC_INCOMPATIBLE, and an optional ASCII string explaining the situation in the data, and terminate the connection.

There's more…

Once the switch and the controller negotiate the OpenFlow protocol version to be used, the connection setup procedure is complete. From then on, both the controller and the switch can send OpenFlow protocol messages to each other.
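To make the bitmap handling concrete, the following is a minimal sketch of selecting the highest common version from two OFPHET_VERSIONBITMAP bitmaps. The helper name and the hard-coded bitmaps are assumptions for illustration; this is not the of/openflow.c code referred to above.

```c
/*
 * Sketch of the version negotiation described in this recipe.
 * Bit i of bitmaps[i / 32] marks support for ofp_version i.
 */
#include <stdint.h>
#include <stdio.h>

/* Highest common version between the two bitmaps, or -1 if none. */
static int negotiate_version(const uint32_t *ours, const uint32_t *theirs, int nwords)
{
    for (int word = nwords - 1; word >= 0; word--) {
        uint32_t common = ours[word] & theirs[word];
        for (int bit = 31; bit >= 0; bit--)
            if (common & (1u << bit))
                return word * 32 + bit;
    }
    return -1; /* no common version: reply with OFPET_HELLO_FAILED / OFPHFC_INCOMPATIBLE */
}

int main(void)
{
    uint32_t switch_versions[1]     = { 0x00000012 }; /* 1.0 (bit 1) and 1.3 (bit 4), as in the text */
    uint32_t controller_versions[1] = { 0x00000010 }; /* 1.3 only */

    printf("negotiated ofp_version = %d\n",
           negotiate_version(switch_versions, controller_versions, 1)); /* prints 4 */
    return 0;
}
```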
Sending and processing an echo request and a reply message

Echo request and reply messages are used by both the controller and the switch to maintain and verify the liveliness of the controller-switch connection. Echo messages are also used to calculate the latency and bandwidth of the controller-switch connection. On reception of an echo request message, the switch should respond with an echo reply message.

How to do it...

As echo messages are transmitted by both the switch and the controller, the switch should be able to send, receive, and process them. The following sections explain these procedures in detail.

Sending the OFPT_ECHO_REQUEST message

The OpenFlow specification doesn't specify how frequently this echo message has to be sent from the switch. However, the switch might choose to send an echo request message periodically to the controller at a configured interval. Similarly, the OpenFlow specification doesn't mention what the timeout (the longest period of time the switch should wait) for receiving an echo reply message from the controller should be. After sending an echo request message to the controller, the switch should wait for the echo reply message for the configured timeout period. If the switch doesn't receive the echo reply message within this period, then it should initiate the connection interruption procedure.

The OFPT_ECHO_REQUEST message contains an OpenFlow header followed by an undefined data field of arbitrary length. The data field might be filled with the timestamp at which the echo request message was sent, with various lengths or values to measure the bandwidth, or be zero-sized for just checking the liveliness of the connection. In most open source implementations of OpenFlow, the echo request message contains only the header field and doesn't contain any body.

Refer to the send_echo_request() function in the of/openflow.c file for the procedure to build and send the echo request message.

Receiving the OFPT_ECHO_REQUEST message

The switch should be able to receive and process OFPT_ECHO_REQUEST messages sent from the controller. The controller uses the same message format, structures, and enumerations as defined in the previous section of this recipe. Once the switch receives the echo request message, it should build the OFPT_ECHO_REPLY message. This message consists of ofp_header and an arbitrary-length data field. While forming the echo reply message, the switch should copy the content present in the arbitrary-length field of the request message into the reply message.

Refer to the process_echo_request() function in the of/openflow.c file for the procedure to handle and process the echo request message and send the echo reply message.

Processing the OFPT_ECHO_REPLY message

The switch should be able to receive the echo reply message from the controller. If the switch sent the echo request message to calculate the latency or bandwidth, then on receiving the echo reply message it should parse the arbitrary-length data field and can calculate the bandwidth, latency, and so on.

There's more…

If the OpenFlow switch implementation is divided into multiple layers, then the processing of the echo request and reply should be handled in the deepest possible layer. For example, if the OpenFlow switch implementation is divided into user-space processing and kernel-space processing, then the echo request and reply message handling should be in the kernel space.
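As a sketch of the copy-the-payload rule described above, the following builds an echo reply from a received request. The ofp_header layout and the OFPT_ECHO_* type values follow the standard OpenFlow header; build_echo_reply() itself is an assumed helper for illustration, not the process_echo_request() implementation referred to in the recipe.

```c
/*
 * Sketch: an OFPT_ECHO_REPLY is the request with only the type changed;
 * the version, xid, length, and arbitrary data field are copied as-is.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>

struct ofp_header {
    uint8_t  version;
    uint8_t  type;
    uint16_t length; /* total message length, network byte order */
    uint32_t xid;    /* echoed back unchanged */
};

enum { OFPT_ECHO_REQUEST = 2, OFPT_ECHO_REPLY = 3 };

/* Returns a malloc'd reply the same size as the request; caller frees it. */
static struct ofp_header *build_echo_reply(const struct ofp_header *req)
{
    uint16_t len = ntohs(req->length);
    struct ofp_header *reply = malloc(len);
    if (!reply)
        return NULL;

    memcpy(reply, req, len);       /* copies header + arbitrary data field */
    reply->type = OFPT_ECHO_REPLY; /* same version, xid, and length as the request */
    return reply;
}
```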
Sending and processing an error message

Error messages are used by both the controller and the switch to notify the other end of the connection about any problem. Error messages are typically used by the switch to inform the controller about a failure to execute a request sent from the controller.

How to do it...

Whenever the switch wants to send an error message to the controller, it should build an OFPT_ERROR message, which takes the following message format:

```c
/* OFPT_ERROR: Error message (datapath -> the controller). */
struct ofp_error_msg {
    struct ofp_header header;
    uint16_t type;
    uint16_t code;
    uint8_t data[0]; /* Variable-length data. Interpreted based
                        on the type and code. No padding. */
};
```

The type field indicates a high-level type of error. The code value is interpreted based on the type. The data value is a piece of variable-length data that is interpreted based on both the type and the code. The data field should contain an ASCII text string that adds details about why the error occurred.

Unless specified otherwise, the data field should contain at least 64 bytes of the failed message that caused this error. If the failed message is shorter than 64 bytes, then the data field should contain the full message without any padding. If the switch needs to send an error message in response to a specific message from the controller (say, OFPET_BAD_REQUEST, OFPET_BAD_ACTION, OFPET_BAD_INSTRUCTION, OFPET_BAD_MATCH, or OFPET_FLOW_MOD_FAILED), then the xid field of the OpenFlow header in the error message should be set to that of the offending request message.

Refer to the send_error_message() function in the of/openflow.c file for the procedure to build and send an error message. If the switch sends an error message for a request message from the controller (because of an error condition), then the switch need not send the reply message to that request.
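The 64-byte rule and the xid handling described above can be sketched as follows. The struct definitions mirror the ones quoted in this recipe; the helper name and the fixed 64-byte cap are assumptions for illustration rather than the send_error_message() implementation.

```c
/*
 * Sketch: build an OFPT_ERROR whose data field carries the leading bytes
 * (up to 64) of the offending request, and whose xid matches that request.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>

struct ofp_header {
    uint8_t  version;
    uint8_t  type;
    uint16_t length;
    uint32_t xid;
};

struct ofp_error_msg {
    struct ofp_header header;
    uint16_t type;    /* high-level error type (OFPET_*)            */
    uint16_t code;    /* interpreted based on type                  */
    uint8_t  data[0]; /* leading bytes of the failed message        */
};

enum { OFPT_ERROR = 1 };

static struct ofp_error_msg *build_error(const struct ofp_header *failed_req,
                                         uint16_t err_type, uint16_t err_code)
{
    uint16_t req_len  = ntohs(failed_req->length);
    uint16_t data_len = req_len < 64 ? req_len : 64; /* whole message if shorter than 64 bytes */
    uint16_t msg_len  = sizeof(struct ofp_error_msg) + data_len;

    struct ofp_error_msg *err = calloc(1, msg_len);
    if (!err)
        return NULL;

    err->header.version = failed_req->version;
    err->header.type    = OFPT_ERROR;
    err->header.length  = htons(msg_len);
    err->header.xid     = failed_req->xid;   /* xid of the offending request */
    err->type = htons(err_type);
    err->code = htons(err_code);
    memcpy(err->data, failed_req, data_len); /* no padding after the data */
    return err;
}
```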
Sending and processing an experimenter message

Experimenter messages provide a way for the switch to offer additional vendor-defined functionalities.

How to do it...

The controller sends the experimenter message in the experimenter message format. Once the switch receives this message, it should invoke the appropriate vendor-specific functions.

Handling a Get Asynchronous Configuration message from the controller

The OpenFlow specification provides a mechanism for the controller to fetch the list of asynchronous events that will be sent from the switch on the controller channel. This is achieved by sending the Get Asynchronous Configuration message (OFPT_GET_ASYNC_REQUEST) to the switch.

How to do it...

The Get Asynchronous Configuration request message (OFPT_GET_ASYNC_REQUEST) doesn't have any body other than ofp_header. On receiving this OFPT_GET_ASYNC_REQUEST message, the switch should respond with the OFPT_GET_ASYNC_REPLY message. The switch should fill the property list with the list of asynchronous configuration events / property types that the relevant controller channel is preconfigured to receive. The switch should get this information from its internal data structures.

Refer to the process_async_config_request() function in the of/openflow.c file for the procedure to process the Get Asynchronous Configuration request message from the controller.

Sending a packet-in message to the controller

Packet-in messages (OFPT_PACKET_IN) are sent from the switch to the controller to transfer a packet received from one of the switch ports to the controller for further processing. By default, a packet-in message should be sent to all the controllers that are in the equal (OFPCR_ROLE_EQUAL) and master (OFPCR_ROLE_MASTER) roles. This message should not be sent to controllers that are in the slave state. There are three ways by which the switch can send a packet-in event to the controller:

- Table-miss entry: When there is no matching flow entry for the incoming packet, the switch can send the packet to the controller.
- TTL checking: When the TTL value in a packet reaches zero, the switch can send the packet to the controller.
- The "send to the controller" action in the matching entry (either the flow table entry or the group table entry) of the packet.

How to do it...

When the switch wants to send a packet received in its data path to the controller, the following message format should be used:

```c
/* Packet received on port (datapath -> the controller). */
struct ofp_packet_in {
    struct ofp_header header;
    uint32_t buffer_id;     /* ID assigned by datapath. */
    uint16_t total_len;     /* Full length of frame. */
    uint8_t reason;         /* Reason packet is being sent (one of OFPR_*) */
    uint8_t table_id;       /* ID of the table that was looked up */
    uint64_t cookie;        /* Cookie of the flow entry that was looked up. */
    struct ofp_match match; /* Packet metadata. Variable size. */
    /* The variable size and padded match is always followed by:
     * - Exactly 2 all-zero padding bytes, then
     * - An Ethernet frame whose length is inferred from header.length.
     * The padding bytes preceding the Ethernet frame ensure that the IP
     * header (if any) following the Ethernet header is 32-bit aligned. */
    uint8_t pad[2];  /* Align to 64 bit + 16 bit */
    uint8_t data[0]; /* Ethernet frame */
};
```

The buffer_id field should be set to an opaque value generated by the switch. When the packet is buffered, the data portion of the packet-in message should contain some bytes of data from the incoming packet. If the packet is sent to the controller because of the "send to the controller" action of a table entry, then the max_len field of ofp_action_output should be used as the size of the packet to be included in the packet-in message. If the packet is sent to the controller for any other reason, then the miss_send_len field of the OFPT_SET_CONFIG message should be used to determine the size of the packet. If the packet is not buffered, either because of unavailability of buffers or because of an explicit configuration via OFPCML_NO_BUFFER, then the entire packet should be included in the data portion of the packet-in message with the buffer_id value set to OFP_NO_BUFFER.

The data field should be set to the complete packet or a fraction of the packet. The total_len field should be set to the length of the packet included in the data field. The reason field should be set to one of the following values defined in the enumeration, based on the context that triggers the packet-in event:

```c
/* Why is this packet being sent to the controller? */
enum ofp_packet_in_reason {
    OFPR_TABLE_MISS = 0,   /* No matching flow (table-miss flow entry). */
    OFPR_APPLY_ACTION = 1, /* Output to the controller in apply-actions. */
    OFPR_INVALID_TTL = 2,  /* Packet has invalid TTL */
    OFPR_ACTION_SET = 3,   /* Output to the controller in action set. */
    OFPR_GROUP = 4,        /* Output to the controller in group bucket. */
    OFPR_PACKET_OUT = 5,   /* Output to the controller in packet-out. */
};
```
If the packet-in message was triggered by the flow-entry "send to the controller" action, then the cookie field should be set with the cookie of the flow entry that caused the packet to be sent to the controller. This field should be set to -1 if the cookie cannot be associated with a particular flow.

When the packet-in message is triggered by the "send to the controller" action of a table entry, there is a possibility that some changes have already been applied to the packet in previous stages of the pipeline. This information needs to be carried along with the packet-in message, and it can be carried in the match field of the packet-in message with a set of OXM (short for OpenFlow Extensible Match) TLVs. If the switch includes an OXM TLV in the packet-in message, then the match field should contain a set of OXM TLVs that include context fields. The standard context fields that can be added into the OXM TLVs are OFPXMT_OFB_IN_PORT, OFPXMT_OFB_IN_PHY_PORT, OFPXMT_OFB_METADATA, and OFPXMT_OFB_TUNNEL_ID.

When the switch receives the packet on a physical port and this packet information needs to be carried in the packet-in message, then OFPXMT_OFB_IN_PORT and OFPXMT_OFB_IN_PHY_PORT should have the same value, which is the OpenFlow port number of that physical port. When the switch receives the packet on a logical port and this packet information needs to be carried in the packet-in message, then the switch should set the logical port's port number in OFPXMT_OFB_IN_PORT and the physical port's port number in OFPXMT_OFB_IN_PHY_PORT. For example, consider a packet received on a tunnel interface defined over a Link Aggregation Group (LAG) with two member ports. Then the packet-in message should carry the tunnel interface's port_no in the OFPXMT_OFB_IN_PORT field and the physical interface's port_no in the OFPXMT_OFB_IN_PHY_PORT field.

Refer to the send_packet_in_message() function in the of/openflow.c file for the procedure to send a packet-in message event to the controller.

How it works...

The switch can send either the entire packet it receives from the switch port to the controller, or a fraction of the packet. When the switch is configured to send only a fraction of the packet, it should buffer the packet in its memory and send a portion of the packet data. This is controlled by the switch configuration. If the switch is configured to buffer the packet, and it has sufficient memory to buffer it, then the packet-in message should contain the following:

- A fraction of the packet. This is the size of the packet to be included in the packet-in message, configured via the switch configuration message. By default, it is 128 bytes. When the packet-in message results from a table-entry action, then the output action itself can specify the size of the packet to be sent to the controller. For all other packet-in messages, it is defined in the switch configuration.
- The buffer ID to be used by the controller when the controller wants to forward the message at a later point in time.

There's more…

A switch that implements buffering should be expected to expose some details, such as the amount of available buffers, the period of time the buffered data will be available, and so on, through documentation. The switch should implement a procedure to release the buffered packet when there is no response from the controller to the packet-in event.
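The buffering rules in the "How it works..." section can be summarized in a small decision sketch. OFP_NO_BUFFER and OFPCML_NO_BUFFER are the constants mentioned in the text (shown with their usual spec values); the struct and helper names are invented for illustration and are not part of of/openflow.c.

```c
/*
 * Sketch: decide how much of the received frame goes into the packet-in
 * data field and which buffer_id is reported.
 */
#include <stdint.h>

#define OFP_NO_BUFFER    0xffffffffu
#define OFPCML_NO_BUFFER 0xffffu

struct packet_in_plan {
    uint32_t buffer_id;  /* switch-assigned ID, or OFP_NO_BUFFER  */
    uint16_t data_bytes; /* bytes of the frame copied into data[] */
};

/*
 * frame_len      - full length of the received frame (also goes in total_len)
 * max_len        - from the output action (table-entry case) or miss_send_len
 * can_buffer     - whether buffers are available and buffering is enabled
 * next_buffer_id - opaque ID the switch would assign to the buffered packet
 */
static struct packet_in_plan plan_packet_in(uint16_t frame_len, uint16_t max_len,
                                            int can_buffer, uint32_t next_buffer_id)
{
    struct packet_in_plan p;

    if (!can_buffer || max_len == OFPCML_NO_BUFFER) {
        /* Not buffered: include the whole frame and report no buffer ID. */
        p.buffer_id  = OFP_NO_BUFFER;
        p.data_bytes = frame_len;
    } else {
        /* Buffered: include only a fraction (128 bytes by default). */
        p.buffer_id  = next_buffer_id;
        p.data_bytes = frame_len < max_len ? frame_len : max_len;
    }
    return p;
}
```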
Sending a flow-removed message to the controller

A flow-removed message (OFPT_FLOW_REMOVED) is sent from the switch to the controller when a flow entry is removed from the flow table. This message should be sent to the controller only when the OFPFF_SEND_FLOW_REM flag in the flow entry is set. The switch should send this message only on the controller channels over which the controller requested the switch to send this event. The controller can express its interest in receiving this event by sending the switch configuration message to the switch. By default, OFPT_FLOW_REMOVED should be sent to all the controllers that are in the equal (OFPCR_ROLE_EQUAL) and master (OFPCR_ROLE_MASTER) roles. This message should not be sent to a controller that is in the slave state.

How to do it...

When the switch removes an entry from the flow table, it should build an OFPT_FLOW_REMOVED message with the following format and send this message to the controllers that have already shown interest in this event:

```c
/* Flow removed (datapath -> the controller). */
struct ofp_flow_removed {
    struct ofp_header header;
    uint64_t cookie;        /* Opaque controller-issued identifier. */
    uint16_t priority;      /* Priority level of flow entry. */
    uint8_t reason;         /* One of OFPRR_*. */
    uint8_t table_id;       /* ID of the table */
    uint32_t duration_sec;  /* Time flow was alive in seconds. */
    uint32_t duration_nsec; /* Time flow was alive in nanoseconds
                               beyond duration_sec. */
    uint16_t idle_timeout;  /* Idle timeout from original flow mod. */
    uint16_t hard_timeout;  /* Hard timeout from original flow mod. */
    uint64_t packet_count;
    uint64_t byte_count;
    struct ofp_match match; /* Description of fields. Variable size. */
};
```

The cookie field should be set with the cookie of the flow entry, the priority field should be set with the priority of the flow entry, and the reason field should be set with one of the following values defined in the enumeration:

```c
/* Why was this flow removed? */
enum ofp_flow_removed_reason {
    OFPRR_IDLE_TIMEOUT = 0, /* Flow idle time exceeded idle_timeout. */
    OFPRR_HARD_TIMEOUT = 1, /* Time exceeded hard_timeout. */
    OFPRR_DELETE = 2,       /* Evicted by a DELETE flow mod. */
    OFPRR_GROUP_DELETE = 3, /* Group was removed. */
    OFPRR_METER_DELETE = 4, /* Meter was removed. */
    OFPRR_EVICTION = 5,     /* Switch eviction to free resources. */
};
```

The duration_sec and duration_nsec fields should be set with the elapsed time of the flow entry in the switch. The total duration in nanoseconds can be computed as duration_sec * 10^9 + duration_nsec. All the other fields, such as idle_timeout and hard_timeout, should be set with the appropriate values from the flow entry; that is, these values can be directly copied from the flow mod that created this entry. The packet_count and byte_count fields should be set with the packet count and the byte count associated with the flow entry, respectively. If the values are not available, then these fields should be set with the maximum possible value.

Refer to the send_flow_removed_message() function in the of/openflow.c file for the procedure to send a flow-removed event message to the controller.
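As a sketch of how duration_sec and duration_nsec might be derived, the following computes them from a flow entry's install timestamp and then the total in nanoseconds as defined above. The use of CLOCK_MONOTONIC and the helper name are assumptions for illustration, not part of the article's of/openflow.c code.

```c
/*
 * Sketch: split the elapsed lifetime of a flow entry into the
 * duration_sec / duration_nsec pair carried in OFPT_FLOW_REMOVED.
 */
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static void flow_duration(const struct timespec *installed,
                          uint32_t *duration_sec, uint32_t *duration_nsec)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    int64_t sec  = now.tv_sec - installed->tv_sec;
    int64_t nsec = now.tv_nsec - installed->tv_nsec;
    if (nsec < 0) {             /* borrow a second */
        sec  -= 1;
        nsec += 1000000000L;
    }
    *duration_sec  = (uint32_t)sec;
    *duration_nsec = (uint32_t)nsec;
}

int main(void)
{
    struct timespec installed;
    clock_gettime(CLOCK_MONOTONIC, &installed);
    installed.tv_sec -= 42;     /* pretend the flow was added 42 s ago */

    uint32_t sec, nsec;
    flow_duration(&installed, &sec, &nsec);

    /* Total duration in nanoseconds, as defined in the text. */
    uint64_t total_ns = (uint64_t)sec * 1000000000ULL + nsec;
    printf("alive for %u s + %u ns (%llu ns total)\n",
           (unsigned)sec, (unsigned)nsec, (unsigned long long)total_ns);
    return 0;
}
```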
Sending a port-status message to the controller

Port-status messages (OFPT_PORT_STATUS) are sent from the switch to the controller when there is any change in the port status or when a new port is added, removed, or modified in the switch's data path. The switch should send this message only on the controller channels through which the controller has asked to receive it. The controller can express its interest in receiving this event by sending an asynchronous configuration message to the switch. By default, the port-status message should be sent to all controllers configured in the switch, including controllers in the slave role (OFPCR_ROLE_SLAVE).

How to do it...

The switch should construct an OFPT_PORT_STATUS message with the following format and send this message to the controllers that have already shown interest in this event:

/* A physical port has changed in the datapath */
struct ofp_port_status {
struct ofp_header header;
uint8_t reason;  /* One of OFPPR_*. */
uint8_t pad[7];  /* Align to 64-bits. */
struct ofp_port desc;
};

The reason field should be set to one of the following values as defined in the enumeration:

/* What changed about the physical port */
enum ofp_port_reason {
OFPPR_ADD = 0,    /* The port was added. */
OFPPR_DELETE = 1, /* The port was removed. */
OFPPR_MODIFY = 2, /* Some attribute of the port has changed. */
};

The desc field should be set to the port description. In the port description, not all properties need to be filled in by the switch. The switch should fill in the properties that have changed, whereas the unchanged properties can be included optionally.

Refer to the send_port_status_message() function in the of/openflow.c file for the procedure to send port_status_message to the controller.

Sending a controller role-status message to the controller

Controller role-status messages (OFPT_ROLE_STATUS) are sent from the switch to the set of controllers when the role of a controller is changed as a result of an OFPT_ROLE_REQUEST message. For example, if there are three controllers connected to a switch (say controller1, controller2, and controller3) and controller1 sends an OFPT_ROLE_REQUEST message to the switch, then the switch should send an OFPT_ROLE_STATUS message to controller2 and controller3.

How to do it...

The switch should build the OFPT_ROLE_STATUS message with the following format and send it to all the other controllers:

/* Role status event message. */
struct ofp_role_status {
struct ofp_header header; /* Type OFPT_ROLE_STATUS. */
uint32_t role;            /* One of OFPCR_ROLE_*. */
uint8_t reason;           /* One of OFPCRR_*. */
uint8_t pad[3];           /* Align to 64 bits. */
uint64_t generation_id;   /* Master Election Generation Id */
/* Role Property list */
struct ofp_role_prop_header properties[0];
};

The reason field should be set to one of the following values as defined in the enumeration:

/* What changed about the controller role */
enum ofp_controller_role_reason {
OFPCRR_MASTER_REQUEST = 0, /* Another controller asked to be master. */
OFPCRR_CONFIG = 1,         /* Configuration changed on the switch. */
OFPCRR_EXPERIMENTER = 2,   /* Experimenter data changed. */
};

The role field should be set to the new role of the controller. The generation_id field should be set to the generation ID of the OFPT_ROLE_REQUEST message that triggered the OFPT_ROLE_STATUS message. If the reason code is OFPCRR_EXPERIMENTER, then the role property list should be set in the following format:

/* Role property types. */
enum ofp_role_prop_type {
OFPRPT_EXPERIMENTER = 0xFFFF, /* Experimenter property. */
};
/* Experimenter role property */
struct ofp_role_prop_experimenter {
uint16_t type;         /* One of OFPRPT_EXPERIMENTER. */
uint16_t length;       /* Length in bytes of this property. */
uint32_t experimenter; /* Experimenter ID, which takes the same
                        * form as in struct
                        * ofp_experimenter_header. */
uint32_t exp_type;     /* Experimenter defined. */
/* Followed by:
 * - Exactly (length - 12) bytes containing the experimenter data,
 * - Exactly (length + 7)/8*8 - (length) (between 0 and 7)
 *   bytes of all-zero bytes */
uint32_t experimenter_data[0];
};

The experimenter field carries the experimenter ID, which takes the same form as in the ofp_experimenter_header structure.

Refer to the send_role_status_message() function in the of/openflow.c file for the procedure to send a role-status message to the controller.

Sending a table-status message to the controller

Table-status messages (OFPT_TABLE_STATUS) are sent from the switch to the controller when there is any change in the table status; for example, when the number of entries in the table crosses a threshold value, called the vacancy threshold. The switch should send this message only on the controller channels through which the controller has asked to receive it. The controller can express its interest in receiving this event by sending an asynchronous configuration message to the switch.

How to do it...

The switch should build an OFPT_TABLE_STATUS message with the following format and send this message to the controllers that have already shown interest in this event:

/* A table config has changed in the datapath */
struct ofp_table_status {
struct ofp_header header;
uint8_t reason;              /* One of OFPTR_*. */
uint8_t pad[7];              /* Pad to 64 bits */
struct ofp_table_desc table; /* New table config. */
};

The reason field should be set to one of the following values defined in the enumeration:

/* What changed about the table */
enum ofp_table_reason {
OFPTR_VACANCY_DOWN = 3, /* Vacancy down threshold event. */
OFPTR_VACANCY_UP = 4,   /* Vacancy up threshold event. */
};

When the number of free entries in the table crosses the vacancy_down threshold, the switch should set the reason code to OFPTR_VACANCY_DOWN. Once the vacancy down event has been generated by the switch, the switch should not generate any further vacancy down events until a vacancy up event has been generated. When the number of free entries in the table crosses the vacancy_up threshold value, the switch should set the reason code to OFPTR_VACANCY_UP. Again, once the vacancy up event has been generated by the switch, the switch should not generate any further vacancy up events until a vacancy down event has been generated. The table field should be set to the table description.

Refer to the send_table_status_message() function in the of/openflow.c file for the procedure to send a table-status message to the controller.
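The vacancy events described above are deliberately edge-triggered: once one direction has fired, the switch stays quiet until the opposite threshold is crossed. The following minimal C sketch illustrates that hysteresis; the table_state structure and its fields are invented for illustration and are not taken from of/openflow.c.

#include <stdint.h>

/* Illustrative vacancy-event hysteresis for one flow table. */
struct table_state {
    uint32_t free_entries;
    uint32_t vacancy_down;  /* thresholds configured by the controller */
    uint32_t vacancy_up;
    int      down_sent;     /* 1 after OFPTR_VACANCY_DOWN was generated */
};

static void check_vacancy(struct table_state *t)
{
    if (!t->down_sent && t->free_entries < t->vacancy_down) {
        t->down_sent = 1;
        /* build and send OFPT_TABLE_STATUS with reason OFPTR_VACANCY_DOWN */
    } else if (t->down_sent && t->free_entries > t->vacancy_up) {
        t->down_sent = 0;
        /* build and send OFPT_TABLE_STATUS with reason OFPTR_VACANCY_UP */
    }
}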
Sending a request-forward message to the controller

When the switch receives a modify request message from the controller to modify the state of group or meter entries, then after successful modification of the state, the switch should forward this request message to all other controllers as a request-forward message (OFPT_REQUESTFORWARD). The switch should send this message only on the controller channels through which the controllers have asked to receive this event. A controller can express its interest in receiving this event by sending an asynchronous configuration message to the switch.

How to do it...

The switch should build the OFPT_REQUESTFORWARD message with the following format, and send this message to the controllers that have already shown interest in this event:

/* Group/Meter request forwarding. */
struct ofp_requestforward_header {
struct ofp_header header;  /* Type OFPT_REQUESTFORWARD. */
struct ofp_header request; /* Request being forwarded. */
};

The request field should be set to the request that was received from the controller. Refer to the send_request_forward_message() function in the of/openflow.c file for the procedure to send request_forward_message to the controller.

Handling a packet-out message from the controller

Packet-out (OFPT_PACKET_OUT) messages are sent from the controller to the switch when the controller wishes to send a packet out through the switch's data path via a switch port.

How to do it...

There are two ways in which the controller can send a packet-out message to the switch:

Construct the full packet: In this case, the controller generates the complete packet and adds the action list field to the packet-out message. The action field contains a list of actions defining how the packet should be processed by the switch. If the switch receives a packet_out message with buffer_id set to OFP_NO_BUFFER, then the switch should look into the action list and, based on the action to be performed, it can do one of the following:
Modify the packet and send it via the switch port mentioned in the action list
Hand over the packet to OpenFlow's pipeline processing, based on the OFPP_TABLE port specified in the action list

Use a packet buffer in the switch: In this mechanism, the switch should use the buffer that was created at the time of sending the packet-in message to the controller. While sending the packet_in message to the controller, the switch adds the buffer_id to the packet_in message. When the controller wants to send a packet_out message that uses this buffer, the controller includes this buffer_id in the packet_out message. On receiving the packet_out message with a valid buffer_id, the switch should fetch the packet from the buffer and send it via the switch port. Once the packet is sent out, the switch should free the memory allocated to the cached buffer.

Handling a barrier message from the controller

The switch implementation may arbitrarily reorder the messages sent from the controller in order to maximize its performance. So, if the controller wants to enforce in-order processing of messages, barrier messages are used. Barrier messages (OFPT_BARRIER_REQUEST) are sent from the controller to the switch to ensure message ordering. The switch should not reorder any messages across a barrier message. For example, if the controller is sending a group add message followed by a flow add message referencing that group, it should send a barrier message between the two so that the order is preserved.

How to do it...

When the controller wants to send messages that are related to each other, it sends a barrier message between these messages. The switch should process these messages as follows:

Messages before a barrier request should be processed fully before the barrier, including sending any resulting replies or errors.
The barrier request message should then be processed and a barrier reply should be sent. While sending the barrier reply message, the switch should copy the xid value from the barrier request message.
The switch should then process the remaining messages.

Both the barrier request and barrier reply messages don't have any body; they consist only of the ofp_header.
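As an illustration of these ordering rules, the following C sketch shows the shape of a message loop that fully processes everything queued ahead of a barrier before replying. Only the xid-copy rule comes from the text above; the connection structure and the queue_message(), process_pending_messages(), and send_barrier_reply() helpers are hypothetical.

/* Illustrative handling of OFPT_BARRIER_REQUEST. */
void handle_message(struct connection *conn, struct ofp_header *msg)
{
    if (msg->type == OFPT_BARRIER_REQUEST) {
        /* Finish every message received before the barrier, including
         * sending any replies or errors they produce. */
        process_pending_messages(conn);

        /* Reply with OFPT_BARRIER_REPLY, copying the xid of the request. */
        send_barrier_reply(conn, msg->xid);
    } else {
        queue_message(conn, msg);  /* processed before the next barrier */
    }
}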
Summary

This article covers the list of symmetric and asynchronous messages sent and received by the OpenFlow switch, along with the procedure for handling these messages.

Resources for Article:
Further resources on this subject: The OpenFlow Controllers [article] Untangle VPN Services [article] Getting Started [article]
Architectural and Feature Overview

Packt
22 Feb 2016
12 min read
In this article by Giordano Scalzo, the author of Learning VMware App Volumes, we are going to look a little deeper into the different component parts that make up an App Volumes solution. Then, once you are familiar with these different components, we will discuss how they fit and work together.

(For more resources related to this topic, see here.)

App Volumes Components

We are going to start by covering an overview of the different core components that make up the complete App Volumes solution, a glossary if you like. These are either component parts of the actual App Volumes solution or additional components that are required to build your complete environment.

App Volumes Manager

The App Volumes Manager is the heart of the solution. Installed on a Windows Server operating system, the App Volumes Manager controls the application delivery engine and also provides access to a web-based dashboard and console from where you can manage your entire App Volumes environment. You will get your first glimpse of the App Volumes Manager when you complete the installation process and start the post-installation tasks, where you will configure details about your virtual host servers, storage, Active Directory, and other environment variables.

Once you have completed the installation tasks, you will use the App Volumes Manager to perform tasks such as creating new and updating existing AppStacks, creating Writable Volumes, and assigning both AppStacks and Writable Volumes to end users or virtual desktop machines. The App Volumes Manager also manages the virtual desktop machines that have the App Volumes Agent installed. Once a virtual desktop machine has the agent installed, it will appear within the App Volumes Manager inventory so that you are able to configure assignments.

In summary, the App Volumes Manager provides the following functionality:

Orchestrates the key infrastructure components, such as Active Directory, AppStack or Writable Volume attachments, and the virtual hosting infrastructure (ESXi hosts and vCenter Servers)
Manages assignments of AppStacks or Writable Volumes to users, groups, and virtual desktop machines
Collates AppStacks and Writable Volumes usage
Provides a history of administrative actions
Acts as a broker for the App Volumes agents for automated assignment of AppStacks and Writable Volumes as virtual desktop machines boot up and end users log in
Provides a web-based graphical interface from which to manage the entire environment

Throughout this article, you will see the following icon used in any drawings or schematics to denote the App Volumes Manager.

App Volumes Agent

The App Volumes Agent is installed onto a virtual desktop machine on which you want to be able to attach AppStacks or Writable Volumes, and it runs as a service on that machine. As such, it is invisible to the end user. When you attach an AppStack or Writable Volume to a virtual machine, the agent acts as a filter driver and takes care of any application calls and file system redirects between the operating system and the AppStack or Writable Volume. Rather than seeing your AppStack, which appears as an additional hard drive within the operating system, the agent makes the applications appear as if they were natively installed. So, for example, the icons for your applications will automatically appear on your desktop/taskbar. The App Volumes Agent is also responsible for registering the virtual machine with the App Volumes Manager.
Throughout this article, you will see the following icon used in any drawings or schematics to denote the App Volumes Agent. The App Volumes Agent can also be installed onto an RDSH host server to allow the attaching of AppStacks within a hosted applications environment.

AppStacks

An AppStack is a read-only volume that contains your applications. It is mounted as a Virtual Machine Disk file (VMDK) for VMware environments, or as a Virtual Hard Disk file (VHD) for Citrix and Microsoft environments, on your virtual desktop machine or RDSH host server. An AppStack is created using a provisioning machine, which has the App Volumes Agent installed on it. Then, as part of the provisioning process, you mount an empty container (a VMDK or VHD file) and install the application(s) as you would normally. The App Volumes Agent redirects the installation files, file system, and registry settings to the AppStack. Once completed, the AppStack is set to read-only, which then allows one AppStack to be used by multiple users. This not only helps you reduce the storage requirements (an AppStack is also thin provisioned) but also allows any application that is delivered via an AppStack to be centrally managed and updated. AppStacks are then delivered to the end users either as individual user assignments or via group membership using Active Directory.

Throughout this article, you will see the following icon used in any drawings or schematics to denote an AppStack.

Writable Volumes

One of the many use cases that was not well suited to a virtual desktop environment was that of developers, who would need to install various different applications and other software. To cater for this use case, you would need to deploy a dedicated, persistent desktop to meet their requirements. This method of deployment is not necessarily cost-effective, as it potentially requires additional infrastructure resources and management. With App Volumes, this all changes with the Writable Volumes feature. In the same way that you assign an AppStack containing preinstalled and configured applications to an end user, with Writable Volumes you attach an empty container, as a VMDK file, to their virtual desktop machine, into which they can install their own applications. This virtual desktop machine will be running the App Volumes Agent, which provides the filter between any applications that the end user installs into the Writable Volume and the native operating system of the virtual desktop machine. The user then has their own drive onto which they can install applications. Now you can deploy nonpersistent, floating desktops for these users and attach not only their corporate applications via AppStacks, but also their own user-installed applications via a Writable Volume.

Throughout this article, you will see the following icon used in any drawings or schematics to denote a Writable Volume.

Provisioning virtual machine

Although not an actual part of the App Volumes software, a key component is a clean virtual desktop machine to use as the reference point from which to create your AppStacks. This is known as the provisioning machine. Once you have your provisioning virtual desktop machine, you first install the App Volumes Agent onto it. Then, from the App Volumes Manager, you initiate the provisioning process, which attaches an empty VMDK file to the provisioning virtual desktop machine, and then prompts you, as the IT admin, to install the application.
Before you start the installation of the application(s) that you are going to create as an AppStack, it's good practice to take a snapshot. In this way, you can roll back to your clean virtual desktop machine state before installation, ready to create the next AppStack.

Throughout this article, you will see the following icon used in any drawings or schematics to denote a provisioning machine.

A Broker Integration service

The Broker Integration service is installed on a VMware Horizon View Connection Server, and it provides faster logon times for the end users who have access to a Horizon View virtual desktop machine.

Throughout this article, you will see the following icon used in any drawings or schematics to denote the Broker Integration Service.

Storage Groups

Again, although not a specific component of App Volumes, you have the ability to define Storage Groups to store your AppStacks and Writable Volumes. Storage Groups are primarily used to provide replication of AppStacks and to distribute Writable Volumes across multiple datastores. With AppStack storage groups, you can define a group of datastores that will be used to store the same AppStacks, enabling replication to be automatically deployed on those datastores. With Writable Volumes, only some of the storage group settings apply, for example, the template location and the distribution strategy. The distribution strategy allows you to define how Writable Volumes are distributed across the storage group. There are two settings for this, as described here (a short sketch illustrating both strategies follows this section):

Spread: This will distribute files evenly across all the storage locations. When a file is created, the storage with the most available space is used.
Round-Robin: This works by distributing the Writable Volume files sequentially, using the storage location that was used the longest time ago.

In this article, you will see the following icon used in any drawings or schematics to denote storage groups.
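To make the two strategies easier to picture, here is a toy PowerShell sketch of the selection logic. It is purely illustrative and is not App Volumes code or its API; it assumes a list of datastore objects with Name, FreeSpaceGB, and LastUsed properties.

# Toy illustration of the two distribution strategies (not App Volumes code)
function Select-Datastore {
    param(
        [Parameter(Mandatory)] [object[]] $Datastores,
        [ValidateSet('Spread','RoundRobin')] [string] $Strategy = 'Spread'
    )
    switch ($Strategy) {
        'Spread'     { $Datastores | Sort-Object FreeSpaceGB -Descending | Select-Object -First 1 }  # most free space
        'RoundRobin' { $Datastores | Sort-Object LastUsed | Select-Object -First 1 }                 # least recently used
    }
}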
We have now introduced you to the core components that make up the App Volumes deployment.

App Volumes Architecture

Now that you understand what each of the individual components is used for, the next step is to look at how they all fit together to form the complete solution. We are going to break the architecture down into two parts. The first part will be focused on application delivery and the virtual desktop machines from an end user's perspective. In the second part, we will look more at the supporting and underlying infrastructure, the view from an IT administrator's point of view. Finally, in the infrastructure section, we will look at the infrastructure with a networking hat on and illustrate the various network ports we are going to require to be available to us.

So let's go back and look at our first part, what the end user will see. In this example, we have a virtual desktop machine running a Windows operating system as the starting point of our solution. Onto that virtual desktop machine, we have installed the App Volumes Agent. We also have some core applications already installed onto this virtual desktop machine as part of the core/parent image. These would be applications that are delivered to every user, such as Adobe Reader, for example. This is exactly the same best practice as we would normally follow in any other virtual desktop environment. Updates here would be taken care of by updating the parent image and then using the recompose feature of linked clones in Horizon View.

With the agent installed, the virtual desktop machine will appear in the App Volumes Manager console, from where we can start to assign AppStacks to our Active Directory users and groups. When a user who has been assigned an AppStack or Writable Volume logs in to a virtual desktop machine, the AppStack that has been assigned to them will be attached to that virtual desktop machine, and the applications within that AppStack will seamlessly appear on the desktop. Users will also have access to their Writable Volume. The following diagram illustrates an example deployment from the virtual desktop machine's perspective, as we have just described.

Moving on to the second part of our focus on the architecture, we are now going to look at the underlying/supporting infrastructure. As a starting point, all of our infrastructure components have been deployed as virtual machines, hosted on the VMware vSphere platform. The following diagram illustrates the infrastructure components and how they fit together to deliver the applications to the virtual desktop machines.

In the top section of the diagram, we have the virtual desktop machine running our Windows operating system with the App Volumes Agent installed. Along with acting as the filter driver, the agent talks to the App Volumes Manager (1) to read user assignment information for who can access which AppStacks and Writable Volumes. The App Volumes Manager also communicates with Active Directory (2) to read user, group, and machine information to assign AppStacks and Writable Volumes. The virtual desktop machine also talks to Active Directory to authenticate user logins (3). The App Volumes Manager also needs access to a SQL database (4), which stores the information about the assignments, AppStacks, Writable Volumes, and so on. A SQL database is also a requirement for vCenter Server (5), and if you are using the linked clone function of Horizon View, then a database is required for the View Composer. The final part of this diagram shows the App Volumes storage groups that are used to store the AppStacks and the Writable Volumes. These get mounted to the virtual desktop machines as virtual disks, or VMDK files (6).

Following on from the architecture and how the different components fit together and communicate, we are now going to cover which ports need to be open to allow communication between the various services and components.

Network ports

Here we cover the firewall ports that are required to be open in order for the App Volumes components to communicate with the other infrastructure components. The diagram here shows the port numbers (highlighted in the boxes) that are required to be open for each component to communicate. It's worth ensuring that these ports are configured before you start the deployment of App Volumes.

Summary

In this article, we introduced you to the individual components that make up the App Volumes solution and what task each of them performs. We then went on to look at how those components fit into the overall solution architecture, as well as how the architecture works.

Resources for Article:
Further resources on this subject: Elastic Load Balancing [article] Working With CEPH Block Device [article] Android and IOs Apps Testing At A Glance [article]
Hyper-V Basics

Packt
06 Feb 2015
10 min read
This article by Vinith Menon, the author of Microsoft Hyper-V PowerShell Automation, delves into the basics of Hyper-V, right from installing Hyper-V to resizing virtual hard disks. The Hyper-V PowerShell module includes several significant features that extend its use, improve its usability, and allow you to control and manage your Hyper-V environment with more granular control. Many organizations have moved on from Hyper-V (V2) to Hyper-V (V3). In Hyper-V (V2), the Hyper-V management shell was not built in and the PowerShell module had to be installed manually. In Hyper-V (V3), Microsoft has provided an exhaustive set of cmdlets that can be used to manage and automate all configuration activities of the Hyper-V environment. The cmdlets are executed across the network using Windows Remote Management.

In this article, we will cover:

The basics of setting up a Hyper-V environment using PowerShell
The fundamental concepts of Hyper-V management with the Hyper-V management shell
The updated features in Hyper-V

(For more resources related to this topic, see here.)

Here is a list of all the new features introduced in Hyper-V in Windows Server 2012 R2. We will go in depth through the important changes that have come into the Hyper-V PowerShell module with the following features and functions:

Shared virtual hard disk
Resizing the live virtual hard disk

Installing and configuring your Hyper-V environment

Installing and configuring Hyper-V using PowerShell

Before you proceed with the installation and configuration of Hyper-V, there are some prerequisites that need to be taken care of:

The user account that is used to install the Hyper-V role should have administrative privileges on the computer
There should be enough RAM on the server to run newly created virtual machines

Once the prerequisites have been taken care of, let's start with installing the Hyper-V role:

Open a PowerShell prompt in Run as Administrator mode.
Type the following into the PowerShell prompt to install the Hyper-V role along with the management tools; once the installation is complete, the Hyper-V server will reboot and the Hyper-V role will be successfully installed:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

Once the server boots up, verify the installation of Hyper-V using the Get-WindowsFeature cmdlet:

Get-WindowsFeature -Name hyper*

You will be able to see that the Hyper-V role, the Hyper-V PowerShell management shell, and the GUI management tools are successfully installed.

Fundamental concepts of Hyper-V management with the Hyper-V management shell

In this section, we will look at some of the fundamental concepts of Hyper-V management with the Hyper-V management shell. Once you get the Hyper-V role installed as per the steps illustrated in the previous section, a PowerShell module to manage your Hyper-V environment is installed as well. Now, perform the following steps:

Open a PowerShell prompt in Run as Administrator mode.
PowerShell uses cmdlets that are built using a verb-noun naming system (for more details, refer to Learning Windows PowerShell Names at http://technet.microsoft.com/en-us/library/dd315315.aspx).
Type the following command into the PowerShell prompt to get a list of all the cmdlets in the Hyper-V PowerShell module:

Get-Command -Module Hyper-V

Hyper-V in Windows Server 2012 R2 ships with about 178 cmdlets.
These cmdlets allow a Hyper-V administrator to handle very simple, basic tasks as well as advanced ones, such as setting up a Hyper-V Replica for virtual machine disaster recovery. To get the count of all the available Hyper-V cmdlets, you can type the following command in PowerShell:

Get-Command -Module Hyper-V | Measure-Object

The Hyper-V PowerShell cmdlets follow a very simple approach and are very user friendly. The cmdlet name itself indirectly communicates its functionality to the Hyper-V administrator. The following screenshot shows the output of the Get-Command command:

For example, in the following screenshot, the Remove-VMSwitch cmdlet itself says that it's used to delete a previously created virtual machine switch:

If the administrator is still not sure about the task that can be performed by the cmdlet, he or she can get help with detailed examples using the Get-Help cmdlet. To get help on a cmdlet, type the cmdlet name in the prescribed format. To make sure that the latest version of the help files is installed on the server, run the Update-Help cmdlet before executing the following cmdlet:

Get-Help <Hyper-V cmdlet> -Full

The following screenshot is an example of the Get-Help cmdlet:

Shared virtual hard disks

This new and improved feature in Windows Server 2012 R2 allows an administrator to share a virtual hard disk file (the .vhdx file format) between multiple virtual machines. These .vhdx files can be used as shared storage for a failover cluster created between virtual machines (also known as guest clustering). A shared virtual hard disk allows you to create data disks and witness disks using .vhdx files, with some advantages:

Shared disks are ideal for SQL database files and file servers
Shared disks can be run on generation 1 and generation 2 virtual machines

This new feature allows you to save on storage costs and use .vhdx files for guest clustering, enabling easier deployment rather than using virtual Fibre Channel or Internet Small Computer System Interface (iSCSI), which are complicated and require storage configuration changes such as zoning and Logical Unit Number (LUN) masking. In Windows Server 2012 R2, virtual iSCSI disks (both shared and unshared virtual hard disk files) show up as virtual SAS disks when you add an iSCSI hard disk to a virtual machine. Shared virtual hard disk (.vhdx) files can be placed on Cluster Shared Volumes (CSV) or on a Scale-Out File Server cluster.

Let's look at the ways you can automate and manage your shared .vhdx guest clustering configuration using PowerShell. In the following example, we will demonstrate how you can create a two-node file server cluster using the shared VHDX feature, and set up a testing environment within which we can start learning these new features. The steps are as follows:

We will start by creating two virtual machines, each with a 50 GB OS drive containing a sysprepped image of Windows Server 2012 R2. Each virtual machine will have 4 GB RAM and four virtual CPUs. D:\vhd\base_1.vhdx and D:\vhd\base_2.vhdx are existing VHDX files with a sysprepped image of Windows Server 2012 R2. The following code is used to create the two virtual machines:

New-VM -Name "Fileserver_VM1" -MemoryStartupBytes 4GB -NewVHDPath D:\vhd\base_1.vhdx -NewVHDSizeBytes 50GB
New-VM -Name "Fileserver_VM2" -MemoryStartupBytes 4GB -NewVHDPath D:\vhd\base_2.vhdx -NewVHDSizeBytes 50GB

Next, we will install the file server role and configure a failover cluster on both the virtual machines using PowerShell.
You need to enable PowerShell remoting on both the file servers and also have them joined to a domain. The following is the code:

Install-WindowsFeature -ComputerName Fileserver_VM1 File-Services, FS-FileServer, Failover-Clustering
Install-WindowsFeature -ComputerName Fileserver_VM1 RSAT-Clustering -IncludeAllSubFeature
Install-WindowsFeature -ComputerName Fileserver_VM2 File-Services, FS-FileServer, Failover-Clustering
Install-WindowsFeature -ComputerName Fileserver_VM2 RSAT-Clustering -IncludeAllSubFeature

Once we have the virtual machines created and the file server and failover clustering features installed, we will create the failover cluster as per Microsoft's best practices using the following cmdlet:

New-Cluster -Name Cluster1 -Node FileServer_VM1, FileServer_VM2 -StaticAddress 10.0.0.59 -NoStorage -Verbose

You will need to choose a name and IP address that fit your organization.

Next, we will create two VHDX files named sharedvhdx_data.vhdx (which will be used as a data disk) and sharedvhdx_quorum.vhdx (which will be used as the quorum or witness disk). To do this, the following commands need to be run on the Hyper-V cluster:

New-VHD -Path C:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -Fixed -SizeBytes 10GB
New-VHD -Path C:\ClusterStorage\Volume1\sharedvhdx_quorum.VHDX -Fixed -SizeBytes 1GB

Once we have created these virtual hard disk files, we will add them as shared .vhdx files. We will attach these newly created VHDX files to the Fileserver_VM1 and Fileserver_VM2 virtual machines and specify the shared virtual disk parameter for guest clustering:

Add-VMHardDiskDrive -VMName Fileserver_VM1 -Path C:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -ShareVirtualDisk
Add-VMHardDiskDrive -VMName Fileserver_VM2 -Path C:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -ShareVirtualDisk

Finally, we will bring the disks online and add them to the failover cluster using the following command:

Get-ClusterAvailableDisk | Add-ClusterDisk

Once we have executed the preceding set of steps, we will have a highly available file server infrastructure using shared VHD files.

Live virtual hard disk resizing

With Windows Server 2012 R2, a newly added feature in Hyper-V allows administrators to expand or shrink the size of a virtual hard disk attached to a SCSI controller while the virtual machines are still running. Hyper-V administrators can now perform maintenance operations on a live VHD and avoid any downtime, without temporarily shutting down the virtual machine for these maintenance activities. Prior to Windows Server 2012 R2, in order to resize a VHD attached to a virtual machine, it had to be turned off, leading to costly downtime. Using the GUI controls, the VHD resize can be done only through the Edit Virtual Hard Disk wizard. Also, note that VHDs that were previously expanded can be shrunk. The Windows PowerShell way of doing a VHD resize is by using the Resize-VHD cmdlet.

Let's look at the ways you can automate a VHD resize using PowerShell. In the next example, we will demonstrate how you can expand and shrink a virtual hard disk connected to a VM's SCSI controller. We will continue using the virtual machine that we created for our previous example. We have a pre-created VHD of 50 GB that is connected to the virtual machine's SCSI controller.
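Before resizing anything, it can be useful to confirm the current size of the attached VHDX from the Hyper-V host. This is an optional check using the Hyper-V module's Get-VHD cmdlet; replace the placeholder with the actual path of the scsidisk VHDX file in your environment.

# Optional pre-check: confirm the current size of the attached VHDX
Get-VHD -Path <path to scsidisk.vhdx> | Select-Object Path, VhdFormat, Size, FileSize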
Expanding the virtual hard disk

Let's expand the aforementioned virtual hard disk to 57 GB using the Resize-VHD cmdlet (replace the placeholder with the path of the scsidisk VHDX file):

Resize-VHD -Path <path to scsidisk.vhdx> -SizeBytes 57GB

Next, if we open the VM settings and perform an inspect disk operation, we'll be able to see that the VHDX file size has become 57 GB. One can also verify this by logging into the VM, opening disk management, and extending the unused partition. You can see that the disk size has increased to 57 GB.

Shrinking the virtual hard disk

Now let's shrink the same VHD back to 50 GB. For this exercise, the primary requirement is to first shrink the disk partition by logging in to the VM and using disk management; as you can see in the following screenshot, we're shrinking the volume by 7 GB. Next, click on Shrink. Once you complete this step, you will see that the unallocated space is 7 GB. You can also execute this step using the Resize-Partition PowerShell cmdlet:

Get-Partition -DiskNumber 1 | Resize-Partition -Size 50GB

The following screenshot shows the partition:

Next, we will shrink the VHD to 50 GB:

Resize-VHD -Path <path to scsidisk.vhdx> -SizeBytes 50GB

Once the previous steps have been executed successfully, run a disk re-scan using disk management and you will see that the disk size is 50 GB.

Summary

In this article, we went through the basics of setting up a Hyper-V environment using PowerShell. We also explored the fundamental concepts of Hyper-V management with the Hyper-V management shell.

Resources for Article:
Further resources on this subject: Hyper-V building blocks for creating your Microsoft virtualization platform [article] The importance of Hyper-V Security [article] Network Access Control Lists [article]
Deploying New Hosts with vCenter

Packt
04 Jun 2015
8 min read
In this article by Konstantin Kuminsky, author of the book VMware vCenter Cookbook, we will review some options and features available in vCenter to improve an administrator's efficiency.

(For more resources related to this topic, see here.)

Deploying new hosts faster with scripted installation

Scripted installation is an alternative way to deploy ESXi hosts. It can be used when several hosts need to be deployed or upgraded. The installation script contains ESXi settings and can be accessed by a host during the ESXi boot from the following locations:

FTP
HTTP or HTTPS
NFS
USB flash drive or CD-ROM

How to do it...

The following sections describe the process of creating an installation script and using it to boot the ESXi host.

Creating an installation script

An installation script contains installation options for ESXi. It's a text file with the .cfg extension. The best way to create an installation script is to take the default script supplied with the ESXi installer and modify it. The default script is located in the /etc/vmware/weasel/ folder and is called ks.cfg. Commands that can be modified include, but are not limited to:

The install, installorupgrade, or upgrade commands, which define the disk on which ESXi will be installed or upgraded. The available options are:
--disk: This option is the disk name, which can be specified as a path (/vmfs/devices/disks/vmhbaX:X:X), as a VML name (vml.xxxxxxxx), or as a LUN UID (vmkLUN_UID)
--overwritevmfs: This option wipes the existing datastore
--preservevmfs: This option keeps the existing datastore
--novmfsondisk: This option prevents a new partition from being created

The network command, which specifies the network settings. Most of the available options are self-explanatory:
--bootproto=[dhcp|static]
--device: MAC address of the NIC to use
--ip
--gateway
--nameserver
--netmask
--hostname
--vlanid

A full list of installation and upgrade commands can be found in the vSphere 5 documentation on the VMware website at https://www.vmware.com/support/pubs/.

Use the installation script to configure ESXi

In order to use the installation script, you will need to use additional ESXi boot options. Boot a host from the ESXi installation disk. When the ESXi installer screen appears, press Shift + O to provide additional boot options. In the command prompt, type the following:

ks=<location of the script> <additional boot options>

The valid locations are as follows:

ks=cdrom:/path
ks=file://path
ks=protocol://path
ks=usb:/path

The additional options available are as follows:

gateway: This option is the default gateway
ip: This option is the IP address
nameserver: This option is the DNS server
netmask: This option is the subnet mask
vlanid: This option is the VLAN ID
netdevice: This option is the MAC address of the NIC to use
bootif: This option is the MAC address of the NIC to use, in PXELINUX format

For example, for the HTTP location, the command will look like this:

ks=http://XX.XX.XX.XX/scripts/ks-v1.cfg nameserver=XX.XX.XX.XX ip=XX.XX.XX.XX netmask=255.255.255.0 gateway=XX.XX.XX.XX
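To tie these commands together, here is a minimal example of what a complete installation script might look like. It is only an illustrative sketch: the password, addresses, and hostname are placeholders, and the accepteula, rootpw, and --firstdisk options are standard scripted-installation commands that are not covered above, so verify them against the vSphere 5 documentation for your build before using them.

# Sample ks.cfg (illustrative values only)
accepteula
install --firstdisk --overwritevmfs
rootpw MySecurePass1!
network --bootproto=static --ip=192.168.10.21 --netmask=255.255.255.0 --gateway=192.168.10.1 --nameserver=192.168.10.5 --hostname=esxi01.lab.local
reboot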
Deploying new hosts faster with auto deploy

vSphere Auto Deploy is VMware's solution to simplify the deployment of large numbers of ESXi hosts. It is one of the available options for ESXi deployment, along with interactive and scripted installation. The main difference of Auto Deploy compared to the other deployment options is that the ESXi configuration is not stored on the host's disk. Instead, it's managed with image and host profiles by the Auto Deploy server.

Getting ready

Before using Auto Deploy, confirm the following:

The Auto Deploy server is installed and registered with vCenter. It can be installed as a standalone server or as part of the vCenter installation.
The DHCP server exists in the environment.
The DHCP server is configured to point to the TFTP server for PXE boot (option 66) with the boot filename undionly.kpxe.vmw-hardwired.
The TFTP server that will be used for PXE boot exists and is configured properly.
The machine where the Auto Deploy cmdlets will run has the following installed: Microsoft .NET 2.0 or later, PowerShell 2.0 or later, and PowerCLI including the Auto Deploy cmdlets.

New hosts that will be provisioned with Auto Deploy must:

Meet the hardware requirements for ESXi 5
Have network connectivity to vCenter, preferably 1 Gbps or higher
Have PXE boot enabled

How to do it...

Once the prerequisites are met, the following steps are required to start deploying hosts.

Configuring the TFTP server

In order to configure the TFTP server with the correct boot image for ESXi, execute the following steps:

In vCenter, go to Home | Auto Deploy.
Switch to the Administration tab.
From the Auto Deploy page, click on Download TFTP Boot ZIP.
Download the file and unzip it to the appropriate folder on the TFTP server.

Creating an image profile

Image profiles are created using Image Builder PowerCLI cmdlets. Image Builder requires PowerCLI and can be installed on a machine that's used to run administrative tasks. It doesn't have to be the vCenter server or the Auto Deploy server, and the only requirement for this machine is that it must have access to the software depot—a file server that stores image profiles.

Image profiles can be created from scratch or by cloning an existing profile. The following steps outline the process of creating an image profile by cloning. The steps assume that:

The Image Builder has been installed.
The appropriate software depot has been downloaded from the VMware website by going to http://www.vmware.com/downloads and searching for the software depot.

Cloning an existing profile included in the depot is the easiest way to create a new profile. The steps to do so are as follows:

Add a depot with the image profile to be cloned:
Add-EsxSoftwareDepot -DepotUrl <Path to softwaredepot>
Find the name of the profile to be cloned using Get-EsxImageProfile.
Clone the profile:
New-EsxImageProfile -CloneProfile <Existing profile name> -Name <New profile name>
Add a software package to the new image profile:
Add-EsxSoftwarePackage -ImageProfile <New profile name> -SoftwarePackage <Package>

At this point, the software package will be validated and, in case of errors or if there are any dependencies that need to be resolved, an appropriate message will be displayed.

Assigning an image profile to hosts

To create a rule that assigns an image profile to a host, execute the following steps:

Connect to vCenter with PowerCLI:
Connect-VIServer <vCenter IP or FQDN>
Add the software depot with the correct image profile to the PowerCLI session:
Add-EsxSoftwareDepot <depot URL>
Locate the image profile using the Get-EsxImageProfile cmdlet.
Define a rule that assigns hosts with certain attributes to an image profile. For example, for hosts with IP addresses from a range, run the following commands:
New-DeployRule -Name <Rule name> -Item <Profile name> -Pattern "ipv4=192.168.1.10-192.168.1.20"
Add-DeployRule <Rule name>
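After adding a rule, it can be reassuring to check what is actually in the active rule set before booting any hosts. This is an optional check; Get-DeployRuleSet is part of the Auto Deploy cmdlets mentioned above.

# Optional check: list the rules in the active rule set before booting new hosts
Get-DeployRuleSet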
Assigning a host profile to hosts

Optionally, an existing host profile can be assigned to hosts. To accomplish this, execute the following steps:

Connect to vCenter with PowerCLI:
Connect-VIServer <vCenter IP or FQDN>
Locate the host profile name using the Get-VMHostProfile cmdlet.
Define a rule that assigns hosts with certain attributes to a host profile. For example, for hosts with IP addresses from a range, run the following commands:
New-DeployRule -Name <Rule name> -Item <Profile name> -Pattern "ipv4=192.168.1.10-192.168.1.20"
Add-DeployRule <Rule name>

Assigning a host to a folder or cluster in vCenter

To make sure a host is placed in a certain folder or cluster once it boots, do the following:

Connect to vCenter with PowerCLI:
Connect-VIServer <vCenter IP or FQDN>
Define a rule that assigns hosts with certain attributes to a folder or cluster. For example, for hosts with IP addresses from a range, run the following commands:
New-DeployRule -Name <Rule name> -Item <Folder name> -Pattern "ipv4=192.168.1.10-192.168.1.20"
Add-DeployRule <Rule name>

If a host is assigned to a cluster, it inherits that cluster's host profile.

How it works...

Auto Deploy utilizes PXE boot to connect to the Auto Deploy server and get an image profile, the vCenter location, and, optionally, host profiles. The detailed process is as follows:

The host gets the gPXE executable and gPXE configuration files from the PXE TFTP server.
As gPXE executes, it uses instructions from the configuration file to query the Auto Deploy server for specific information.
The Auto Deploy server returns the requested information specified in the image and host profiles.
The host boots using this information.
Auto Deploy adds the host to the specified vCenter server.
The host is placed in maintenance mode when additional information, such as an IP address, is required from the administrator. To exit maintenance mode, the administrator will need to provide this information and reapply the host profile.

When a new host boots for the first time, vCenter creates a new object and stores it together with the host and image profiles in the database. For any subsequent reboots, the existing object is used to get the correct host profile and any changes that have been made. More details can be found in the vSphere 5 documentation on the VMware website at https://www.vmware.com/support/pubs/.

Summary

In this article, we learnt how new hosts can be deployed with scripted installation and Auto Deploy techniques.

Resources for Article:
Further resources on this subject: VMware vRealize Operations Performance and Capacity Management [Article] Backups in the VMware View Infrastructure [Article] Application Packaging in VMware ThinApp 4.7 Essentials [Article]
High Availability Scenarios

Packt
26 Nov 2014
14 min read
"Live Migration between hosts in a Hyper-V cluster is very straightforward and requires no specific configuration, apart from type and amount of simultaneous Live Migrations. If you add multiple clusters and standalone Hyper-V hosts into the mix, I strongly advise you to configure Kerberos Constrained Delegation for all hosts and clusters involved." Hans Vredevoort – MVP Hyper-V This article written by Benedict Berger, the author of Hyper-V Best Practices, will guide you through the installation of Hyper-V clusters and their best practice configuration. After installing the first Hyper-V host, it may be necessary to add another layer of availability to your virtualization services. With Failover Clusters, you get independence from hardware failures and are protected from planned or unplanned service outages. This article includes prerequirements and implementation of Failover Clusters. (For more resources related to this topic, see here.) Preparing for High Availability Like every project, a High Availability (HA) scenario starts with a planning phase. Virtualization projects are often turning up the question for additional availability for the first time in an environment. In traditional data centers with physical server systems and local storage systems, an outage of a hardware component will only affect one server hosting one service. The source of the outage can be localized very fast and the affected parts can be replaced in a short amount of time. Server virtualization comes with great benefits, such as improved operating efficiency and reduced hardware dependencies. However, a single component failure can impact a lot of virtualized systems at once. By adding redundant systems, these single points of failure can be avoided. Planning a HA environment The most important factor in the decision whether you need a HA environment is your business requirements. You need to find out how often and how long an IT-related production service can be interrupted unplanned, or planned, without causing a serious problem to your business. Those requirements are defined in a central IT strategy of a business as well as in process definitions that are IT-driven. They include Service Level Agreements of critical business services run in the various departments of your company. If those definitions do not exist or are unavailable, talk to the process owners to find out the level of availability needed. High Availability is structured in different classes, measured by the total uptime in a defined timespan, that is 99.999 percent in a year. Every nine in this figure adds a huge amount of complexity and money needed to ensure this availability, so take time to find out the real availability needed by your services and resist the temptation to plan running every service on multi-redundant, geo-spread cluster systems, as it may not fit in the budget. Be sure to plan for additional capacity in a HA environment, so you can lose hardware components without the need to sacrifice application performance. Overview of the Failover Cluster A Hyper-V Failover Cluster consists of two or more Hyper-V Server compute nodes. Technically, it's possible to use a Failover Cluster with just one computing node; however, it will not provide any availability advantages over a standalone host and is typically only used for migration scenarios. A Failover Cluster is hosting roles such as Hyper-V virtual machines on its computing nodes. 
If one node fails due to a hardware problem, it will no longer answer cluster heartbeat communication, so the service interruption is detected almost instantly. The virtual machines running on that particular node are powered off immediately due to the hardware failure on their compute node. The remaining cluster nodes then immediately take over these VMs in an unplanned failover process and start them on their own hardware. The virtual machines will be back up and running after a successful boot of their operating systems and applications, in just a few minutes.

Hyper-V Failover Clusters work under the condition that all compute nodes have access to a shared storage instance holding the virtual machine configuration data and its virtual hard disks. In case of a planned failover, that is, for patching compute nodes, it's possible to move running virtual machines from one cluster node to another without interrupting the VM. All cluster nodes can run virtual machines at the same time, as long as there is enough failover capacity to run all services when a node goes down. Even though a Hyper-V cluster is still called a Failover Cluster—utilizing the Windows Server Failover Clustering feature—it is indeed capable of running as an Active/Active cluster. To ensure that all these capabilities of a Failover Cluster actually work, it demands an accurate planning and implementation process.

Failover Cluster prerequisites

To successfully implement a Hyper-V Failover Cluster, we need suitable hardware, software, permissions, and network and storage infrastructure, as outlined in the following sections.

Hardware

The hardware used in a Failover Cluster environment needs to be validated against the Windows Server Catalog. Microsoft will only support Hyper-V clusters when all components are certified for Windows Server 2012 R2. The servers used to run our HA virtual machines should ideally consist of identical hardware models with identical components. It is possible, and supported, to run servers in the same cluster with different hardware components, for example, different amounts of RAM; however, due to the higher level of complexity, this is not best practice.

Special planning considerations are needed to address the CPU requirements of a cluster. To ensure maximum compatibility, all CPUs in a cluster should be exactly the same model. While it's technically possible to mix even CPUs from Intel and AMD in the same cluster despite their different architectures, you will lose core cluster capabilities such as Live Migration. Choosing a single vendor for your CPUs is not enough; even when using different CPU models from the same vendor, your cluster nodes may support different sets of CPU instruction set extensions. With different instruction sets, Live Migration won't work either. There is a compatibility mode that disables most of the instruction set extensions on all CPUs on all cluster nodes; however, this has a negative impact on performance and should be avoided. A better approach to this problem would be creating another cluster from the legacy CPUs, running smaller or non-production workloads without affecting your high-performance production workloads.

If you want to extend your cluster after some time, you will find yourself with the problem of not having the exact same hardware available to purchase. Choose the current revision of the model or product line you are already using in your cluster and manually compare the CPU instruction sets at http://ark.intel.com/ and http://products.amd.com/, respectively. Choose the current CPU model that best fits the original CPU features of your cluster and have this design validated by your hardware partner. Ensure that your servers are equipped with compatible CPUs, the same amount of RAM, and the same network cards and storage controllers.
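As a quick sanity check before you build the cluster, you can compare the CPU model reported by each planned node remotely. This is an optional sketch; it assumes PowerShell remoting is enabled and reuses the node names used later in this article.

# Compare CPU models across the planned cluster nodes
Invoke-Command -ComputerName ElanityHV01, ElanityHV02 -ScriptBlock {
    Get-WmiObject -Class Win32_Processor |
        Select-Object @{n='Host';e={$env:COMPUTERNAME}}, Name, Manufacturer
}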
The network design

Mixing different vendors of network cards in a single server is fine and best practice for availability, but make sure all your Hyper-V hosts are using an identical hardware setup. A network adapter should be used exclusively for either LAN traffic or storage traffic. Do not mix these two types of communication in any basic scenario. There are some more advanced scenarios involving converged networking that can enable mixed traffic, but in most cases, this is not a good idea.

A Hyper-V Failover Cluster requires multiple layers of communication between its nodes and storage systems. Hyper-V networking and storage options have changed dramatically through the different releases of Hyper-V. With Windows Server 2012 R2, the network design options are endless. In this article, we will work with a typically seen, basic set of network designs. We have at least six Network Interface Cards (NICs) available in our servers, with a bandwidth of 1 Gb/s. If you have more than five interface cards available per server, use NIC Teaming to ensure the availability of the network, or even use converged networking. Converged networking will also be your choice if you have fewer than five network adapters available.

The first NIC will be used exclusively for host communication to our Hyper-V host and will not be involved in VM network traffic or cluster communication at any time. It will carry Active Directory and management traffic to our Management OS.
The second NIC will ensure Live Migration of virtual machines between our cluster nodes.
The third NIC will be used for VM traffic. Our virtual machines will be connected to the various production and lab networks through this NIC.
The fourth NIC will be used for internal cluster communication.
The first four NICs can either be teamed through Windows Server NIC Teaming or abstracted from the physical hardware through Windows Server network virtualization and a converged fabric design.
The fifth NIC will be reserved for storage communication. As advised, we will be isolating storage and production LAN communication from each other. If you do not use iSCSI or SMB3 storage communication, this NIC will not be necessary. If you use Fibre Channel SAN technology, use an FC HBA instead. If you leverage Direct Attached Storage (DAS), use the appropriate connector for storage communication.
The sixth NIC will also be used for storage communication, as a redundancy. The redundancy will be established via MPIO and not via NIC Teaming.

There is no need for a dedicated heartbeat network as in older revisions of Windows Server with Hyper-V. All cluster networks will automatically be used for sending heartbeat signals to the other cluster members. If you don't have 1 Gb/s interfaces available, or if you use 10 GbE adapters, it's best practice to implement a converged networking solution.
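Where NIC Teaming is used for the LAN-facing adapters described above, the team can be created directly from PowerShell with the built-in New-NetLbfoTeam cmdlet. This is only a sketch: the team and member adapter names are examples, and the teaming mode should match what your physical switches support.

# Example: team two LAN-facing adapters (names are illustrative)
New-NetLbfoTeam -Name "Team-LAN" -TeamMembers "NIC1", "NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic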
Storage design
All cluster nodes must have access to the virtual machines residing on a centrally shared storage medium. This could be a classic setup with a SAN or a NAS, or a more modern concept with Windows Scale-Out File Servers hosting virtual machine files on SMB3 file shares. In this article, we will use a NetApp SAN system that is capable of providing the classic SAN approach with LUNs mapped to our hosts as well as SMB3 file shares, but any other SAN validated for Windows Server 2012 R2 will fulfill the requirements. In our first setup, we will utilize Cluster Shared Volumes (CSVs) to store several virtual machines on the same storage volume. Creating a single volume per virtual machine is no longer advisable because of the massive management overhead it causes. A good rule of thumb is to create one CSV per cluster node; in larger environments with more than eight hosts, create one CSV per two to four cluster nodes. To utilize CSVs, follow these steps:
1. Ensure that all components (SAN, firmware, HBAs, and so on) are validated for Windows Server 2012 R2 and are up to date.
2. Connect your SAN physically to all your Hyper-V hosts via iSCSI or Fibre Channel connections.
3. Create two LUNs on your SAN for hosting virtual machines. Activate Hyper-V performance options for these LUNs if possible (that is, on a NetApp, by setting the LUN type to Hyper-V). Size the LUNs with enough capacity to host all your virtual hard disks. Label the LUNs CSV01 and CSV02 with appropriate LUN IDs.
4. Create another small LUN, 1 GB in size, and label it Quorum.
5. Make these LUNs available to all Hyper-V hosts of this specific cluster by mapping them on the storage device. Do not make them available to any other hosts or clusters.
6. Prepare storage DSMs and drivers (that is, MPIO) for Hyper-V host installation.
7. Refresh the disk configuration on the hosts, install drivers and DSMs, and format the volumes as NTFS (quick format).
8. Install Microsoft Multipath I/O when using redundant storage paths:
Install-WindowsFeature -Name Multipath-IO -ComputerName ElanityHV01, ElanityHV02
In this example, I added the MPIO feature to two Hyper-V hosts with the computer names ElanityHV01 and ElanityHV02. SANs are typically equipped with two storage controllers for redundancy. Make sure to disperse your workloads over both controllers for optimal availability and performance. If you leverage file servers providing SMB3 shares, the preceding steps do not apply to you. Perform the following steps instead:
1. Create a storage space with the desired disk types; use storage tiering if possible.
2. Create a new SMB3 file share for applications.
3. Customize the permissions to grant full control to all Hyper-V servers of the planned cluster as well as the Hyper-V cluster object itself.
Server and software requirements
To create a Failover Cluster, you need to install a second Hyper-V host. Use the same unattended file but change the IP address and the hostname. Join both Hyper-V hosts to your Active Directory domain if you have not done so already. Hyper-V can be clustered without leveraging Active Directory, but it then lacks several key capabilities, such as Live Migration, and should not be done deliberately. The ability to successfully boot a domain-joined Hyper-V cluster without any Active Directory domain controller being reachable at boot time is the major benefit of the reduced Active Directory dependency of Failover Clusters. Ensure that you create a Hyper-V virtual switch, as shown earlier, with the same name on both hosts to ensure cluster compatibility, and that both nodes are installed with all updates.
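Creating the virtual switch with an identical name on every cluster node can be scripted so the configuration stays consistent; a minimal sketch, assuming the Hyper-V PowerShell module and using the adapter reserved for VM traffic (names are placeholders):

# Run on each cluster node: create an identically named external switch on the adapter used for VM traffic
New-VMSwitch -Name "VMs" -NetAdapterName "VMs" -AllowManagementOS $false
# Confirm that the switch was created with the expected name
Get-VMSwitch -Name "VMs"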
If you have System Center 2012 R2 in place, use System Center Virtual Machine Manager to create the Hyper-V cluster.
Implementing Failover Clusters
After preparing our Hyper-V hosts, we will now create a Failover Cluster using PowerShell. I'm assuming your hosts are installed, storage and network connections are prepared, and the Hyper-V role is already active, with up-to-date drivers and firmware on your hardware. First, we need to ensure that the server name, date, and time of our hosts are correct. Time and time zone configuration should occur via Group Policy. For automatic network configuration later on, it's important to rename the network connections from their defaults to their designated roles using PowerShell, as seen in the following commands:
Rename-NetAdapter -Name "Ethernet" -NewName "Host"
Rename-NetAdapter -Name "Ethernet 2" -NewName "LiveMig"
Rename-NetAdapter -Name "Ethernet 3" -NewName "VMs"
Rename-NetAdapter -Name "Ethernet 4" -NewName "Cluster"
Rename-NetAdapter -Name "Ethernet 5" -NewName "Storage"
The Network Connections window should then look like the following screenshot:
Hyper-V host Network Connections
Next comes the IP configuration of the network adapters. If you are not using DHCP for your servers, manually set the IP configuration (using different subnets) on the specified network cards. Here is a great blog post on how to automate this step: http://bit.ly/Upa5bJ
Next, we need to activate the necessary Failover Clustering features on both of our Hyper-V hosts:
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName ElanityHV01, ElanityHV02
Before actually creating the cluster, we launch the cluster validation cmdlet via PowerShell:
Test-Cluster ElanityHV01, ElanityHV02
Test-Cluster cmdlet
Open the generated .mht file for more details, as shown in the following screenshot:
Cluster validation
As you can see, there are some warnings that should be investigated. As long as there are no errors, the configuration is ready for clustering and fully supported by Microsoft; however, review the warnings to be sure you won't run into problems in the long run. After you have fixed potential errors and warnings listed in the Cluster Validation Report, you can finally create the cluster as follows:
New-Cluster -Name CN=ElanityClu1,OU=Servers,DC=cloud,DC=local -Node ElanityHV01, ElanityHV02 -StaticAddress 192.168.1.49
This will create a new cluster named ElanityClu1, consisting of the nodes ElanityHV01 and ElanityHV02 and using the cluster IP address 192.168.1.49. The cmdlet creates the cluster and the corresponding Active Directory object in the specified OU. Moving the cluster object to a different OU later on is no problem at all; even renaming is possible when done the right way. After creating the cluster, open the Failover Cluster Manager console and you should be able to connect to your cluster:
Failover Cluster Manager
You will see that all your cluster nodes and Cluster Core Resources are online. Rerun the validation and copy the generated .mht files to a secure location if you need them for support queries. Keep in mind that you have to rerun this validation if any hardware or configuration changes occur on the cluster components, including any of its nodes. The initial cluster setup is now complete, and we can continue with post-creation tasks.
Summary
With the knowledge from this article, you are now able to design and implement Hyper-V Failover Clusters as well as guest clusters.
You are aware of the basic concepts of High Availability and of the storage and networking options necessary to achieve it, and you have seen real-world, proven configurations that ensure a stable operating environment.
Process of designing XenDesktop® deployments

Packt
09 Jul 2015
29 min read
In this article by Govardhan Gunnala and Daniele Tosatto, authors of the book Mastering Citrix® XenDesktop®, we will discuss the process of designing XenDesktop® deployments. The uniqueness of the XenDesktop architecture is its modular five-layer model, which covers all the key decisions in designing a XenDesktop deployment:
User layer: Defines the users and their requirements
Access layer: Defines how the users will access the resources
Desktop/resource layer: Defines what resources will be delivered
Control layer: Defines how the solution is managed and maintained
Hardware layer: Defines what resources are needed to implement the chosen solution
While FMA (FlexCast Management Architecture) is simple at a high level, its implementation can become complex depending on the technologies and options that are chosen for each component across the layers of FMA. Along with great flexibility comes the responsibility of diligently choosing the technologies and options that fulfill your business requirements. Importantly, the decisions made in the first three layers impact the last two layers of the deployment. This means there is little or no scope for fixing a wrong decision in any of the first three layers during or after the implementation stage; it may even force you to implement the solution from scratch again. Your design decisions determine how effectively your solution meets the given business requirements. The layered architecture of the XenDesktop FMA, featuring the components at each layer, is given in the following diagram. Each component of XenDesktop falls under one of the layers shown in that diagram. We'll see what decisions are to be made for each of these components at each layer in the next subsection.
Decisions to be made at each layer
I would have to write a separate book to discuss all the possible technologies and options that are available at each layer. The following is a highly summarized list of the decisions to be made at each layer. It will help you realize the breadth of designing XenDesktop, and this high-level coverage will help you locate and consider all the available options so that you can make the right decisions without missing any considerations.
User layer
The user layer refers to the specification of the users who will utilize the XenDesktop deployment. A business requirement statement may mention that the service users can be either internal business users or external customers accessing the service from the Internet. Furthermore, both of these user groups may also need mobile access to the XenDesktop services. Citrix Receiver is the only component that belongs to the user layer, and XenDesktop depends on it for successfully delivering a XenDesktop session. By correlating this technical aspect with the preceding business requirement statement, one needs to consider all the possible aspects of the Receiver software on the client devices. This involves making the following decisions:
Endpoint/user devices related: What are the devices from which the users are supposed to access the services? Who owns and administers those devices throughout their lifecycle?
Endpoints supported: Corporate computers, laptops, or mobiles running Windows, or thin clients; user smart devices, such as Android tablets, Apple iPads, and so on. In the case of service providers, the endpoints can usually be any device, and they all need to be supported.
Endpoint ownership: Device management includes security, availability, and compliance, as well as maintaining responsibility for the devices on the network.
Endpoint lifecycle: Devices become outdated or limiting very quickly. Define minimum device hardware requirements to run your business workloads.
Endpoint form factor: Choose devices that are either fully featured or limited thin clients, or a mix of both, to support features such as HDX graphics, multiple monitors, and so on.
Thin client selection: Decide whether thin clients, such as Dell Wyse zero clients running limited-functionality operating systems, would satisfy your user requirements. Understand their licensing cost.
Receiver selection: Once you determine your endpoint device and its capabilities, you need to decide on the Receiver that can run on those devices. The greatest thing is that Receiver is available for almost any device.
Receiver type: Choose the Receiver that is required for your device. Since the Receiver software differs for each platform (OS), it is important to use the appropriate Receiver software for the platform your devices run on. You can download the appropriate Receiver software for your device from the http://www.Citrix.com/go/receiver.html page.
Initial deployment: Receiver is like any other software that fits into your overall application portfolio. Determine how you will deploy this application on your devices. For corporate desktops and mobiles, you may use enterprise application deployment and mobile device management software. Otherwise, users will be prompted to install it when they access the StoreFront URL, or they can download it from Citrix to facilitate the installation process. For user-managed mobile devices, it is available from the respective Windows, Google, or Apple stores/marketplaces.
Initial configuration: Like other applications, Receiver requires certain initial configuration. It can be configured manually or by using a provisioning file, group policy, or e-mail-based discovery.
Keeping the Receiver software up to date: Once you have installed Receiver on user devices, you will also require a mechanism for deploying updates to Receiver. This can be the same mechanism used for the initial deployment.
Access layer
The access layer refers to the specification of how the users gain access to the resources. A business requirement statement may state that users should be validated before gaining access and that access should be secured when the user connects over the Internet. The technical components that fall under this layer include firewall(s), NetScaler, and StoreFront. These components play a broader role in the overall networking infrastructure of the company, which includes XenDesktop as well as the complete set of Citrix solutions in the environment. Their key activities include firewalling, external-to-internal IP address NATing, the NetScaler Gateway securing the connection between the virtual desktop and the user device, global load balancing, user validation/authentication, and GUI presentation of the enumerated resources to the end users. It involves making the following decisions:
Authentication:
Authentication point: A user can be authenticated at the NetScaler Gateway or at StoreFront.
Authentication policy: Various business use cases and compliance requirements make certain modes of authentication mandatory.
You can choose from the different authentication methods supported at each point:
StoreFront: Basic authentication using a username and a password, domain pass-through, NetScaler Gateway pass-through, smart card, and even unauthenticated access.
NetScaler Gateway: LDAP, RADIUS (token), and client certificates.
StoreFront: The decisions to be made around the scope of StoreFront are as follows:
Unauthenticated access: Provides access to users without requiring a username and a password, while still limiting them to the resources the administrator allows. Usually, this fits well with public or kiosk systems.
High availability: Making the StoreFront servers available at all times, using hardware load balancing, DNS round robin, Windows network load balancing, and so on.
Delivery controller high availability and StoreFront: Building high availability for the delivery controllers is recommended, since they are needed to form successful connections. Defining more than one delivery controller for the stores lets StoreFront fail over automatically to the next server in the list.
Security – inbound traffic: Consider securing the user connection to virtual desktops from the internal StoreFront and the external NetScaler Gateway.
Security – backend traffic: Consider securing the communication between StoreFront and the XML services running on the controller servers. As this traffic stays within the internal network, it can be secured by using an internal private certificate.
Routing Receiver with beacons: Receiver uses websites called beacons to identify whether the user connection is internal or external. StoreFront provides Receiver with the HTTP(S) addresses of the beacon points during the initial connection.
Resource presentation: StoreFront presents a webpage that provides self-service access to the resources for the user.
Scalability: The StoreFront server load and capacity for the user workload.
Multi-site app synchronization: StoreFront can connect to the controllers of multiple site deployments and can replicate the users' subscribed applications across the servers.
NetScaler Gateway: For the NetScaler Gateway, the decisions regarding secured external user access from the public Internet involve the following:
Topology: NetScaler supports two topologies: 1-arm (normal security) and 2-arm (high security).
High availability: NetScaler Gateways can be configured in pairs to provide high availability.
Platform: NetScaler is available on different platforms, such as VPX, MPX, and SDX. They have different SSL throughput and SSL Transactions Per Second (TPS) metrics.
Pre-authentication policy: Specifies the Endpoint Analysis (EPA) scans for evaluating whether the endpoints meet the preset security criteria. This is available when NetScaler is chosen as the authentication point.
Session management: The session policies define the overall user experience by classifying the endpoints into mobile and non-mobile devices. The session profile defines the details needed for gaining access to the environment. These come in two forms: SSL VPN and HDX proxy.
Preferred data center: In multi-active data center deployments, StoreFront can determine the primary data center for a user's resources, and NetScaler can direct the user connections to it. Static and dynamic methods are available for specifying the preferred data center.
Desktop/resource layer
The desktop or resource layer refers to the specification of which resources (applications and desktops) users will receive.
This layer comes with various options, which are tailored to business user roles and their requirements; it makes XenDesktop a better fit for the varying user needs across departments. It includes the specification of the FlexCast model (type of desktop), user personalization, and the delivery of applications to the users in the desktop session. An example business requirement statement may specify that all permanent employees require a desktop with all the basic applications pre-installed based on their team and role, with their user settings and data retained, while all contract employees get a basic desktop with controlled, on-demand access to applications and without retention of their user data. This layer includes various components, such as profile management solutions (including Windows profiles, Citrix Profile Management, and AppSense), the Citrix print server, Windows operating systems, application delivery, and so on. It involves making decisions such as the following:
Images: This involves choosing the FlexCast model that is tailored to the user requirements, thereby delivering the expected desktop behavior to the end users, as follows:
Operating system related: This requires choosing the desktop or server operating system for your master image, which depends on the FlexCast model you are choosing from:
Hosted Shared
Hosted VDI: Pooled-static, pooled-random, pooled with PvD, dedicated, existing, physical/remote PC, streamed, and streamed with PvD
Streamed VHD
Local VM
On-demand apps
Local apps
In the case of a desktop OS, it's also important to choose the right OS architecture (32-bit or 64-bit) according to the processor architecture of the desktop.
Computer policies: Define the controls over the user connection, security and bandwidth settings, devices or connection types, and so on. Specify all the policy features in a similar way to the user policies.
Machine catalogs: Define your catalog settings, including the FlexCast model, AD computer accounts, provisioning method, OS of the base image, and so on.
Delivery groups: Assign desktops or applications to the user groups.
Application folders: This is a tidy interface feature in Studio for organizing applications into folders for easy management.
StoreFront integration: This is an option for specifying the StoreFront URL for the Receiver in the master image so that users are automatically connected to the StoreFront in the session.
Resource allocation: This defines the hardware resources for the desktop VMs. It primarily involves hosts and storage. Depending on your estimated workloads, you can define resources such as the number of virtual processors (vCPU), the amount of virtual memory (vRAM), and the storage requirements for the needed disk space, as well as the following:
Graphics (GPU): For advanced use cases, you may choose to allocate a pass-through GPU, hardware vGPUs, or software vGPUs.
IOPS: Depending on the operating system, the FlexCast model, and the estimated workloads, you can analyze the overall IOPS load from the system and plan the corresponding hardware to support that load.
Optimizations: Depending on the operating system, you can apply various optimizations to the Windows installation on the master image. This greatly reduces the overall load later.
Bandwidth requirements: Bandwidth can be a limiting factor in the case of WAN and remote user connections over slow networks.
Bandwidth consumption and user experience depend on various factors, such as the operating system being used, the application design, and the screen resolution. To retain a high-quality user experience, it's important to consider the bandwidth requirements and optimization technologies, as follows:
Bandwidth minimizing technologies: These include Quality of Service (QoS), HDX RealTime, and WAN optimization with Citrix's own CloudBridge solution.
HDX encoding method: The HDX encoding method also affects bandwidth usage. For XenDesktop 7.x, there are three encoding methods available, which are employed as appropriate by the HDX protocol: Desktop Composition Redirection, H.264 Enhanced SuperCodec, and Legacy Mode (XenDesktop 5.x Adaptive Display).
Session bandwidth: The bandwidth needed in a session depends on the user's interaction with the desktop and applications.
Latency: HDX typically performs well up to around 300 ms of latency, and the experience begins to degrade as latency increases.
Personalization: This is an essential element of the desktop environment. It involves decisions that are critical for end user experience and acceptance, and for the overall success of the solution during implementation. The following are the decisions involved in personalization:
User profiles: This involves the decisions related to user login, roaming of user settings, and a seamless profile experience across the overall Windows network:
Profile type: Choose which profile type works for your user requirements. Possible options include local, roaming, mandatory, and hybrid profiles with Citrix Profile Management. Citrix Profile Management provides various additional features, such as profile streaming, active write back, configuring profiles using an .ini file, and so on.
Folder redirection: This option redirects special folders, such as AppData, Desktop, and so on, out of the user profile to a separate (typically network) location.
Folder exclusion: This option excludes specific folders from being saved in the user profile. Usually, this covers the local temp and IE temp folders of a user profile.
Profile caching: Caching profiles on the local system improves the user login experience and occurs by default. You need to consider this depending on the FlexCast model of the virtual desktop.
Profile permissions: Specify whether the administrator needs access to the user profiles, based on information sensitivity.
Profile path: The decision to place the user profiles on a network location for high availability. It affects logon performance depending on how close the profile is to the virtual desktop from which the user is logging on. It can be managed either from Active Directory or through Citrix Profile Management.
User profile replication between data centers: This involves making the user profiles highly available and supporting profile roaming among multiple data centers.
User policies: This involves deploying the user settings and controlling them using management policies that provide consistent settings for users, such as:
Preferred policy engine: This requires choosing how policies are processed for the Windows systems. Citrix policies can be defined and managed from either Citrix Studio or Active Directory group policy.
Policy filtering: Citrix policies can be applied to users and their desktops with the various filter options that are available in the Citrix policy engine.
If group policies are used, then you'll use the group policy filtering options.
Policy precedence: Citrix policies are processed in the order of LCSDOU (Local, Citrix, Site, Domain, and OU policies).
Baseline policy: This defines a policy with default and common settings for all the desktop images. Citrix provides policy templates that suit specific business use cases. A baseline should cover security requirements, common network conditions, and the user device or user profile management requirements. Such a baseline can be configured using security policies, connection-based policies, device-based policies, and profile-based policies.
Printing: This is one of the most common desktop user requirements. XenDesktop supports printing for various scenarios. The printing technology involves deploying and using the appropriate drivers.
Provisioning printers: These can be either a static or a dynamic set of printers. The options for dynamic printers include auto-creating all client printers, not auto-creating client printers, and auto-creating only the non-network client printers. You can also set the options for session printers through Citrix policy, which can include either static or dynamic printers. Furthermore, you can also set proximity printers.
Managing print drivers: This can be configured so that printer drivers are auto-installed during session creation, using either the generic Citrix universal printer driver or the manual option. You can also have all the known drivers preinstalled on the master image. Citrix even provides the Citrix Universal Print Server, which extends XenDesktop universal printing support to network printing.
Print job routing: Print jobs can be routed either through the client device or through the network print server. The ICA protocol is used for compressing and sending the data.
Personal vDisk: Desktops with Personal vDisks retain user changes. Choosing the Personal vDisk depends on the user requirements and the FlexCast model that was chosen. A Personal vDisk can be thin provisioned for estimated growth, but it can't be shrunk later.
Applications: Separating applications into their own layer improves the scalability of the overall desktop solution. Applications are critical elements that users require from a desktop environment:
Application delivery method: Applications can be installed on the base image, installed on Personal vDisks, streamed into the session, or delivered through the on-demand XenApp hosted mode. The choice also depends on application compatibility, and it requires technical expertise and tools, such as AppDNA, to resolve compatibility issues effectively.
Application streaming: XenDesktop supports App-V to build isolated application packages, which can be streamed to desktops.
16-bit legacy application delivery: If there are any legacy 16-bit applications to be supported, you can choose from a 32-bit OS, VM-hosted apps, or a parallel XenApp 5 deployment.
Control layer
The control layer covers all the backend systems that are required for managing and maintaining the overall solution through its life cycle. The control layer includes most of the XenDesktop components, which are further classified into categories such as resource/access controllers, image/desktop controllers, and infrastructure controllers.
These respectively correspond to the first three layers of FMA, as shown here: Resource/access controllers: Supports the access layer Image/desktop controllers: Supports the desktop/resource layer Infrastructure controllers: Provides the underlying hardware for the overall FMA components/environment This layer involves the specification of capacity, configuration, and the topology of the environment. Building required/planned redundancy for each of these components enables achieving the enterprise business capabilities, such as HA, scalability, disaster recovery, load balancing, and so on. Components and technologies that operate under this layer include Active Directory, group policies, site database, Citrix licensing, XenDesktop delivery controllers, XenClient hypervisor, the Windows server and the Desktop operating systems, provisioning services, which can be either MCS or PVS and their controllers, and so on. An example business requirement statement may be as follows: Build a highly available desktop environment for a fast growing business users group. We currently have a head count of 30 users, which is expected to double in a year. It involves making the following decisions: Infrastructure controllers: It includes common infrastructure, which is required for XenDesktop to function in the Windows domain network. Active Directory: This is used for the authentication and authorization of users in a Citrix environment. It's also responsible for providing and synchronizing time on the systems, which is critical for Kerberos. For the most part, your AD structure will be in-place, and it may require certain changes for accommodating your XenDesktop requirements, such as: Forest design: It involves choosing the AD forest and domain decisions, such as multi-domain, multi-forest, domain and forest trusts, and so on, which will define the users of the XenDesktop resources. Site design: It involves choosing the number of sites that represent your geographical locations, the number of domain controllers, the subnets that accommodate the IP addresses, site links for replication, and so on. Organizational unit structure: Planning the OU structure for easier management of XenDesktop Workers and VDAs. In the case of multi-forest deployment scenarios (as supported in App Orchestration), having the same OU structure is critical. Naming standards: Planning proper conventions for XenDesktop AD objects, which includes users, security groups, XenDesktop servers, OUs, and so on. User groups: This helps in choosing the individual user names or groups. The user security groups are recommended as they reduce validation to just one object despite the number of users in it. Policy control: This helps in planning GPOs ordering and sizing, inheritance, filtering, enforcement, blocking, and loopback processing for reducing the overall processing time on the VDAs and servers. Database: Citrix uses the Microsoft SQL server database for most of its products, as follows: Edition: Microsoft ships the SQL server database in different editions, which provide varying features and capabilities. Using the standard edition for typical XenDesktop production deployments is recommended. For larger/enterprise deployments, depending on the requirement, a higher edition may be required. Database and Transaction Log Sizing: This involves estimating the storage requirements for the Site Configuration database, Monitoring database, and configuration logging databases. 
Database location: By default, the Configuration Logging and Monitoring databases are located within the Site Configuration database. Separating these into their own databases and relocating the Monitoring database to a different SQL Server is recommended.
High availability: Choose from VM-level HA, mirroring, an AlwaysOn Failover Cluster, and AlwaysOn Availability Groups.
Database creation: Usually, the databases are created automatically during the XenDesktop installation. Alternatively, they can be created by using scripts.
Citrix licensing: Citrix licensing for XenDesktop requires a Citrix license server on the network. You can install and manage multiple Citrix licenses.
License type: Choose from user, device, and concurrent licenses.
Version: Citrix's new license servers are backward compatible.
Sizing: A license server can be scaled out to support a higher number of license requests per second.
High availability: The license server comes with a 30-day grace period, which usually helps in recovering from failures. High availability for the license server can be implemented through Windows clustering technology or by duplicating the virtual server.
Optimization: Optimize the number of receiving and processing threads depending on your hardware. This is generally required in large and heavily loaded enterprise environments.
Resource controllers: The resource controllers include the XenDesktop and XenApp delivery controllers and the XenClient synchronizer, as shown here:
XenDesktop and XenApp delivery controller:
Number of sites: This is typically based on network boundaries, risk tolerance, and security requirements.
Delivery controller sizing: Delivery controller scalability is based on CPU utilization. The more processor cores are available, the more virtual desktops a controller can support.
High availability: Always plan for an N+1 deployment of the controllers to achieve HA. Then, update the controllers' details on the VDAs through policy.
Host connection configuration: Host connections define the hosts, storage repositories, and guest networks to be used by the virtual machines on the hypervisors.
XML service encryption: The XML service protocol running on the delivery controllers uses clear text for exchanging all data except passwords. Consider using SSL encryption to send the StoreFront data over a secure HTTP connection.
Server OS load management: The default maximum number of sessions per server is set to 250. Using real-time usage monitoring and load analysis, you can define appropriate load management policies.
Session prelaunch and session linger: These are designed to help users access their applications quickly, by starting sessions before they are requested (session prelaunch) and by keeping user sessions active after a user closes all the applications in a session (session linger).
XenClient synchronizer: This includes considerations for its architecture, processor specification, memory specification, network specification, high availability, the SQL database, remote synchronizer servers, storage repository size and location, external access, and Active Directory integration.
Image controllers: This includes all the image provisioning controllers. MCS is built into the delivery controller. For PVS, we'll have considerations such as the following:
Farms: A farm represents the top level of the PVS infrastructure. Depending on your networking and administration boundaries, you can define the number of farms to be deployed in your environment.
Sites: Each Farm consists of one or more sites, which contain all the PVS objects. While multiple sites share the same database, the target devices can only failover to the other Provisioning Servers that are within the same site. Your networking and organization structure determines the number of sites in your deployment. High availability: If implemented, PVS will be a critical component of the virtual desktop infrastructure. HA should be considered for its database, PVS servers, vDisks and storage, networking and TFTP, and so on. Bootstrap delivery: There are three methods in which the target device can receive the bootstrap program. This can be done by using the DHCP options, the PXE broadcasts, and the boot device manager. Write cache placement: Write cache uniquely identifies the target device by including the target device's MAC address and disk identifier. Write cache can be placed on the following: Cache on the Device Hard Drive, Cache on the Device Hard Drive Persisted, Cache in the Device RAM, Cache in the Device RAM with overflow on the hard disk, and Cache on the Provisioning Server Disk, and Cache on the Provisioning Server Disk Persisted. vDisk format and replication: PVS supports the use of fixed-size or dynamic vDisks. vDisks hosted on a SAN, local, or Direct Attached Storage must be replicated between the vDisk stores whenever a vDisk is created or changed. It can be replicated either manually or automatically. Virtual or physical servers, processor and memory: The virtual Provisioning Servers are preferred when sufficient processor, memory, disk and networking resources are guaranteed. Scale up or scale out: Determining whether to scale up or scale out the servers requires considering factors like redundancy, failover times, datacenter capacity, hardware costs, hosting costs, and so on. Bandwidth requirements and network configuration: PVS can boot 500 devices simultaneously. A 10Gbps network is recommended for provisioning services. Network configuration should consider the PVS Uplink, the Hypervisor Uplink, and the VM Uplink. Recommended switch settings include either Disable Spanning Tree or Enable Portfast, Storm Control, and Broadcast Helper. Network interfaces: Teaming the multiple network interfaces with link aggregation can provide a greater throughput. Consider the NIC features TCP Offloading and Receive Side Scaling (RSS) while selecting NICs. Subnet affinity: It is a load balancing algorithm, which helps in ensuring that the target devices are connected to the most appropriate Provisioning Server. It can be configured to Best Effort and Fixed. Auditing: By default, the auditing feature is disabled. When enabled, the audit trail information is written in the provisioning services database along with the general configuration data. Antivirus: The antivirus software can cause file-locking issues on the PVS server by contending with the files being accessed by PVS Services. The vDisk store and the write cache should be excluded from any antivirus scans in order to prevent file contention issues. Hardware layer The hardware layer involves choosing the right capacity, make, and hardware features of the backend systems that are required for the overall solution as defined in the control layer. In-line with the control layer, the hardware layer decisions will change if any of the first three layer decisions are changed. 
Components and technologies that operate under this layer include server hardware, storage technologies, hard disks and RAID configurations, hypervisors and their management software, backup solutions, monitoring, network devices and connectivity, and so on. It involves making the decisions shown here:
Hardware sizing: Hardware sizing is usually done in two ways. The first, and preferred, way is to plan ahead and purchase the hardware based on the workload requirements. The second way is to size the hosts to use the existing hardware in the best configuration to support the different workload requirements, as follows:
Workload separation: Workloads can either be separated into dedicated resource clusters or be mixed on the same physical hosts.
Control host sizing: The VM resource allocation for each control component should be determined in the control layer and allocated accordingly.
Desktop host sizing: This involves choosing the physical resources required for the virtual desktops as well as for the hosted server deployments. It includes estimating the pCPU, pRAM, GPU, and the number of hosts (a rough calculation sketch follows this section).
Hypervisors: This involves choosing from the supported hypervisors, which include major players such as Hyper-V, XenServer, and ESX. Choosing between these requires considering a vast range of parameters, such as host hardware (processor and memory), storage requirements, network requirements, scale up/out, and host scalability. Further considerations include the following:
Networking: Networks, physical NICs, NIC teaming, virtual NICs for hosts, virtual NICs for guests, and IP addressing
VM provisioning: Templates
High availability: Microsoft Hyper-V: failover clustering, Cluster Shared Volumes, CSV cache; VMware ESXi: VMware vSphere high availability cluster; Citrix XenServer: XenServer high availability by using the server pool
Monitoring: Use the hypervisor vendor's management and monitoring tools for hypervisor monitoring; use the hardware vendor's monitoring tools for hardware-level monitoring.
Backup and recovery: The backup method and the components to be backed up.
Storage: Storage architecture, RAID level, number of disks, disk type, storage bandwidth, tiered storage, thin provisioning, and data de-duplication
Disaster recovery
Data center utilization: XenDesktop deployments can leverage multiple data centers to improve user performance and the availability of resources. Multiple data centers can be deployed in an active/active or an active/passive configuration. An active/active configuration allows both data centers to be utilized, although individual users are tied to a specific location.
Data center connectivity: An active/active data center configuration utilizing GSLB (Global Server Load Balancing) ensures that users will be able to establish a connection even if one data center is unavailable. In the active/active configuration, the considerations that should be made are as follows: data center failover time, application servers, and StoreFront optimal routing.
Capacity in the secondary data center: Planning the secondary data center capacity is driven by the cost and management overhead of supporting full capacity in each data center. A percentage of the overall users, or a percentage of the users per application, may be considered for the secondary data center facility. It also requires consideration of the type and amount of resources that will be made available in a failover scenario.
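As a back-of-the-envelope illustration of the desktop host sizing mentioned above, the required host count can be estimated from assumed per-desktop and per-host figures; every number below is a placeholder assumption, not a recommendation:

# Rough host-count estimate for a hosted VDI workload (all inputs are assumptions)
$desktops        = 500    # concurrent virtual desktops
$vCpuPerDesktop  = 2
$ramPerDesktopGB = 4
$coresPerHost    = 32     # physical cores per host
$ramPerHostGB    = 512
$vCpuPerCore     = 6      # assumed vCPU:pCore consolidation ratio

$hostsByCpu = [math]::Ceiling(($desktops * $vCpuPerDesktop) / ($coresPerHost * $vCpuPerCore))
$hostsByRam = [math]::Ceiling(($desktops * $ramPerDesktopGB) / $ramPerHostGB)
$hostCount  = [math]::Max($hostsByCpu, $hostsByRam) + 1    # +1 host for N+1 failover capacity
"Estimated hosts required: $hostCount"

A real design would refine these inputs with measured IOPS, GPU, and workload data, but the same arithmetic shows whether CPU or memory is the limiting resource.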
Tools for designing XenDesktop®
In the previous section, we saw a broad list of the components, technologies, configuration options, and so on that are involved in the process of designing a XenDesktop deployment. Obviously, designing a XenDesktop deployment for large, advanced, and complex business scenarios is a mammoth task, which requires operational knowledge of a broad range of technologies. Understanding this maze of complexity, Citrix constantly helps customers with great learning resources, including handbooks, reviewer guides, blueprints, online eDocs, and training sessions. To ease the life of technical architects and XenDesktop design and deployment consultants, Citrix has developed an online design portal called Project Accelerator, which automates, streamlines, and covers all the broad aspects involved in a XenDesktop deployment.
Project Accelerator
Citrix designed the Project Accelerator web-based design tool, and it is available to customers after they log in. Its design is based on Citrix Consulting best practices for XenDesktop deployment and implementation. It follows the layered FMA and allows you to create a close-to-deployment architecture. It covers all the key decisions and facilitates modifying them and evaluating their impact on the overall architecture. Upon completion of the design, it generates an architectural diagram and a deployment sizing plan. You can define more than one project and customize them in parallel to produce multiple deployment plans. I highly recommend starting your production XenDesktop deployment with the Project Accelerator architecture and sizing design.
Virtual Desktop Handbook
Citrix provides the handbook along with new XenDesktop releases. The handbook covers the latest features of that XenDesktop version and provides detailed information on the design decisions. It provides all the possible options for each of the decisions involved, and these options are evaluated and validated in depth by the Citrix Solutions Lab. They include the leading Citrix Consulting best practices as well. This helps architects and engineers consider the recommended technologies and then evaluate them further for fulfilling the business requirements. The Virtual Desktop Handbook for the latest version of XenDesktop, that is, 7.x, can be found at: http://support.Citrix.com/article/CTX139331.
XenDesktop® Reviewer's Guide
The Reviewer's Guide is also released along with new versions of XenDesktop. It is designed to help businesses quickly install and configure XenDesktop for evaluation. It provides a step-by-step walkthrough of the installation and configuration wizards of XenDesktop, giving IT administrators practical guidance for successfully installing and delivering XenDesktop sessions. The XenDesktop Reviewer's Guide for the latest version of XenDesktop, that is, 7.6, can be found at https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/xendesktop-reviewers-guide.pdf.
Summary
We learned about the decision making that is involved in designing XenDesktop in general, and we also saw the deployment designs of complex environments involving cloud capabilities. We also saw the different tools for designing XenDesktop.
Designing a XenDesktop® Site

Packt
17 Apr 2014
10 min read
The core components of a XenDesktop® Site
Before we get started with designing the XenDesktop Site, we need to understand the core components that go into building it. XenDesktop can support all types of workers, from task workers who run Microsoft Office applications to knowledge users who run business applications, mobile workshifting users, and high-end 3D application users. It scales from small businesses that support five to ten users to large enterprises that support thousands of users. Please follow the steps in the guide in the order in which they are presented; do not skip steps or topics, so that your implementation of XenDesktop is successful. The following is a simple diagram that illustrates the components that make up the XenDesktop architecture. If you have experience with XenDesktop and XenApp, you will be pleased to learn that XenDesktop and XenApp now share management and delivery components to give you a unified management experience. Now that you have a visual of how a simple Site will look when it is completed, let's take a look at each individual component so that you can understand their roles.
Terminology and concepts
We will cover some commonly used terminology and concepts used with XenDesktop.
Server side
It is important to understand the terminology and concepts as they apply to the server side of the XenDesktop architecture, so we will cover them first.
Hypervisor
A hypervisor is a platform that hosts multiple instances of other operating systems. XenDesktop is supported on three hypervisors: Citrix XenServer, VMware ESX, and Microsoft Hyper-V.
Database
In XenDesktop, we use Microsoft SQL Server. The database is sometimes referred to as the data store. Almost everything in XenDesktop is database driven, and the SQL database holds all state information in addition to the session and configuration information. The XenDesktop Site is only available if the database is available. If the database server fails, existing connections to virtual desktops will continue to function until the user either logs off or disconnects from their virtual desktop; new connections cannot be established while the database server is unavailable. There is no caching in XenDesktop 7.x, so Citrix recommends that you implement SQL mirroring and clustering for high availability. The IMA data store is no longer used, and everything is now kept in the SQL database for both session and configuration information. The data collector role is shared evenly across XenDesktop controllers.
Delivery Controller
The Delivery Controller distributes desktops and applications, manages user access, and optimizes connections to applications. Each Site has one or more Delivery Controllers.
Studio
Studio is the management console that enables you to configure and manage your XenDesktop and XenApp deployment, eliminating the need for two separate management consoles to manage the delivery of desktops and applications. Studio provides various wizards to guide you through the process of setting up your environment, creating the workloads that host applications and desktops, and assigning applications and desktops to users. Citrix Studio replaces the Delivery Services Console and the Citrix AppCenter from previous XenDesktop versions.
Director
Director is used to monitor and troubleshoot the XenDesktop deployment.
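Studio itself sits on top of the XenDesktop PowerShell SDK, so the objects it manages can also be inspected from a PowerShell prompt; a hedged sketch, assuming the Citrix snap-ins are installed (for example, on a Delivery Controller):

# Load the Citrix snap-ins and query basic Site components (run where the XenDesktop SDK is installed)
Add-PSSnapin Citrix*
Get-BrokerSite          # Site-level configuration
Get-BrokerController    # Delivery Controllers known to the Site
Get-BrokerCatalog       # machine catalogs defined in Studio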
StoreFront
StoreFront authenticates users to the Site(s) hosting the XenApp and XenDesktop resources and manages the stores of desktops and applications that users access.
Virtual machines
A virtual machine (VM) is a software-implemented version of the hardware. For example, Windows Server 2012 R2 is installed as a virtual machine running in XenServer. In fact, every server and desktop will be installed as a VM, with the exception of the hypervisor, which obviously needs to be installed on the server hardware before we can install any VMs.
The Virtual Desktop Agent
The Virtual Desktop Agent (VDA) has to be installed on the VM to which users will connect. It enables the machines to register with controllers and manages the ICA/HDX connection between the machines and the user devices. The VDA is installed on the desktop operating system VM, such as Windows 7 or Windows 8, which is served to the client. The VDA maintains a heartbeat with the Delivery Controller, updates policies, and registers the machine with the Delivery Controller.
Server OS machines
VMs or physical machines based on the Windows Server operating system are used to deliver applications or host shared desktops for users.
Desktop OS machines
VMs or physical machines based on the Windows desktop operating system are used to deliver personalized desktops, or applications running on desktop operating systems, to users.
Active Directory
Microsoft Active Directory is required for authentication and authorization. Active Directory can also be used by desktops for controller discovery within a Site. Desktops determine which controllers are available by referring to information that controllers publish in Active Directory. Active Directory's built-in security infrastructure is used by desktops to verify that communication comes from authorized controllers in the appropriate Site. Active Directory's security infrastructure also ensures that the data exchanged between desktops and controllers is confidential. Installing XenDesktop or SQL Server on the domain controller is not supported; in fact, it is not even possible.
Desktop
A desktop is the instantiation of a complete Windows operating system, typically Windows 7 or Windows 8. In XenDesktop, we install the Windows 7 or Windows 8 desktop in a VM and add the VDA to it so that it can work with XenDesktop and can be delivered to clients. This will be the end user's virtual desktop.
XenApp®
Citrix XenApp is an on-demand application delivery solution that enables any Windows application to be virtualized, centralized, and managed in the data center and instantly delivered as a service. Prior to XenDesktop 7.x, XenApp delivered applications and XenDesktop delivered desktops. Now, with the release of XenDesktop 7.x, XenApp delivers both desktops and applications.
Edgesight®
Citrix Edgesight is a performance and availability management solution for XenDesktop, XenApp, and endpoint systems. Edgesight monitors applications, devices, sessions, license usage, and the network in real time. Edgesight will be phased out as a product.
FlexCast®
Don't let the term FlexCast confuse you. FlexCast is just a marketing term designed to encompass all of the different architectures that XenDesktop can be deployed in. FlexCast allows you to deliver virtual desktops and applications according to the diverse performance, security, and flexibility requirements of every type of user in your organization.
FlexCast is a way of describing the different ways to deploy XenDesktop. For example, task workers who use low-end thin clients in remote offices will use a different FlexCast model than a group of HDX 3D high-end graphics users. The following list summarizes the FlexCast models you may want to consider, along with their use cases and the Citrix products used; these are also available at http://flexcast.citrix.com:
Local VM (Citrix products used: XenClient): Local VM desktops extend the benefit of centralized, single-instance management to mobile workers who need to use their laptops offline. Changes to the OS, apps, and data are synchronized when they connect to the network.
Streamed VHD (Citrix products used: Receiver, XenApp): Streamed VHDs leverage the local processing power of rich clients while providing centralized, single-image management of the desktop. It is an easy, low-cost way to get started with desktop virtualization (rarely used).
Hosted VDI (Citrix products used: Receiver, XenDesktop, Personal vDisk): Hosted VDI desktops offer a personalized Windows desktop experience, typically required by office workers, which can be delivered to any device. This combines central management of the desktop with complete user personalization. The user's desktop runs in a virtual machine. Users get the same high-definition experience that they had with a local PC, but with centralized management. The VDI approach provides the best combination of security and customization. Personalization is stored in the Personal vDisk. VDI desktops can be accessed from any device, such as thin clients, laptops, PCs, and mobile devices (most common).
Hosted shared (Citrix products used: Receiver, XenDesktop): Hosted shared desktops provide a locked-down, streamlined, and standardized environment with a core set of applications. This is ideal for task workers where personalization is not required. All users share a single desktop image. These desktops cannot be modified, except by the IT personnel. It is not appropriate for mobile workers or workers who need personalization, but it is appropriate for task workers who use thin clients.
On-demand applications (Citrix products used: Receiver, XenApp and XenDesktop App Edition): This allows any Windows application to be centralized and managed in the data center, hosted on either multiuser terminal servers or virtual machines, and delivered as a service to physical and virtual desktops.
Storage
All of the XenDesktop components use storage. Storage is managed by the hypervisor, such as Citrix XenServer. There is a personalization feature to store personal data from virtual desktops called the Personal vDisk (PvD).
The client side
For a complete end-to-end solution, an important part of the architecture that needs to be mentioned is the end user device, or client. There isn't much to consider here; however, the client devices can range from high-powered Windows desktops to low-end thin clients and mobile devices.
Receiver
Citrix Receiver is a universal software client that provides secure, high-performance delivery of virtual desktops and applications to any device, anywhere. Citrix Receiver is platform and device agnostic, meaning that there is a Receiver for just about every device out there, from Windows to Linux-based thin clients and to mobile devices, including iOS and Android. In fact, some thin-client vendors have worked closely with the Citrix Ready program to embed the Citrix Receiver code directly into their homegrown operating systems for seamless operation with XenDesktop.
Citrix Receiver must be installed on the end user's client device in order to receive the desktop and applications from XenDesktop. It must also be installed on the virtual desktop in order to receive applications from the application servers (XenApp or XenDesktop), and this is taken care of automatically when you install the VDA on the virtual desktop machine.
System requirements
Each component has its own requirements in terms of operating system and licensing. You will need to build these operating systems on VMs before installing each component. For help in creating VMs, look at the relevant hypervisor documentation. We have used Citrix XenServer as the hypervisor.
Receiver
Citrix Receiver is a universal software client that provides secure, high-performance delivery of virtual desktops and applications. Receiver is available for Windows, Mac, mobile devices such as iOS and Android, HTML5, Chromebook, and Java 10.1. You will need to install Citrix Receiver twice for a complete end-to-end connection to be made: once on the end user's client device (there are many supported devices, including iOS and Android), and once on the Windows virtual desktop that you will serve to your users. The latter is done automatically when you install the Virtual Desktop Agent (VDA) on the Windows virtual desktop. You need this Receiver to access the applications that run on a separate application server (XenApp or XenDesktop).
StoreFront 2.1
StoreFront replaces the Web Interface. StoreFront 2.1 can also be used with XenApp and XenDesktop 5.5 and above. The operating systems that are supported are as follows:
Windows Server 2012 R2, Standard or Datacenter
Windows Server 2012, Standard or Datacenter
Windows Server 2008 R2 SP1, Standard or Enterprise
System requirements are as follows:
RAM: 2 GB
Microsoft Internet Information Services (IIS)
Microsoft Internet Information Services Manager
.NET Framework 4.0
Firewall ports – external: As StoreFront is the gateway to the Site, you will need to open specific ports on the firewall to allow connections in: ports 80 (HTTP) and 443 (HTTPS).
Firewall ports – internal: By default, StoreFront communicates with the internal XenDesktop Delivery Controller servers using the following ports: 80 (for StoreFront servers) and 8080 (for HTML5 clients). You can specify different ports.
For more information on StoreFront and how to plug it into the architecture, refer to http://support.citrix.com/article/CTX136547.
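To sanity-check the firewall requirements listed above, the ports can be tested and opened with built-in PowerShell cmdlets on Windows Server 2012/2012 R2; a minimal sketch with placeholder server names:

# From a client network, verify that the StoreFront server answers on HTTPS (hostname is a placeholder)
Test-NetConnection -ComputerName storefront01.corp.local -Port 443
# On the StoreFront server itself, allow the inbound HTTP/HTTPS ports
New-NetFirewallRule -DisplayName "StoreFront HTTP/HTTPS" -Direction Inbound -Protocol TCP -LocalPort 80, 443 -Action Allow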

Storage Scalability

Packt
11 Aug 2015
17 min read
In this article by Victor Wu and Eagle Huang, authors of the book Mastering VMware vSphere Storage, we will learn that SAN storage is a key component of a VMware vSphere environment. We can choose different vendors and types of SAN storage to deploy in a VMware vSphere environment, and the advanced settings of each storage type, for example FC or iSCSI SAN storage, can affect the performance of the virtual machines. Each type is configured differently in a VMware vSphere environment: host connectivity for Fibre Channel storage is provided by a Host Bus Adapter (HBA), while host connectivity for iSCSI storage is provided over the TCP/IP networking protocol. We first need to understand the storage concepts; then we can optimize the performance of storage in a VMware vSphere environment. In this article, you will learn these topics:
What the vSphere storage APIs for Array Integration (VAAI) and Storage Awareness (VASA) are
The virtual machine storage profile
VMware vSphere Storage DRS and VMware vSphere Storage I/O Control
(For more resources related to this topic, see here.)
vSphere storage APIs for array integration and storage awareness
VMware vMotion is a key feature of vSphere hosts. An ESXi host cannot provide the vMotion feature without shared SAN storage. SAN storage is a key component in a VMware vSphere environment. In large-scale virtualization environments, many virtual machines are stored on SAN storage. When a VMware administrator clones a virtual machine or migrates one to another ESXi host with vMotion, the operation consumes resources on that ESXi host and on the SAN storage. vSphere 4.1 and later versions support VAAI. The vSphere storage APIs are used by storage vendors to provide hardware acceleration or to offload vSphere I/O between storage devices. These APIs can reduce the resource overhead on ESXi hosts and improve performance for ESXi host operations, for example, vMotion, virtual machine cloning, creating a virtual machine, and so on.
VAAI has two APIs: the hardware acceleration API and the array thin provisioning API. The hardware acceleration API integrates with VMware vSphere to offload storage operations to the array and reduce the CPU load on the ESXi host. The hardware acceleration API provides the following features for block and NAS arrays:
Block – Full copy: Offloads block clone or copy operations to the array.
Block – Block zeroing: Also called write same. When you provision an eagerzeroedthick VMDK, the SCSI command is issued to write zeroes to the disks.
Block – Atomic Test & Set (ATS): A locking mechanism that prevents another ESXi host from updating the same VMFS metadata.
NAS – Full file clone: Similar to Extended Copy (XCOPY) hardware acceleration.
NAS – Extended statistics: Enables reporting of space usage on the NAS data store.
NAS – Reserved space: Allocates the full space of a virtual disk in thick format.
The array thin provisioning API is used to monitor the ESXi data store space on the storage arrays. It helps prevent the disks from running out of space and reclaims disk space. For example, if the storage presents a 1 x 3 TB LUN to the ESXi host but the array can only provide 2 TB of physical data storage space, the ESXi host still sees 3 TB. The API streamlines the monitoring of the LUN configuration space in order to avoid running out of physical space. When vSphere administrators delete or remove files from a data store on a thin provisioned LUN, the storage can reclaim the free space at the block level.
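Whether a particular LUN actually supports these offloads can be verified from the ESXi shell. The following is a minimal sketch using esxcli commands available in vSphere 5.x; the naa device identifier is a placeholder for one of your own devices.
To list the VAAI (hardware acceleration) status of all attached devices, or of a single device:
# esxcli storage core device vaai status get
# esxcli storage core device vaai status get -d naa.600601601234567890
To confirm that the hardware acceleration primitives are enabled on the host (a value of 1 means enabled):
# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
# esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking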
In vSphere 4.1 or later, it can support VAAI features. In vSphere 5.5, you can reclaim the space on thin provisioned LUN using esxcli. VMware VASA is a piece of software that allows the storage vendor to provide information about their storage array to VMware vCenter Server. The information includes storage capability, the state of physical storage devices, and so on. vCenter Server collects this information from the storage array using a software component called VASA provider, which is provided by the storage array vendor. A VMware administrator can view the information in VMware vSphere Client / VMware vSphere Web Client. The following diagram shows the architecture of VASA with vCenter Server. For example, the VMware administrator requests to create a 1 x data store in VMware ESXi Server. It has three main components: the storage array, the storage provider and VMware vCenter Server. The following is the procedure to add the storage provider to vCenter Server: Log in to vCenter by vSphere Client. Go to Home | Storage Providers. Click on the Add button. Input information about the storage vendor name, URL, and credentials. Virtual machine storage profile The storage provider can help the vSphere administrator know the state of the physical storage devices and the capabilities on which their virtual machines are located. It also helps choose the correct storage in terms of performance and space by using virtual machine storage policies. A virtual machine storage policy helps you ensure that a virtual machine guarantees a specified level of performance or capacity of storage, for example, the SSD/SAS/NL-SAS data store, spindle I/O, and redundancy. Before you define a storage policy, you need to specify the storage requirement for your application that runs on the virtual machine. It has two types of storage requirement, which is storage-vendor-specific storage capability and user-defined storage capability. Storage-vendor-specific storage capability comes from the storage array. The storage vendor provider informs vCenter Server that it can guarantee the use of storage features by using storage-vendor-specific storage capability. vCenter Server assigns vendor-specific storage capability to each ESXi data store. User-defined storage capability is the one that you can define and assign storage profile to each ESXi datastore. In vSphere 5.1/5.5, the name of the storage policy is VM storage profile. Virtual machine storage policies can include one or more storage capabilities and assign to one or more VM. The virtual machine can be checked for storage compliance if it is placed on compliant storage. When you migrate, create, or clone a virtual machine, you can select the storage policy and apply it to that machine. The following procedure shows how to create a storage policy and apply it to a virtual machine in vSphere 5.1 using user-defined storage capability: The vSphere ESXi host requires the license edition of Enterprise Plus to enable the VM storage profile feature. The following procedure is adding the storage profile into vCenter Server: Log in to vCenter Server using vSphere Client. Click on the Home button in the top bar, and choose the VM Storage Profiles button under Management. Click on the Manage Storage Capabilities button to create user-defined storage capability. Click on the Add button to create the name of the storage capacity, for example, SSD Storage, SAS Storage, or NL-SAS Storage. Then click on the Close button. 
Click on the Create VM Storage Profile button to create the storage policy. Input the name of the VM storage profile, as shown in the following screenshot, and then click on the Next button to select the user-defined storage capability, which is defined in step 4. Click on the Finish button.
Assign the user-defined storage capability to your specified ESXi data store. Right-click on the data store that you plan to assign the user-defined storage capability to. This capability is defined in step 4.
After creating the VM storage profile, click on the Enable VM Storage Profiles button. Then click on the Enable button to enable the profiles. The following screenshot shows Enable VM Storage Profiles:
After enabling the VM storage profile, you can see VM Storage Profile Status as Enabled and Licensing Status as Licensed, as shown in this screenshot:
We have successfully created the VM storage profile. Now we have to associate the VM storage profile with a virtual machine. Right-click on a virtual machine that you plan to apply the VM storage profile to, choose VM Storage Profile, and then choose Manage Profiles. From the drop-down menu of VM Storage Profile, select your profile. Then you can click on the Propagate to disks button to associate all virtual disks, or decide which virtual disks you want to associate with that profile by setting them manually. Click on OK.
Finally, you need to check the compliance of the VM storage profile on this virtual machine. Click on the Home button in the top bar. Then choose the VM Storage Profiles button under Management. Go to Virtual Machines and click on the Check Compliance Now button. The Compliance Status will display Compliant after compliance checking, as follows:
Pluggable Storage Architecture
Pluggable Storage Architecture (PSA) exists in the SCSI middle layer of the VMkernel storage stack. PSA is used to allow third-party storage vendors to use their own failover and load balancing techniques for their specific storage arrays. A VMware ESXi host uses its multipathing plugin to control the ownership of the device path and LUN. The VMware default Multipathing Plugin (MPP) is called the VMware Native Multipathing Plugin (NMP), which includes two subplugins as components: the Storage Array Type Plugin (SATP) and the Path Selection Plugin (PSP). SATP is used to handle path failover for a storage array, and PSP is used to issue an I/O request to a storage array. The following diagram shows the architecture of PSA.
The operation tasks of PSA and NMP in the ESXi host are as follows:
PSA: Discovers the physical paths, handles I/O requests to the physical HBA adapters and logical devices, and uses predefined claim rules to control storage devices.
NMP: Manages the physical paths; creates, registers, and deregisters logical devices; and selects an optimal physical path for each request.
The following is an example of the operation of PSA in a VMkernel storage stack:
The virtual machine sends out an I/O request to a logical device that is managed by the VMware NMP.
The NMP calls the PSP that is assigned to this logical device.
The PSP selects a suitable physical path to send the I/O request.
When the I/O operation is completed successfully, the NMP reports that the I/O operation is complete. If the I/O operation reports an error, the NMP calls the SATP.
The SATP fails over to a new active path.
The PSP selects a new active path from all available paths and continues the I/O operation.
The following diagram shows the operation of PSA. VMware vSphere provides three options for the path selection policy.
These are Most Recently Used (MRU), Fixed, and Round Robin (RR). The following table lists the advantages and disadvantages of each path: Path selection Description Advantage Disadvantage MRU The ESXi host selects the first preferred path at system boot time. If this path becomes unavailable, the ESXi host changes to the other active path. You can select your preferred path manually in the ESXi host. The ESXi host does not revert to the original path when that l path becomes available again. Fixed You can select the preferred path manually. The ESXi host can revert to the original path when the preferred path becomes available again. If the ESXi host cannot select the preferred path, it selects an available preferred path randomly. RR The ESXi host uses automatic path selection. The storage I/O across all available paths and enable load balancing across all paths. The storage is required to support ALUA mode. You cannot know which path is preferred because the storage I/O across all available paths. The following is the procedure of changing the path selection policy in an ESXi host: Log in to vCenter Server using vSphere Client. Go to the configuration of your selected ESXi host, choose the data store that you want to configure, and click on the Properties… button. Click on the Manage Paths… button. Select the drop-down menu and click on the Change button. If you plan to deploy a third-party MPP on your ESXi host, you need to follow up the storage vendor's instructions for the installation, for example, EMC PowerPath/VE for VMware that it is a piece of path management software for VMware's vSphere server and Microsoft's Hyper-V server. It also can provide load balancing and path failover features. VMware vSphere Storage DRS VMware vSphere Storage DRS (SDRS) is the placement of virtual machines in an ESX's data store cluster. According to storage capacity and I/O latency, it is used by VMware storage vMotion to migrate the virtual machine to keep the ESX's data store in a balanced status that is used to aggregate storage resources, and enable the placement of the virtual disk (VMDK) of virtual machine and load balancing of existing workloads. What is a data store cluster? It is a collection of ESXi's data stores grouped together. The data store cluster is enabled for vSphere SDRS. SDRS can work in two modes: manual mode and fully automated mode. If you enable SDRS in your environment, when the vSphere administrator creates or migrates a virtual machine, SDRS places all the files (VMDK) of this virtual machine in the same data store or different a data store in the cluster, according to the SDRS affinity rules or anti-affinity rules. The VMware ESXi host cluster has two key features: VMware vSphere High Availability (HA) and VMware vSphere Distributed Resource Scheduler (DRS). SDRS is different from the host cluster DRS. The latter is used to balance the virtual machine across the ESXi host based on the memory and CPU usage. SDRS is used to balance the virtual machine across the SAN storage (ESX's data store) based on the storage capacity and IOPS. The following table lists the difference between SDRS affinity rules and anti-affinity rules: Name of SDRS rules Description VMDK affinity rules This is the default SDRS rule for all virtual machines. It keeps each virtual machine's VMDKs together on the same ESXi data store. VMDK anti-affinity rules Keep each virtual machine's VMDKs on different ESXi data stores. 
You can apply this rule into all virtual machine's VMDKs or to dedicated virtual machine's VMDKs. VM anti-affinity rules Keep the virtual machine on different ESXi data stores. This rule is similar to the ESX DRS anti-affinity rules. The following is the procedure to create a storage DRS in vSphere 5: Log in to vCenter Server using vSphere Client. Go to home and click on the Datastores and Datastore Clusters button. Right-click on the data center and choose New Datastore Cluster. Input the name of the SDRS and then click on the Next button. Choose Storage DRS mode, Manual Mode and Fully Automated Mode. Manual Mode: According to the placement and migration recommendation, the placement and migration of the virtual machine are executed manually by the user.Fully Automated Mode: Based on the runtime rules, the placement of the virtual machine is executed automatically. Set up SDRS Runtime Rules. Then click on the Next button. Enable I/O metric for SDRS recommendations is used to enable I/O load balancing. Utilized Space is the percentage of consumed space allowed before the storage DRS executes an action. I/O Latency is the percentage of consumed latency allowed before the storage DRS executes an action. This setting can execute only if the Enable I/O metric for SDRS recommendations checkbox is selected. No recommendations until utilization difference between source and destination is is used to configure the space utilization difference threshold. I/O imbalance threshold is used to define the aggressive of IOPs load balancing. This setting can execute only if the Enable I/O metric for SDRS recommendations checkbox is selected. Select the ESXi host that is required to create SDRS. Then click on the Next button. Select the data store that is required to join the data store cluster, and click on the Next button to complete. After creating SDRS, go to the vSphere Storage DRS panel on the Summary tab of the data store cluster. You can see that Storage DRS is Enabled. On the Storage DRS tab on the data store cluster, it displays the recommendation, placement, and reasons. Click on the Apply Recommendations button if you want to apply the recommendations. Click on the Run Storage DRS button if you want to refresh the recommendations. VMware vSphere Storage I/O Control What is VMware vSphere Storage I/O Control? It is used to control in order to share and limit the storage of I/O resources, for example, the IOPS. You can control the number of storage IOPs allocated to the virtual machine. If a certain virtual machine is required to get more storage I/O resources, vSphere Storage I/O Control can ensure that that virtual machine can get more storage I/O than other virtual machines. The following table shows example of the difference between vSphere Storage I/O Control enabled and without vSphere Storage I/O Control: In this diagram, the VMware ESXi Host Cluster does not have vSphere Storage I/O Control. VM 2 and VM 5 need to get more IOPs, but they can allocate only a small amount of I/O resources. On the contrary, VM 1 and VM 3 can allocate a large amount of I/O resources. Actually, both VMs are required to allocate a small amount of IOPs. In this case, it wastes and overprovisions the storage resources. In the diagram to the left, vSphere Storage I/O Control is enabled in the ESXi Host Cluster. VM 2 and VM 5 are required to get more IOPs. They can allocate a large amount of I/O resources after storage I/O control is enabled. 
VM 1, VM 3, and VM 4 only require a small amount of I/O resources, and these three VMs are now allocated a small amount of IOPS. Enabling Storage I/O Control therefore helps reduce waste and overprovisioning of storage resources. When you enable VMware vSphere Storage DRS, vSphere Storage I/O Control is automatically enabled on the data stores in the data store cluster.
The following is the procedure to enable vSphere Storage I/O Control on an ESXi data store and set up storage I/O shares and limits using vSphere Client 5:
Log in to vCenter Server using vSphere Client.
Go to the Configuration tab of the ESXi host, select the data store, and then click on the Properties… button.
Select Enabled under Storage I/O Control, and click on the Close button.
After Storage I/O Control is enabled, you can set up the storage I/O shares and limits on the virtual machine. Right-click on the virtual machine and select Edit Settings.
Click on the Resources tab in the virtual machine properties box, and select Disk.
You can individually set each virtual disk's Shares and Limit fields. By default, all virtual machine shares are set to Normal with unlimited IOPS.
Summary
In this article, you learned what VAAI and VASA are, how to configure a storage profile in vCenter Server, and how to assign it to an ESXi data store. We also covered the benefits of vSphere Storage I/O Control and vSphere Storage DRS, and how to troubleshoot a storage performance problem on a vSphere host and find its root cause.
Resources for Article: Further resources on this subject: Essentials of VMware vSphere [Article] Introduction to vSphere Distributed switches [Article] Network Virtualization and vSphere [Article]

Network Access Control Lists

Packt
27 Nov 2014
6 min read
In this article by Ryan Boud, author of Hyper-V Network Virtualization Cookbook, we will learn to lock down a VM for security access. (For more resources related to this topic, see here.) Locking down a VM for security access This article will show you how to apply ACLs to VMs to protect them from unauthorized access. Getting ready You will need to start two VMs in the Tenant A VM Network: in this case, Tenant A – VM 10, to test the gateway and as such should have IIS installed) and Tenant A – VM 11. How to do it... Perform the following steps to lock down a VM: In the VMM console, click on the Home tab in the ribbon bar and click on the PowerShell button. This will launch PowerShell with the VMM module already loaded and the console connected to the current VMM instance. To obtain the Virtual Subnet IDs for all subnets in the Tenant A VM Network, enter the following PowerShell: $VMNetworkName = "Tenant A" $VMNetwork = Get-SCVMNetwork | Where-Object -Property Name -EQ $VMNetworkName Get-SCVMSubnet -VMNetwork $VMNetwork | Select-Object VMNetwork,Name,SubnetVlans,VMSubnetID You will be presented with the list of subnets and the VMSubnetID for each. The VMSubnetID will used later in this article; in this case, the VMSubnetID is 4490741, as shown in the following screenshot: Your VMSubnet ID value may be different to the one obtained here; this is normal behavior. In the PowerShell Console, run the following PowerShell to get the IP addresses of Tenant A – VM 10 and Tenant A – VM 11: $VMs = @() $VMs += Get-SCVirtualMachine -Name "Tenant A - VM 10" $VMs += Get-SCVirtualMachine -Name "Tenant A - VM 11" ForEach($VM in $VMs){    Write-Output "$($VM.Name): $($VM.VirtualNetworkAdapters.IPv4Addresses)"    Write-Output "Host name: $($VM.HostName)" } You will be presented with the IPv4 addresses for the two VMs as shown in the following screenshot: Please leave this PowerShell console open. Your IP addresses and host names may differ from those shown here; this is normal behavior. In the VMM console, open the VMs and Services workspace and navigate to All Hosts | Hosts | hypvclus01. Right-click on Tenant A – VM 11, navigate to Connect or View, and then click on Connect via Console. Log in to the VM via the Remote Console. Open Internet Explorer and go to the URL http://10.0.0.14, where 10.0.0.14 is the IP address of Tenant A – VM 10, as we discussed in step 4. You will be greeted with default IIS page. This shows that there are currently no ACLs preventing Tenant A – VM 11 accessing Tenant A – VM 10 within Hyper-V or within the Windows Firewall. Open a PowerShell console on Tenant A – VM 11 and enter the following command: Ping 10.0.0.14 –t Here, 10.0.0.14 is the IP address of Tenant A – VM 10. This will run a continuous ping against Tenant A – VM10. In the PowerShell console left open in Step 4, enter the following PowerShell: Invoke-Command -ComputerName HYPVCH1.ad.demo.com - ScriptBlock{    Add-VMNetworkAdapterExtendedAcl -Action Deny -Direction      Inbound -VMName "Tenant A - VM 10" -Weight 1 -        IsolationID 4490741 } Here, HYPVCH1.ad.demo.com is the name of the host where Tenant A – VM 10 is running, as obtained in step 4 and the Isolation ID needs to be VMSubnetID as obtained in step 2. Please leave this PowerShell console open. When adding base rules such as a Deny All, it is suggested to apply a weight of 1 to allow other rules to override it if appropriate. Return to the PowerShell console left open on Tenant A – VM 11 in step 10. 
You will see that Tenant A – VM 10 has stopped responding to pings. This has created a Hyper-V Port ACL that will deny all inbound traffic to Tenant A – VM10. In the same PowerShell console, enter the following PowerShell: Test-NetConnection -CommonTCPPort HTTP -ComputerName 10.0.0.14 -InformationLevel Detailed Here, 10.0.0.14 is the IP address of Tenant A – VM 10. This shows that you cannot access the IIS website on Tenant A – VM 10. Return to the PowerShell console left open on the VMM console in step 11 and enter the following PowerShell cmdlets: Invoke-Command -ComputerName HYPVCH1.ad.demo.com - ScriptBlock{    Add-VMNetworkAdapterExtendedAcl -Action Allow -      Direction Inbound -VMName "Tenant A - VM 10" -Weight        10 -IsolationID 4490741 -LocalPort 80 } Here, HYPVCH1.ad.demo.com is the name of the host where Tenant A – VM 10 is running, as obtained in step 4, and the Isolation ID needs to be set to VMSubnetID as obtained in step 2. Please leave this PowerShell console open. When adding rules it is suggested to use weight increments of 10 to allow other rules to be inserted between rules if necessary. On Tenant A – VM 11, repeat step 13. You will see that TCPTestSucceeded has changed to True. Return to the PowerShell console left open on the VMM console in step 14, and enter the following PowerShell cmdlets: Invoke-Command -ComputerName HYPVCH1.ad.demo.com - ScriptBlock{    Add-VMNetworkAdapterExtendedAcl -Action Deny -Direction      Outbound -VMName "Tenant A - VM 10" -Weight 1 -        IsolationID 4490741 } Here, HYPVCH1.ad.demo.com is the name of the host where Tenant A – VM 10 is running, as obtained in step 4, and the Isolation ID needs to be set to VMSubnetID as obtained in step 2. Please leave this PowerShell console open. When adding base rules such as a Deny All, it is suggested to apply a weight of 1 to allow other rules to override it if appropriate. On Tenant A – VM 11 repeat step 14. You will see that TCPTestSucceeded has changed to False. This is because all outbound connections have been denied. Return to the PowerShell console left open on the VMM console in step 17, and enter the following PowerShell cmdlets: Invoke-Command -ComputerName HYPVCH1.ad.demo.com - ScriptBlock{    Remove-VMNetworkAdapterExtendedAcl -Direction Inbound -      VMName "Tenant A - VM 10" -Weight 10 } This removes the inbound rule for port 80. In the same PowerShell console enter the following cmdlets: Invoke-Command -ComputerName HYPVCH1.ad.demo.com - ScriptBlock{    Add-VMNetworkAdapterExtendedAcl -Action Allow -      Direction Inbound -VMName "Tenant A - VM 10" -Weight        10 -IsolationID 4490741 -LocalPort 80 -Stateful          $True -Protocol TCP } This adds a stateful ACL rule; this ensures that the switch dynamically creates an outbound rule to allow the traffic to return to the requestor. On Tenant A – VM 11 repeat step 14. You will see that the TCPTestSucceeded has changed to True. This is because the stateful ACL is now in place. How it works... Extended ACLs are applied as traffic ingresses and egresses the VM into and out of the Hyper-V switch. As the ACLs are VM-specific, they are stored in the VM's configuration file. This ensures that the ACLs are moved with the VM ensuring continuity of ACL. For the complete range of options, it is advisable to review the TechNet article at http://technet.microsoft.com/en-us/library/dn464289.aspx. Summary In this article we learned how to lock down a VM for security access. 
Resources for Article: Further resources on this subject: High Availability Scenarios [Article] Performance Testing and Load Balancing [Article] Your first step towards Hyper-V Replica [Article]

Storage Configurations

Packt
07 Sep 2015
21 min read
In this article by Wasim Ahmed, author of the book Proxmox Cookbook, we will cover topics such as local storage, shared storage, Ceph storage, and a recipe which shows you how to configure the Ceph RBD storage. (For more resources related to this topic, see here.) A storage is where virtual disk images of virtual machines reside. There are many different types of storage systems with many different features, performances, and use case scenarios. Whether it is a local storage configured with direct attached disks or a shared storage with hundreds of disks, the main responsibility of a storage is to hold virtual disk images, templates, backups, and so on. Proxmox supports different types of storages, such as NFS, Ceph, GlusterFS, and ZFS. Different storage types can hold different types of data. For example, a local storage can hold any type of data, such as disk images, ISO/container templates, backup files and so on. A Ceph storage, on the other hand, can only hold a .raw format disk image. In order to provide the right type of storage for the right scenario, it is vital to have a proper understanding of different types of storages. The full details of each storage is beyond the scope of this article, but we will look at how to connect them to Proxmox and maintain a storage system for VMs. Storages can be configured into two main categories: Local storage Shared storage Local storage Any storage that resides in the node itself by using directly attached disks is known as a local storage. This type of storage has no redundancy other than a RAID controller that manages an array. If the node itself fails, the storage becomes completely inaccessible. The live migration of a VM is impossible when VMs are stored on a local storage because during migration, the virtual disk of the VM has to be copied entirely to another node. A VM can only be live-migrated when there are several Proxmox nodes in a cluster and the virtual disk is stored on a shared storage accessed by all the nodes in the cluster. Shared storage A shared storage is one that is available to all the nodes in a cluster through some form of network media. In a virtual environment with shared storage, the actual virtual disk of the VM may be stored on a shared storage, while the VM actually runs on another Proxmox host node. With shared storage, the live migration of a VM becomes possible without powering down the VM. Multiple Proxmox nodes can share one shared storage, and VMs can be moved around since the virtual disk is stored on different shared storages. Usually, a few dedicated nodes are used to configure a shared storage with their own resources apart from sharing the resources of a Proxmox node, which could be used to host VMs. In recent releases, Proxmox has added some new storage plugins that allow users to take advantage of some great storage systems and integrating them with the Proxmox environment. Most of the storage configurations can be performed through the Proxmox GUI. Ceph storage Ceph is a powerful distributed storage system, which provides RADOS Block Device (RBD) object storage, Ceph filesystem (CephFS), and Ceph Object Storage. Ceph is built with a very high-level of reliability, scalability, and performance in mind. A Ceph cluster can be expanded to several petabytes without compromising data integrity, and can be configured using commodity hardware. Any data written to the storage gets replicated across a Ceph cluster. Ceph was originally designed with big data in mind. 
Unlike other types of storages, the bigger a Ceph cluster becomes, the higher the performance. However, it can also be used in small environments just as easily for data redundancy. A lower performance can be mitigated using SSD to store Ceph journals. Refer to the OSD Journal subsection in this section for information on journals. The built-in self-healing features of Ceph provide unprecedented resilience without a single point of failure. In a multinode Ceph cluster, the storage can tolerate not just hard drive failure, but also an entire node failure without losing data. Currently, only an RBD block device is supported in Proxmox. Ceph comprises a few components that are crucial for you to understand in order to configure and operate the storage. The following components are what Ceph is made of: Monitor daemon (MON) Object Storage Daemon (OSD) OSD Journal Metadata Server (MSD) Controlled Replication Under Scalable Hashing map (CRUSH map) Placement Group (PG) Pool MON Monitor daemons form quorums for a Ceph distributed cluster. There must be a minimum of three monitor daemons configured on separate nodes for each cluster. Monitor daemons can also be configured as virtual machines instead of using physical nodes. Monitors require a very small amount of resources to function, so allocated resources can be very small. A monitor can be set up through the Proxmox GUI after the initial cluster creation. OSD Object Storage Daemons (OSDs) are responsible for the storage and retrieval of actual cluster data. Usually, each physical storage device, such as HDD or SSD, is configured as a single OSD. Although several OSDs can be configured on a single physical disc, it is not recommended for any production environment at all. Each OSD requires a journal device where data first gets written and later gets transferred to an actual OSD. By storing journals on fast-performing SSDs, we can increase the Ceph I/O performance significantly. Thanks to the Ceph architecture, as more and more OSDs are added into the cluster, the I/O performance also increases. An SSD journal works very well on small clusters with about eight OSDs per node. OSDs can be set up through the Proxmox GUI after the initial MON creation. OSD Journal Every single piece of data that is destined to be a Ceph OSD first gets written in a journal. A journal allows OSD daemons to write smaller chunks to allow the actual drives to commit writes that give more time. In simpler terms, all data gets written to journals first, then the journal filesystem sends data to an actual drive for permanent writes. So, if the journal is kept on a fast-performing drive, such as SSD, incoming data will be written at a much higher speed, while behind the scenes, slower performing SATA drives can commit the writes at a slower speed. Journals on SSD can really improve the performance of a Ceph cluster, especially if the cluster is small, with only a few terabytes of data. It should also be noted that if there is a journal failure, it will take down all the OSDs that the journal is kept on the journal drive. In some environments, it may be necessary to put two SSDs to mirror RAIDs and use them as journaling. In a large environment with more than 12 OSDs per node, performance can actually be gained by collocating a journal on the same OSD drive instead of using SSD for a journal. MDS The Metadata Server (MDS) daemon is responsible for providing the Ceph filesystem (CephFS) in a Ceph distributed storage system. 
MDS can be configured on separate nodes or coexist with already configured monitor nodes or virtual machines. Although CephFS has come a long way, it is still not fully recommended to use in a production environment. It is worth mentioning here that there are many virtual environments actively running MDS and CephFS without any issues. Currently, it is not recommended to configure more than two MDSs in a Ceph cluster. CephFS is not currently supported by a Proxmox storage plugin. However, it can be configured as a local mount and then connected to a Proxmox cluster through the Directory storage. MDS cannot be set up through the Proxmox GUI as of version 3.4. CRUSH map A CRUSH map is the heart of the Ceph distributed storage. The algorithm for storing and retrieving user data in Ceph clusters is laid out in the CRUSH map. CRUSH allows a Ceph client to directly access an OSD. This eliminates a single point of failure and any physical limitations of scalability since there are no centralized servers or controllers to manage data in and out. Throughout Ceph clusters, CRUSH maintains a map of all MONs and OSDs. CRUSH determines how data should be chunked and replicated among OSDs spread across several local nodes or even nodes located remotely. A default CRUSH map is created on a freshly installed Ceph cluster. This can be further customized based on user requirements. For smaller Ceph clusters, this map should work just fine. However, when Ceph is deployed with very big data in mind, this map should be customized. A customized map will allow better control of a massive Ceph cluster. To operate Ceph clusters of any size successfully, a clear understanding of the CRUSH map is mandatory. For more details on the Ceph CRUSH map, visit http://ceph.com/docs/master/rados/operations/crush-map/ and http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map. As of Proxmox VE 3.4, we cannot customize the CRUSH map throughout the Proxmox GUI. It can only be viewed through a GUI and edited through a CLI. PG In a Ceph storage, data objects are aggregated in groups determined by CRUSH algorithms. This is known as a Placement Group (PG) since CRUSH places this group in various OSDs depending on the replication level set in the CRUSH map and the number of OSDs and nodes. By tracking a group of objects instead of the object itself, a massive amount of hardware resources can be saved. It would be impossible to track millions of individual objects in a cluster. The following diagram shows how objects are aggregated in groups and how PG relates to OSD: To balance available hardware resources, it is necessary to assign the right number of PGs. The number of PGs should vary depending on the number of OSDs in a cluster. The following is a table of PG suggestions made by Ceph developers: Number of OSDs Number of PGs Less than 5 OSDs 128 Between 5-10 OSDs 512 Between 10-50 OSDs 4096 Selecting the proper number of PGs is crucial since each PG will consume node resources. Too many PGs for the wrong number of OSDs will actually penalize the resource usage of an OSD node, while very few assigned PGs in a large cluster will put data at risk. A rule of thumb is to start with the lowest number of PGs possible, then increase them as the number of OSDs increases. For details on Placement Groups, visit http://ceph.com/docs/master/rados/operations/placement-groups/. 
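The PG count of an existing pool can be inspected from any MON node, and a common rule of thumb, which the calculator mentioned next also follows, is roughly (number of OSDs x 100) / replica count, rounded up to the nearest power of two. The commands below are a sketch; rbd is the default pool and the numbers are only an example (remember that pg_num can only ever be increased, never decreased).
To show the current PG count of the default rbd pool:
# ceph osd pool get rbd pg_num
# ceph osd pool get rbd pgp_num
For example, 6 OSDs with 2 replicas gives (6 x 100) / 2 = 300, which rounds up to 512 PGs:
# ceph osd pool set rbd pg_num 512
# ceph osd pool set rbd pgp_num 512
To check the overall placement group state afterwards:
# ceph pg stat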
There's a great PG calculator created by Ceph developers to calculate the recommended number of PGs for various sizes of Ceph clusters at http://ceph.com/pgcalc/.
Pools
Pools in Ceph are like partitions on a hard drive. We can create multiple pools on a Ceph cluster to separate stored data. For example, a pool named accounting can hold all the accounting department data, while another pool can store the human resources data of a company. When creating a pool, assigning the number of PGs is necessary. During the initial Ceph configuration, three default pools are created: data, metadata, and rbd. Deleting a pool will delete all stored objects permanently.
For details on Ceph and its components, visit http://ceph.com/docs/master/.
The following diagram shows a basic Proxmox+Ceph cluster: The preceding diagram shows four Proxmox nodes, three Monitor nodes, three OSD nodes, and two MDS nodes comprising a Proxmox+Ceph cluster. Note that Ceph is on a different network than the Proxmox public network. Depending on the configured replication number, each incoming data object needs to be written more than once, which causes high bandwidth usage. By separating Ceph onto a dedicated network, we can ensure that the Ceph network can fully utilize the bandwidth. On advanced clusters, a third network is created only between Ceph nodes for cluster replication, improving network performance even further.
As of Proxmox VE 3.4, the same node can be used for both Proxmox and Ceph. This provides a great way to manage all the nodes from the same Proxmox GUI. It is not advisable to put Proxmox VMs on a node that is also configured as Ceph. During day-to-day operations, Ceph nodes do not consume large amounts of resources, such as CPU or memory. However, when Ceph goes into rebalancing mode due to an OSD or node failure, a large amount of data replication occurs, which takes up a lot of resources. Performance will degrade significantly if resources are shared by both VMs and Ceph. Ceph RBD storage can only store .raw virtual disk image files.
Ceph itself does not come with a GUI for management, so having the option to manage Ceph nodes through the Proxmox GUI makes administrative tasks much easier. Refer to the Monitoring the Ceph storage subsection under the How to do it... section of the Connecting the Ceph RBD storage recipe later in this article to learn how to install a great read-only GUI to monitor Ceph clusters.
Connecting the Ceph RBD storage
In this recipe, we are going to see how to configure Ceph block storage with a Proxmox cluster.
Getting ready
The initial Ceph configuration on a Proxmox cluster must be accomplished through a CLI. After the Ceph installation, the initial configuration, and the creation of the first monitor, all other tasks can be accomplished through the Proxmox GUI.
How to do it...
We will now see how to configure the Ceph block storage with Proxmox.
Installing Ceph on Proxmox
Ceph is not installed by default. Prior to configuring a Proxmox node for the Ceph role, Ceph needs to be installed and the initial configuration must be created through a CLI. The following steps need to be performed on all Proxmox nodes that will be part of the Ceph cluster:
Log in to each node through SSH or a console.
Configure a second network interface to create a separate Ceph network with a different subnet.
Reboot the nodes to initialize the network configuration.
Using the following command, install the Ceph package on each node:
# pveceph install --version giant
Initializing the Ceph configuration
Before Ceph is usable, we have to create the initial Ceph configuration file on one Proxmox+Ceph node. The following steps need to be performed only on one Proxmox node that will be part of the Ceph cluster:
Log in to the node using SSH or a console.
Run the following command to create the initial Ceph configuration:
# pveceph init --network <ceph_subnet>/CIDR
Run the following command to create the first monitor:
# pveceph createmon
Configuring Ceph through the Proxmox GUI
After the initial Ceph configuration and the creation of the first monitor, we can continue with further Ceph configuration through the Proxmox GUI, or simply run the Ceph Monitor creation command on the other nodes. The following steps show how to create Ceph Monitors and OSDs from the Proxmox GUI:
Log in to the Proxmox GUI as root or with any other administrative privilege.
Select the node where the initial monitor was created in the previous steps, and then click on Ceph from the tabbed menu. The following screenshot shows a Ceph cluster as it appears after the initial Ceph configuration:
Since no OSDs have been created yet, it is normal for a new Ceph cluster to show a PGs stuck and unclean error.
Click on Disks on the bottom tabbed menu under Ceph to display the disks attached to the node, as shown in the following screenshot:
Select an available attached disk, then click on the Create: OSD button to open the OSD dialog box, as shown in the following screenshot:
Click on the Journal Disk drop-down menu to select a different device, or collocate the journal on the same OSD by keeping the default. Click on Create to finish the OSD creation.
Create additional OSDs on the Ceph nodes as needed. The following screenshot shows a Proxmox node with three OSDs configured:
By default, Proxmox creates OSDs with an ext3 partition. However, it may sometimes be necessary to create OSDs with a different partition type, due to a requirement or for performance reasons. Enter the following command format through the CLI to create an OSD with a different partition type:
# pveceph createosd --fstype ext4 /dev/sdX
The following steps show how to create Monitors through the Proxmox GUI:
Click on Monitor from the tabbed menu under the Ceph feature. The following screenshot shows the Monitor status with the initial Ceph Monitor we created earlier in this recipe:
Click on Create to open the Monitor dialog box.
Select a Proxmox node from the drop-down menu.
Click on the Create button to start the monitor creation process.
Create a total of three Ceph monitors to establish a Ceph quorum. The following screenshot shows the Ceph status with three monitors and OSDs added:
Note that even with three OSDs added, the PGs are still stuck with errors. This is because, by default, the Ceph CRUSH map is set up for two replicas, and so far we have only created OSDs on one node. For successful replication, we need to add some OSDs on a second node so that data objects can be replicated twice. Follow the steps described earlier to create three additional OSDs on the second node. After creating three more OSDs, the Ceph status should look like the following screenshot:
Managing Ceph pools
It is possible to perform basic tasks, such as creating and removing Ceph pools, through the Proxmox GUI. Besides these, we can check the list, status, number of PGs, and usage of the Ceph pools.
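The same basic pool housekeeping can also be done from the shell of any node running a Ceph MON, which is sometimes quicker than the GUI. The following is a minimal sketch; the pool name vmstore and the PG count of 128 are example values, and deleting a pool permanently destroys everything stored in it.
To list the existing pools and create a new pool with 128 placement groups:
# rados lspools
# ceph osd pool create vmstore 128 128
To check the pool's replica count and raise it to two copies:
# ceph osd pool get vmstore size
# ceph osd pool set vmstore size 2
To delete the pool (irreversible):
# ceph osd pool delete vmstore vmstore --yes-i-really-really-mean-it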
The following steps show how to check, create, and remove Ceph pools through the Proxmox GUI: Click on the Pools tabbed menu under Ceph in the Proxmox GUI. The following screenshot shows the status of the default rbd pool, which has replica 1, 256 PG, and 0% usage: Click on Create to open the pool creation dialog box. Fill in the required information, such as the name of the pool, replica size, and number of PGs. Unless the CRUSH map has been fully customized, the ruleset should be left at the default value 0. Click on OK to create the pool. To remove a pool, select the pool and click on Remove. Remember that once a Ceph pool is removed, all the data stored in this pool is deleted permanently. To increase the number of PGs, run the following command through the CLI: #ceph osd pool set <pool_name> pg_num <value> #ceph osd pool set <pool_name> pgp_num <value> It is only possible to increase the PG value. Once increased, the PG value can never be decreased. Connecting RBD to Proxmox Once a Ceph cluster is fully configured, we can proceed to attach it to the Proxmox cluster. During the initial configuration file creation, Ceph also creates an authentication keyring in the /etc/ceph/ceph.client.admin.keyring directory path. This keyring needs to be copied and renamed to match the name of the storage ID to be created in Proxmox. Run the following commands to create a directory and copy the keyring: # mkdir /etc/pve/priv/ceph # cd /etc/ceph/ # cp ceph.client.admin.keyring /etc/pve/priv/ceph/<storage>.keyring For our storage, we are naming it rbd.keyring. After the keyring is copied, we can attach the Ceph RBD storage with Proxmox using the GUI: Click on Datacenter, then click on Storage from the tabbed menu. Click on the Add drop-down menu and select the RBD storage plugin. Enter the information as described in the following table: Item Type of value Entered value ID The name of the storage. rbd Pool The name of the Ceph pool. rbd Monitor Host The IP address and port number of the Ceph MONs. We can enter multiple MON hosts for redundancy. 172.16.0.71:6789;172.16.0.72:6789; 172.16.0.73:6789 User name The default Ceph administrator. Admin Nodes The Proxmox nodes that will be able to use the storage. All Enable The checkbox for enabling/disabling the storage. Enabled Click on Add to attach the RBD storage. The following screenshot shows the RBD storage under Summary: Monitoring the Ceph storage Ceph itself does not come with any GUI to manage or monitor the cluster. We can view the cluster status and perform various Ceph-related tasks through the Proxmox GUI. There are several third-party software that allow Ceph-only GUI to manage and monitor the cluster. Some software provide management features, while others provide read-only features for Ceph monitoring. Ceph Dash is such a software that provides an appealing read-only GUI to monitor the entire Ceph cluster without logging on to the Proxmox GUI. Ceph Dash is freely available through GitHub. There are other heavyweight Ceph GUI dashboards, such as Kraken, Calamari, and others. In this section, we are only going to see how to set up the Ceph Dash cluster monitoring GUI. The following steps can be used to download and start Ceph Dash to monitor a Ceph cluster using any browser: Log in to any Proxmox node, which is also a Ceph MON. 
Run the following commands to download and start the dashboard: # mkdir /home/tools # apt-get install git # git clone https://github.com/Crapworks/ceph-dash # cd /home/tools/ceph-dash # ./ceph_dash.py Ceph Dash will now start listening on port 5000 of the node. If the node is behind a firewall, open port 5000 or any other ports with port forwarding in the firewall. Open any browser and enter <node_ip>:5000 to open the dashboard. The following screenshot shows the dashboard of the Ceph cluster we have created: We can also monitor the status of the Ceph cluster through a CLI using the following commands: To check the Ceph status: # ceph –s To view OSDs in different nodes: # ceph osd tree To display real-time Ceph logs: # ceph –w To display a list of Ceph pools: # rados lspools To change the number of replicas of a pool: # ceph osd pool set size <value> Besides the preceding commands, there are many more CLI commands to manage Ceph and perform advanced tasks. The Ceph official documentation has a wealth of information and how-to guides along with the CLI commands to perform them. The documentation can be found at http://ceph.com/docs/master/. How it works… At this point, we have successfully integrated a Ceph cluster with a Proxmox cluster, which comprises six OSDs, three MONs, and three nodes. By viewing the Ceph Status page, we can get lot of information about a Ceph cluster at a quick glance. From the previous figure, we can see that there are 256 PGs in the cluster and the total cluster storage space is 1.47 TB. A healthy cluster will have the PG status as active+clean. Based on the nature of issue, the PGs can have various states, such as active+unclean, inactive+degraded, active+stale, and so on. To learn details about all the states, visit http://ceph.com/docs/master/rados/operations/pg-states/. By configuring a second network interface, we can separate a Ceph network from the main network. The #pveceph init command creates a Ceph configuration file in the /etc/pve/ceph.conf directory path. A newly configured Ceph configuration file looks similar to the following screenshot: Since the ceph.conf configuration file is stored in pmxcfs, any changes made to it are immediately replicated in all the Proxmox nodes in the cluster. As of Proxmox VE 3.4, Ceph RBD can only store a .raw image format. No templates, containers, or backup files can be stored on the RBD block storage. Here is the content of a storage configuration file after adding the Ceph RBD storage: rbd: rbd monhost 172.16.0.71:6789;172.16.0.72:6789;172.16.0.73:6789 pool rbd content images username admin If a situation dictates the IP address change of any node, we can simply edit this content in the configuration file to manually change the IP address of the Ceph MON nodes. See also To learn about Ceph in greater detail, visit http://ceph.com/docs/master/ for the official Ceph documentation Also, visit https://indico.cern.ch/event/214784/session/6/contribution/68/material/slides/0.pdf to find out why Ceph is being used at CERN to store the massive data generated by the Large Hadron Collider (LHC) Summary In this article, we came across with different configurations for a variety of storage categories and got hands-on practice with various stages in configuring the Ceph RBD storage. Resources for Article: Further resources on this subject: Deploying New Hosts with vCenter [article] Let's Get Started with Active Di-rectory [article] Basic Concepts of Proxmox Virtual Environment [article]

Getting Started with XenServer®

Packt
26 Dec 2014
11 min read
This article is written by Martez Reed, the author of Mastering Citrix® XenServer®. One of the most important technologies in the information technology field today is virtualization. Virtualization is beginning to span every area of IT, including but not limited to servers, desktops, applications, network, and more. Our primary focus is server virtualization, specifically with Citrix XenServer 6.2. There are three major platforms in the server virtualization market: VMware's vSphere, Microsoft's Hyper-V, and Citrix's XenServer. In this article, we will cover the following topics: XenServer's overview XenServer's features What's new in Citrix XenServer 6.2 Planning and installing Citrix XenServer (For more resources related to this topic, see here.) Citrix® XenServer® Citrix XenServer is a type 1 or bare metal hypervisor. A bare metal hypervisor does not require an underlying host operating system. Type 1 hypervisors have direct access to the underlying hardware, which provides improved performance and guest compatibility. Citrix XenServer is based on the open source Xen hypervisor that is widely deployed in various industries and has a proven record of stability and performance. Citrix® XenCenter® Citrix XenCenter is a Windows-based application that provides a graphical user interface for managing the Citrix XenServer hosts from a single management interface. Features of Citrix® XenServer® The following section covers the features offered by Citrix XenServer: XenMotion/Live VM Migration: The XenMotion feature allows for running virtual machines to be migrated from one host to another without any downtime. XenMotion relocates the processor and memory instances of the virtual machine from one host to another, while the actual data and settings reside on the shared storage. This feature is pivotal in providing maximum uptime when performing maintenance or upgrades. This feature requires shared storage among the hosts. Storage XenMotion / Live Storage Migration: The Storage XenMotion feature provides functionality similar to that of XenMotion, but it is used to move a virtual machine's virtual disk from one storage repository to another without powering off the virtual machine. High Availability: High Availability automatically restarts the virtual machines on another host in the event of a host failure. This feature requires shared storage among the hosts. Resource pools: Resource pools are a collection of Citrix XenServer hosts grouped together to form a single pool of compute, memory, network, and storage resources that can be managed as a single entity. The resource pool allows the virtual machines to be started on any of the hosts and seamlessly moved between them. Active Directory integration: Citrix XenServer can be joined to a Windows Active Directory domain to provide centralized authentication for XenServer administrators. This eliminates the need for multiple independent administrator accounts on each XenServer host in a XenServer environment. Role-based access control (RBAC): RBAC is a feature that takes advantage of the Active Directory integration and allows administrators to define roles that have specific privileges associated with them. This allows administrative permissions to be segregated among different administrators. Open vSwitch: The default network backend for the Citrix XenServer 6.2 hypervisor is Open vSwitch. 
Open vSwitch is an open source multilayer virtual switch that brings advanced network functionality to the XenServer platform such as NetFlow, SPAN, OpenFlow, and enhanced Quality of Service (QoS). The Open vSwitch backend is also an integral component of the platform's support of software-defined networking (SDN). Dynamic Memory Control: Dynamic Memory Control allows XenServer to maximize the physical memory utilization by sharing unused physical memory among the guest virtual machines. If a virtual machine has been allocated 4 GB of memory and is only using 2 GB, the remaining memory can be shared with the other guest virtual machines. This feature provides a mechanism for memory oversubscription. IntelliCache: IntelliCache is a feature aimed at improving the performance of Citrix XenDesktop virtual desktops. IntelliCache creates a cache on a XenServer local storage repository, and as the virtual desktops perform read operations, the parent VM's virtual disk is copied to the cache. Write operations are also written to the local cache when nonpersistent or shared desktops are used. This mechanism reduces the load on the storage array by retrieving data from a local source for reads instead of the array. This is particularly beneficial when multiple desktops share the same parent image. This feature is only available with Citrix XenDesktop. Disaster Recovery: The XenServer Disaster Recovery feature provides a mechanism to recover the virtual machines and vApps in the event of the failure of an entire pool or site. Distributed Virtual Switch Controller (DVSC): DVSC provides centralized management and visibility of the networking in XenServer. Thin provisioning: Thin provisioning allows for a given amount of disk space to be allocated to virtual machines but only consume the amount that is actually being used by the guest operating system. This feature provides more efficient use of the underlying storage due to the on-demand consumption. What's new in Citrix® XenServer® 6.2 Citrix has added a number of new and exciting features in the latest version of XenServer: Open source New licensing model Improved guest support Open source Starting with Version 6.2, the Citrix XenServer hypervisor is now open sourced, but continues to be managed by Citrix Systems. The move to an open source model was the result of Citrix Systems' desire to further collaborate and integrate the XenServer product with its partners and the open source community. New licensing model The licensing model has been changed in Version 6.2, with the free version of the XenServer platform now providing full functionality, the previous advanced, enterprise, and platinum versions have been eliminated. Citrix will offer paid support for the free version of the XenServer hypervisor that will include the ability to install patches/updates using the XenCenter GUI, in addition to Citrix technical support. 
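Once a XenServer host is installed, the running version and the edition it is licensed under can be confirmed from the local command shell with the xe CLI. This is a small sketch; the host UUID is a placeholder taken from the output of the first command.
# xe host-list
# xe host-param-get uuid=<host_uuid> param-name=software-version
# xe host-param-get uuid=<host_uuid> param-name=edition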
Improved guest support

Version 6.2 adds official support for the following guest operating systems:

- Microsoft Windows 8 (full support)
- Microsoft Windows Server 2012
- SUSE Linux Enterprise Server (SLES) 11 SP2 (32/64-bit)
- Red Hat Enterprise Linux (RHEL) 5.8, 5.9, 6.3, and 6.4 (32/64-bit)
- Oracle Enterprise Linux (OEL) 5.8, 5.9, 6.3, and 6.4 (32/64-bit)
- CentOS 5.8, 5.9, 6.3, and 6.4 (32/64-bit)
- Debian Wheezy (32/64-bit)

VSS support for Windows Server 2008 R2 has also been improved and reintroduced.

Citrix XenServer 6.2 Service Pack 1 adds support for the following operating systems:

- Microsoft Windows 8.1
- Microsoft Windows Server 2012 R2

Retired features

The following features have been removed from version 6.2 of Citrix XenServer:

- Workload Balancing (WLB)
- SCOM integration
- Virtual Machine Protection Recovery (VMPR)
- Web Self Service
- XenConvert (replaced by XenServer Conversion Manager)

Deprecated features

The following features will be removed from future releases of Citrix XenServer. Citrix has reviewed the XenServer market and determined that third-party products are able to provide this functionality more effectively:

- Microsoft System Center Virtual Machine Manager (SCVMM) support
- Integrated StorageLink

Planning and installing Citrix® XenServer®

Installing Citrix XenServer is generally a simple and straightforward process that can be completed in 10 to 15 minutes. While the actual installation is simple, several major decisions need to be made before installing Citrix XenServer in order to ensure a successful deployment.

Selecting the server hardware

Typically, the first step is to select the server hardware that will be used. While the temptation might be to just pick a server that fits our needs, we should also ensure that the hardware meets the documented system requirements. Checking the hardware against the Hardware Compatibility List (HCL) provided by Citrix Systems is advised to ensure that the system qualifies for Citrix support and will properly run Citrix XenServer. The HCL provides a list of server models that have been verified to work with Citrix XenServer and can be found online at http://www.citrix.com/xenserver/hcl.

Meeting the system requirements

The following sections cover the minimum system requirements for Citrix XenServer 6.2.

Processor requirements

The following list covers the minimum processor requirements for installing Citrix XenServer 6.2:

- One or more 64-bit x86 CPUs, 1.5 GHz minimum; a 2 GHz or faster multicore CPU is recommended
- To run Windows virtual machines, hardware virtualization (Intel VT or AMD-V) is required
- Virtualization technology needs to be enabled in the BIOS

Virtualization technology is disabled by default on many server platforms and needs to be manually enabled (a quick way to check for it is shown after the memory requirements below).

Memory requirements

The minimum memory requirement for installing Citrix XenServer 6.2 is 2 GB, with 4 GB or more recommended for production workloads. In addition to the memory used by the guest virtual machines, the Control Domain (dom0) consumes memory; the amount it consumes is based on the amount of physical memory in the host.
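Before scheduling an installation, it can be worth confirming from an existing Linux environment (or a live CD booted on the candidate server) that the CPUs expose hardware virtualization and that enough memory is installed. This is a minimal sketch using standard Linux tools and assumes you can boot some Linux environment on the target hardware:

# Count CPU flags for Intel VT (vmx) or AMD-V (svm); 0 means hardware
# virtualization is unavailable or disabled in the BIOS
$ grep -cE 'vmx|svm' /proc/cpuinfo

# Show the total installed memory in megabytes to confirm it meets the minimum
$ free -m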
Hard disk requirements

The following are the minimum hard disk requirements for installing Citrix XenServer 6.2:

- A minimum of 16 GB of free disk space, with 60 GB recommended
- Direct attached storage in the form of SATA, SAS, SCSI, or PATA interfaces is supported
- XenServer can also be installed on a LUN presented from a storage area network (SAN) via a host bus adapter (HBA) in the XenServer host

A physical HBA is required to boot XenServer from a SAN.

Network card requirements

A 100 Mbps or faster NIC is required for installing Citrix XenServer. One or more gigabit NICs are recommended for faster P2V, export/import data transfers, and VM live migrations.

Installing Citrix® XenServer® 6.2

The following sections cover the installation of Citrix XenServer 6.2.

Installation methods

The Citrix XenServer 6.2 installer can be launched via two methods:

- CD/DVD
- PXE or network boot

Installation source

There are several options for where the Citrix XenServer installation files can be stored, and depending on the scenario, one will be preferred over another. Typically, the HTTP, FTP, or NFS option is used when the installer is booted over the network via PXE or when a scripted installation is being performed (a minimal example of serving the files over HTTP is shown after the next section). The installation sources are as follows:

- Local media (CD/DVD)
- HTTP or FTP
- NFS

Supplemental packs

Supplemental packs provide additional functionality to the XenServer platform, such as enhanced hardware monitoring and third-party management software integration. Supplemental packs are typically downloaded from the vendor's website and are installed when prompted during the XenServer installation.
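For a network or scripted installation, the contents of the installation ISO simply need to be reachable over HTTP, FTP, or NFS. The following is a minimal sketch of exposing the media over HTTP from a Linux machine using Python's built-in web server; the ISO filename, directory paths, and port are placeholders for your own environment:

# Mount the installation ISO and copy its contents to a directory
$ mkdir -p /srv/xenserver
$ mount -o loop XenServer-6.2.0-install-cd.iso /mnt
$ cp -r /mnt/* /srv/xenserver && umount /mnt

# Serve the directory over HTTP on port 8080
$ cd /srv/xenserver
$ python -m SimpleHTTPServer 8080    # on Python 3: python3 -m http.server 8080

The installer is then pointed at http://<server-ip>:8080/ when prompted for the installation source.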
XenServer® installation

The following steps cover installing Citrix XenServer 6.2 from a CD:

1. Boot the server from the Citrix XenServer 6.2 installation media and press Enter when prompted to start the Citrix XenServer 6.2 installer.
2. Select the desired key mapping and select Ok to proceed.
3. Press F9 if additional drivers need to be installed, or select Ok to continue.
4. Accept the EULA.
5. Select the hard drive for the Citrix XenServer installation and choose Ok to proceed.
6. Select the hard drive(s) to be used for storing the guest virtual machines and choose Ok to continue. You need to select the Enable thin provisioning (Optimized storage for XenDesktop) option to make use of the IntelliCache feature.
7. Select the installation media source and select Ok to continue.
8. Install any supplemental packs if necessary and choose No to proceed.
9. Select Verify installation source and select Ok to begin the verification. The installation media should be verified at least once to ensure that none of the installation files are corrupt.
10. Choose Ok to continue after the verification has completed successfully.
11. Provide and confirm a password for the root account and select Ok to proceed.
12. Select the network interface to be used as the primary management interface and choose Ok to continue.
13. Select the Static configuration option, provide the requested information, and choose Ok to continue.
14. Enter the desired hostname and DNS server information and select Ok to proceed.
15. Select the appropriate geographical area to configure the time zone and select Ok to continue.
16. Select the appropriate city or area to configure the time zone and select Ok to proceed.
17. Select Using NTP or Manual time entry for the server to determine the local time and choose Ok to continue. Using NTP is recommended to ensure that the time on all the hosts in a pool stays synchronized.
18. Enter the IP address or hostname of the desired NTP server(s) and select Ok to proceed.
19. Select Install XenServer to start the installation.
20. Click on Ok to restart the server after the installation has completed. After the reboot, the XenServer host's console configuration screen is displayed, confirming that the installation was successful.

Summary

In this article, we covered an overview of Citrix XenServer along with its features. We also looked at the new features added in XenServer 6.2 and then walked through installing XenServer.

Resources for Article:

Further resources on this subject:

- Understanding Citrix® Provisioning Services 7.0 [article]
- Designing a XenDesktop® Site [article]
- Installation and Deployment of Citrix Systems®' CPSM [article]