
How-To Tutorials - Virtualization

115 Articles

Speeding Vagrant Development With Docker

Packt
03 Mar 2015
13 min read
In this article by Chad Thompson, author of Vagrant Virtual Development Environment Cookbook, we will look at using Vagrant together with Docker. Many software developers are familiar with using Vagrant (http://vagrantup.com) to distribute and maintain development environments. In most cases, Vagrant is used to manage virtual machines running in desktop hypervisor software such as VirtualBox or the VMware desktop product suites (VMware Fusion for OS X and VMware Workstation for Linux and Windows environments). More recently, Docker (http://docker.io) has become increasingly popular for deploying containers: Linux processes that can run in a single operating system environment yet be isolated from one another. In practice, this means that a container includes the runtime environment for an application, down to the operating system level. While containers have been popular for deploying applications, we can also use them for desktop development.

Vagrant can use Docker in a couple of ways:

- As a target for running a process defined by Vagrant, using the Docker provider.
- As a complete development environment for building and testing containers within the context of a virtual machine, using the Docker provisioner. This allows you to build a complete production-like container deployment environment.

In this example, we'll take a look at how we can use the Docker provider to build and run a web server. Running our web server with Docker will allow us to build and test our web application without the added overhead of booting and provisioning a virtual machine.

Introducing the Docker provider

The Vagrant Docker provider will build and deploy containers to a Docker runtime. There are a couple of cases to consider when using Vagrant with Docker:

- On a Linux host machine, Vagrant will use a native (locally installed) Docker environment to deploy containers. Make sure that Docker is installed before using Vagrant. Docker itself is built on top of Linux Containers (LXC) technology, so it requires an operating system with a recent Linux kernel (newer than 3.8, which was released in February 2013). Most recent Linux distributions support running Docker.
- On non-Linux environments (namely OS X and Windows), the provider requires a local Linux runtime for deploying containers. When running the Docker provider in these environments, Vagrant will download and boot a version of the boot2docker (http://boot2docker.io) environment; in this case, a repackaging of boot2docker in Vagrant box format.

Let's take a look at two scenarios for using the Docker provider. In each of these examples, we'll start from an OS X environment, so we will see some tasks that are required for using the boot2docker environment.

Installing a Docker image from a repository

We'll start with a simple case: installing a Docker container from a repository (a MySQL container) and connecting it to an external tool for development (MySQL Workbench or a client tool of your choice). We'll need to initialize the boot2docker environment and use some Vagrant tools to interact with the environment and the deployed containers. Before we can start, we'll need to find a suitable Docker image to launch. One of the unique advantages of using Docker as a development environment is the ability to select a base Docker image and then add successive build steps on top of it.
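If you are on a Linux host and want to confirm that the kernel and local Docker installation meet these requirements before involving Vagrant, a quick check from the shell looks like this (a minimal sketch, no Vagrant involvement yet):

```sh
# Print the running kernel release; Docker requires 3.8 or newer.
uname -r

# Confirm that the Docker client can reach a working Docker daemon.
docker version
```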
In this simple example, we can find a base MySQL image on the Docker Hub registry (https://registry.hub.docker.com). The MySQL project provides an official Docker image that we can build from. We'll note from the repository the command for using the image (docker pull mysql) and that the image name is mysql.

Start with a Vagrantfile that defines the Docker provider:

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'vmware_fusion'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "database" do |db|
    db.vm.provider "docker" do |d|
      d.image = "mysql"
    end
  end
end
```

An important thing to note immediately is that when we define the database machine with the Docker provider, we do not specify a box file. The Docker provider will start and launch containers into a boot2docker environment, negating the need for a Vagrant box or virtual machine definition. This will introduce a bit of a complication in interacting with the Vagrant environment in later steps. Also note the mysql image taken from the Docker Hub registry.

We'll need to launch the image with a few basic parameters. Add the following to the Docker provider block:

```ruby
    db.vm.provider "docker" do |d|
      d.image = "mysql"
      d.env = {
        :MYSQL_ROOT_PASSWORD => "root",
        :MYSQL_DATABASE      => "dockertest",
        :MYSQL_USER          => "dockertest",
        :MYSQL_PASSWORD      => "d0cker"
      }
      d.ports = ["3306:3306"]
      d.remains_running = true
    end
```

The environment variables (d.env) are taken from the documentation on the MySQL Docker image page (https://registry.hub.docker.com/_/mysql/). This is how the image expects to set certain parameters. In this case, our parameters will set the database root password (for the root user) and create a database with a new user that has full permissions to that database.

The d.ports parameter is an array of port listings that will be forwarded from the container (the default MySQL port of 3306) to the host operating system, in this case also 3306. The contained application will thus behave like a natively installed MySQL installation. The port forwarding here is from the container to the operating system that hosts the container (in this case, the container host is our boot2docker image). If we are developing and hosting containers natively with Vagrant on a Linux distribution, the port forwarding will be to localhost, but boot2docker introduces something of a wrinkle in doing Docker development on Windows or OS X. We'll either need to refer to our software installation by the IP of the boot2docker instance or configure a second port forwarding configuration that allows a Docker-contained application to be available to the host operating system as localhost.

The final parameter (d.remains_running = true) is a flag telling Vagrant to mark the run as failed if the Docker container exits on start. In the case of software that runs as a daemon process (such as the MySQL database), a Docker container that exits immediately is an error condition.

Start the container using the vagrant up --provider=docker command. A few things will happen here: if this is the first time you have started the project, you'll see some messages about booting a box named mitchellh/boot2docker. This is a Vagrant-packaged version of the boot2docker project. Once the machine boots, it becomes a host for all Docker containers managed with Vagrant.
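As a usage sketch (assuming the Vagrantfile above sits in the current working directory), bringing the container up and checking on it from the host can look like the following; the vagrant docker-logs subcommand, if your Vagrant version provides it, streams the container's output without SSHing into the Docker host:

```sh
# Launch only the "database" definition with the Docker provider.
vagrant up database --provider=docker

# Check what Vagrant thinks the container's state is.
vagrant status database

# Stream the MySQL container's log output (helpful for confirming that
# the database finished initializing before connecting a client).
vagrant docker-logs database
```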
Keep in mind that boot2docker is necessary only for non-Linux operating systems that run Docker through a virtual machine. On a Linux system running Docker natively, you will not see information about boot2docker.

After the container is booted (or if it is already running), Vagrant will display notifications about rsyncing a folder (if we are using boot2docker) and launching the image. Docker generates unique identifiers for containers and notes any port mapping information.

Let's take a look at some details on the containers that are running in the Docker host. We'll need a way to gain access to the Vagrant boot2docker image (and only if we are using boot2docker and not a native Linux environment), which is not quite as straightforward as a vagrant ssh; we'll need to identify the Vagrant container to access.

First, identify the Docker Vagrant machine from the global Vagrant status. Vagrant keeps track of running instances that can be accessed from Vagrant itself. In this case, we are only interested in the Vagrant instance named docker-host, which can be found with the vagrant global-status command. Here, Vagrant identifies the instance as d381331 (a unique value for every Vagrant machine launched). We can access this instance with a vagrant ssh command:

vagrant ssh d381331

This will display an ASCII-art boot2docker logo and a command prompt for the boot2docker instance. Let's take a look at the Docker containers running on the system with the docker ps command. The docker ps command provides information about the running Docker containers on the system; in this case, the unique ID of the container (output during the Vagrant startup) and other information about the container.

Find the IP address of the boot2docker instance (only if we're using boot2docker) to connect to the MySQL instance. In this case, execute the ifconfig command:

docker@boot2docker:~$ ifconfig

This will output information about the network interfaces on the machine; we are interested in the eth0 entry. In particular, we can note the IP address of the machine on the eth0 interface. Make a note of the IP address listed as the inet addr; in this case, 192.168.30.129.

Connect a MySQL client to the running Docker container. In this case, we'll need the following information for the connection:

- The IP address of the boot2docker virtual machine (if using boot2docker). In this case, 192.168.30.129.
- The port that the MySQL instance will respond to on the Docker host. In this case, the Docker container forwards port 3306 in the container to port 3306 on the host.
- The username and password for the MySQL instance, as noted in the Vagrantfile.

With this information in hand, we can configure a MySQL client. The MySQL project provides a supported GUI client named MySQL Workbench (http://www.mysql.com/products/workbench/). With the client installed on our host operating system, we can create a new connection in the Workbench client (consult the documentation for your version of Workbench, or use a MySQL client of your choice). In this case, we're connecting to the boot2docker instance. If you are running Docker natively on a Linux instance, the connection should simply forward to localhost.
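If you prefer a command-line client to Workbench, the same connection details can be used with the standard mysql client from the host shell. This is only a sketch: the IP address is the boot2docker address from this example (use localhost on a native Linux Docker host), and the credentials are the ones defined in the Vagrantfile above:

```sh
# Connect to the containerized MySQL instance.
# -p (with no value attached) prompts for the password "d0cker"; the
# trailing "dockertest" selects the database created by the container.
mysql -h 192.168.30.129 -P 3306 -u dockertest -p dockertest
```

Once connected, a simple SHOW TABLES; against the empty dockertest schema confirms that the container is reachable.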
If the connection is successful, the Workbench client will display an empty database. Once we've connected, we can use the MySQL database as we would any other MySQL instance, this time hosted in a Docker container, without having to install and configure the MySQL package itself.

Building a Docker image with Vagrant

While launching packaged Docker applications can be useful (particularly when launching a Docker container is simpler than the native installation steps), Vagrant becomes even more useful when used to launch containers that are being developed. On OS X and Windows machines, Vagrant can make managing the container deployment somewhat simpler through the boot2docker instance, while on Linux, using the native Docker tools could be somewhat simpler. In this example, we'll use a simple Dockerfile to modify a base image.

First, start with a simple Vagrantfile. In this case, we'll specify a build directory rather than an image:

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'vmware_fusion'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "nginx" do |nginx|
    nginx.vm.provider "docker" do |d|
      d.build_dir = "build"
      d.ports = ["49153:80"]
    end
  end
end
```

This Vagrantfile specifies a build directory as well as the ports forwarded to the host from the container. In this case, the standard HTTP port (80) forwards to port 49153 on the host machine, which in this case is the boot2docker instance.

Create our build directory in the same directory as the Vagrantfile. In the build directory, create a Dockerfile, which is a set of instructions on how to build a Docker container. See https://docs.docker.com/reference/builder/ or James Turnbull's The Docker Book for more information on how to construct a Dockerfile. In this example, we'll use a simple Dockerfile to copy a working HTML directory to a base NGINX image:

```
FROM nginx
COPY content /usr/share/nginx/html
```

Create a directory in our build directory named content. In the directory, place a simple index.html file that will be served from the new container:

```html
<html>
<body>
   <div style="text-align:center;padding-top:40px;border:dashed 2px;">
     This is an NGINX build.
   </div>
</body>
</html>
```

Once all the pieces are in place, our working directory will have the following structure:

```
.
├── Vagrantfile
└── build
    ├── Dockerfile
    └── content
        └── index.html
```

Start the container in the working directory with the command:

vagrant up nginx --provider=docker

This will start the container build and deploy process. Once the container is launched, the web server can be accessed using the IP address of the boot2docker instance (see the previous section for more information on obtaining this address) and the forwarded port.

One other item to note, especially if you have completed both examples in this section without halting or destroying the Vagrant projects, is that when using the Docker provider, containers are deployed to a single shared virtual machine. If the boot2docker instance is accessed and the docker ps command is executed, you can see that two separate Vagrant projects deploy containers to a single host.
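A quick way to verify the deployment from the host is to request the page over the forwarded port with curl. This is only a sketch: the IP address is the boot2docker address found in the previous section, and should be replaced with localhost if Docker is running natively on a Linux host:

```sh
# Fetch the page served by the NGINX container through the forwarded
# port (container port 80 mapped to host port 49153).
curl http://192.168.30.129:49153/
# The response body should contain "This is an NGINX build."
```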
When using the Docker provider, the single instance has a few effects:

- The single virtual machine uses fewer resources on your development workstation.
- Deploying and rebuilding containers is much faster than booting and shutting down entire operating systems.

Docker development with the Docker provider can be a useful technique to create and test Docker containers, although Vagrant might not be of particular help in packaging and distributing them. If you wish to publish containers, consult the documentation or The Docker Book to get started with packaging and distributing Docker containers.

See also

- Docker: http://docker.io
- boot2docker: http://boot2docker.io
- The Docker Book: http://www.dockerbook.com
- The Docker repository: https://registry.hub.docker.com

Summary

In this article, we learned how to use the Docker provider with Vagrant by covering the topics mentioned in the preceding sections.

Resources for Article:

Further resources on this subject:

- Going Beyond the Basics [article]
- Module, Facts, Types and Reporting tools in Puppet [article]
- Setting Up a Development Environment [article]


Working with VMware Infrastructure

Packt
04 Mar 2015
21 min read
In this article by Daniel Langenhan, the author of VMware vRealize Orchestrator Cookbook, we will take a closer look at how Orchestrator interacts with vCenter Server and vRealize Automation (vRA, formerly known as vCloud Automation Center, vCAC). vRA uses Orchestrator to access and automate infrastructure using Orchestrator plugins. We will take a look at how to make Orchestrator workflows available to vRA. We will investigate the following recipes:

- Unmounting all the CD-ROMs of all VMs in a cluster
- Provisioning a VM from a template
- An approval process for VM provisioning

There are quite a lot of plugins for Orchestrator to interact with VMware infrastructure and programs:

- vCenter Server
- vCloud Director (vCD)
- vRealize Automation (vRA, formerly known as vCloud Automation Center, vCAC)
- Site Recovery Manager (SRM)
- VMware Auto Deploy
- Horizon (View and Virtual Desktops)
- vRealize Configuration Manager (earlier known as vCenter Configuration Manager)
- vCenter Update Manager
- vCenter Operations Manager, vCOps (only example packages)

As of the writing of this article, VMware is still renaming its products. An overview of all plugins and their names and download links can be found at http://www.vcoteam.info/links/plug-ins.html. There are quite a lot of plugins, and we will not be able to cover all of them, so we will focus on the one that is most used: vCenter. Sadly, vCloud Director is earmarked by VMware to disappear for everyone but service providers, so there is no real need to show any workflow for it. We will also work with vRA and see how it interacts with Orchestrator.

vSphere automation

The interaction between Orchestrator and vCenter is done using the vCenter API. Here is the explanation of the interaction, which you can refer to in the following figure. A user starts an Orchestrator workflow (1) either interactively, via the vSphere Web Client, the Orchestrator Web Operator, or the Orchestrator Client, or via the API. The workflow in Orchestrator will then send a job (2) to vCenter and receive a task ID back (type VC:Task). vCenter will then start enacting the job (3). Using the vim3WaitTaskEnd action (4), Orchestrator pauses until the task has been completed. If we do not use the wait task, we can't be certain whether the task has ended or failed. It is extremely important to use the vim3WaitTaskEnd action whenever we send a job to vCenter. When the wait task reports that the job has finished, the workflow will be marked as finished.

The vCenter MoRef

The MoRef (Managed Object Reference) is a unique ID for every object inside vCenter. MoRefs are basically strings; some examples are shown here:

| Object | Example MoRef |
| --- | --- |
| VM | vm-301 |
| Network | network-312, dvportgroup-242 |
| Datastore | datastore-101 |
| ESXi host | host-44 |
| Data center | datacenter-21 |
| Cluster | domain-c41 |

The MoRefs are typically stored in the .id or .key attribute of the Orchestrator API object. For example, the MoRef of a vSwitch network is VC:Network.id. To browse for MoRefs, you can use the Managed Object Browser (MOB), documented at https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.pg.doc/PG_Appx_Using_MOB.20.1.html.

The vim3WaitTaskEnd action

As already said, vim3WaitTaskEnd is one of the most central actions when interacting with vCenter.
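Before looking at the action's parameters in detail, here is a minimal scriptable-task sketch of the submit-and-wait pattern described above. It assumes a workflow in-parameter vm of type VC:VirtualMachine, and it assumes the vim3WaitTaskEnd action lives in the com.vmware.library.vc.basic module; verify the module path in your own Orchestrator inventory before reusing it:

```javascript
// vRO scriptable task (sketch): send a job to vCenter and wait for it.
// Assumption: "vm" is an in-parameter of type VC:VirtualMachine.

// The MoRef of the object is available on its .id attribute.
System.log("Working with VM MoRef: " + vm.id);

// Submit a job to vCenter (here, a power-off) and capture the VC:Task.
var vcTask = vm.powerOffVM_Task();

// Pause until vCenter reports the task has ended.
// Parameters: the task, log progress (Boolean), poll rate in seconds.
var actionResult = System.getModule("com.vmware.library.vc.basic")
    .vim3WaitTaskEnd(vcTask, true, 5);

System.log("vCenter task finished with result: " + actionResult);
```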
The action has the following variables:

| Category | Name | Type | Usage |
| --- | --- | --- | --- |
| IN | vcTask | VC:Task | Carries the reconfiguration task from the script to the wait task |
| IN | progress | Boolean | Writes the progress of a task (in percent) to the logs |
| IN | pollRate | Number | How often the action should check vCenter for task completion |
| OUT | ActionResult | Any | Returns the task's result |

The wait task will check the status of a task that has been submitted to vCenter at regular intervals (pollRate). The task can have the following states:

| State | Meaning |
| --- | --- |
| Queued | The task is queued and will be executed as soon as possible. |
| Running | The task is currently running. If progress is set to true, the progress in percent will be displayed in the logs. |
| Success | The task finished successfully. |
| Error | The task has failed and an error will be thrown. |

Other vCenter wait actions

There are actually five waiting tasks that come with the vCenter Server plugin. Here's an overview of the other four:

| Task | Description |
| --- | --- |
| vim3WaitToolsStarted | Waits until the VMware Tools are started on a VM or until a timeout is reached. |
| vim3WaitForPrincipalIP | Waits until the VMware Tools report the primary IP of a VM or until a timeout is reached. This typically indicates that the operating system is ready to receive network traffic. The action returns the primary IP. |
| vim3WaitDnsNameInTools | Waits until the VMware Tools report a given DNS name of a VM or until a timeout is reached. The in-parameter addNumberToName is not used and can be set to Null. |
| WaitTaskEndOrVMQuestion | Waits until a task is finished or a VM raises a question. A vCenter question requires user interaction. |

vRealize Automation (vRA)

Automation has changed since the beginning of Orchestrator. Before tools such as vCloud Director or vCloud Automation Center (vCAC)/vRealize Automation (vRA) existed, Orchestrator was the main tool for automating vCenter resources. With version 6.2 of vCloud Automation Center (vCAC), the product has been renamed vRealize Automation. Now vRA is deemed to become the central cornerstone in the VMware automation effort. vRealize Orchestrator (vRO) is used by vRA to interact with and automate VMware and non-VMware products and infrastructure elements.

Throughout the various vCAC/vRA iterations, the role of Orchestrator has changed substantially. Orchestrator started off as an extension to vCAC and became a central part of vRA:

- In vCAC 5.x, Orchestrator was only an extension of the IaaS life cycle. Orchestrator was tied in using the stubs.
- vCAC 6.0 integrated Orchestrator as an XaaS service (Everything as a Service) using the Advanced Service Designer (ASD).
- In vCAC 6.1, Orchestrator is used to perform all VMware NSX operations (VMware's new network virtualization and automation), meaning that it became even more of a central part of the IaaS services.
- With vCAC 6.2, the Advanced Service Designer (ASD) was enhanced to allow more complex forms of design, allowing better leverage of Orchestrator workflows.

As you can see in the following figure, vRA connects to vCenter Server using an infrastructure endpoint that allows vRA to conduct basic infrastructure actions, such as power operations, cloning, and so on. It doesn't allow any complex interactions with the vSphere infrastructure, such as HA configurations. Using the Advanced Service Endpoints, vRA integrates the Orchestrator (vRO) plugins as additional services. This allows vRA to offer the entire plugin infrastructure as services to vRA.
The vCenter Server, AD, and PowerShell plugins are typical integrations that are used with vRA. Using the Advanced Service Designer (ASD), you can create integrations that use Orchestrator workflows. ASD allows you to offer Orchestrator workflows as vRA catalog items, making it possible for tenants to access any IT service that can be configured with Orchestrator via its plugins. The following diagram shows an example using the Active Directory plugin. The Orchestrator plugin provides access to the AD services. By creating a custom resource using the exposed AD infrastructure, we can create a service blueprint and resource actions, both of which are based on Orchestrator workflows that use the AD plugin.

The other method of integrating Orchestrator into the IaaS life cycle, which was predominantly used in vCAC 5.x, was to use the stubs. The build process of a VM has several steps; each step can be assigned a customizable workflow (called a stub). You can configure vRA to run an Orchestrator workflow at these stubs in order to facilitate a few customized actions. Such actions could be to change the VM's HA or DRS configuration, or to use the guest integration to install or configure a program on a VM.

Installation

How to install and configure vRA is out of the scope of this article, but take a look at http://www.kendrickcoleman.com/index.php/Tech-Blog/how-to-install-vcloud-automation-center-vcac-60-part-1-identity-appliance.html for more information. If you don't have the hardware or the time to install vRA yourself, you can use the VMware Hands-on Labs, which can be accessed after clicking on Try for Free at http://hol.vmware.com.

The vRA Orchestrator plugin

Due to the renaming, the vRA plugin is called vRealize Orchestrator vRA Plug-in 6.2.0; however, the file you download and use is named o11nplugin-vcac-6.2.0-2287231.vmoapp. The plugin currently creates a workflow folder called vCloud Automation Center.

vRA-integrated Orchestrator

The vRA appliance comes with an installed and configured vRO instance; however, the best practice for a production environment is to use a dedicated Orchestrator installation, and even better would be an Orchestrator cluster.

Dynamic Types or XaaS

XaaS means Everything (X) as a Service. The introduction of Dynamic Types in Orchestrator version 5.5.1 does exactly that; it allows you to build your own plugins and interact with infrastructure that has not yet received its own plugin. Take a look at this article by Christophe Decanini, which integrates Twitter with Orchestrator using Dynamic Types: http://www.vcoteam.info/articles/learn-vco/282-dynamic-types-tutorial-implement-your-own-twitter-plug-in-without-any-scripting.html.

Read more…

To read more about Orchestrator integration with vRA, please take a look at the official VMware documentation. Please note that the official documentation you need to look at is about vRealize Automation, not vCloud Automation Center; as of the writing of this article, the documentation can be found at https://www.vmware.com/support/pubs/vrealize-automation-pubs.html.

- The document called Advanced Service Design deals with vRO and the Advanced Service Designer.
- The document called Machine Extensibility discusses customization using stubs.

Unmounting all the CD-ROMs of all VMs in a cluster

This is an easy recipe to start with, but one you can really make work for your existing infrastructure. The workflow will unmount all CD-ROMs from a running VM. A mounted CD-ROM may block a VM from being vMotioned.
Getting ready

We need a VM that can mount a CD-ROM either as an ISO from a host or from the client. Before you start the workflow, make sure that the VM is powered on and has an ISO connected to it.

How to do it...

Create a new workflow with the following variables:

| Name | Type | Section | Use |
| --- | --- | --- | --- |
| cluster | VC:ClusterComputeResource | IN | Used to input the cluster |
| clusterVMs | Array of VC:VirtualMachine | Attribute | Used to capture all VMs in a cluster |

- Add the getAllVMsOfCluster action to the schema and assign the cluster in-parameter and the clusterVMs attribute to it as actionResult.
- Add a Foreach element to the schema and assign it the workflow Disconnect all detachable devices from a running virtual machine.
- Assign clusterVMs as a parameter to the Foreach element.
- Save and run the workflow.

How it works...

This recipe shows how fast and easily you can design solutions that help you with everyday vCenter problems. The problem is that VMs with CD-ROMs or floppies mounted may experience problems using vMotion, making it impossible for them to be used with DRS. The reality is that a lot of admins mount CD-ROMs and then forget to disconnect them. Scheduling this script every evening, just before the nighttime backups, will make sure that a production cluster is able to make full use of DRS and is therefore better load-balanced. You can improve this workflow by integrating an exclusion list.

See also

Refer to the example workflow, 7.01 UnMount CD-ROM from Cluster.

Provisioning a VM from a template

In this recipe, we will build a deployment workflow for Windows and Linux VMs. We will learn how to create workflows and reduce the number of input variables.

Getting ready

We need a Linux or Windows template that we can clone and provision.

How to do it…

We have split this recipe into two sections. In the first section, we will create a configuration element, and in the second, we will create the workflow.

Creating a configuration

We will use a configuration for all reusable variables. Build a configuration element that contains the following items:

| Name | Type | Use |
| --- | --- | --- |
| productId | String | The Windows product ID (the licensing code) |
| joinDomain | String | The Windows domain FQDN to join |
| domainAdmin | Credential | The credentials used to join the domain |
| licenseMode | VC:CustomizationLicenseDataMode | For example, perServer |
| licenseUsers | Number | The number of licensed concurrent users |
| inTimezone | Enums:MSTimeZone | Time zone |
| fullName | String | Full name of the user |
| orgName | String | Organization name |
| newAdminPassword | String | New admin password |
| dnsServerList | Array of String | List of DNS servers |
| dnsDomain | String | DNS domain |
| gateway | Array of String | List of gateways |

Creating the base workflow

Now we will create the base workflow. Create the workflow as shown in the following figure by adding the given elements:

- Clone, Windows with single NIC and credential
- Clone, Linux with single NIC
- Custom decision

Use the Clone, Windows… workflow to create all variables. Link up the ones that you have defined in the configuration as attributes.
The rest are defined as follows:

| Name | Type | Section | Use |
| --- | --- | --- | --- |
| vmName | String | IN | The new virtual machine's name |
| vm | VC:VirtualMachine | IN | The virtual machine to clone |
| folder | VC:VmFolder | IN | The virtual machine folder |
| datastore | VC:Datastore | IN | The datastore in which to store the virtual machine |
| pool | VC:ResourcePool | IN | The resource pool in which to create the virtual machine |
| network | VC:Network | IN | The network to attach the virtual network interface to |
| ipAddress | String | IN | The fixed valid IP address |
| subnetMask | String | IN | The subnet mask |
| template | Boolean | Attribute | Value No; whether to mark the new VM as a template |
| powerOn | Boolean | Attribute | Value Yes; whether to power on the VM after creation |
| doSysprep | Boolean | Attribute | Value Yes; whether to run Windows Sysprep |
| dhcp | Boolean | Attribute | Value No; whether to use DHCP |
| newVM | VC:VirtualMachine | OUT | The newly created VM |

The following sub-workflow in-parameters will be set to special values:

| Workflow | In-parameter | Value |
| --- | --- | --- |
| Clone, Windows with single NIC and credential | host | Null |
| | joinWorkgroup | Null |
| | macAddress | Null |
| | netBIOS | Null |
| | primaryWINS | Null |
| | secondaryWINS | Null |
| | name | vmName |
| | clientName | vmName |
| Clone, Linux with single NIC | host | Null |
| | macAddress | Null |
| | name | vmName |
| | clientName | vmName |

Define the in-parameter vm as input for the Custom decision and add the following script. The script will check whether the name of the OS contains the word Microsoft:

```javascript
guestOS = vm.config.guestFullName;
System.log(guestOS);
if (guestOS.indexOf("Microsoft") >= 0) {
    return true;
} else {
    return false;
}
```

Save and run the workflow. This workflow will now create a new VM from an existing VM and customize it with a fixed IP.

How it works…

As you can see, creating workflows to automate vCenter deployments is pretty straightforward. Dealing with the various in-parameters of workflows can be quite overwhelming. The best way to deal with this problem is to hide variables away by defining them centrally using a configuration, or to define them locally as attributes. Using configurations has the advantage that you can create them once and reuse them as needed. You can even push the concept a bit further by defining multiple configurations for multiple purposes, such as different environments.

When creating a new workflow for automation, a typical approach is as follows:

- Look for a workflow that does what you need.
- Run the workflow normally to check out what it actually does.
- Either create a new workflow that uses the original, or duplicate and edit the one you tried, modifying it until it does what you want.

A fast way to deal with a lot of variables is to drag every element you need into the schema and then use the binding to create the variables as needed. You may have noticed that this workflow only lets you select vSwitch networks, not distributed vSwitch networks.

You can improve this workflow with the following features:

- Read the existing Sysprep information stored in your vCenter Server
- Generate different predefined configurations (for example, DEV or Prod)

There's more...

We can improve the workflow by implementing the ability to change the vCPU count and the memory of the VM. Follow these steps to implement it. First, move the out-parameter newVM to be an attribute.
Next, add the following variables:

| Name | Type | Section | Use |
| --- | --- | --- | --- |
| vCPU | Number | IN | The number of vCPUs |
| Memory | Number | IN | The amount of VM memory |
| vcTask | VC:Task | Attribute | Carries the reconfiguration task from the script to the wait task |
| progress | Boolean | Attribute | Value No; used by vim3WaitTaskEnd |
| pollRate | Number | Attribute | Value 5; used by vim3WaitTaskEnd |
| ActionResult | Any | Attribute | Used by vim3WaitTaskEnd |

Add the following actions and workflows according to the next figure:

- shutdownVMAndForce
- changeVMvCPU
- vim3WaitTaskEnd
- changeVMRAM
- Start virtual machine

Bind newVM to all the appropriate input parameters of the added actions and workflows. Bind the actionResults (VC:Task) of the change actions to the vim3WaitTask elements.

See also

Refer to the example workflows, 7.02.1 Provision VM (Base) and 7.02.2 Provision VM (HW custom), as well as the configuration element, 7 VM provisioning.

An approval process for VM provisioning

In this recipe, we will see how to create a workflow that waits for an approver to approve the VM creation before provisioning it. We will learn how to combine mail and external events in a workflow to make it interact with different users.

Getting ready

For this recipe, we first need the provisioning workflow that we created in the Provisioning a VM from a template recipe. You can use the example workflow, 7.02.1 Provision VM (Base). Additionally, we need a functional e-mail system as well as a workflow to send e-mails. You can use the example workflow, 4.02.1 SendMail, as well as its configuration item, 4.2.1 Working with e-mail.

How to do it…

We will split this recipe into three parts. First, we will create a configuration element; then, we will create the workflow; and lastly, we will use a presentation to make the workflow usable.

Creating a configuration element

We will use a configuration for all reusable variables. Build a configuration element that contains the following items:

| Name | Type | Use |
| --- | --- | --- |
| templates | Array/VC:VirtualMachine | Contains all the VMs that serve as templates |
| folders | Array/VC:VmFolder | Contains all the VM folders that are targets for VM provisioning |
| networks | Array/VC:Network | Contains all VM networks that are targets for VM provisioning |
| resourcePools | Array/VC:ResourcePool | Contains all resource pools that are targets for VM provisioning |
| datastores | Array/VC:Datastore | Contains all datastores that are targets for VM provisioning |
| daysToApproval | Number | The number of days the approval should be available for |
| approver | String | The e-mail address of the approver |

Please note that you also have to define or use the configuration elements for SendMail, as well as the Provision VM workflows. You can use the examples contained in the example package.
Creating a workflow

Create a new workflow and add the following variables:

| Name | Type | Section | Use |
| --- | --- | --- | --- |
| mailRequester | String | IN | The e-mail address of the requester |
| vmName | String | IN | The name of the new virtual machine |
| vm | VC:VirtualMachine | IN | The virtual machine to be cloned |
| folder | VC:VmFolder | IN | The virtual machine folder |
| datastore | VC:Datastore | IN | The datastore in which to store the virtual machine |
| pool | VC:ResourcePool | IN | The resource pool in which to create the virtual machine |
| network | VC:Network | IN | The network to attach the virtual network interface to |
| ipAddress | String | IN | The fixed valid IP address |
| subnetMask | String | IN | The subnet mask |
| isExternalEvent | Boolean | Attribute | A value of true defines this event as external |
| mailApproverSubject | String | Attribute | The subject line of the mail sent to the approver |
| mailApproverContent | String | Attribute | The content of the mail sent to the approver |
| mailRequesterSubject | String | Attribute | The subject line of the mail sent to the requester when the VM is provisioned |
| mailRequesterContent | String | Attribute | The content of the mail sent to the requester when the VM is provisioned |
| mailRequesterDeclinedSubject | String | Attribute | The subject line of the mail sent to the requester when the VM is declined |
| mailRequesterDeclinedContent | String | Attribute | The content of the mail sent to the requester when the VM is declined |
| eventName | String | Attribute | The name of the external event |
| endDate | Date | Attribute | The end date for the wait on the external event |
| approvalSuccess | Boolean | Attribute | Whether the VM has been approved |

Now add all the attributes we defined in the configuration element and link them to the configuration. Create the workflow as shown in the following figure by adding the given elements:

- Scriptable task
- 4.02.1 SendMail (example workflow)
- Wait for custom event
- Decision
- Provision VM (example workflow)

Edit the scriptable task and bind the following variables to it:

- In: vmName, ipAddress, mailRequester, template, approver, daysToApproval
- Out: mailApproverSubject, mailApproverContent, mailRequesterSubject, mailRequesterContent, mailRequesterDeclinedSubject, mailRequesterDeclinedContent, eventName, endDate

Add the following script to the scriptable task:

```javascript
//construct event name
eventName = "provision-" + vmName;

//add days to today for approval
var today = new Date();
var endDate = new Date(today);
endDate.setDate(today.getDate() + daysToApproval);

//construct external URL for approval
var myURL = new URL();
myURL = System.customEventUrl(eventName, false);
externalURL = myURL.url;

//mail to approver
mailApproverSubject = "Approval needed: " + vmName;
mailApproverContent = "Dear Approver,\n the user " + mailRequester
  + " would like to provision a VM from template " + template.name
  + ".\n To approve please click here: " + externalURL;

//VM provisioned
mailRequesterSubject = "VM ready: " + vmName;
mailRequesterContent = "Dear Requester,\n the VM " + vmName
  + " has been provisioned and is now available under IP: " + ipAddress;

//declined
mailRequesterDeclinedSubject = "Declined: " + vmName;
mailRequesterDeclinedContent = "Dear Requester,\n the VM " + vmName
  + " has been declined by " + approver;
```

Bind the out-parameter of Wait for custom event to approvalSuccess. Configure the Decision element with approvalSuccess as true. Bind all the other variables to the workflow elements.
Improving with the presentation

We will now edit the workflow's presentation in order to make it workable for the requester. To do so, click on Presentation and follow the steps to alter the presentation, as seen in the following screenshot. Add the following properties to the in-parameters:

| In-parameter | Property | Value |
| --- | --- | --- |
| template | Predefined list of elements | #templates |
| folder | Predefined list of elements | #folders |
| datastore | Predefined list of elements | #datastores |
| pool | Predefined list of elements | #resourcePools |
| network | Predefined list of elements | #networks |

You can now use the General tab of each in-parameter to change the displayed text. Save and close the workflow.

How it works…

This is a very simplified example of an approval workflow to create VMs. The aim of this recipe is to introduce you to the method and ideas of how to build such a workflow. This workflow will only give a requester the choices that are configured in the configuration element, making the workflow quite safe for users who have only limited know-how of the IT environment.

When the requester submits the workflow, an e-mail is sent to the approver. The e-mail contains a link which, when clicked, triggers the external event and approves the VM. If the VM is approved, it will be provisioned, and when the provisioning has finished, an e-mail is sent to the requester stating that the VM is now available. If the VM is not approved within a certain timeframe, the requester will receive an e-mail stating that the VM was not approved.

To make this workflow fully functional, you can add permissions for a requester group to the workflow and Orchestrator so that the user can use vCenter to request a VM. Things you can do to improve the workflow are as follows:

- Schedule the provisioning for a future date.
- Use the resources for the e-mail and replace the content.
- Add an error workflow in case the provisioning fails.
- Use AD to read out the current user's e-mail and full name to improve the workflow.
- Create a workflow that lets an approver configure the configuration elements that a requester can choose from.
- Reduce the selections by creating, for instance, a development and a production configuration that contain the correct folders, datastores, networks, and so on.
- Create a decommissioning workflow that is automatically scheduled so that the VM is destroyed after a given period of time.

See also

Refer to the example workflow, 7.03 Approval, and the configuration element, 7 approval.

Summary

In this article, we discussed one of the important aspects of the interaction of Orchestrator with vCenter Server and vRealize Automation, namely VM provisioning.

Resources for Article:

Further resources on this subject:

- Importance of Windows RDS in Horizon View [article]
- Metrics in vRealize Operations [article]
- Designing and Building a Horizon View 6.0 Infrastructure [article]


Creating Horizon Desktop Pools

Packt
16 May 2016
17 min read
A Horizon desktop pool is a collection of desktops that users select when they log in using the Horizon client. A pool can be created based on a subset of users, such as finance, but this is not explicitly required unless you will be deploying multiple virtual desktop master images. The pool can be thought of as a central point of desktop management within Horizon; from it you create, manage, and entitle access to Horizon desktops. This article by Jason Ventresco, author of the book Implementing VMware Horizon View 6.X, will discuss how to create a desktop pool using the Horizon Administrator console, an important administrative task.

Creating a Horizon desktop pool

This section will provide an example of how to create two different Horizon dedicated-assignment desktop pools, one based on Horizon Composer linked clones and another based on full clones. Horizon Instant Clone pools only support floating assignment, so they have fewer options compared to the other types of desktop pools. Also discussed will be how to use the Horizon Administrator console and the vSphere client to monitor the provisioning process.

The examples provided for full clone and linked clone pools create dedicated-assignment pools, although floating-assignment pools may be created as well. The options will be slightly different for each, so refer to the information provided in the Horizon documentation (https://www.vmware.com/support/pubs/view_pubs.html) to understand what each setting means. Additionally, the Horizon Administrator console often explains each setting within the desktop pool configuration screens.

Creating a pool using Horizon Composer linked clones

The following steps outline how to use the Horizon Administrator console to create a dedicated-assignment desktop pool using Horizon Composer linked clones. As discussed previously, it is assumed that you already have a virtual desktop master image that you have created a snapshot of. During each stage of the pool creation process, a description of many of the settings is displayed on the right-hand side of the Add Desktop Pool window. In addition, a question mark appears next to some of the settings; click on it to read important information about the specified setting.

1. Log on to the Horizon Administrator console using an AD account that has administrative permissions within Horizon.
2. Open the Catalog | Desktop Pools window within the console.
3. Click on the Add… button in the Desktop Pools window to open the Add Desktop Pool window.
4. In the Desktop Pool Definition | Type window, select the Automated Desktop Pool radio button, and then click on Next >.
5. In the Desktop Pool Definition | User Assignment window, select the Dedicated radio button, check the Enable automatic assignment checkbox, and then click on Next >.
6. In the Desktop Pool Definition | vCenter Server window, select the View Composer linked clones radio button, highlight the vCenter server, and then click on Next >.
7. In the Setting | Desktop Pool Identification window, populate the pool ID: field and, optionally, configure the Display Name: field. When finished, click on Next >.
8. In the Setting | Desktop Pool Settings window, configure the various settings for the desktop pool. These settings can also be adjusted later if desired. When finished, click on Next >.
9. In the Setting | Provisioning Settings window, configure the various provisioning options for the desktop pool, which include the desktop naming format, the number of desktops, and the number of desktops that should remain available during Horizon Composer maintenance operations. When finished, click on Next >. When creating a desktop naming pattern, use {n} to instruct Horizon to insert a unique number in the desktop name. For example, using Win10x64{n} will name the first desktop Win10x641, the next Win10x642, and so on.
10. In the Setting | View Composer Disks window, configure the settings for your optional linked clone disks. By default, both a persistent disk for user data and a non-persistent disk for disposable file redirection are created. When finished, click on Next >.
11. In the Setting | Storage Optimization window, we configure whether or not our desktop storage is provided by VMware Virtual SAN and, if not, whether or not to separate our Horizon desktop replica disks from the individual desktop OS disks. In our example, we have checked the Use VMware Virtual SAN radio button, as that is what our destination vSphere cluster is using. When finished, click on Next >. As all-flash storage arrays or all-flash or flash-dependent Software Defined Storage (SDS) platforms become more common, there is less of a need to place the shared linked clone replica disks on separate, faster datastores than the individual desktop OS disks.
12. In the Setting | vCenter Settings window, we will need to configure six different options: the parent virtual machine, which snapshot of that virtual machine to use, what vCenter folder to place the desktops in, what vSphere cluster and resource pool to deploy the desktops to, and what datastores to use. Click on the Browse… button next to the Parent VM: field to begin the process and open the Select Parent VM window.
13. In the Select Parent VM window, highlight the virtual desktop master image that you wish to deploy desktops from. Click on OK when the image is selected to return to the previous window. The virtual machine will only appear if a snapshot has been created.
14. In the Setting | vCenter Settings window, click on the Browse… button next to the Snapshot: field to open the Select default image window. Select the desired snapshot and click on OK to return to the previous window.
15. In the Setting | vCenter Settings window, click on the Browse… button next to the VM folder location: field to open the VM Folder Location window. Select the folder within vCenter where you want the desktop virtual machines to be placed, and click on OK to return to the previous window.
16. In the Setting | vCenter Settings window, click on the Browse… button next to the Host or cluster: field to open the Host or Cluster window. Select the cluster or individual ESXi server within vCenter where you want the desktop virtual machines to be created, and click on OK to return to the previous window.
17. In the Setting | vCenter Settings window, click on the Browse… button next to the Resource pool: field to open the Resource Pool window. If you intend to place the desktops within a resource pool, you would select that here; if not, select the same cluster or ESXi server you chose in the previous step. Once finished, click on OK to return to the previous window.
18. In the Setting | vCenter Settings window, click on the Browse… button next to the Datastores: field to open the Select Linked Clone Datastores window. Select the datastore or datastores where you want the desktops to be created, and click on OK to return to the previous window. If you were using storage other than VMware Virtual SAN, and had opted to use separate datastores for your OS and replica disks in step 11, you would have had to select unique datastores for each here instead of just one. Additionally, you would have had the option to configure the storage overcommit level.
19. The Setting | vCenter Settings window should now have all options selected, enabling the Next > button. When finished, click on Next >.
20. In the Setting | Advanced Storage Options window, if desired, select and configure the Use View Storage Accelerator and Other Options checkboxes to enable those features. In our example, we have enabled both the Use View Storage Accelerator and Reclaim VM disk space options, and configured Blackout Times to ensure that these operations do not occur between 8 A.M. (08:00) and 5 P.M. (17:00) on weekdays. When finished, click on Next >. The Use native NFS snapshots (VAAI) feature enables Horizon to leverage features of a supported NFS storage array to offload the creation of linked clone desktops. If you are using an external array with your Horizon ESXi servers, consult the product documentation to understand whether it supports this feature. Since we are using VMware Virtual SAN, this and the other settings under Other Options are greyed out, as they are not needed. Additionally, if View Storage Accelerator is not enabled in the vCenter Server settings, the option to use it would be greyed out here.
21. In the Setting | Guest Customization window, select the Domain: where the desktops will be created, the AD container: where the computer accounts will be placed, whether to Use QuickPrep or Use a customization specification (Sysprep), and any other options as required. When finished, click on Next >.
22. In the Setting | Ready to Complete window, verify that the settings we selected are correct, using the < Back button if needed to go back and make changes. If all the settings are correct, click on Finish to initiate the creation of the desktop pool.

The Horizon desktop pool and virtual desktops will now be created.

Creating a pool using Horizon Instant Clones

The process used to create an Instant Clone desktop pool is similar to that used to create a linked clone pool. As discussed previously, it is assumed that you already have a virtual desktop master image that has the Instant Clone option enabled in the Horizon agent, and that you have taken a snapshot of that master image. A master image can have either the Horizon Composer (linked clone) option or the Instant Clone option enabled in the Horizon agent, but not both. To get around this restriction, you can configure one snapshot of the master image with the View Composer option installed, and a second with the Instant Clone option installed.

The following steps outline the process used to create the Instant Clone desktop pool. Screenshots are included only when the step differs significantly from the same step in the Creating a pool using Horizon Composer linked clones section.

1. Log on to the Horizon Administrator console using an AD account that has administrative permissions within Horizon.
2. Open the Catalog | Desktop Pools window within the console.
3. Click on the Add… button in the Desktop Pools window to open the Add Desktop Pool window.
4. In the Desktop Pool Definition | Type window, select the Automated Desktop Pool radio button, and then click on Next >.
5. In the Desktop Pool Definition | User Assignment window, select the Floating radio button (mandatory for Instant Clone desktops), and then click on Next >.
6. In the Desktop Pool Definition | vCenter Server window, select the Instant Clones radio button, highlight the vCenter server, and then click on Next >. If Instant Clones is greyed out here, it is usually because you did not select Floating in the previous step.
7. In the Setting | Desktop Pool Identification window, populate the pool ID: field and, optionally, configure the Display Name: field. When finished, click on Next >.
8. In the Setting | Desktop Pool Settings window, configure the various settings for the desktop pool. These settings can also be adjusted later if desired. When finished, click on Next >.
9. In the Setting | Provisioning Settings window, configure the various provisioning options for the desktop pool, which include the desktop naming format, the number of desktops, and the number of desktops that should remain available during maintenance operations. When finished, click on Next >. Instant Clones are required to always be powered on, so some options available to linked clones will be greyed out here.
10. In the Setting | Storage Optimization window, we configure whether or not our desktop storage is provided by VMware Virtual SAN and, if not, whether or not to separate our Horizon desktop replica disks from the individual desktop OS disks. When finished, click on Next >.
11. In the Setting | vCenter Settings window, we will need to configure six different options: the parent virtual machine, which snapshot of that virtual machine to use, what vCenter folder to place the desktops in, what vSphere cluster and resource pool to deploy the desktops to, and what datastores to use. Click on the Browse… button next to the Parent VM: field to begin the process and open the Select Parent VM window.
12. In the Select Parent VM window, highlight the virtual desktop master image that you wish to deploy desktops from. Click on OK when the image is selected to return to the previous window.
13. In the Setting | vCenter Settings window, click on the Browse… button next to the Snapshot: field to open the Select default image window. Select the desired snapshot and click on OK to return to the previous window.
14. In the Setting | vCenter Settings window, click on the Browse… button next to the VM folder location: field to open the VM Folder Location window. Select the folder within vCenter where you want the desktop virtual machines to be placed, and click on OK to return to the previous window.
15. In the Setting | vCenter Settings window, click on the Browse… button next to the Host or cluster: field to open the Host or Cluster window. Select the cluster or individual ESXi server within vCenter where you want the desktop virtual machines to be created, and click on OK to return to the previous window.
16. In the Setting | vCenter Settings window, click on the Browse… button next to the Resource pool: field to open the Resource Pool window. If you intend to place the desktops within a resource pool, you would select that here; if not, select the same cluster or ESXi server you chose in the previous step. Once finished, click on OK to return to the previous window.
17. In the Setting | vCenter Settings window, click on the Browse… button next to the Datastores: field to open the Select Instant Clone Datastores window. Select the datastore or datastores where you want the desktops to be created, and click on OK to return to the previous window.
18. The Setting | vCenter Settings window should now have all options selected, enabling the Next > button. When finished, click on Next >.
19. In the Setting | Guest Customization window, select the Domain: where the desktops will be created, the AD container: where the computer accounts will be placed, and any other options as required. When finished, click on Next >. Instant Clones only support ClonePrep for customization, so there are fewer options here than when deploying a linked clone desktop pool.
20. In the Setting | Ready to Complete window, verify that the settings we selected are correct, using the < Back button if needed to go back and make changes. If all the settings are correct, click on Finish to initiate the creation of the desktop pool.

The Horizon desktop pool and Instant Clone virtual desktops will now be created.

Creating a pool using full clones

The process used to create a full clone desktop pool is similar to that used to create a linked clone pool. As discussed previously, it is assumed that you already have a virtual desktop master image that you have converted to a vSphere template. In addition, if you wish for Horizon to perform the virtual machine customization, you will need to create a Customization Specification using the vCenter Customization Specifications Manager. The Customization Specification is used by the Windows Sysprep utility to complete the guest customization process. Visit the VMware vSphere virtual machine administration guide (http://pubs.vmware.com/vsphere-60/index.jsp) for instructions on how to create a Customization Specification.

The following steps outline the process used to create the full clone desktop pool. Screenshots are included only when the step differs significantly from the same step in the Creating a pool using Horizon Composer linked clones section.

1. Log on to the Horizon Administrator console using an AD account that has administrative permissions within Horizon.
2. Open the Catalog | Desktop Pools window within the console.
3. Click on the Add… button in the Desktop Pools window to open the Add Desktop Pool window.
4. In the Desktop Pool Definition | Type window, select the Automated Pool radio button and then click on Next.
5. In the Desktop Pool Definition | User Assignment window, select the Dedicated radio button, check the Enable automatic assignment checkbox, and then click on Next.
6. In the Desktop Pool Definition | vCenter Server window, click the Full virtual machines radio button, highlight the desired vCenter server, and then click on Next.
7. In the Setting | Desktop Pool Identification window, populate the pool ID: and Display Name: fields and then click on Next.
8. In the Setting | Desktop Pool Settings window, configure the various settings for the desktop pool. These settings can also be adjusted later if desired. When finished, click on Next >.
9. In the Setting | Provisioning Settings window, configure the various provisioning options for the desktop pool, which include the desktop naming format and the number of desktops. When finished, click on Next >.
10. In the Setting | Storage Optimization window, we configure whether or not our desktop storage is provided by VMware Virtual SAN. When finished, click on Next >.
11. In the Setting | vCenter Settings window, we will need to configure settings that select the virtual machine template, what vSphere folder to place the desktops in, which ESXi server or cluster to deploy the desktops to, and which datastores to use. Other than the Template setting described in the next step, each of these settings is identical to those seen when creating a Horizon Composer linked clone pool. Click on the Browse… button next to each of the settings in turn and select the appropriate options.
12. To configure the Template: setting, select the vSphere template that you created from your virtual desktop master image, and then click OK to return to the previous window. A template will only appear if one is present within vCenter.
13. Once all the settings in the Setting | vCenter Settings window have been configured, click on Next >.
14. In the Setting | Advanced Storage Options window, if desired, select and configure the Use View Storage Accelerator option and configure Blackout Times. When finished, click on Next >.
15. In the Setting | Guest Customization window, select either the None | Customization will be done manually or the Use this customization specification radio button, and, if applicable, select a customization specification. When finished, click on Next >. In this example, we have selected the Win10x64-HorizonFC customization specification that we previously created within vCenter. Manual customization is typically used when the template has been configured to run Sysprep automatically upon startup, without requiring any interaction from either Horizon or VMware vSphere.
16. In the Setting | Ready to Complete window, verify that the settings we selected are correct, using the < Back button if needed to go back and make changes. If all the settings are correct, click on Finish to initiate the creation of the desktop pool.

The desktop pool and virtual desktops will now be created.

Summary

In this article, we have learned about Horizon desktop pools. In addition to learning how to create three different types of desktop pools, we were introduced to a number of key concepts that are part of the pool creation process.

Resources for Article:

Further resources on this subject:

- Essentials of VMware vSphere [article]
- Cloning and Snapshots in VMware Workstation [article]
- An Introduction to VMware Horizon Mirage [article]


Docker Container Management at Scale with SaltStack

Nicole Thomas
25 Sep 2014
8 min read
Every once in a while a technology comes along that changes the way work is done in a data center. It happened with virtualization and we are seeing it with various cloud computing technologies and concepts. But most recently, the rise in popularity of container technology has given us reason to rethink interdependencies within software stacks and how we run applications in our infrastructures. Despite all of the enthusiasm around the potential of containers, they are still just another thing in the data center that needs to be centrally controlled and managed...often at massive scale. This article will provide an introduction to how SaltStack can be used to manage all of this, including container technology, at web scale. SaltStack Systems Management Software The SaltStack systems management software is built for remote execution, orchestration, and configuration management and is known for being fast, scalable, and flexible. Salt is easy to set up and can easily communicate asynchronously with tens of thousands of servers in a matter of seconds. Salt was originally built as a remote execution engine relying on a secure, bi-directional communication system utilizing a Salt Master daemon used to control Salt Minion daemons, where the minions receive commands from the remote master. Salt’s configuration management capabilities are called the state system, or Salt States, and are built on SLS formulas. SLS files are data structures based on dictionaries, lists, strings, and numbers that are used to enforce the state that a system should be in, also known as configuration management. SLS files are easy to write, simple to implement, and are typically written in YAML. State file execution occurs on the Salt Minion. Therefore, once any states files that the infrastructure requires have been written, it is possible to enforce state on tens of thousands of hosts simultaneously. Additionally, each minion returns its status to the Salt Master. Docker containers Docker is an agile runtime and packaging platform specializing in enabling applications to be quickly built, shipped, and run in any environment. Docker containers allow developers to easily compartmentalize their applications so that all of the program’s needs are installed and configured in a single container. This container can then be dropped into clouds, data centers, laptops, virtual machines (VMs), or infrastructures where the app can execute, unaltered, without any cross-environment issues. Building Docker containers is fast, simple, and inexpensive. Developers can create a container, change it, break it, fix it, throw it away, or save it much like a VM. However, Docker containers are much more nimble, as the containers only possess the application and its dependencies. VMs, along with the application and its dependencies, also install a guest operating system, which requires much more overhead than may be necessary. While being able to slice deployments into confined components is a good idea for application portability and resource allocation, it also means there are many more pieces that need to be managed. As the number of containers scales, without proper management, inefficiencies abound. This scenario, known as container sprawl, is where SaltStack can help and the combination of SaltStack and Docker quickly proves its value. SaltStack + Docker When we combine SaltStack’s powerful configuration management armory with Docker’s portable and compact containerization tools, we get the best of both worlds. 
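As a point of reference before the demonstration, here is a minimal sketch of what Salt's remote execution and state commands look like when run from the Salt Master; the 'web*' target and the apache state name are purely illustrative:

# Check that all minions respond to the master
salt '*' test.ping
# Run an arbitrary shell command on every minion
salt '*' cmd.run 'uptime'
# Query a grain (a static host fact) such as the operating system name
salt '*' grains.item os
# Apply a state file (for example, /srv/salt/apache/init.sls) to a subset of minions
salt 'web*' state.sls apache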
SaltStack has added support for users to manage Docker containers on any infrastructure. The following demonstration illustrates how SaltStack can be used to manage Docker containers using Salt States. We are going to use a Salt Master to control a minimum of two Salt Minions. On one minion we will install HAProxy. On the other minion, we will install Docker and then spin up two containers on that minion. Each container hosts a PHP app with Apache that displays information about each container's IP address and exposed port. The HAProxy minion will automatically update to pull the IP address and port number from each of the Docker containers housed on the Docker minion. You can certainly set up more minions for this experiment, where additional minions would be additional Docker container minions, to see an implementation of containers at scale, but it is not necessary to see what is possible.
First, we have a few installation steps to take care of. SaltStack's installation documentation provides a thorough overview of how to get Salt up and running, depending on your operating system. Follow the instructions in Salt's Installation Guide to install and configure your Salt Master and two Salt Minions. Go through the process of accepting the minion keys on the Salt Master and, to make sure the minions are connected, run:
salt '*' test.ping
Note: Much of the Docker functionality used in this demonstration is brand new. You must install the latest supported version of Salt, which is 2014.1.6. For reference, I have my master and minions all running on Ubuntu 14.04. My Docker minion is named "docker1" and my HAProxy minion is named "haproxy".
Now that you have a Salt Master and two Salt Minions configured and communicating with each other, it's time to get to work. Dave Boucha, a Senior Engineer at SaltStack, has already set up the necessary configuration files for both our haproxy and docker1 minions and we will be using those to get started. First, clone the Docker files in Dave's GitHub repository, dock-apache, and copy the dock_apache and docker directories into the /srv/salt directory on your Salt Master (you may need to create the /srv/salt directory):
cp -r path/to/download/dock_apache /srv/salt
cp -r path/to/download/docker /srv/salt
The init.sls file inside the docker directory is a Salt State that installs Docker and all of its dependencies on your minion. The docker.pgp file is a public key that is used during the Docker installation. To fire off all of these events, run the following command:
salt 'docker1' state.sls docker
Once Docker has successfully installed on your minion, we need to set up our two Docker containers. The init.sls file in the dock_apache directory is a Salt State that specifies the image to pull from Docker's public container repository, installs the two containers, and assigns the ports that each container should expose: 8000 for the first container and 8080 for the second container. Now, let's run the state:
salt 'docker1' state.sls dock_apache
Let's check and see if it worked. Get the IP address of the minion by running:
salt 'docker1' network.ip_addrs
Copy and paste the IP address into a browser and add one of the ports to the end. Try them both (IP:8000 and IP:8080) to see each container up and running! At this point, you should have two Docker containers installed on your docker1 minion (or any other Docker minions you may have created). 
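Before moving on, it can be worth confirming from the Salt Master that both containers really are running on the Docker minion. This is a minimal sketch that uses Salt's cmd.run module to call the Docker CLI remotely; the exact container names and IDs in the output will differ in your environment:

# List the running containers and their port mappings on the docker1 minion
salt 'docker1' cmd.run 'docker ps'
# Confirm that the image used by the dock_apache state has been pulled
salt 'docker1' cmd.run 'docker images'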
Now we need to set up the haproxy minion. Download the files from Dave's GitHub repository, haproxy-docker, and place them all in your Salt Master's /srv/salt/haproxy directory:
cp -r path/to/download/haproxy /srv/salt
First, we need to retrieve the data about our Docker containers from the Docker minion(s), such as the container IP address and port numbers, by running the haproxy.config state:
salt 'haproxy' state.sls haproxy.config
By running haproxy.config, we are setting up a new configuration with the Docker minion data. The configuration file also runs the haproxy state (init.sls). The init.sls file contains a haproxy state that ensures that HAProxy is installed on the minion, sets up the configuration for HAProxy, and then runs HAProxy. You should now be able to look at the HAProxy statistics by entering the haproxy minion's IP address in your browser with /haproxy?stats on the end of the URL. When the authentication pop-up appears, the username and password are both saltstack. While this is a simple example of running HAProxy to balance the load of only two Docker containers, it demonstrates how Salt can be used to manage tens of thousands of containers as your infrastructure scales. To see this same demonstration with a haproxy minion in action with 99 Docker minions, each with two Docker containers installed, check out SaltStack's Salt Air 20 tutorial by Dave Boucha on YouTube.
About the author
Nicole Thomas is a QA Engineer at SaltStack, Inc. Before coming to SaltStack, she wore many hats, from web and Android developer to contributing editor to working in Environmental Education. Nicole recently graduated Summa Cum Laude from Westminster College with a degree in Computer Science. Nicole also has a degree in Environmental Studies from the University of Utah.


Introducing vSphere vMotion

Packt
16 Aug 2016
5 min read
In this article by Abhilash G B and Rebecca Fitzhugh, authors of the book Learning VMware vSphere, we are going to be talking about vSphere vMotion, a VMware technology used to migrate a running virtual machine from one host to another without altering its power state. The beauty of the whole process is that it is transparent to the applications running inside the virtual machine. In this section we will understand the inner workings of vMotion and learn how to configure it. There are different types of vMotion, such as:
Compute vMotion
Storage vMotion
Unified vMotion
Enhanced vMotion (X-vMotion)
Cross vSwitch vMotion
Cross vCenter vMotion
Long Distance vMotion
(For more resources related to this topic, see here.)
Compute vMotion is the default vMotion method and is employed by other features such as DRS, FT, and Maintenance Mode. When you initiate a vMotion, it initiates an iterative copy of all memory pages. After the first pass, all the dirtied memory pages are copied again in another pass, and this is done iteratively until the number of pages left to be copied is small enough to be transferred and to switch over the state of the VM to the destination host. During the switchover, the virtual machine's device state is transferred and resumed at the destination host. You can initiate up to 8 simultaneous vMotion operations on a single host.
Storage vMotion is used to migrate the files backing a virtual machine (virtual disks, configuration files, logs) from one datastore to another while the virtual machine is still running. When you initiate a Storage vMotion, it starts a sequential copy of the source disk in 64 MB chunks. While a region is being copied, all the writes issued to that region are deferred until the region is copied. An already copied source region is monitored for further writes. If there is a write I/O, then it will be mirrored to the destination disk as well. This process of mirroring writes to the destination virtual disk continues until the sequential copy of the entire source virtual disk is complete. Once the sequential copy is complete, all subsequent READs/WRITEs are issued to the destination virtual disk. Keep in mind, though, that while the sequential copy is still in progress, all the READs are issued to the source virtual disk. Storage vMotion is used by Storage DRS. You can initiate up to 2 simultaneous Storage vMotion operations on a single host.
Unified vMotion is used to migrate both the running state of a virtual machine and the files backing it from one host and datastore to another. Unified vMotion uses a combination of both Compute and Storage vMotion to achieve the migration. First, the configuration files and the virtual disks are migrated, and only then will the migration of the live state of the virtual machine begin. You can initiate up to 2 simultaneous Unified vMotion operations on a single host.
Enhanced vMotion (X-vMotion) is used to migrate virtual machines between hosts that do not share storage. Both the virtual machine's running state and the files backing it are transferred over the network to the destination. The migration procedure is the same as for Compute and Storage vMotion. In fact, Enhanced vMotion uses Unified vMotion to achieve the migration. Since the memory and disk states are transferred over the vMotion network, ESXi hosts maintain a transmit buffer at the source and a receive buffer at the destination. 
The transmit buffer collects and places data onto the network, while the receive buffer collects data received via the network and flushes it to the storage. You can initiate up to 2 simultaneous X-vMotion operations on a single host.
Cross vSwitch vMotion allows you to choose a destination port group for the virtual machine. It is important to note that unless the destination port group supports the same L2 network, the virtual machine will not be able to communicate over the network. Cross vSwitch vMotion allows changing from a Standard vSwitch to a VDS, but not from a VDS to a Standard vSwitch. vSwitch to vSwitch and VDS to VDS are supported.
Cross vCenter vMotion allows migrating virtual machines beyond the vCenter Server's boundary. This is a new enhancement with vSphere 6.0. However, for this to be possible, both vCenter Servers should be in the same SSO domain and should be in Enhanced Linked Mode. The infrastructure requirements for Cross vCenter vMotion are detailed in the VMware Knowledge Base article 2106952 at the following link: http://kb.vmware.com/kb/2106952.
Long Distance vMotion allows migrating virtual machines over distances with a latency not exceeding 150 milliseconds. Prior to vSphere 6.0, the maximum supported network latency for vMotion was 10 milliseconds.
Using the provisioning interface
You can configure a Provisioning Interface to send all non-active data of the virtual machine being migrated. Prior to vSphere 6.0, vMotion used the VMkernel interface that has the default gateway configured on it (which in most cases is the management interface vmk0) to transfer non-performance-impacting vMotion data. Non-performance-impacting vMotion data includes the virtual machine's home directory, older deltas in the snapshot chain, base disks, and so on. Only the live data will hit the vMotion interface. The Provisioning Interface is nothing but a VMkernel interface with Provisioning traffic enabled on it. The procedure to do this is very similar to how you would configure a VMkernel interface for Management or vMotion traffic. You will have to edit the settings of the intended vmk interface and set Provisioning traffic as the enabled service: It is important to keep in mind that the provisioning interface is not just meant for vMotion data; if enabled, it will also be used for cold migrations, cloning operations, and virtual machine snapshots. The provisioning interface can be configured to use a different gateway other than the VMkernel's default gateway.
Further resources on this subject:
Cloning and Snapshots in VMware Workstation [article]
Essentials of VMware vSphere [article]
Upgrading VMware Virtual Infrastructure Setups [article]


Monitoring and Troubleshooting Networking

Packt
21 Oct 2015
21 min read
This article by Muhammad Zeeshan Munir, author of the book VMware vSphere Troubleshooting, covers troubleshooting of vSphere virtual distributed switches, vSphere standard virtual switches, VLANs, uplinks, DNS, and routing: the core issues that a seasoned system engineer has to deal with on a daily basis. This article will cover all these topics and give you hands-on, step-by-step instructions to manage and monitor your network resources. The following topics will be covered in this article:
Different network troubleshooting commands
VLAN troubleshooting
Verification of physical trunks and VLAN configuration
Testing of VM connectivity
VMkernel interface troubleshooting
Configuration commands (vicfg-vmknic and esxcli network ip interface)
Use of the Direct Console User Interface (DCUI) to verify configuration
(For more resources related to this topic, see here.)
Network troubleshooting commands
Some of the commands that can be used for network troubleshooting include net-dvs, esxcli network, vicfg-route, vicfg-vmknic, vicfg-dns, vicfg-nics, and vicfg-vswitch. You can use the net-dvs command to troubleshoot VMware distributed dvSwitches. The command shows all the information regarding the VMware distributed dvSwitch configuration. The net-dvs command reads the information from the /etc/vmware/dvsdata.db file and displays all the data in the console. A vSphere host keeps updating its dvsdata.db file every five minutes. Connect to a vSphere host using PuTTY. Enter your user name and password when prompted. Type the following command in the CLI:
net-dvs
You will see something similar to the following screenshot: In the preceding screenshot, you can see that the first line represents the UUID of a VMware distributed switch. The second line shows the maximum number of ports a distributed switch can have. The line com.vmware.common.alias = dvswitch-Network-Pools represents the name of a distributed switch. The next line, com.vmware.common.uplinkPorts: dvUplink1 to dvUplinkn, shows the uplink ports a distributed switch has. The distributed switch MTU is set to 1,600 and you can see the information about CDP just below it. CDP information can be useful to troubleshoot connectivity issues. You can see com.vmware.common.respools.list listing networking resource pools, while com.vmware.common.host.uplinkPorts shows the port numbers assigned to uplink ports. Further details about these uplink ports are then listed for each uplink port by its port number. You can also see the port statistics as displayed in the following screenshot. When you perform troubleshooting, these statistics can help you to check the behavior of the distributed switch and the ports. From these statistics, you can diagnose whether the data packets are going in and out. As you can see in the following screenshot, all the metrics regarding packet drops are zero. If you find in your troubleshooting that packets are being dropped, you can easily start finding the root cause of the problem. Unfortunately, the net-dvs command is very poorly documented, and usually it is hard to find useful references. Moreover, it is not supported by VMware. However, you can use it with the -h switch to display more options.
Repairing a dvsdata.db file
Sometimes, the dvsdata.db file of a vSphere host becomes corrupted and you face different types of distributed switch errors, for example, unable to create proxy DVS. In this case, when you try to run the net-dvs command on a vSphere host, it will fail with an error as well. 
As I have mentioned earlier, the net-dvs command reads data from the /etc/vmware/dvsdata.db file—it fails because it is unable to read data from the file. The possible cause for the corruption of the dvsdata.db file could be network outage; or when a vSphere host is disconnected from vCenter and deleted, it might have the information in its cache. You can resolve this issue by restoring the dvsdata.db file by following these steps: Through PuTTY, connect to a functioning vSphere host in your infrastructure. Copy the dvsdata.db file from the vSphere host. The file can be found in /etc/vmware/dvsdata.db. Transfer the copied dvsdata.db file to the corrupted vSphere host and overwrite it. Restart your vSphere host. Once the vSphere host is up and running, use PuTTY to connect to it. Run the net-dvs command. The command should be executed successfully this time without any errors. ESXCLI network The esxcli network command is a longtime friend of the system administrator and the support staff for troubleshooting network related issues. The esxcli network command will be used to examine different network configurations and to troubleshoot problems. You can type esxcli network to quickly see a help reference and the different options that can be used with the command. Let's walk through some useful esxcli network troubleshooting commands. Type the following command into your vSphere CLI to list all the virtual machines and the networks they are on. You can see that the command returned World ID, virtual machine name, number of ports, and the network: esxcli network vm list World ID  Name  Num Ports  Networks --------  ---------------------------------------------------  ---------  --------------- 14323012  cluster08_(5fa21117-18f7-427c-84d1-c63922199e05)          1  dvportgroup-372 Now use the World ID of a virtual machine returned by the last command to list all the ports the virtual machine is currently using. 
You can see the virtual switch name, MAC address of the NIC, IP address, and uplink port ID:
esxcli network vm port list -w 14323012
Port ID: 50331662
vSwitch: dvSwitch-Network-Pools
Portgroup: dvportgroup-372
DVPort ID: 1063
MAC Address: 00:50:56:01:00:7e
IP Address: 0.0.0.0
Team Uplink: all(2)
Uplink Port ID: 0
Active Filters:
Type the following command in the CLI to list the statistics of the virtual switch port; you need to replace the port ID with the one returned by the last command after the -p flag:
esxcli network port stats get -p 50331662
Packet statistics for port 50331662
Packets received: 10787391024
Packets sent: 7661812086
Bytes received: 3048720170788
Bytes sent: 154147668506
Broadcast packets received: 17831672
Broadcast packets sent: 309404
Multicast packets received: 656
Multicast packets sent: 52
Unicast packets received: 10769558696
Unicast packets sent: 7661502630
Receive packets dropped: 92865923
Transmit packets dropped: 0
Type the following command to list complete information about a physical network card of the vSphere host:
esxcli network nic stats get -n vmnic0
NIC statistics for vmnic0
Packets received: 2969343419
Packets sent: 155331621
Bytes received: 2264469102098
Bytes sent: 46007679331
Receive packets dropped: 0
Transmit packets dropped: 0
Total receive errors: 78507
Receive length errors: 0
Receive over errors: 22
Receive CRC errors: 0
Receive frame errors: 0
Receive FIFO errors: 78485
Receive missed errors: 0
Total transmit errors: 0
Transmit aborted errors: 0
Transmit carrier errors: 0
Transmit FIFO errors: 0
Transmit heartbeat errors: 0
Transmit window errors: 0
A complete reference of the esxcli network command can be found at https://goo.gl/9OMbVU. All the vicfg-* commands are very helpful and easy to use. I encourage you to learn them in order to make your life easier. Here are some of the vicfg-* commands relevant to network troubleshooting:
vicfg-route: We will use this command to add or remove IP routes and to create and delete default IP gateways.
vicfg-vmknic: We will use this command to perform different operations on VMkernel NICs for vSphere hosts.
vicfg-dns: This command will be used to manipulate DNS information.
vicfg-nics: We will use this command to manipulate vSphere physical NICs.
vicfg-vswitch: We will use this command to create, delete, and modify vSwitch information.
Troubleshooting uplinks
We will use the vicfg-nics command to manage physical network adapters of vSphere hosts. The vicfg-nics command can also be used to set up the speed, VMkernel name for the uplink adapters, duplex setting, driver information, and link state information of the NIC. Connect to your vMA appliance console and set up the target vSphere host:
vifptarget --set crimv3esx001.linxsol.com
List all the network cards available in the vSphere host. See the following screenshot for the output:
vicfg-nics -l
You can see that my vSphere host has network cards vmnic0 to vmnic5. You are able to see the PCI and driver information. The link state for all the network cards is up. You can also see two types of network card speeds: 1000 Mbps and 9000 Mbps. There is also a card name in the Description field, the MTU, and the MAC address for the network cards. You can set up a network card to auto-negotiate as follows:
vicfg-nics --auto vmnic0
Now let's set the speed of vmnic0 to 1000 and its duplex setting to full:
vicfg-nics --duplex full --speed 1000 vmnic0
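As a cross-check from the host itself, the same uplink details can be pulled with ESXCLI. This is a minimal sketch; vmnic0 is simply the example adapter used above:

# List all physical NICs with their link state, speed, duplex, and driver
esxcli network nic list
# Show detailed information for a single uplink
esxcli network nic get -n vmnic0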
Troubleshooting virtual switches
The last command we will discuss in this article is vicfg-vswitch. The vicfg-vswitch command is a very powerful command that can be used to manage the day-to-day operations of a virtual switch. I will show you how to create and configure port groups and virtual switches. Set up a vSphere host in the vMA appliance in which you want to get information about virtual switches:
vifptarget --set crimv3esx001.linxsol.com
Type the following command to list all the information about the switches the vSphere host has. You can see the command output in the screenshot that follows:
vicfg-vswitch -l
You can see that the vSphere host has one virtual switch and two virtual NICs carrying traffic for the management network and for vMotion. The virtual switch has 128 ports, and 7 of them are in the used state. There are two uplinks to the switch with the MTU set to 1500, while two VLANs are being used: one for the management network and one for the vMotion traffic. You can also see three distributed switches named OpenStack, dvSwitch-External-Networks, and dvSwitch-Network-Pools. Prefixing the distributed switch name with dv is a common practice, and it can help you to easily recognize a distributed switch. I will go through adding a new virtual switch:
vicfg-vswitch --add vSwitch002
This creates a virtual switch with 128 ports and an MTU of 1500. You can use the --mtu flag to specify a different MTU. Now add an uplink adapter, vmnic0, to the newly created virtual switch vSwitch002:
vicfg-vswitch --link vmnic0 vSwitch002
To add a port group to the virtual switch, use the following command:
vicfg-vswitch --add-pg portgroup002 vSwitch002
Now add an uplink adapter to the port group:
vicfg-vswitch --add-pg-uplink vmnic0 --pg portgroup002 vSwitch002
We have discussed all the commands to create a virtual switch and its port groups and to add uplinks. Now we will see how to delete and edit the configuration of a virtual switch. An uplink NIC can be deleted from the port group using the -N flag. Remove vmnic0 from portgroup002:
vicfg-vswitch --del-pg-uplink vmnic0 --pg portgroup002 vSwitch002
You can delete the recently created port group as follows:
vicfg-vswitch --del-pg portgroup002 vSwitch002
To delete a switch, you first need to remove the uplink adapter from the virtual switch. You need to use the -U flag, which unlinks the uplink from the switch:
vicfg-vswitch --unlink vmnic0 vSwitch002
You can delete a virtual switch using the -d flag. Here is how you do it:
vicfg-vswitch --delete vSwitch002
You can check the Cisco Discovery Protocol (CDP) settings by using the --get-cdp flag with the vicfg-vswitch command. The following command resulted in putting the CDP in the Listen state, which indicates that the vSphere host is configured to receive CDP information from the physical switch:
vi-admin@vma:~[crimv3esx001.linxsol.com]> vicfg-vswitch --get-cdp vSwitch0
listen
You can configure the CDP options for the vSphere host to down, listen, or advertise. In the Listen mode, the vSphere host tries to discover and publish the information received from a Cisco switch port, though the information of the vSwitch cannot be seen by the Cisco device. In the Advertise mode, the vSphere host doesn't discover and publish the information about the Cisco switch; instead, it publishes information about its vSwitch to the Cisco switch device.
vicfg-vswitch --set-cdp both vSwitch0
Troubleshooting VLANs
Virtual LANs or VLANs are used to separate the physical switching segment into different logical switching segments in order to segregate the broadcast domains. 
VLANs not only provide network segmentation but also provide us a method of effective network management. It also increases the overall network security, and nowadays, it is very commonly used in infrastructure. If not set up correctly, it can lead your vSphere host to no connectivity, and you can face some very common problems where you are unable to ping or resolve the host names anymore. Some common errors are exposed, such as Destination host unreachable and Connection failed. A Private VLAN (PVLAN) is an extended version of VLAN that divides logical broadcast domain into further segments and forms private groups. PVLANs are divided into primary and secondary PVLANs. Primary PVLAN is the VLAN distributed into smaller segments that are called primary. These then host all the secondary PVLANs within them. Secondary PVLANs live within primary VLANS, and individual secondary VLANs are recognized by VLAN IDs linked to them. Just like their ancestor VLANs, the packets that travel within secondary VLANS are tagged with their associated IDs. Then, the physical switch recognizes if the packets are tagged as isolated, community, or promiscuous. As network troubleshooting involves taking care of many different aspects, one aspect you will come across in the troubleshooting cycle is actually troubleshooting VLANS. vSphere Enterprise Plus licensing is a requirement to connect a host using a virtual distributed switch and VLANs. You can see the three different network segments in the following screenshot. VLAN A connects all the virtual machines on different vSphere hosts; VLAN B is responsible for carrying out management network traffic; and VLAN C is responsible for carrying out vMotion-related traffic. In order to create PVLANs on your vSphere host, you also need the support of a physical switch: For detailed information about the vSphere network, refer to the VMware official networking guide for vSphere 5.5 at http://goo.gl/SYySFL. Verifying physical trunks and VLAN configuration The first and most important step to troubleshooting your VLAN problem is to look into the VLAN configuration of your vSphere host. You should always start by verifying it. Let's walk through how to verify the network configuration of the management network and VLAN configuration from the vSphere client: Open and log in to your vSphere client. Click on the vSphere host you are trying to troubleshoot. Click on the Configuration menu and choose Networking and then Properties of the switch you are troubleshooting. Choose the network you are troubleshooting from the list, and click on Edit. This will open a new window. Verify the VLAN ID for Management Network. Match the ID of the VLAN provided by your network administrator. Verifying VLAN configuration from CLI Following are the steps for verifying VLAN configuration from CLI: Log in to vSphere CLI. 
Type the following command in the console:
esxcfg-vswitch -l
Alternatively, in the vMA appliance, type the vicfg-vswitch command; the output is similar for both commands:
vicfg-vswitch -l
The output of the esxcfg-vswitch -l command is as follows:
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         7           128               1500    vmnic3,vmnic2
PortGroup Name        VLAN ID  Used Ports  Uplinks
vMotion               2231     1           vmnic3,vmnic2
Management Network    2230     1           vmnic3,vmnic2
---Omitted output---
The output of the vicfg-vswitch -l command is as follows:
Switch Name     Num Ports       Used Ports      Configured Ports    MTU     Uplinks
vSwitch0        128             7               128                 1500    vmnic2,vmnic3
PortGroup Name                VLAN ID   Used Ports      Uplinks
vMotion                       2231      1               vmnic2,vmnic3
Management Network            2230      1               vmnic3,vmnic2
---Omitted output---
Match it with your network configuration. If the VLAN ID is incorrect or missing, you can add or edit it using the following command from the vSphere CLI:
esxcfg-vswitch -v 2233 -p "Management Network" vSwitch0
To add or edit the VLAN ID from the vMA appliance, use the following command:
vicfg-vswitch --vlan 2233 --pg "Management Network" vSwitch0
Verifying VLANs from PowerCLI
Verifying information about VLANs from PowerCLI is fairly simple. Type the following command into the console after connecting to vCenter using Connect-VIServer:
Get-VirtualPortGroup -VMHost crimv3esx001.linxsol.com | select Name, VirtualSwitch, VlanId
Name                    VirtualSwitch    VlanId
----                    -------------    ------
vMotion                 vSwitch0         2231
Management Network      vSwitch0         2233
Verifying PVLANs and secondary PVLANs
When you have configured PVLANs or secondary PVLANs in your vSphere infrastructure, you may arrive at a situation where you need to troubleshoot them. This topic will provide you with some tips to obtain and view information about PVLANs and secondary PVLANs, as follows: Log in to the vSphere client and click on Networking. Select a distributed switch and right-click on it. From the menu, choose Edit Settings and click on it. This will open the Distributed Switch Settings window. Click on the third tab, named Private VLAN. In the section on the left, named Primary private VLAN ID, verify the VLAN ID provided by your network engineer. You can verify the VLAN ID of the secondary PVLAN in the next section on the right.
Testing virtual machine connectivity
Whenever you are troubleshooting, virtual-machine-to-virtual-machine testing is very important. It helps you to isolate the problem domain to a smaller scope. When performing virtual-machine-to-virtual-machine testing, you should always move the virtual machines to a single vSphere host. You can then start troubleshooting the network using basic commands, such as ping. If ping works, you are ready to test it further and move the virtual machines to other hosts; if it still doesn't work, it is most likely a physical switch configuration problem or a mismatched physical trunk configuration. The most common problem in this scenario is a problematic physical switch configuration.
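For the basic ping test itself, running it from inside the guest operating systems is usually enough. The following is a minimal sketch; the IP address and hostname are placeholders for your own virtual machines, and the -c option assumes a Linux guest (on Windows, use ping -n instead):

# From the first virtual machine, ping the second one by IP address
ping -c 4 192.168.10.22
# If DNS is also in question, repeat the test using the hostname
ping -c 4 vm02.linxsol.com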
Troubleshooting VMkernel interfaces
In this section, we will see how to troubleshoot VMkernel interfaces:
Confirm VLAN tagging
Ping to check connectivity
vicfg-vmknic
esxcli network ip interface for local configuration
esxcli network ip interface list
Add or remove
Set
esxcli network ip interface ipv4 get
You should know how to use these commands to test whether everything is working. You should be able to ping to ensure that connectivity exists. We will use the vicfg-vmknic command to configure vSphere VMkernel NICs. Let's create a new VMkernel NIC in a vSphere host using the following steps: Log in to your VMware vSphere CLI. Type the following command to create a new VMkernel NIC:
vicfg-vmknic -h crimv3esx001.linxsol.com --add --ip 10.2.0.10 -n 255.255.255.0 'portgroup01'
You can enable vMotion using the vicfg-vmknic command as follows: vicfg-vmknic --enable-vmotion. You will not be able to enable vMotion from ESXCLI. vMotion provides migration of your virtual machines with zero downtime. You can delete an existing VMkernel NIC as follows:
vicfg-vmknic -h crimv3esx001.linxsol.com --delete 'portgroup01'
Now check which VMkernel NICs are available in the system by typing the following command:
vicfg-vmknic -l
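If you prefer to stay in ESXCLI, the equivalent information can be pulled with the esxcli network ip interface namespace mentioned in the list above, and vmkping can be used to test connectivity from a specific VMkernel interface. This is a minimal sketch; vmk1 and the target IP address are placeholders for your own vMotion interface and a host on the same network:

# List all VMkernel NICs on the host
esxcli network ip interface list
# Show the IPv4 address, netmask, and address type of each VMkernel NIC
esxcli network ip interface ipv4 get
# Send a ping sourced from a specific VMkernel interface
vmkping -I vmk1 10.2.0.1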
Verifying configuration from DCUI
When you successfully install vSphere, the first yellow screen that you see is called the vSphere DCUI. The DCUI is a frontend management system that helps perform some basic system administration tasks. It also offers the best way to troubleshoot some problems that may be difficult to troubleshoot through vMA, vCLI, or PowerCLI. Further, it is very useful when your host becomes unresponsive in vCenter or is not accessible from any of the management tools. Some useful tasks that can be performed using the DCUI are as follows:
Configuring the Lockdown mode
Checking connectivity of the management network using ping
Configuring and restarting network settings
Restarting management agents
Viewing logs
Resetting the vSphere configuration
Changing the root password
Verifying network connectivity from DCUI
The vSphere host automatically assigns the first network card available to the system for the management network. Moreover, the default installation of the vSphere host does not let you set up VLAN tags until the VMkernel has been loaded. Verifying network connectivity from the DCUI is important but easy. To do so, follow these steps: Press F2 and enter your root user name and password. Click OK. Use the cursor keys to go down to the Test Management Network option. Click Enter, and you will see a new screen. Here you can enter up to three IP addresses and the host name to be resolved. You can also type your gateway address on this screen to see whether you are able to reach your gateway. In the host name field, you can enter your DNS server name to test whether the name resolves successfully. Press Esc to get back and Esc again to log off from the vSphere DCUI.
Verifying management network from DCUI
You can also verify the settings of your management network from the DCUI. Press F2 and enter your root user name and password. Click OK. Use the cursor keys to go down to the Configure Management Network option and click Enter. Click Enter again after selecting the first option, Network Adapters. On the next screen, you will see a list of all the network adapters your system has. It will show you the Device Name, Hardware Type, Label, MAC address of the network card, and the status as Connected or Disconnected. From the given network cards, you can select or deselect any of the network cards by pressing the spacebar on your keyboard. Press Esc to get back and Esc again to log off from the vSphere DCUI. As you can see in the preceding screenshot, you can also configure the IP address and DNS settings for your vSphere host. You can also use the DCUI to configure VLANs and the DNS suffix for your vSphere host.
Summary
In this article, we took a deep dive into the troubleshooting commands and some of the monitoring tools used to monitor network performance. Having several platforms on which to execute the different commands helps you to tailor your troubleshooting approach. For example, for troubleshooting a single vSphere host you may like to use esxcli, but for a larger number of vSphere hosts you may want to automate scripting tasks from PowerCLI or from a vMA appliance.
Resources for Article:
Further resources on this subject:
UPGRADING VMWARE VIRTUAL INFRASTRUCTURE SETUPS [article]
VMWARE VREALIZE OPERATIONS PERFORMANCE AND CAPACITY MANAGEMENT [article]
WORKING WITH VIRTUAL MACHINES [article]

An Overview of Horizon View Architecture and Components

Packt
06 Sep 2016
18 min read
In this article by Peter von Oven, author of the book Mastering VMware Horizon 7 - Second Edition, we will introduce you to the architecture and infrastructure components that make up the core VMware Horizon solution, concentrating on the virtual desktop elements of Horizon with the Horizon Standard edition, plus the Instant Clone technology that is available in the Horizon Enterprise edition. We are going to concentrate on the core Horizon View functionality of brokering virtual desktop machines that are hosted on a VMware vSphere platform. Throughout the sections of this article we will discuss the role of each of the Horizon View components, explaining how they fit into the overall infrastructure, their role, and the benefits they bring. Once we have explained the high-level concept, we will then take a deeper dive into how that particular component works. As we work through the sections we will also highlight some of the best practices as well as useful hints and tips along the way. We will also cover some of the third-party technologies that integrate with and complement Horizon View, such as antivirus solutions, storage acceleration technologies, and high-end graphics solutions that help deliver a complete end-to-end solution. After reading this article, you will be able to describe each of the components, what part they play within the solution, and why you would use them. (For more resources related to this topic, see here.)
Introducing the key Horizon components
To start with, we are going to introduce, at a high level, the core infrastructure components and the architecture that make up the Horizon View product. We will start with the high-level architecture, as shown in the following diagram, before going on to drill down into each part in greater detail. All of the VMware Horizon components described are included as part of the licensed product, and the features that are available to you depend on whether you have the Standard Edition, the Advanced Edition, or the Enterprise Edition. It's also worth remembering that Horizon licensing also includes ESXi and vCenter licensing to support the ability to deploy the core hosting infrastructure. You can deploy as many ESXi hosts and vCenter Servers as you require to host the desktop infrastructure.
High-level architectural overview
In this section, we will cover the core Horizon View features and functionality for brokering virtual desktop machines that are hosted on the VMware vSphere platform. The Horizon View architecture is pretty straightforward to understand, as its foundations lie in the standard VMware vSphere products (ESXi and vCenter). So, if you have the necessary skills and experience of working with this platform, then you are already nearly halfway there. Horizon View builds on the vSphere infrastructure, taking advantage of some of the features of the ESXi hypervisor and vCenter Server. Horizon View requires adding a number of virtual machines to perform the various View roles and functions. An overview of the View architecture for delivering virtual desktops is shown in the following diagram: View components run as applications that are installed on the Microsoft Windows Server operating system, with the exception of the Access Point, which is a hardened Linux appliance, so they could actually run on physical hardware as well. However, there are a great number of benefits available when you run them as virtual machines, such as delivering HA and DR, as well as the typical cost savings that can be achieved through virtualization. 
The following sections will cover each of these roles/components of the View architecture in greater detail, starting with the Horizon View Connection Server.
The Horizon View Connection Server
The Horizon View Connection Server, sometimes referred to as the Connection Broker or View Manager, is the central component of the View infrastructure. Its primary role is to connect a user to their virtual desktop by means of performing user authentication and then delivering the appropriate desktop resources based on the user's profile and user entitlement. When logging on to your virtual desktop, it is the Connection Server that you are communicating with.
How does the Connection Server work?
A user will typically connect to their virtual desktop machine from their endpoint device by launching the View Client, but equally they could use browser-based access. So how does the login process work? Once the View Client has launched (shown as 1 in the diagram on the following page), the user enters the address details of the View Connection Server, which in turn responds (2) by asking them to provide their network login details (their Active Directory (AD) domain username and password). It's worth noting that Horizon View now supports the following different AD domain functional levels:
Windows Server 2003
Windows Server 2008 and 2008 R2
Windows Server 2012 and 2012 R2
Based on the user's entitlements, these credentials are authenticated with AD (3) and, if successful, the user is able to continue the logon process. Depending on what they are entitled to, the user could see a launch screen that displays a number of different virtual desktop machine icons that are available for them to log in to. These desktop icons represent the desktop pools that the user has been entitled to use. A pool is basically a collection of like virtual desktop machines; for example, it could be a pool for the marketing department where the virtual desktop machines contain specific applications/software for that department. Once authenticated, the View Manager or Connection Server makes a call to the vCenter Server (4) to create a virtual desktop machine, and then vCenter makes a call (5) to either View Composer (if you are using linked clones) or will create an Instant Clone using the VM Fork feature of vSphere to start the build process of the virtual desktop if there is not one already available for the user to log in to. When the build process has completed, and the virtual desktop machine is available to the end user, it is displayed/delivered within the View Client window (6) using the chosen display protocol (PCoIP, Blast, or RDP). This process is described pictorially in the following diagram:
There are other ways to deploy VDI solutions that do not require a connection broker, although you could argue that, strictly speaking, this is not a true VDI solution. This is actually what the first VDI solutions looked like, and they just allowed a user to connect directly to their own virtual desktop via RDP. If you think about it, there are actually some specific use cases for doing just this. For example, if you have a large number of remote branches or offices, you could deploy local infrastructure allowing users to continue working in the event of a WAN outage or poor network communication between the branch and head office. The infrastructure required would be a subset of what you deploy centrally in order to keep costs minimal. It just so happens that VMware has also thought of this use case and has a solution that's referred to as Brokerless View, which uses the VMware Horizon View Agent Direct-Connection Plugin. 
It just so happens that VMware have also thought of this use case and have a solution that’s referred to as a Brokerless View, which uses the VMware Horizon View Agent Direct-Connection Plugin. However, don't forget that, in a Horizon View environment, the View Connection Server provides greater functionality and does much more than just connecting users to desktops, as we will see later in this article. As we previously touched on, the Horizon View Connection Server runs as an application on a Windows Server which could be either be a physical or a virtual machine. Running as a virtual machine has many advantages; for example, it means that you can easily add high-availability features, which are critical in this environment, as you could potentially have hundreds or maybe even thousands of virtual desktop machines running on a single host server. Along with brokering the connections between the users and virtual desktop machines, the Connection Server also works with vCenter Server to manage the virtual desktop machines. For example, when using Linked Clones or Instant Clones and powering on virtual desktops, these tasks are initiated by the Connection Server, but they are executed at the vCenter Server level. Now that we have covered what the Connection Server is and how it works, in the next section we are going to look at the requirements you need for it to run. Minimum requirements for the Connection Server To install the View Connection Server, you need to meet the following minimum requirements to run on physical or virtual machines: Hardware requirements: The following table shows the hardware required: Supported operating systems: The View Connection Server must be installed on one of the following operating systems listed in the table below: In the next section we are going to look at the Horizon View Security Server. The Horizon View Security Server The Horizon View Security Server is another component in the architecture and is essentially another version of the View Connection Server but, this time, it sits within your DMZ so that you can allow end users to securely connect to their virtual desktop machine from an external network or the Internet. How does the Security Server work? To start with, the user login process at the beginning is the same as when connecting to a View Connection Server, essentially because the Security Server is just another version of the Connection Server running a subset of the features. The difference being is that you connect to the address of the Security Server.The Security Server sits inside your DMZ and communicates with a Connection Server sitting on the internal network that it ispaired with. So now we have added an extra security layer as the internal Connection Server is not exposed externally, with theidea being that users can now access their virtual desktop machines externally withoutneeding to first connect to a VPN on the network first.The Security Server should not be joined to the Domain. This process is described pictorially in the following diagram: We mentioned previously that the Security Server is paired with a Connection Servers. The pairing is configured by the use of a one-time password during installation. It's a bit like pairing your smart phone with the hands-free kit in your car using Bluetooth. When the user logs in from the View Client, they now use the external URL of the Security Server to access the Connection Server, which in turn authenticates the user against AD. 
If the Connection Server is configured as a PCoIP gateway, then it will pass the connection and addressing information to the View Client. This connection information will allow the View Client to connect to the Security Server using PCoIP. This is shown in the diagram by the green arrow (1). The Security Server will then forward the PCoIP connection to the virtual desktop machine (2), creating the connection for the user. The virtual desktop machine is displayed/delivered within the View Client window (3) using the chosen display protocol (PCoIP, Blast, or RDP).
The Horizon View Replica Server
The Horizon View Replica Server, as the name suggests, is a replica or copy of a View Connection Server and serves two key purposes. The first is that it is used to enable high availability in your Horizon View environment. Having a replica of your View Connection Server means that, if the Connection Server fails, users are still able to connect to their virtual desktop machines. Secondly, adding Replica Servers allows you to scale up the number of users and virtual desktop connections. An individual instance of a Connection Server can support 2,000 connections, so adding additional Connection Servers allows you to add another 2,000 users at a time, up to the maximum of five Connection Servers and 10,000 users per Horizon View Pod. When deploying a Replica Server, you will need to change the IP address or update the DNS record to match this server if you are not using a load balancer.
How does the Replica Server work?
So, the first question is, what actually gets replicated? The Connection Broker stores all its information relating to the end users, desktop pools, virtual desktop machines, and other View-related objects in an Active Directory Application Mode (ADAM) database. Then, using the Lightweight Directory Access Protocol (LDAP) (a method similar to the one AD uses for replication), this View information gets copied from the original Connection Server to the Replica Server. As both the Connection Server and the Replica Server are now identical to each other, if your Connection Server fails, then you essentially have a backup that steps in and takes over so that end users can still continue to connect to their virtual desktop machines. Just like with the other components, you cannot install the Replica Server role on the same machine that is running as a Connection Server or any of the other Horizon View components.
The Horizon View Enrollment Server and True SSO
The Horizon View Enrollment Server is the final component that is part of the Horizon View Connection Server installation options, and is selected from the drop-down menu on the installation options screen. So what does the Enrollment Server do? Horizon 7 sees the introduction of a new feature called True SSO. True SSO is a solution that allows a user to authenticate to a Microsoft Windows environment without them having to enter their AD credentials. It integrates with another VMware product, VMware Identity Manager, which forms part of both the Horizon 7 Advanced and Enterprise Editions. Its job is to sit between the Connection Server and the Microsoft Certificate Authority and to request temporary certificates from the certificate store. This process is described pictorially in the following diagram:
A user first logs in to VMware Identity Manager either using their credentials or other authentication methods such as smartcards or biometric devices. 
Once successfully authenticated, the user will be presented with the virtual desktop machines or hosted applications that they are entitled to use. They can launch any of these by simply double-clicking, which will launch the Horizon View Client as shown by the red arrow (1) in the previous diagram. The user's credentials will then be passed to the Connection Server (2), which in turn will verify them by sending a Security Assertion Markup Language (SAML) assertion back to the Identity Manager (3). If the user's credentials are verified, then the Connection Server passes them on to the Enrollment Server (4). The Enrollment Server then makes a request to the Microsoft Certificate Authority (CA) to generate a short-lived, temporary certificate for that user to use (5). With the certificate now generated, the Connection Server presents it to the operating system of the virtual desktop machine (6), which in turn validates with Active Directory whether or not the certificate is authentic (7). When the certificate has been authenticated, the user is logged on to their virtual desktop machine, which will be displayed/delivered to the View Client using the chosen display protocol (8). True SSO is supported with all Horizon 7 supported desktop operating systems, as well as Windows Server 2008 R2 and Windows Server 2012 R2. It also supports the PCoIP, HTML, and Blast Extreme delivery protocols.
VMware Access Point
VMware Access Point performs exactly the same function as the View Security Server, as shown in the following diagram, but with one key difference. Instead of being a Windows application and another role of the Connection Server, the Access Point is a separate virtual appliance that runs a hardened, locked-down Linux operating system. Although the Access Point appliance delivers pretty much the same functionality as the Security Server, it does not yet completely replace it. Especially if you already have a production deployment that uses the Security Server for external access, you can continue to use this architecture. If you are using the secure tunnel function, PCoIP Secure Gateway, or the Blast Secure Gateway features of the Connection Server, then these features will need to be disabled on the Connection Server if you are using the Access Point. They are all enabled by default on the Access Point appliance. A key difference between the Access Point appliance and the Security Server is in the way it scales. Before, you had to pair a Security Server with a Connection Server, which was a limitation, but this is now no longer the case. As such, you can now scale to as many Access Point appliances as you need for your environment, with the maximum limit being around 2,000 sessions for a single appliance. Adding additional appliances is simply a case of deploying them, as appliances don't depend on other appliances and do not communicate with them. They communicate directly with the Connection Servers.
Persistent or non-persistent desktops
In this section, we are going to talk about the different types of desktop assignments and the way a virtual desktop machine is delivered to an end user. This is an important design consideration, as the chosen method could potentially impact the storage requirements (covered in the next section), the hosting infrastructure, and also which technology or solution is used to provision the desktop to the end users. 
One of the questions that always gets asked is whether you should deploy a dedicated (persistent) assignment or a floating desktop assignment (non-persistent). Desktops can either be individual virtual machines, which are dedicated to a user on a 1:1 basis (as we have in a physical desktop deployment, where each user effectively owns their own desktop), or a user has a new, vanilla desktop that gets provisioned, built, personalized, and then assigned at the time of login. The virtual desktop machine is chosen at random from a pool of available desktops that the end user is entitled to use. The two options are described in more detail as follows:

Persistent desktop: Users are allocated a desktop that retains all of their documents, applications, and settings between sessions. The desktop is statically assigned the first time that the user connects and is then used for all subsequent sessions. No other user is permitted access to the desktop.

Non-persistent desktop: Users might be connected to different desktops from the pool each time that they connect. Environmental or user data does not persist between sessions and instead is delivered as the user logs on to their desktop. The desktop is refreshed or reset when the user logs off.

In most use cases, a non-persistent configuration is the best option. The key reason is that, in this model, you don't need to build all the desktops upfront for each user. You only need to power on a virtual desktop as and when it's required. All users start with the same basic desktop, which then gets personalized before delivery. This helps with concurrency rates. For example, you might have 5,000 people in your organization, but only 2,000 ever log in at the same time; therefore, you only need to have 2,000 virtual desktops available. Otherwise, you would have to build a desktop for each one of the 5,000 users that might ever log in, resulting in more server infrastructure and certainly a lot more storage capacity. We will talk about storage in the next section. The one thing that used to be a bit of a show-stopper for non-persistent desktops was how to deliver the applications to the virtual desktop machine. Now that application layering solutions such as VMware App Volumes are becoming more mainstream, applications can be delivered on demand as the desktop is built and the user logs in. Another thing that we often see some confusion over is the difference between dedicated and floating desktops, and how linked clones fit in. Just to make it clear, linked clones, full clones, and Instant Clones are not what we are talking about when we refer to dedicated and floating desktops. Cloning operations refer to how a desktop is built and provisioned, whereas the terms persistent and non-persistent refer to how a desktop is assigned to an end user. Dedicated and floating desktops are purely about user assignment and whether a user has a dedicated desktop or one allocated from a pool on-demand. Linked clones and full clones are features of Horizon View, which uses View Composer to create the desktop image for each user from a master or parent image. This means that, regardless of having a floating or dedicated desktop assignment, the virtual desktop machine could still be a linked or full clone. So, here's a summary of the benefits:

It is operationally efficient: All users start from a single or smaller number of desktop images. Organizations reduce the amount of image and patch management. 
It is efficient storage-wise: The amount of storage required to host the non-persistent desktop images will be smaller than keeping separate instances of unique user desktop images.

In the next sections, we are going to cover an in-depth overview of the cloning technologies available in Horizon 7, starting with Horizon View Composer and linked clones, and the advantages the technology delivers.

Summary

In this article, we discussed the Horizon View architecture and the different components that make up the complete solution. We covered the key technologies, such as how linked clones and Instant Clones work to optimize storage, and then introduced some of the features that go toward delivering a great end user experience, such as delivering high-end graphics, unified communications, profile management, and how the protocols deliver the desktop to the end user.

Resources for Article: Further resources on this subject: An Introduction to VMware Horizon Mirage [article] Upgrading VMware Virtual Infrastructure Setups [article] Backups in the VMware View Infrastructure [article]
Metrics in vRealize Operations

Packt
26 Dec 2014
25 min read
 In this article by Iwan 'e1' Rahabok, author of VMware vRealize Operations Performance and Capacity Management, we will learn that vSphere 5.5 comes with many counters, many more than what a physical server provides. There are new counters that do not have a physical equivalent, such as memory ballooning, CPU latency, and vSphere replication. In addition, some counters have the same name as their physical world counterpart but behave differently in vSphere. Memory usage is a common one, resulting in confusion among system administrators. For those counters that are similar to their physical world counterparts, vSphere may use different units, such as milliseconds. (For more resources related to this topic, see here.) As a result, experienced IT administrators find it hard to master vSphere counters by building on their existing knowledge. Instead of trying to relate each counter to its physical equivalent, I find it useful to group them according to their purpose. Virtualization formalizes the relationship between the infrastructure team and application team. The infrastructure team changes from the system builder to service provider. The application team no longer owns the physical infrastructure. The application team becomes a consumer of a shared service—the virtual platform. Depending on the Service Level Agreement (SLA), the application team can be served as if they have dedicated access to the infrastructure, or they can take a performance hit in exchange for a lower price. For SLAs where performance matters, the VM running in the cluster should not be impacted by any other VMs. The performance must be as good as if it is the only VM running in the ESXi. Because there are two different counter users, there are two different purposes. The application team (developers and the VM owner) only cares about their own VM. The infrastructure team has to care about both the VM and infrastructure, especially when they need to show that the shared infrastructure is not a bottleneck. One set of counters is to monitor the VM; the other set is to monitor the infrastructure. The following diagram shows the two different purposes and what we should check for each. By knowing what matters on each layer, we can better manage the virtual environment. The two-tier IT organization At the VM layer, we care whether the VM is being served well by the platform. Other VMs are irrelevant from the VM owner's point of view. A VM owner only wants to make sure his or her VM is not contending for a resource. So the key counter here is contention. Only when we are satisfied that there is no contention can we proceed to check whether the VM is sized correctly or not. Most people check for utilization first because that is what they are used to monitoring in the physical infrastructure. In a virtual environment, we should check for contention first. At the infrastructure layer, we care whether it serves everyone well. Make sure that there is no contention for resource among all the VMs in the platform. Only when the infrastructure is clear from contention can we troubleshoot a particular VM. If the infrastructure is having a hard time serving majority of the VMs, there is no point troubleshooting a particular VM. This two-layer concept is also implemented by vSphere in compute and storage architectures. For example, there are two distinct layers of memory in vSphere. There is the individual VM memory provided by the hypervisor and there is the physical memory at the host level. 
For an individual VM, we care whether the VM is getting enough memory. At the host level, we care whether the host has enough memory for everyone. Because of the difference in goals, we look for a different set of counters. In the previous diagram, there are two numbers shown in a large font, indicating that there are two main steps in monitoring. Each step applies to each layer (the VM layer and infrastructure layer), so there are two numbers for each step. Step 1 is used for performance management. It is useful during troubleshooting or when checking whether we are meeting performance SLAs or not. Step 2 is used for capacity management. It is useful as part of long-term capacity planning. The time period for step 2 is typically 3 months, as we are checking for overall utilization and not a one-off spike. With the preceding concept in mind, we are ready to dive into more detail. Let's cover compute, network, and storage.

Compute

The following diagram shows how a VM gets its resources from ESXi. It is a pretty complex diagram, so let me walk you through it. The tall rectangular area represents a VM. Say this VM is given 8 GB of virtual RAM. The bottom line represents 0 GB and the top line represents 8 GB. The VM is configured with 8 GB RAM. We call this Provisioned. This is what the Guest OS sees, so if it is running Windows, you will see 8 GB RAM when you log into Windows. Unlike a physical server, you can configure a Limit and a Reservation. This is done outside the Guest OS, so Windows or Linux does not know. You should minimize the use of Limit and Reservation as it makes the operation more complex. Entitlement means what the VM is entitled to. In this example, the hypervisor entitles the VM to a certain amount of memory. I did not show a solid line and used an italic font style to mark that Entitlement is not a fixed value, but a dynamic value determined by the hypervisor. It varies every minute, determined by the Limit, Shares, and Reservation of the VM itself and any shared allocation with other VMs running on the same host. Obviously, a VM can only use what it is entitled to at any given point of time, so the Usage counter does not go higher than the Entitlement counter. The green line shows that Usage ranges from 0 to the Entitlement value. In a healthy environment, the ESXi host has enough resources to meet the demands of all the VMs on it with sufficient overhead. In this case, you will see that the Entitlement, Usage, and Demand counters will be similar to one another when the VM is highly utilized. This is shown by the green line where Demand stops at Usage, and Usage stops at Entitlement. The numerical value may not be identical because vCenter reports Usage in percentage, and it is an average value of the sample period. vCenter reports Entitlement in MHz and it takes the latest value in the sample period. It reports Demand in MHz and it is an average value of the sample period. This also explains why you may see Usage a bit higher than Entitlement in a highly utilized vCPU. If the VM has low utilization, you will see that the Entitlement counter is much higher than Usage. An environment in which the ESXi host is resource constrained is unhealthy. It cannot give every VM the resources they ask for. The VMs demand more than they are entitled to use, so the Usage and Entitlement counters will be lower than the Demand counter. The Demand counter can naturally go higher than Limit. For example, if a VM is limited to 2 GB of RAM and it wants to use 14 GB, then Demand will exceed Limit. 
Obviously, Demand cannot exceed Provisioned. This is why the red line stops at Provisioned because that is as high as it can go. The difference between what the VM demands and what it gets to use is the Contention counter. Contention is Demand minus Usage. So if the Contention is 0, the VM can use everything it demands. This is the ultimate goal, as performance will match the physical world. This Contention value is useful to demonstrate that the infrastructure provides a good service to the application team. If a VM owner comes to see you and says that your shared infrastructure is unable to serve his or her VM well, both of you can check the Contention counter. The Contention counter should become a part of your SLA or Key Performance Indicator (KPI). It is not sufficient to track utilization alone. When there is contention, it is possible that both your VM and ESXi host have low utilization, and yet your customers (VMs running on that host) perform poorly. This typically happens when the VMs are relatively large compared to the ESXi host. Let me give you a simple example to illustrate this. The ESXi host has two sockets and 20 cores. Hyper-threading is not enabled to keep this example simple. You run just 2 VMs, but each VM has 11 vCPUs. As a result, they will not be able to run concurrently. The hypervisor will schedule them sequentially as there are only 20 physical cores to serve 22 vCPUs. Here, both VMs will experience high contention. Hold on! You might say, "There is no Contention counter in vSphere and no memory Demand counter either." This is where vRealize Operations comes in. It does not just regurgitate the values in vCenter. It has implicit knowledge of vSphere and a set of derived counters with formulae that leverage that knowledge. You need to have an understanding of how the vSphere CPU scheduler works. The following diagram shows the various states that a VM can be in: The preceding diagram is taken from The CPU Scheduler in VMware vSphere® 5.1: Performance Study (you can find it at http://www.vmware.com/resources/techresources/10345). This is a whitepaper that documents the CPU scheduler with a good amount of depth for VMware administrators. I highly recommend you read this paper as it will help you explain to your customers (the application team) how your shared infrastructure juggles all those VMs at the same time. It will also help you pick the right counters when you create your custom dashboards in vRealize Operations. Storage If you look at the ESXi and VM metric groups for storage in the vCenter performance chart, it is not clear how they relate to one another at first glance. You have storage network, storage adapter, storage path, datastore, and disk metric groups that you need to check. How do they impact on one another? I have created the following diagram to explain the relationship. The beige boxes are what you are likely to be familiar with. You have your ESXi host, and it can have NFS Datastore, VMFS Datastore, or RDM objects. The blue colored boxes represent the metric groups. From ESXi to disk NFS and VMFS datastores differ drastically in terms of counters, as NFS is file-based while VMFS is block-based. For NFS, it uses the vmnic, and so the adapter type (FC, FCoE, or iSCSI) is not applicable. Multipathing is handled by the network, so you don't see it in the storage layer. For VMFS or RDM, you have more detailed visibility of the storage. To start off, each ESXi adapter is visible and you can check the counters for each of them. 
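To explore the adapter, device, and path relationship on one of your own hosts, a rough PowerCLI sketch such as the following can help. It assumes you are already connected to vCenter with Connect-VIServer; the host name used here is a placeholder and not taken from this article.

$vmhost = Get-VMHost -Name "esxi01.lab.local"
# List the FC storage adapters (vmhba) on the host
$hbas = Get-VMHostHba -VMHost $vmhost -Type FibreChannel
foreach ($hba in $hbas) {
    # Each adapter can see multiple devices (LUNs)
    foreach ($lun in (Get-ScsiLun -Hba $hba -LunType disk)) {
        # Each device is typically reachable over multiple paths
        $paths = @(Get-ScsiLunPath -ScsiLun $lun)
        $active = @($paths | Where-Object { $_.State -eq "Active" }).Count
        "$($hba.Device) -> $($lun.CanonicalName): $($paths.Count) paths, $active active"
    }
}

Counting the paths this way also makes it easier to spot a LUN that has lost one of its expected paths.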
In terms of relationship, one adapter can have many devices (disk or CDROM). One device is typically accessed via two storage adapters (for availability and load balancing), and it is also accessed via two paths per adapter, with the paths diverging at the storage switch. A single path, which will come from a specific adapter, can naturally connect one adapter to one device. The following diagram shows the four paths: Paths from ESXi to storage A storage path takes data from ESXi to the LUN (the term used by vSphere is Disk), not to the datastore. So if the datastore has multiple extents, there are four paths per extent. This is one reason why I did not use more than one extent, as each extent adds four paths. If you are not familiar with extent, Cormac Hogan explains it well on this blog post: http://blogs.vmware.com/vsphere/2012/02/vmfs-extents-are-they-bad-or-simply-misunderstood.html For VMFS, you can see the same counters at both the Datastore level and the Disk level. Their value will be identical if you follow the recommended configuration to create a 1:1 relationship between a datastore and a LUN. This means you present an entire LUN to a datastore (use all of its capacity). The following screenshot shows how we manage the ESXi storage. Click on the ESXi you need to manage, select the Manage tab, and then the Storage subtab. In this subtab, we can see the adapters, devices, and the host cache. The screen shows an ESXi host with the list of its adapters. I have selected vmhba2, which is an FC HBA. Notice that it is connected to 5 devices. Each device has 4 paths, so I have 20 paths in total. ESXi adapter Let's move on to the Storage Devices tab. The following screenshot shows the list of devices. Because NFS is not a disk, it does not appear here. I have selected one of the devices to show its properties. ESXi device If you click on the Paths tab, you will be presented with the information shown in the next screenshot, including whether a path is active. Note that not all paths carry I/O; it depends on your configuration and multipathing software. Because each LUN typically has four paths, path management can be complicated if you have many LUNs. ESXi paths The story is quite different on the VM layer. A VM does not see the underlying shared storage. It sees local disks only. So regardless of whether the underlying storage is NFS, VMFS, or RDM, it sees all of them as virtual disks. You lose visibility in the physical adapter (for example, you cannot tell how many IOPSs on vmhba2 are coming from a particular VM) and physical paths (for example, how many disk commands travelling on that path are coming from a particular VM). You can, however, see the impact at the Datastore level and the physical Disk level. The Datastore counter is especially useful. For example, if you notice that your IOPS is higher at the Datastore level than at the virtual Disk level, this means you have a snapshot. The snapshot IO is not visible at the virtual Disk level as the snapshot is stored on a different virtual disk. From VM to disk Counters in vCenter and vRealize Operations We compared the metric groups between vCenter and vRealize Operations. We know that vRealize Operations provides a lot more detail, especially for larger objects such as vCenter, data center, and cluster. It also provides information about the distributed switch, which is not displayed in vCenter at all. This makes it useful for the big-picture analysis. We will now look at individual counters. 
To give us a two-dimensional analysis, I would not approach it from the vSphere objects' point of view. Instead, we will examine the four key types of metrics (CPU, RAM, network, and storage). For each type, I will provide my personal take on what I think is a good guidance for their value. For example, I will give guidance on a good value for CPU contention based on what I have seen in the field. This is not an official VMware recommendation. I will state the official recommendation or popular recommendation if I am aware of it. You should spend time understanding vCenter counters and esxtop counters. This section of the article is not meant to replace the manual. I would encourage you to read the vSphere documentation on this topic, as it gives you the required foundation while working with vRealize Operations. The following are the links to this topic: The link for vSphere 5.5 is http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.monitoring.doc/GUID-12B1493A-5657-4BB3-8935-44B6B8E8B67C.html. If this link does not work, visit https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html and then navigate to ESXi and vCenter Server 5.5 Documentation | vSphere Monitoring and Performance | Monitoring Inventory Objects with Performance Charts. The counters are documented in the vSphere API. You can find it at http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.PerformanceManager.html. If this link has changed and no longer works, open the vSphere online documentation and navigate to vSphere API/SDK Documentation | vSphere Management SDK | vSphere Web Services SDK Documentation | VMware vSphere API Reference | Managed Object Types | P. Here, choose Performance Manager from the list under the letter P. The esxtop manual provides good information on the counters. You can find it at https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html. You should also be familiar with the architecture of ESXi, especially how the scheduler works. vCenter has a different collection interval (sampling period) depending upon the timeline you are looking at. Most of the time you are looking at the real-time statistic (chart), as other timelines do not have enough counters. You will notice right away that most of the counters become unavailable once you choose a timeline. In the real-time chart, each data point has 20 seconds' worth of data. That is as accurate as it gets in vCenter. Because all other performance management tools (including vRealize Operations) get their data from vCenter, they are not getting anything more granular than this. As mentioned previously, esxtop allows you to sample down to a minimum of 2 seconds. Speaking of esxtop, you should be aware that not all counters are exposed in vCenter. For example, if you turn on 3D graphics, there is a separate SVGA thread created for that VM. This can consume CPU and it will not show up in vCenter. The Mouse, Keyboard, Screen (MKS) threads, which give you the console, also do not show up in vCenter. The next screenshot shows how you lose most of your counters if you choose a timespan other than real time. In the case of CPU, you are basically left with two counters, as Usage and Usage in MHz cover the same thing. You also lose the ability to monitor per core, as the target objects only list the host now and not the individual cores. Counters are lost beyond 1 hour Because the real-time timespan only lasts for 1 hour, the performance troubleshooting has to be done at the present moment. 
If the performance issue cannot be recreated, there is no way to troubleshoot in vCenter. This is where vRealize Operations comes in, as it keeps your data for a much longer period. I was able to perform troubleshooting for a client on a problem that occurred more than a month ago! vRealize Operations takes data every 5 minutes. This means it is not suitable to troubleshoot performance that does not last for 5 minutes. In fact, if the performance issue only lasts for 5 minutes, you may not get any alert, because the collection may happen exactly in the middle of those 5 minutes. For example, let's assume the CPU is idle from 08:00:00 to 08:02:30, spikes from 08:02:30 to 08:07:30, and then again is idle from 08:07:30 to 08:10:00. If vRealize Operations is collecting at exactly 08:00, 08:05, and 08:10, you will not see the spike as it is spread over two data points. This means, for vRealize Operations to pick up the spike in its entirety without any idle data, the spike has to last for 10 minutes or more. In some metrics, the unit is actually 20 seconds. vRealize Operations averages a set of 20-second data points into a single 5-minute data point. The Rollups column is important. Average means the average of 5 minutes in the case of vRealize Operations. The summation value is actually an average for those counters where accumulation makes more sense. An example is CPU Ready time. It gets accumulated over the sampling period. Over a period of 20 seconds, a VM may accumulate 200 milliseconds of CPU ready time. This translates into 1 percent, which is why I said it is similar to average, as you lose the peak. Latest, on the other hand, is different. It takes the last value of the sampling period. For example, in the sampling for 20 seconds, it takes the value between 19 and 20 seconds. This value can be lower or higher than the average of the entire 20-second period. So what is missing here is the peak of the sampling period. In the 5-minute period, vRealize Operations does not collect low, average, and high from vCenter. It takes average only. Let's talk about the Units column now. Some common units are milliseconds, MHz, percent, KBps, and KB. Some counters are shown in MHz, which means you need to know your ESXi physical CPU frequency. This can be difficult due to CPU power saving features, which lower the CPU frequency when the demand is low. In large environments, this can be operationally difficult as you have different ESXi hosts from different generations (and hence, are likely to sport a different GHz). This is also the reason why I state that the cluster is the smallest logical building block. If your cluster has ESXi hosts with different frequencies, these MHz-based counters can be difficult to use, as the VMs get vMotion-ed by DRS. vRealize Operations versus vCenter I mentioned earlier that vRealize Operations does not simply regurgitate what vCenter has. Some of the vSphere-specific characteristics are not properly understood by traditional management tools. Partial understanding can lead to misunderstanding. vRealize Operations starts by fully understanding the unique behavior of vSphere, then simplifying it by consolidating and standardizing the counters. For example, vRealize Operations creates derived counters such as Contention and Workload, then applies them to CPU, RAM, disk, and network. Let's take a look at one example of how partial information can be misleading in a troubleshooting scenario. It is common for customers to invest in an ESXi host with plenty of RAM. 
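As a quick illustration of the rollup and averaging behavior described above, the following sketch does the arithmetic by hand. It is plain PowerShell with invented sample values; nothing here is pulled from vCenter or vRealize Operations.

# CPU Ready is accumulated in milliseconds per 20-second real-time sample,
# so converting it to a percentage means dividing by the 20,000 ms window.
$sampleWindowMs = 20000
function Convert-ReadyToPercent ($readyMs) {
    [math]::Round(($readyMs / $sampleWindowMs) * 100, 2)
}
Convert-ReadyToPercent 200    # 200 ms over 20 seconds = 1 percent, as in the example above

# A 5-minute data point is the average of 15 such 20-second samples, which is why
# a short spike gets flattened. These 15 values are made up purely for illustration.
$realtimeSamplesMs = 100, 150, 200, 4000, 3800, 150, 100, 120, 90, 110, 130, 80, 100, 95, 105
$fiveMinuteAvg = ($realtimeSamplesMs | Measure-Object -Average).Average
Convert-ReadyToPercent $fiveMinuteAvg   # the two 20-second spikes of around 20 percent shrink to roughly 3 percent here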
I've seen hosts with 256 to 512 GB of RAM. One reason behind this is the way vCenter displays information. In the following screenshot, vCenter is giving me an alert. The host is running on high memory utilization. I'm not showing the other host, but you can see that it has a warning, as it is high too. The screenshots are all from vCenter 5.0 and vCenter Operations 5.7, but the behavior is still the same in vCenter 5.5 Update 2 and vRealize Operations 6.0. vSphere 5.0 – Memory alarm I'm using vSphere 5.0 and vCenter Operations 5.x to show the screenshots as I want to provide an example of the point I stated earlier, which is the rapid change of vCloud Suite. The first step is to check if someone has modified the alarm by reducing the threshold. The next screenshot shows that utilization above 95 percent will trigger an alert, while utilization above 90 percent will trigger a warning. The threshold has to be breached by at least 5 minutes. The alarm is set to a suitably high configuration, so we will assume the alert is genuinely indicating a high utilization on the host. vSphere 5.0 – Alarm settings Let's verify the memory utilization. I'm checking both the hosts as there are two of them in the cluster. Both are indeed high. The utilization for vmsgesxi006 has gone down in the time taken to review the Alarm Settings tab and move to this view, so both hosts are now in the Warning status. vSphere 5.0 – Hosts tab Now we will look at the vmsgesxi006 specification. From the following screenshot, we can see it has 32 GB of physical RAM, and RAM usage is 30747 MB. It is at 93.8 percent utilization. vSphere – Host's summary page Since all the numbers shown in the preceding screenshot are refreshed within minutes, we need to check with a longer timeline to make sure this is not a one-time spike. So let's check for the last 24 hours. The next screenshot shows that the utilization was indeed consistently high. For the entire 24-hour period, it has consistently been above 92.5 percent, and it hits 95 percent several times. So this ESXi host was indeed in need of more RAM. Deciding whether to add more RAM is complex; there are many factors to be considered. There will be downtime on the host, and you need to do it for every host in the cluster since you need to maintain a consistent build cluster-wide. Because the ESXi is highly utilized, I should increase the RAM significantly so that I can support more VMs or larger VMs. Buying bigger DIMMs may mean throwing away the existing DIMMs, as there are rules restricting the mixing of DIMMs. Mixing DIMMs also increases management complexity. The new DIMM may require a BIOS update, which may trigger a change request. Alternatively, the large DIMM may not be compatible with the existing host, in which case I have to buy a new box. So a RAM upgrade may trigger a host upgrade, which is a larger project. Before jumping in to a procurement cycle to buy more RAM, let's double-check our findings. It is important to ask what is the host used for? and who is using it?. In this example scenario, we examined a lab environment, the VMware ASEAN lab. Let's check out the memory utilization again, this time with the context in mind. The preceding graph shows high memory utilization over a 24-hour period, yet no one was using the lab in the early hours of the morning! I am aware of this as I am the lab administrator. We will now turn to vCenter Operations for an alternative view. The following screenshot from vCenter Operations 5 tells a different story. 
CPU, RAM, disk, and network are all in the healthy range. Specifically for RAM, it has 97 percent utilization but 32 percent demand. Note that the Memory chart is divided into two parts. The upper one is at the ESXi level, while the lower one shows individual VMs in that host. The upper part is in turn split into two. The green rectangle (Demand) sits on top of a grey rectangle (Usage). The green rectangle shows a healthy figure at around 10 GB. The grey rectangle is much longer, almost filling the entire area. The lower part shows the hypervisor and the VMs' memory utilization. Each little green box represents one VM. On the bottom left, note the KEY METRICS section. vCenter Operations 5 shows that Memory | Contention is 0 percent. This means none of the VMs running on the host is contending for memory. They are all being served well! vCenter Operations 5 – Host's details page I shared earlier that the behavior remains the same in vCenter 5.5. So, let's take a look at how memory utilization is shown in vCenter 5.5. The next screenshot shows the counters provided by vCenter 5.5. This is from a different ESXi host, as I want to provide you with a second example. Notice that the ballooning is 0, so there is no memory pressure for this host. This host has 48 GB of RAM. About 26 GB has been mapped to VM or VMkernel, which is shown by the Consumed counter (the highest line in the chart; notice that the value is almost constant). The Usage counter shows 52 percent because it takes from Consumed. The active memory is a lot lower, as you can see from the line at the bottom. Notice that the line is not a simple straight line, as the value goes up and down. This proves that the Usage counter is actually the Consumed counter. vCenter 5.5 Update 1 memory counters At this point, some readers might wonder whether that's a bug in vCenter. No, it is not. There are situations in which you want to use the consumed memory and not the active memory. In fact, some applications may not run properly if you use active memory. Also, technically, it is not a bug as the data it gives is correct. It is just that additional data will give a more complete picture since we are at the ESXi level and not at the VM level. vRealize Operations distinguishes between the active memory and consumed memory and provides both types of data. vCenter uses the Consumed counter for utilization for the ESXi host. As you will see later in this article, vCenter uses the Active counter for utilization for VM. So the Usage counter has a different formula in vCenter depending upon the object. This makes sense as they are at different levels. vRealize Operations uses the Active counter for utilization. Just because a physical DIMM on the motherboard is mapped to a virtual DIMM in the VM, it does not mean it is actively used (read or write). You can use that DIMM for other VMs and you will not incur (for practical purposes) performance degradation. It is common for Microsoft Windows to initialize pages upon boot with zeroes, but never use them subsequently. For further information on this topic, I would recommend reviewing Kit Colbert's presentation on Memory in vSphere at VMworld, 2012. The content is still relevant for vSphere 5.x. The title is Understanding Virtualized Memory Performance Management and the session ID is INF-VSP1729. You can find it at http://www.vmworld.com/docs/DOC-6292. If the link has changed, the link to the full list of VMworld 2012 sessions is http://www.vmworld.com/community/sessions/2012/. 
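If you want to see how far apart these two counters sit on one of your own hosts, a hedged PowerCLI sketch like the one below pulls both and expresses them against the host's physical RAM. It assumes an existing Connect-VIServer session; the host name is a placeholder, and mem.consumed.average and mem.active.average are the standard vSphere statistic keys (reported in KB) for the Consumed and Active counters.

$vmhost = Get-VMHost -Name "esxi01.lab.local"
$hostMemoryMB = $vmhost.MemoryTotalMB
# Last 15 real-time samples, roughly the last 5 minutes
$stats = Get-Stat -Entity $vmhost -Stat "mem.consumed.average", "mem.active.average" -Realtime -MaxSamples 15
$consumedKB = ($stats | Where-Object { $_.MetricId -eq "mem.consumed.average" } | Measure-Object -Property Value -Average).Average
$activeKB   = ($stats | Where-Object { $_.MetricId -eq "mem.active.average" } | Measure-Object -Property Value -Average).Average
"Consumed: {0:N1} percent of host RAM" -f (($consumedKB / 1024) / $hostMemoryMB * 100)
"Active:   {0:N1} percent of host RAM" -f (($activeKB / 1024) / $hostMemoryMB * 100)

A host that looks nearly full on the Consumed line but low on the Active line is exactly the situation described in the lab example above.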
Not all performance management tools understand this vCenter-specific characteristic. They would have given you a recommendation to buy more RAM. Summary In this article, we covered the world of counters in vCenter and vRealize Operations. The counters were analyzed based on their four main groupings (CPU, RAM, disk, and network). We also covered each of the metric groups, which maps to the corresponding objects in vCenter. For the counters, we also shared how they are related, and how they differ. Resources for Article: Further resources on this subject: Backups in the VMware View Infrastructure [Article] VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article] An Introduction to VMware Horizon Mirage [Article]
Importance of Windows RDS in Horizon View

Packt
30 Oct 2014
15 min read
In this article, Jason Ventresco, the author of VMware Horizon View 6 Desktop Virtualization Cookbook, explains Windows Remote Desktop Services (RDS) and how they are implemented in Horizon View. He discusses configuring the Windows RDS server and creating an RDS farm in Horizon View. (For more resources related to this topic, see here.)

Configuring the Windows RDS server for use with Horizon View

This recipe will provide an introduction to the minimum steps required to configure Windows RDS and integrate it with our Horizon View pod. For a more in-depth discussion on Windows RDS optimization and management, consult the Microsoft TechNet page for Windows Server 2012 R2 (http://technet.microsoft.com/en-us/library/hh801901.aspx).

Getting ready

VMware Horizon View supports the following versions of Windows Server for use with RDS:

Windows Server 2008 R2: Standard, Enterprise, or Datacenter, with SP1 or later installed
Windows Server 2012: Standard or Datacenter
Windows Server 2012 R2: Standard or Datacenter

The examples shown in this article were performed on Windows Server 2012 R2. Additionally, all of the applications required have already been installed on the server, which in this case included Microsoft Office 2010. Microsoft Office has specific licensing requirements when used with a Windows Server RDS. Consult Microsoft's Licensing of Microsoft Desktop Application Software for Use with Windows Server Remote Desktop Services document (http://www.microsoft.com/licensing/about-licensing/briefs/remote-desktop-services.aspx) for additional information. The Windows RDS feature requires a licensing server component called the Remote Desktop Licensing role service. For reasons of availability, it is not recommended that you install it on the RDS host itself, but rather on an existing server that serves some other function, or even on a dedicated server if possible. Ideally, the RDS licensing role should be installed on multiple servers for redundancy reasons. The Remote Desktop Licensing role service is different from the Microsoft Windows Key Management System (KMS), as it is used solely for Windows RDS hosts. Consult the Microsoft TechNet article, RD Licensing Configuration on Windows Server 2012 (http://blogs.technet.com/b/askperf/archive/2013/09/20/rd-licensing-configuration-on-windows-server-2012.aspx), for the steps required to install the Remote Desktop Licensing role service. Additionally, consult the Microsoft document Licensing Windows Server 2012 R2 Remote Desktop Services (http://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/WindowsServerRDS_VLBrief.pdf) for information about the licensing options for Windows RDS, which include both per-user and per-device options.

Windows RDS host – hardware recommendations

The following resources represent a starting point for assigning CPU and RAM resources to Windows RDS hosts. The actual resources required will vary based on the applications being used and the number of concurrent users; so, it is important to monitor server utilization and adjust the CPU and RAM specifications if required. 
The following are the requirements:

One vCPU for every 15 concurrent RDS sessions
A base RAM amount equal to 2 GB per vCPU, plus 64 MB of additional RAM for each concurrent RDS session
Additional RAM equal to the application requirements, multiplied by the estimated number of concurrent users of the application
Sufficient hard drive space to store RDS user profiles, which will vary based on the configuration of the Windows RDS host: Windows RDS supports multiple options to control user profiles' configuration and growth, including an RD user home directory, RD roaming user profiles, and mandatory profiles. For information about these and other options, consult the Microsoft TechNet article, Manage User Profiles for Remote Desktop Services, at http://technet.microsoft.com/en-us/library/cc742820.aspx. This space is only required if you intend to store user profiles locally on the RDS hosts. Horizon View Persona Management is not supported and will not work with Windows RDS hosts. Consider native Microsoft features such as those described previously in this recipe, or third-party tools such as AppSense Environment Manager (http://www.appsense.com/products/desktop/desktopnow/environment-manager).

Based on these values, a Windows Server 2012 R2 RDS host running Microsoft Office 2010 that will support 100 concurrent users will require the following resources:

Seven vCPUs to support up to 105 concurrent RDS sessions
45.25 GB of RAM, based on the following calculations: 20.25 GB of base RAM (2 GB for each vCPU, plus 64 MB for each of the 100 users), plus a total of 25 GB of additional RAM to support Microsoft Office 2010 (Office 2010 recommends 256 MB of RAM for each user)

While the vCPU and RAM requirements might seem excessive at first, remember that to deploy a virtual desktop for each of these 100 users, we would need at least 100 vCPUs and 100 GB of RAM, which is much more than what our Windows RDS host requires. By default, Horizon View allows only 150 unique RDS user sessions for each available Windows RDS host; so, we need to deploy multiple RDS hosts if users need to stream two applications at once or if we anticipate having more than 150 connections. It is possible to change the number of supported sessions, but it is not recommended due to potential performance issues.

Importing the Horizon View RDS AD group policy templates

Some of the settings configured throughout this article are applied using AD group policy templates. Prior to using the RDS feature, these templates should be distributed to either the RDS hosts, in order to be used with the Windows local group policy editor, or to an AD domain controller where they can be applied using the domain. Complete the following steps to install the View RDS group policy templates: When referring to VMware Horizon View installation packages, y.y.y refers to the version number and xxxxxx refers to the build number. When you download packages, the actual version and build numbers will be in a numeric format. For example, the filename of the current Horizon View 6 GPO bundle is VMware-Horizon-View-Extras-Bundle-3.1.0-2085634.zip. Obtain the VMware-Horizon-View-GPO-Bundle-x.x.x-yyyyyyy.zip file, unzip it, and copy the en-US folder, the vmware_rdsh.admx file, and the vmware_rdsh_server.admx file to the C:\Windows\PolicyDefinitions folder on either an AD domain controller or your target RDS host, based on how you wish to manage the policies. 
Make note of the following points while doing so: If you want to set the policies locally on each RDS host, you will need to copy the files to each server If you wish to set the policies using domain-based AD group policies, you will need to copy the files to the domain controllers, the group policy Central Store (http://support.microsoft.com/kb/929841), or to the workstation from which we manage these domain-based group policies. How to do it… The following steps outline the procedure to enable RDS on a Windows Server 2012 R2 host. The host used in this recipe has already been connected to the domain and has logged in with an AD account that has administrative permissions on the server. Perform the following steps: Open the Windows Server Manager utility and go to Manage | Add Roles and Features to open the Add Roles and Features Wizard. On the Before you Begin page, click on Next. On the Installation Type page, shown in the following screenshot, select Remote Desktop Services installation and click on Next. This is shown in the following screenshot: On the Deployment Type page, select Quick Start and click on Next. You can also implement the required roles using the standard deployment method outlined in the Deploy the Session Virtualization Standard deployment section of the Microsoft TechNet article, Test Lab Guide: Remote Desktop Services Session Virtualization Standard Deployment (http://technet.microsoft.com/en-us/library/hh831610.aspx). If you use this method, you will complete the component installation and proceed to step 9 in this recipe. On the Deployment Scenario page, select Session-based desktop deployment and click on Next. On the Server Selection page, select a server from the list under Server Pool, click the red, highlighted button to add the server to the list of selected servers, and click on Next. This is shown in the following screenshot: On the Confirmation page, check the box marked Restart the destination server automatically if required and click on Deploy. On the Completion page, monitor the installation process and click on Close when finished in order to complete the installation. If a reboot is required, the server will reboot without the need to click on Close. Once the reboot completes, proceed with the remaining steps. Set the RDS licensing server using the Set-RDLicenseConfiguration Windows PowerShell command. In this example, we are configuring the local RDS host to point to redundant license servers (RDS-LIC1 and RDS-LIC2) and setting the license mode to PerUser. This command must be executed on the target RDS host. After entering the command, confirm the values for the license mode and license server name by answering Y when prompted. Refer to the following code: Set-RDLicenseConfiguration -LicenseServer @("RDS-LIC1.vjason.local","RDS-LIC2.vjason.local") -Mode PerUser This setting might also be set using group policies applied either to the local computer or using Active Directory (AD). The policies are shown in the following screenshot, and you can locate them by going to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Licensing when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path: Use local computer or AD group policies to limit users to one session per RDS host using the Restrict Remote Desktop Services users to a single Remote Desktop Services session policy. 
The policy is shown in the following screenshot, and you can locate it by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Connections:

Use local computer or AD group policies to enable time zone redirection. You can locate the policy by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Horizon View RDSH Services | Remote Desktop Session Host | Device and Resource Redirection when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To enable the setting, set Allow time zone redirection to Enabled.

Use local computer or AD group policies to enable the Windows Basic Aero-Styled Theme. You can locate the policy by going to User Configuration | Policies | Administrative Templates | Control Panel | Personalization when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To configure the theme, set Force a specific visual style file or force Windows Classic to Enabled and set Path to Visual Style to %windir%\resources\Themes\Aero\aero.msstyles.

Use local computer or AD group policies to start Runonce.exe when the RDS session starts. You can locate the policy by going to User Configuration | Policies | Windows Settings | Scripts (Logon/Logoff) when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To configure the logon settings, double-click on Logon, click on Add, enter runonce.exe in the Script Name box, and enter /AlternateShellStartup in the Script Parameters box.

On the Windows RDS host, double-click on the 64-bit Horizon View Agent installer to begin the installation process. The installer should have a name similar to VMware-viewagent-x86_64-y.y.y-xxxxxx.exe. On the Welcome to the Installation Wizard for VMware Horizon View Agent page, click on Next. On the License Agreement page, select the I accept the terms in the license agreement radio check box and click on Next. On the Custom Setup page, either leave all the options set to default, or if you are not using vCenter Operations Manager, deselect this optional component of the agent and click on Next. On the Register with Horizon View Connection Server page, shown in the following screenshot, enter the hostname or IP address of one of the Connection Servers in the pod where the RDS host will be used. If the user performing the installation of the agent software is an administrator in the Horizon View environment, leave the Authentication setting set to default; otherwise, select the Specify administrator credentials radio check box and provide the username and password of an account that has administrative rights in Horizon View. Click on Next to continue: On the Ready to Install the Program page, click on Install to begin the installation. When the installation completes, reboot the server if prompted.

The Windows RDS service is now enabled, configured with the optimal settings for use with VMware Horizon View, and has the necessary agent software installed. This process should be repeated on additional RDS hosts, as needed, to support the target number of concurrent RDS sessions. 
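Before moving on, it can be worth a quick sanity check that the licensing settings applied earlier actually took effect. The following is a small, hedged sketch run on the RDS host itself; it assumes the RemoteDesktop PowerShell module that ships with Windows Server 2012 R2 is available and that the quick-start deployment placed the Connection Broker role on this host.

# Confirm the licensing mode and license servers set with Set-RDLicenseConfiguration
Get-RDLicenseConfiguration

# Confirm that the Remote Desktop Services service is running before testing
# connections through Horizon View
Get-Service -Name TermService | Select-Object Status, DisplayName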
How it works…

The following resources provide detailed information about the configuration options used in this recipe:

Microsoft TechNet's Set-RDLicenseConfiguration article at http://technet.microsoft.com/en-us/library/jj215465.aspx provides the complete syntax of the PowerShell command used to configure the RDS licensing settings.

Microsoft TechNet's Remote Desktop Services Client Access Licenses (RDS CALs) article at http://technet.microsoft.com/en-us/library/cc753650.aspx explains the different RDS license types, which reveals that an RDS per-user Client Access License (CAL) allows our Horizon View clients to access the RDS servers from an unlimited number of endpoints while still consuming only one RDS license.

The Microsoft TechNet article, Remote Desktop Session Host, Licensing (http://technet.microsoft.com/en-us/library/ee791926(v=ws.10).aspx), provides additional information on the group policies used to configure the RDS licensing options.

The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/index.jsp?topic=%2Fcom.vmware.horizon-view.desktops.doc%2FGUID-931FF6F3-44C1-4102-94FE-3C9BFFF8E38D.html) explains that the Windows Basic aero-styled theme is the only theme supported by Horizon View, and demonstrates how to implement it.

The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-443F9F6D-C9CB-4CD9-A783-7CC5243FBD51.html) explains why time zone redirection is required, as it ensures that the Horizon View RDS client session will use the same time zone as the client device.

The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-85E4EE7A-9371-483E-A0C8-515CF11EE51D.html) explains why we need to add the runonce.exe /AlternateShellStartup command to the RDS logon script. This ensures that applications which require Windows Explorer will work properly when streamed using Horizon View.

Creating an RDS farm in Horizon View

This recipe will discuss the steps that are required to create an RDS farm in our Horizon View pod. An RDS farm is a collection of Windows RDS hosts and serves as the point of integration between the View Connection Server and the individual applications installed on each RDS server. Additionally, key settings concerning client session handling and client connection protocols are set at the RDS farm level within Horizon View.

Getting ready

To create an RDS farm in Horizon View, we need to have at least one RDS host registered with our View pod. Assuming that the Horizon View Agent installation completed successfully in the previous recipe, we should see the RDS hosts registered in the Registered Machines menu under View Configuration of our View Manager Admin console. The tasks required to create the RDS farm are performed using the Horizon View Manager Admin console.

How to do it…

The following steps outline the procedure used to create an RDS farm. In this example, we have already created and registered two Windows RDS hosts named WINRDS01 and WINRDS02. Perform the following steps: Navigate to Resources | Farms and click on Add, as shown in the following screenshot: On the Identification and Settings page, shown in the following screenshot, provide a farm ID, a description if desired, make any desired changes to the default settings, and then click on Next. 
The settings can be changed to On if needed: On the Select RDS Hosts page, shown in the following screenshot, click on the RDS hosts to be added to the farm and then click on Next: On the Ready to Complete page, review the configuration and click on Finish. The RDS farm has been created, which allows us to create application pools.

How it works…

The following RDS farm settings can be changed at any time and are described in the following points:

Default display protocol: PCoIP (default) and RDP are available.

Allow users to choose protocol: By default, Horizon View Clients can select their preferred protocol; we can change this setting to No in order to enforce the farm defaults.

Empty session timeout (applications only): This denotes the amount of time that must pass after a client closes all RDS applications before the RDS farm will take the action specified in the When timeout occurs setting. The default setting is 1 minute.

When timeout occurs: This determines which action is taken by the RDS farm when the session's timeout deadline passes; the options are Log off or Disconnect (default).

Log off disconnected sessions: This determines what happens when a View RDS session is disconnected; the options are Never (default), Immediate, or After. If After is selected, a time in minutes must be provided.

Summary

We have learned about configuring the Windows RDS server for use in Horizon View and also about creating an RDS farm in Horizon View.

Resources for Article: Further resources on this subject: Backups in the VMware View Infrastructure [Article] An Introduction to VMware Horizon Mirage [Article] Designing and Building a Horizon View 6.0 Infrastructure [Article]
NSX Core Components

Packt
05 Jan 2016
16 min read
In this article by Ranjit Singh Thakurratan, the author of the book Learning VMware NSX, we will discuss some of the core components of NSX. The article begins with a brief introduction of the NSX core components followed by a detailed discussion of these core components. We will go over the three different planes and see how each of the NSX core components fits into this architecture. Next, we will cover the VXLAN architecture and the transport zones that allow us to create and extend overlay networks across multiple clusters. We will also look at NSX Edge and the distributed firewall in greater detail and take a look at the newest NSX feature of multi-vCenter or cross-vCenter NSX deployment. By the end of this article, you will have a thorough understanding of the NSX core components and also their functional inter-dependencies. In this article, we will cover the following topics:

An introduction to the NSX core components
NSX Manager
NSX Controller clusters
VXLAN architecture overview
Transport zones
NSX Edge
Distributed firewall
Cross-vCenter NSX

(For more resources related to this topic, see here.)

An introduction to the NSX core components

The foundational core components of NSX are divided across three different planes. The core components of an NSX deployment consist of NSX Manager, Controller clusters, and hypervisor kernel modules. Each of these is crucial for your NSX deployment; however, they are decoupled to a certain extent to allow resiliency during the failure of multiple components. For example, if your controller clusters fail, your virtual machines will still be able to communicate with each other without any network disruption. You have to ensure that the NSX components are always deployed in a clustered environment so that they are protected by vSphere HA. The high-level architecture of NSX primarily describes three different planes wherein each of the core components fits in. They are the Management plane, the Control plane, and the Data plane. The following figure represents how the three planes are interlinked with each other. The management plane is how an end user interacts with NSX as a centralized access point, while the data plane consists of north-south or east-west traffic. Let's look at some of the important components in the preceding figure:

Management plane: The management plane primarily consists of NSX Manager. NSX Manager is a centralized network management component and primarily allows a single management point. It also provides the REST API that a user can use to perform all the NSX functions and actions. During the deployment phase, the management plane is established when the NSX appliance is deployed and configured. This management plane directly interacts with the control plane and also with the data plane. The NSX Manager is then managed via the vSphere web client and CLI. The NSX Manager is configured to interact with vSphere and ESXi, and once configured, all of the NSX components are then configured and managed via the vSphere web GUI.

Control plane: The control plane consists of the NSX Controller that manages the state of virtual networks. NSX Controllers also enable overlay networks (VXLAN) that are multicast-free and make it easier to create new VXLAN networks without having to enable multicast functionality on physical switches. The controllers also keep track of all the information about the virtual machines, hosts, and VXLAN networks and can perform ARP suppression as well. 
No data passes through the control plane, and a loss of controllers does not affect network functionality between virtual machines. Overlay networks and VXLANs can be used interchangeably. They both represent L2 over L3 virtual networks.

Data plane: The NSX data plane primarily consists of the NSX logical switch. The NSX logical switch is a part of the vSphere distributed switch and is created when a VXLAN network is created. The logical switch and other NSX services such as logical routing and logical firewall are enabled at the hypervisor kernel level after the installation of hypervisor kernel modules (VIBs). This logical switch is the key to enabling overlay networks that are able to encapsulate and send traffic over existing physical networks. It also allows gateway devices that enable L2 bridging between virtual and physical workloads. The data plane receives its updates from the control plane, as hypervisors also maintain local virtual machine and VXLAN (logical switch) mapping tables. A loss of the data plane will cause a loss of the overlay (VXLAN) network, as virtual machines that are part of an NSX logical switch will not be able to send and receive data.

NSX Manager

NSX Manager, once deployed and configured, can deploy Controller cluster appliances and prepare the ESXi hosts, which involves installing various vSphere installation bundles (VIBs) that allow network virtualization features such as VXLAN, logical switching, logical firewall, and logical routing. NSX Manager can also deploy and configure Edge gateway appliances and their services. The NSX version as of this writing is 6.2, which only supports 1:1 vCenter connectivity. NSX Manager is deployed as a single virtual machine and relies on VMware's HA functionality to ensure its availability. There is no NSX Manager clustering available as of this writing. It is important to note that a loss of NSX Manager will lead to a loss of management and API access, but does not disrupt virtual machine connectivity. Finally, the NSX Manager's configuration UI allows an administrator to collect log bundles and also to back up the NSX configuration.

NSX Controller clusters

NSX Controller provides control plane functionality to distribute logical routing and VXLAN network information to the underlying hypervisors. Controllers are deployed as virtual appliances, and they should be deployed in the same vCenter to which NSX Manager is connected. In a production environment, it is recommended to deploy a minimum of three controllers. For better availability and scalability, we need to ensure that DRS anti-affinity rules are configured to deploy Controllers on separate ESXi hosts. Traffic between the control plane and the management and data planes is secured by certificate-based authentication. It is important to note that controller nodes employ a scale-out mechanism, where each controller node uses a slicing mechanism that divides the workload equally across all the nodes. This renders all the controller nodes as Active at all times. If one controller node fails, then the other nodes are reassigned the tasks that were owned by the failed node to ensure operational status. The VMware NSX Controller uses a Paxos-based algorithm within the NSX Controller cluster. The Controller removes dependency on multicast routing/PIM in the physical network. It also suppresses broadcast traffic in VXLAN networks. NSX version 6.2 only supports three controller nodes.

VXLAN architecture overview

One of the most important functions of NSX is enabling virtual networks. 
These virtual networks or overlay networks have become very popular due to the fact that they can leverage existing network infrastructure without the need to modify it in any way. The decoupling of logical networks from the physical infrastructure allows users to scale rapidly. Overlay networks, or VXLAN, were developed by a host of vendors that includes Arista, Cisco, Citrix, Red Hat, and Broadcom. Due to this joint effort in developing its architecture, the VXLAN standard can be implemented by multiple vendors. VXLAN is a layer 2 over layer 3 tunneling protocol that allows logical network segments to extend over routable networks. This is achieved by encapsulating the Ethernet frame with additional UDP, IP, and VXLAN headers. Consequently, this increases the size of the packet by 50 bytes. Hence, VMware recommends increasing the MTU size to a minimum of 1600 bytes for all the interfaces in the physical infrastructure and any associated vSwitches. When a virtual machine generates traffic meant for another virtual machine on the same virtual network, the hosts on which these source and destination virtual machines run are called VXLAN Tunnel End Points (VTEPs). VTEPs are configured as separate VMkernel interfaces on the hosts. The outer IP header block in the VXLAN frame contains the source and the destination IP addresses of the source hypervisor and the destination hypervisor. When a packet leaves the source virtual machine, it is encapsulated at the source hypervisor and sent to the target hypervisor. The target hypervisor, upon receiving this packet, decapsulates the Ethernet frame and forwards it to the destination virtual machine. Once the ESXi host is prepared by NSX Manager, we need to configure the VTEPs. NSX supports multiple VXLAN vmknics per host for uplink load balancing features. In addition to this, Guest VLAN tagging is also supported. A sample packet flow We face a challenging situation when a virtual machine generates Broadcast, Unknown Unicast, or Multicast (BUM) traffic meant for another virtual machine on the same virtual network (VNI) on a different host. Control plane modes play a crucial role in optimizing the VXLAN traffic, depending on the mode selected for the Logical Switch/Transport Scope: Unicast Hybrid Multicast By default, a Logical Switch inherits its replication mode from the transport zone. However, we can set this on a per-Logical-Switch basis. A Segment ID is needed for the Multicast and Hybrid modes. The following is a representation of the VXLAN-encapsulated packet showing the VXLAN headers: As indicated in the preceding figure, the outer IP header identifies the source and the destination VTEPs. The VXLAN header also has the Virtual Network Identifier (VNI), which is a 24-bit unique network identifier. This allows the scaling of virtual networks beyond the 4094 VLAN limitation placed by physical switches. Two virtual machines that are a part of the same virtual network will have the same virtual network identifier, similar to how two machines on the same VLAN share the same VLAN ID. Transport zones A group of ESXi hosts that are able to communicate with one another over the physical network by means of VTEPs are said to be in the same transport zone. A transport zone defines the extension of a logical switch across multiple ESXi clusters that span across multiple virtual distributed switches. A typical environment has more than one virtual distributed switch that spans across multiple hosts.
A transport zone enables a logical switch to extend across multiple virtual distributed switches, and any ESXi host that is a part of this transport zone can have virtual machines as a part of that logical network. A logical switch is always created as part of a transport zone, and ESXi hosts can participate in them. The following is a figure that shows a transport zone that defines the extension of a logical switch across multiple virtual distributed switches: NSX Edge Services Gateway The NSX Edge Services Gateway (ESG) offers a feature-rich set of services that includes NAT, routing, firewall, load balancing, L2/L3 VPN, and DHCP/DNS relay. The NSX API allows each of these services to be deployed, configured, and consumed on-demand. The ESG is deployed as a virtual machine from NSX Manager, which is accessed using the vSphere web client. Four different form factors are offered for differently-sized environments. It is important that you factor in enough resources for the appropriate ESG when building your environment. The ESG can be deployed in different sizes. The following are the available size options for an ESG appliance: X-Large: The X-Large form factor is suitable for high-performance firewall, load balancer, and routing, or a combination of multiple services. When an X-Large form factor is selected, the ESG will be deployed with six vCPUs and 8GB of RAM. Quad-Large: The Quad-Large form factor is ideal for a high-performance firewall. It will be deployed with four vCPUs and 1GB of RAM. Large: The Large form factor is suitable for medium-performance routing and firewall. It is recommended that, in production, you start with the Large form factor. The Large ESG is deployed with two vCPUs and 1GB of RAM. Compact: The Compact form factor is suitable for DHCP and DNS relay functions. It is deployed with one vCPU and 512MB of RAM. Once deployed, a form factor can be upgraded by using the API or the UI. The upgrade action will incur an outage. Edge gateway services can also be deployed in an Active/Standby mode to ensure high availability and resiliency. A heartbeat network between the Edge appliances ensures state replication and uptime. If the active gateway goes down and the "declared dead time" passes, the standby Edge appliance takes over. The default declared dead time is 15 seconds and can be reduced to 6 seconds. Let's look at some of the Edge services as follows: Network Address Translation: The NSX Edge supports both source and destination NAT, and NAT is allowed for all traffic flowing through the Edge appliance. If the Edge appliance supports more than 100 virtual machines, it is recommended that a Quad-Large instance be deployed to allow high-performance translation. Routing: The NSX Edge provides centralized routing that allows the logical networks deployed in the NSX domain to be routed to the external physical network. The Edge supports multiple routing protocols including OSPF, iBGP, and eBGP. The Edge also supports static routing. Load balancing: The NSX Edge also offers load balancing functionality that allows the load balancing of traffic between the virtual machines. The load balancer supports different balancing mechanisms including IP Hash, least connections, URI-based, and round robin. Firewall: NSX Edge provides stateful firewall functionality that is ideal for north-south traffic flowing between the physical and the virtual workloads behind the Edge gateway.
The Edge firewall can be deployed alongside the hypervisor kernel-based distributed firewall, which is primarily used to enforce security policies between workloads in the same logical network. L2/L3 VPN: The Edge also provides L2 and L3 VPNs that make it possible to extend L2 domains between two sites. IPSEC site-to-site connectivity between two NSX Edges or other VPN termination devices can also be set up. DHCP/DNS relay: NSX Edge also offers DHCP and DNS relay functions that allow you to offload these services to the Edge gateway. The Edge only supports DNS relay functionality and can forward any DNS requests to the DNS server. The Edge gateway can be configured as a DHCP server to provide and manage IP addresses, default gateway, DNS servers, and search domain information for workloads connected to the logical networks. Distributed firewall NSX provides L2-L4 stateful firewall services by means of a distributed firewall that runs in the ESXi hypervisor kernel. Because the firewall is a function of the ESXi kernel, it provides massive throughput and performs at near line rate. When the ESXi host is initially prepared by NSX, the distributed firewall service is installed in the kernel by deploying the kernel VIB, the VMware Internetworking Service Insertion Platform (VSIP). VSIP is responsible for monitoring and enforcing security policies on all the traffic flowing through the data plane. The distributed firewall (DFW) throughput and performance scale horizontally as more ESXi hosts are added. DFW instances are associated with each vNIC, and every vNIC requires one DFW instance. A virtual machine with 2 vNICs has two DFW instances associated with it, each monitoring its own vNIC and applying security policies to it. DFW is ideally deployed to protect virtual-to-virtual or virtual-to-physical traffic. This makes DFW very effective in protecting east-west traffic between workloads that are a part of the same logical network. DFW policies can also be used to restrict traffic between virtual machines and external networks because the firewall is applied at the vNIC of the virtual machine. Any virtual machine that does not require firewall protection can be added to the exclusion list. A diagrammatic representation is shown as follows: DFW fully supports vMotion, and the rules applied to a virtual machine always follow the virtual machine. This means any manual or automated vMotion triggered by DRS does not cause any disruption in its protection status. The VSIP kernel module also adds SpoofGuard and traffic redirection functionalities. The SpoofGuard function maintains a VM name and IP address mapping table and protects against IP spoofing. SpoofGuard is disabled by default and needs to be manually enabled per logical switch or virtual distributed switch port group. Traffic redirection allows traffic to be redirected to a third-party appliance that can do enhanced monitoring, if needed. This allows third-party vendors to be interfaced with DFW directly and offer custom services as needed. Cross-vCenter NSX With NSX 6.2, VMware introduced an interesting feature that allows you to manage multiple vCenter NSX environments using a primary NSX Manager. This allows for easier management and also enables many new functionalities, including extending networks and other features such as distributed logical routing. A cross-vCenter NSX deployment also allows centralized management and eases disaster recovery architectures.
In a cross-vCenter deployment, each vCenter is paired with its own NSX Manager. One NSX Manager is assigned as the primary, while the other NSX Managers become secondary. This primary NSX Manager can now deploy a universal controller cluster that provides the control plane. Unlike a standalone vCenter-NSX deployment, secondary NSX Managers do not deploy their own controller clusters. The primary NSX Manager also creates objects whose scope is universal. This means that these objects extend to all the secondary NSX Managers. These universal objects are synchronized across all the secondary NSX Managers and can be edited and changed by the primary NSX Manager only. This does not prevent you from creating local objects on each of the NSX Managers. Similar to local NSX objects, a primary NSX Manager can create global objects such as universal transport zones, universal logical switches, universal distributed routers, universal firewall rules, and universal security objects. There can be only one universal transport zone in a cross-vCenter NSX environment. After it is created, it is synchronized across all the secondary NSX Managers. When a logical switch is created inside a universal transport zone, it becomes a universal logical switch that spans a layer 2 network across all the vCenters. All traffic is routed using the universal logical router, and any traffic that needs to be routed between a universal logical switch and a logical switch (local scope) requires an ESG. Summary We began the article with a brief introduction of the NSX core components and looked at the management, control, and data planes. We then discussed NSX Manager and the NSX Controller clusters. This was followed by a VXLAN architecture overview discussion, where we looked at the VXLAN packet. We then discussed transport zones and NSX Edge gateway services. We ended the article with NSX distributed firewall services and an overview of cross-vCenter NSX deployment. Resources for Article: Further resources on this subject: vRealize Automation and the Deconstruction of Components [article] Monitoring and Troubleshooting Networking [article] Managing Pools for Desktops [article]

Installing Vertica

Packt
13 May 2014
9 min read
(For more resources related to this topic, see here.) Massively Parallel Processing (MPP) databases are those which partition (and optionally replicate) data into multiple nodes. All meta-information regarding data distribution is stored in master nodes. When a query is issued, it is parsed and a suitable query plan is developed as per the meta-information and executed on relevant nodes (nodes that store related user data). HP offers one such MPP database called Vertica to solve pertinent issues of Big Data analytics. Vertica differentiates itself from other MPP databases in many ways. The following are some of the key points: Column-oriented architecture: Unlike traditional databases that store data in a row-oriented format, Vertica stores its data in columnar fashion. This allows a great level of compression on data, thus freeing up a lot of disk space. Design tools: Vertica offers automated design tools that help in arranging your data more effectively and efficiently. The changes recommended by the tool not only ease pressure on the designer, but also help in achieving seamless performance. Low hardware costs: Vertica allows you to easily scale up your cluster using just commodity servers, thus reducing hardware-related costs to a certain extent. This article will guide you through the installation and creation of a Vertica cluster. This article will also cover the installation of Vertica Management Control, which is shipped with the Vertica Enterprise edition only. It should be noted that it is possible to upgrade Vertica to a higher version but vice versa is not possible. Before installing Vertica, you should bear in mind the following points: Only one database instance can be run per cluster of Vertica. So, if you have a three-node cluster, then all three nodes will be dedicated to one single database. Only one instance of Vertica is allowed to run per node/host. Each node requires at least 1 GB of RAM. Vertica can be deployed on Linux only and has the following requirements: Only the root user or the user with all privileges (sudo) can run the install_vertica script. This script is very crucial for installation and will be used at many places. Only ext3/ext4 filesystems are supported by Vertica. Verify whether rsync is installed. The time should be synchronized in all nodes/servers of a Vertica cluster; hence, it is good to check whether NTP daemon is running. Understanding the preinstallation steps Vertica has various preinstallation steps that are needed to be performed for the smooth running of Vertica. Some of the important ones are covered here. Swap space Swap space is the space on the physical disk that is used when primary memory (RAM) is full. Although swap space is used in sync with RAM, it is not a replacement for RAM. It is suggested to have 2 GB of swap space available for Vertica. Additionally, Vertica performs well when swap-space-related files and Vertica data files are configured to store on different physical disks. Dynamic CPU frequency scaling Dynamic CPU frequency scaling, or CPU throttling, is where the system automatically adjusts the frequency of the microprocessor dynamically. The clear advantage of this technique is that it conserves energy and reduces the heat generated. It is believed that CPU frequency scaling reduces the number of instructions a processor can issue. Additional theories state that when frequency scaling is enabled, the CPU doesn't come to full throttle promptly. Hence, it is best that dynamic CPU frequency scaling is disabled. 
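Before making any BIOS changes, it can be useful to confirm these preinstallation points from the operating system itself. The following commands are only a quick sketch; service names and sysfs paths vary between distributions, so adjust them for your environment.

# Check the available swap space (Vertica suggests around 2 GB).
free -m
swapon -s

# Check that the NTP daemon is running and synchronized (service name varies by distro).
service ntpd status
ntpq -p

# If the cpufreq driver is loaded, inspect the current scaling governor.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

If the governor reports something other than performance, or NTP is not synchronized, it is worth fixing these before running the installer, since the install script will flag both conditions anyway.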
CPU frequency scaling can be disabled from the Basic Input/Output System (BIOS). Please note that different hardware might have different settings to disable CPU frequency scaling. Understanding disk space requirements It is suggested to keep a buffer of 20-30 percent of disk space per node. Vertica uses buffer space to store temporary data, which is data coming from the merge out operations, hash joins, and sorts, and data arising from managing nodes in the cluster. Steps to install Vertica Installing Vertica is fairly simple. With the following steps, we will set up a two-node cluster: Download the Vertica installation package from http://my.vertica.com/ according to the Linux OS that you are going to use. Now log in as root or use the sudo command. After downloading the installation package, install the package using the standard command: For .rpm (CentOS/RedHat) packages, the command will be: rpm -Uvh vertica-x.x.x-x.x.rpm For .deb (Ubuntu) packages, the command will be: dpkg -i vertica-x.x.x-x.x.deb Refer to the following screenshot for more details: Running the Vertica package In the previous step, we installed the package on only one machine. Note that Vertica is installed under /opt/vertica. Now, we will set up Vertica on the other nodes as well. For that, run the following on the same node: /opt/vertica/sbin/install_vertica -s host_list -r rpm_package -u dba_username Here, -s is the hostname/IP of all the nodes of the cluster, including the one on which Vertica is already installed, -r is the path of the Vertica package, and -u is the username that we wish to create for working on Vertica. This user has sudo privileges. If prompted, provide a password for the new user. If we do not specify any username, then Vertica creates dbadmin as the user, as shown in the following example: [impetus@centos64a setups]$ sudo /opt/vertica/sbin/install_vertica -s 192.168.56.101,192.168.56.101,192.168.56.102 -r "/ilabs/setups/vertica-6.1.3-0.x86_64.RHEL5.rpm" -u dbadmin Vertica Analytic Database 6.1.3-0 Installation Tool Upgrading admintools meta data format.. scanning /opt/vertica/config/users Starting installation tasks... Getting system information for cluster (this may take a while).... Enter password for impetus@192.168.56.102 (2 attempts left): backing up admintools.conf on 192.168.56.101 Default shell on nodes: 192.168.56.101 /bin/bash 192.168.56.102 /bin/bash Installing rpm on 1 hosts.... installing node.... 192.168.56.102 NTP service not synchronized on the hosts: ['192.168.56.101', '192.168.56.102'] Check your NTP configuration for valid NTP servers. Vertica recommends that you keep the system clock synchronized using NTP or some other time synchronization mechanism to keep all hosts synchronized. Time variances can cause (inconsistent) query results when using Date/Time Functions. For instructions, see: * http://kbase.redhat.com/faq/FAQ_43_755.shtm * http://kbase.redhat.com/faq/FAQ_43_2790.shtm Info: the package 'pstack' is useful during troubleshooting. Vertica recommends this package is installed. Checking/fixing OS parameters..... Setting vm.min_free_kbytes to 37872 ... Info! The maximum number of open file descriptors is less than 65536 Setting open filehandle limit to 65536 ... Info! The session setting of pam_limits.so is not set in /etc/pam.d/su Setting session of pam_limits.so in /etc/pam.d/su ... Detected cpufreq module loaded on 192.168.56.101 Detected cpufreq module loaded on 192.168.56.102 CPU frequency scaling is enabled. This may adversely affect the performance of your database.
Vertica recommends that cpu frequency scaling be turned off or set to 'performance' Creating/Checking Vertica DBA group Creating/Checking Vertica DBA user Password for dbadmin: Installing/Repairing SSH keys for dbadmin Creating Vertica Data Directory... Testing N-way network test. (this may take a while) All hosts are available ... Verifying system requirements on cluster. IP configuration ... IP configuration ... Testing hosts (1 of 2).... Running Consistency Tests LANG and TZ environment variables ... Running Network Connectivity and Throughput Tests... Waiting for 1 of 2 sites... ... Test of host 192.168.56.101 (ok) ==================================== Enough RAM per CPUs (ok) -------------------------------- Test of host 192.168.56.102 (ok) ==================================== Enough RAM per CPUs (FAILED) -------------------------------- Vertica requires at least 1 GB per CPU (you have 0.71 GB/CPU) See the Vertica Installation Guide for more information. Consistency Test (ok) ========================= Info: The $TZ environment variable is not set on 192.168.56.101 Info: The $TZ environment variable is not set on 192.168.56.102 Updating spread configuration... Verifying spread configuration on whole cluster. Creating node node0001 definition for host 192.168.56.101 ... Done Creating node node0002 definition for host 192.168.56.102 ... Done Error Monitor 0 errors 4 warnings Installation completed with warnings. Installation complete. To create a database: 1. Logout and login as dbadmin.** 2. Run /opt/vertica/bin/adminTools as dbadmin 3. Select Create Database from the Configuration Menu ** The installation modified the group privileges for dbadmin. If you used sudo to install vertica as dbadmin, you will need to logout and login again before the privileges are applied. After we have installed Vertica on all the desired nodes, it is time to create a database. Log in as a new user (dbadmin in default scenarios) and connect to admin panel. For that we have to run following command: /opt/vertica/bin/adminTools If you are connecting to admin tools for the first time, you will be prompted for a license key. If you have the license file, then enter its path; if you want to use the community edition, then just click on OK. License key prompt After the previous step, you will be asked to review and accept the End-user License Agreement (EULA). Prompt for EULA After reviewing and accepting the EULA, you will be presented with the main menu of the admin tools of Vertica. Admin Tools Main Menu Now, to create a database, navigate to Administration Tools | Configuration Menu | Create Database. Create database option in the Configuration menu Now, you will be asked to enter a database name and a comment that you will like to associate with the database. Name and Comment of the Database After entering the name and comment, you will be prompted to enter a password for this database. Password for the New database After entering and re-entering (for confirmation) the password, you need to provide pathnames where the files related to user data and catalog data will be stored. Catalog and Data Pathname After providing all the necessary information related to the database, you will be asked to select hosts on which the database needs to be deployed. Once all the desired hosts are selected, Vertica will ask for one final check. Final confirmation for a database creation Now, Vertica will be creating and deploying the database. 
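Once admintools reports that the database has been created and started, a quick sanity check can be run from any node of the cluster. The following is a sketch only; the host IP is taken from the example above, while the database name and password are placeholders for your own values.

# Connect with vsql and list the cluster nodes and their state.
/opt/vertica/bin/vsql -h 192.168.56.101 -U dbadmin -w <password> -d <database_name> -c "SELECT node_name, node_state FROM nodes;"

Every node in the cluster should be reported with a state of UP before you start loading data.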
Database Creation Once the database is created, we can connect to it using the VSQL tool or perform admin tasks. Summary As you can see, this article briefly explains the Vertica installation process. You can explore further by creating sample tables and performing basic CRUD operations. For a clean installation, it is recommended to meet all the minimum requirements of Vertica. It should be noted that the installation of the client API(s) and the Vertica Management Console needs to be done separately and is not included in the basic package. Resources for Article: Further resources on this subject: Visualization of Big Data [Article] Limits of Game Data Analysis [Article] Learning Data Analytics with R and Hadoop [Article]

Backups in the VMware View Infrastructure

Packt
17 Sep 2014
18 min read
In this article by Chuck Mills and Ryan Cartwright, authors of the book VMware Horizon 6 Desktop Virtualization Solutions, we will study the backup options available in VMware View. The article also provides guidance on scheduling appropriate backups of a Horizon View environment. (For more resources related to this topic, see here.) While a single point of failure should not exist in the VMware View environment, it is still important to ensure regular backups are taken for a quick recovery when failures occur. Also, if a setting becomes corrupted or is changed, a backup could be used to restore to a previous point in time. The backup of the VMware View environment should be performed on a regular basis in line with an organization's existing backup methodology. A VMware View environment contains both files and databases. The main backup points of a VMware View environment are as follows: VMware View Connection Server (ADAM database) VMware View Security Server VMware View Composer Database Remote Desktop Service host servers Remote Desktop Service host templates and virtual machines Virtual desktop templates and parent VMs Virtual desktops Linked clones (stateless) Full clones (stateful) ThinApp repository Persona Management VMware vCenter Restoring the VMware View environment Business Continuity and Disaster Recovery With a backup of all of the preceding components, the VMware View Server infrastructure can be recovered during a time of failure. To maximize the chances of success in a recovery environment, it is advised to take backups of the View ADAM database, View Composer, and vCenter database at the same time to avoid discrepancies. Backups can be scheduled and automated or can be manually executed; ideally, scheduled backups will be used to ensure that they are performed and completed regularly. Proper design dictates that there should always be two or more View Connection Servers. As all View Connection Servers in the same replica pool contain the same configuration data, it is only necessary to back up one View Connection Server. This backup is typically configured for the first View Connection Server installed in standard mode in an environment. VMware View Connection Server – ADAM Database backup View Connection Server stores the View Connection Server configuration data in the View LDAP repository. View Composer stores the configuration data for linked clone desktops in the View Composer database. When you use View Administrator to perform backups, the Connection Server backs up the View LDAP configuration data and the View Composer database. Both sets of backup files will be stored in the same location. The LDAP data is exported in LDAP Data Interchange Format (LDIF). If you have multiple View Connection Servers in a replicated group, you only need to export data from one of the instances. All replicated instances contain the same configuration data. It is not a good practice to rely on replicated instances of View Connection Server as your backup mechanism. When the Connection Server synchronizes data across the instances of Connection Server, any data lost on one instance might be lost in all the members of the group. If the View Connection Server uses multiple vCenter Server instances and multiple View Composer services, then the View Connection Server will back up all the View Composer databases associated with the vCenter Server instances. View Connection Server backups are configured from the VMware View Admin console.
The backups dump the configuration files and the database information to a location on the View Connection Server. Then, the data must be backed up through normal mechanisms, like a backup agent and scheduled job. The procedure for a View Connection Server backup is as follows: Schedule VMware View backup runs and exports to C:\View_Backup. Use your third-party backup solution on the View Connection Server and have it back up the System State, Program Files, and C:\View_Backup folders that were created in step 1. From within the View Admin console, there are three primary options that must be configured to back up the View Connection Server settings: Automatic backup frequency: This is the frequency at which backups are automatically taken. The recommendation is as follows: Recommendation (every day): As most server backups are performed daily, if the automatic View Connection Server backup is taken before the full backup of the Windows server, it will be included in the nightly backup. This is adjusted as necessary. Backup time: This displays the time based on the automatic backup frequency. (Every day produces the 12 midnight time.) Maximum number of backups: This is the maximum number of backups that can be stored on the View Connection Server; once the maximum number has been reached, backups will be rotated out based on age, with the oldest backup being replaced by the newest backup. The recommendation is as follows: Recommendation (30 days): This will ensure that approximately one month of backups are retained on the server. This is adjusted as necessary. Folder location: This is the location on the View Connection Server where the backups will be stored. Ensure that the third-party backup solution is backing up this location. The following screenshot shows the Backup tab: Performing a manual backup of the View database Use the following steps to perform a manual backup of your View database: Log in to the View Administrator console. Expand the Catalog option under Inventory (on the left-hand side of the console). Select the first pool and right-click on it. Select Disable Provisioning, as shown in the following screenshot: Continue to disable provisioning for each of the pools. This will ensure that no new information will be added to the ADAM database. After you disable provisioning for all the pools, there are two ways to perform the backup: The View Administrator console Running a command using the command prompt The View Administrator console Follow these steps to perform a backup: Log in to the View Administrator console. Expand View Configuration found under Inventory. Select Servers, which displays all the servers found in your environment. Select the Connection Servers tab. Right-click on one of the Connection Servers and choose Backup Now, as shown in the following screenshot. After the backup process is complete, enable provisioning for the pools. Using the command prompt You can export the ADAM database by executing a built-in export tool in the command prompt. Perform the following steps: Connect directly to the View Connection Server with a remote desktop utility such as RDP. Open a command prompt and use the cd command to navigate to C:\Program Files\VMware\VMware View\Server\tools\bin.
Execute the vdmexport.exe command and use the -f option to specify a location and filename, as shown in the following screenshot (for this example, C:\View_Backup is the location and vdmBackup.ldf is the filename): Once a backup has been either automatically run or manually executed, there will be two types of files saved in the backup location: LDF files: These are the LDIF exports from the VMware View Connection Server ADAM database and store the configuration settings of the VMware View environment SVI files: These are the backups of the VMware View Composer database The backup process of the View Connection Server is fairly straightforward. While the process is easy, it should not be overlooked. Security Server considerations Surprisingly, there is no option to back up the VMware View Security Server via the VMware View Admin console. For View Connection Servers, backup is configured by selecting the server, selecting Edit, and then clicking on Backup. Highlighting the View Security Server provides no such functionality. Instead, the security server should be backed up via normal third-party mechanisms. The installation directory is of primary concern, which is C:\Program Files\VMware\VMware View\Server by default. The .config file is in the …\sslgateway\conf directory, and it includes the following settings: pcoipClientIPAddress: This is the public address used by the Security Server pcoipClientUDPPort: This is the port used for UDP traffic (the default is 4172) In addition, the settings file is located in this directory, which includes settings such as the following: maxConnections: This is the maximum number of concurrent connections the View Security Server can have at one time (the default is 2000) serverID: This is the hostname used by the security server In addition, custom certificates and logfiles are stored within the installation directory of the VMware View Security Server. Therefore, it is important to back up the data regularly if the logfile data is to be maintained (and is not being ingested into a larger enterprise logfile solution). The View Composer database The View Composer database used for linked clones is backed up using the following steps: Log in to the View Administrator console. Expand the Catalog option under Inventory (left-hand side of the console). Select the first pool and right-click on it. Select Disable Provisioning. Connect directly to the server where the View Composer was installed, using a remote desktop utility such as RDP. Stop the View Composer service, as shown in the following screenshot. This will prevent provisioning requests that would change the Composer database. After the service is stopped, use the standard practice for backing up databases in the current environment. Restart the Composer service after the backup completes. Remote Desktop Service host servers VMware View 6 uses virtual machines to deliver hosted applications and desktops. In some cases, tuning and optimization, or other customer-specific configurations to the environment or applications, may be built on the Remote Desktop Service (RDS) host. Use the Windows Server Backup tool or the current backup software deployed in your environment. RDS Server host templates and virtual machines The virtual machine templates and virtual machines are an important part of the Horizon View infrastructure and need protection in the event that the system needs to be recovered. Back up the RDS host templates when changes are made and the testing/validation is completed.
The production RDS host machines should be backed up at frequent intervals if they contain user data or any other elements that require protection. Third-party backup solutions are used in this case. Virtual desktop templates and parent VMs Horizon View uses virtual machine templates to create the desktops in pools for full virtual machines and uses parent VMs to create the desktops in a linked clone desktop pool. These virtual machine templates and the parent VMs are another important part of the View infrastructure that needs protection. These backups are a crucial part of being able to quickly restore the desktop pools and the RDS hosts in the event of data loss. While frequent changes occur for standard virtual machines, the virtual machine templates and parent VMs only need backing up after new changes have been made to the template and parent VM images. These backups should be readily available for rapid redeployment when required. For environments that use full cloning as the provisioning technique for the vDesktops, the gold template should be backed up regularly. The gold template is the master vDesktop that all other vDesktops are cloned from. The VMware KB article, Backing up and restoring virtual machine templates using VMware APIs, covers the steps to both back up and restore a template. In short, most backup solutions will require that the gold template is converted from a template to a regular virtual machine and it can then be backed up. You can find more information at http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2009395. Backing up the parent VM can be tricky as it is a virtual machine, often with many different point-in-time snapshots. The most common technique is to collapse the virtual machine snapshot tree at a given point-in-time snapshot, and then back up or copy the newly created virtual machine to a second datastore. By storing the parent VM on a redundant storage solution, it is quite unlikely that the parent VM will be lost. What's more likely is that a point-in-time snapshot of the parent VM may be created while it's in a nonfunctional or less-than-ideal state. Virtual desktops There are three types of virtual desktops in a Horizon View environment, which are as follows: Linked clone desktops Stateful desktops Stateless desktops Linked clone desktops Virtual desktops that are created by View Composer using the linked clone technology present special challenges with backup and restoration. In many cases, a linked clone desktop will also be considered a stateless desktop. The dynamic nature of a linked clone desktop and the underlying structure of the virtual machine itself mean that linked clone desktops are not good candidates for backup and restoration. However, the same qualities that impede the use of a standard backup solution provide an advantage for rapid reprovisioning of virtual desktops. When the underlying infrastructure for things such as the delivery of applications and user data, along with the parent VMs, is restored, linked clone desktop pools can be recreated and made available within a short amount of time, thereby lessening the impact of an outage or data loss. Stateful desktops In the stateful desktop pool scenario, all of the virtual desktops retain user data when the user logs back in to the virtual desktop.
So, in this case, backing up the virtual machines with third-party tools, like any other virtual machine in vSphere, is considered the optimal method for protection and recovery. Stateless desktops With the stateless desktop architecture, the virtual desktops do not retain the desktop state when the user logs back in to the virtual desktop. By their nature, stateless desktops neither require nor directly contain any data that requires a backup. All the user data in a stateless desktop is stored on a file share. The user data includes any files the user creates, changes, or copies within the virtual infrastructure, along with the user persona data. Therefore, because no user data is stored within the virtual desktop, there will be no need to back up the desktop. File shares should be included in the standard backup strategy, and all user data and persona information will be included in the existing daily backups. The ThinApp repository The ThinApp repository is similar in nature to the user data on the stateless desktops in that it should reside on a redundant file share that is backed up regularly. If the ThinApp packages are configured to preserve each user's sandbox, the ThinApp repository should likely be backed up nightly. Persona Management With the View Persona Management feature, the user's remote profile is dynamically downloaded after the user logs in to a virtual desktop. A secure, centralized repository can be configured in which Horizon View will store user profiles. The standard practice is to back up the network shares on which View Persona Management stores the profile repository. View Persona Management will ensure that user profiles are backed up to the remote profile share, eliminating the need for additional tools to back up user data on the desktops. Therefore, backup software to protect the user profile on the View desktop is unnecessary. VMware vCenter Most established IT departments are using backup tools from the storage or backup vendor to protect the datastores where the VMs are stored. This will make the recovery of the base vSphere environment faster and easier. The central piece of vCenter is the vCenter database. If there is a total loss of the database, you will lose all your vSphere configuration information, including the configuration specific to View (for example, users, folders, and many more). Another important item to understand is that even if you rebuild your vCenter using the same folder and resource pool names, your View environment will not reconnect and use the new vCenter. The reason is that each object in vSphere has what is called a Managed Object Reference (MoRef), and these are stored in the vSphere database. View uses the MoRef information to talk to vCenter. As View and vSphere rely on each other, making a backup of your View environment without backing up your vSphere environment doesn't make sense. Restoring the VMware View environment If your environment has multiple Connection Servers, the best thing to do would be to delete all the servers but one, and then use the following steps to restore the ADAM database: Connect directly to the server where the View Connection Server is located using a remote desktop utility such as RDP. Stop the View Connection Server service, as shown in the following screenshot: Locate the backup (or exported) ADAM database file that has the .ldf extension.
The first step of the import is to decrypt the file by opening a command prompt and use the cd command to navigate to C:Program FilesVMwareVMware ViewServertoolsbin. Use the following command: vdmimport –f View_BackupvdmBackup.ldf –d >View_BackupvmdDecrypt.ldf You will be prompted to enter the password from the account you used to create the backup file. Now use the vdmimport –f [decrypted file name] command (from the preceding example, the filename will be vmdDecrypt.ldf). After the ADAM database is updated, you can restart the View Connection Server service. Replace the delete Connection Servers by running the Connection Server installation and using the Replica option. To reinstall the View Composer database, you can connect to the server where Composer is installed. Stop the View Composer service and use your standard procedure for restoring a database. After the restore, start the View Composer service. While this provides the steps to restore the main components of the Connection server, the steps to perform a complete View Connection Server restore can be found in the VMware KB article, Performing an end-to-end backup and restore for VMware View Manager, at http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1008046. Reconciliation after recovery One of the main factors to consider when performing a restore in a Horizon View infrastructure is the possibility that the Connection Server environment could be out of sync with the current View state and a reconciliation is required. After restoring the Connection Server ADAM database, there may be missing desktops that are shown in the Connection Server Admin user interface if the following actions are executed after the backup but before a restore: The administrator deleted pools or desktops The desktop pool was recomposed, which resulted in the removal of the unassigned desktops Missing desktops or pools can be manually removed from the Connection Server Admin UI. Some of the automated desktops may become disassociated with their pools due to the creation of a pool between the time of the backup and the restore time. View administrators may be able to make them usable by cloning the linked clone desktop to a full clone desktop using vCenter Server. They would be created as an individual desktop in the Connection Server and then assign those desktops to a specific user. Business Continuity and Disaster Recovery It's important to ensure that the virtual desktops along with the application delivery infrastructure is included and prioritized as a Business Continuity and Disaster Recovery plan. Also, it's important to ensure that the recovery procedures are tested and validated on a regular cycle, as well as having the procedures and mechanisms in place that ensure critical data (images, software media, data backup, and so on) is always stored and ready in an alternate location. This will ensure an efficient and timely recovery. It would be ideal to have a disaster recovery plan and business continuity plan that recovers the essential services to an alternate "standby" data center. This will allow the data to be backed up and available offsite to the alternate facility for an additional measure of protection. The alternate data center could have "hot" standby capacity for the virtual desktops and application delivery infrastructure. 
This site would then address 50 percent capacity in the event of a disaster and also 50 percent additional capacity in the event of a business continuity event that prevents users from accessing the main facility. The additional capacity will also provide a rollback option if there were failed updates to the main data center. Operational procedures should ensure the desktop and server images are available to the alternate facility when changes are made to the main VMware View system. Desktop and application pools should also be updated in the alternate data center whenever maintenance procedures are executed and validated in the main data center. Summary As expected, it is important to back up the fundamental components of a VMware View solution. While a resilient design should mitigate most types of failure, there are still occasions when a backup may be needed to bring an environment back up to an operational level. This article covered the major components of View and provided some of the basic options for creating backups of those components. The Connection Server and Composer database along with vCenter were explained. There was a good overview of the options used to protect the different types of virtual desktops. The ThinApp repository and Persona Management was also explained. The article also covered the basic recovery options and where to find information on the complete View recovery procedures. Resources for Article: Further resources on this subject: Introduction to Veeam® Backup & Replication for VMware [article] Design, Install, and Configure [article] VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [article]
Read more
  • 0
  • 0
  • 10866


A Virtual Machine for a Virtual World

Packt
11 Jul 2014
15 min read
(For more resources related to this topic, see here.) Creating a VM from a template Let us start by creating our second virtual machine from the Ubuntu template. Right-click on the template and select Clone, as shown in the following screenshot: Use the settings shown in the following screenshot for the new virtual machine. You can also use any virtual machine name you like. A VM name can only be alphanumeric without any special characters. You can also use any other VM you have already created in your own virtual environment. Access the virtual machine through the Proxmox console after cloning and setting up network connectivity such as IP address, hostname, and so on. For our Ubuntu virtual machine, we are going to edit interfaces in /etc/network/, hostname in /etc/, and hosts in /etc/. Advanced configuration options for a VM We will now look at some of the advanced configuration options we can use to extend the capability of a KVM virtual machine. The hotplugging option for a VM Although it is not a very common occurrence, a virtual machine can run out of storage unexpectedly whether due to over provisioning or improper storage requirement planning. For a physical server with hot swap bays, we can simply add a new hard drive and then partition it, and you are up and running. Imagine another situation when you have to add some virtual network interface to the VM right away, but you cannot afford shutting down the VM to add the vNICs. The hotplug option also allows hotplugging virtual network interfaces without shutting down a VM. Proxmox virtual machines by default do not support hotplugging. There are some extra steps needed to be followed in order to enable hotplugging for devices such as virtual disks and virtual network interfaces. Without the hotplugging option, the virtual machine needs to be completely powered off and then powered on after adding a new virtual disk or virtual interface. Simply rebooting the virtual machine will not activate the newly added virtual device. In Proxmox 3.2 and later, the hotplug option is not shown on the Proxmox GUI. It has to be done through CLI by adding options to the <vmid>.conf file. Enabling the hotplug option for a virtual machine is a three-step process: Shut down VM and add the hotplug option into the <vmid>.conf file. Power up VM and then load modules that will initiate the actual hotplugging. Add a virtual disk or virtual interface to be hotplugged into the virtual machine. The hotplugging option for <vmid>.conf Shut down the cloned virtual machine we created earlier and then open the configuration file from the following location. Securely log in to the Proxmox node or use the console in the Proxmox GUI using the following command: # nano /etc/pve/nodes/<node_name>/qemu-server/102.conf With default options added during the virtual machine creation process, the following code is what the VM configuration file looks like: ballon: 512 bootdisk: virtio0 cores: 1 ide2: none, media=cdrom kvm: 0 memory: 1024 name: pmxUB01 net0: e1000=56:63:C0:AC:5F:9D,bridge=vmbr0 ostype: l26 sockets: 1 virtio0: vm-nfs-01:102/vm-102-disk-1.qcow2,format=qcow2,size=32G Now, at the bottom of the 102.conf configuration file located under /etc/pve/nodes/<node_name>/qemu-server/, we will add the following option to enable hotplugging in the virtual machine: hotplug: Save the configuration file and power up the virtual machine. 
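If you prefer to stay on the command line for all three steps, the following sketch shows one way to do it from the Proxmox node, using the VM ID 102 from this example. The hotplug: 1 form of the option is an assumption based on the boolean style used elsewhere in the <vmid>.conf file, so verify it against your Proxmox version.

# Shut down the VM before editing its configuration.
qm shutdown 102

# Append the hotplug option to the VM configuration file (path as used in this example).
echo "hotplug: 1" >> /etc/pve/nodes/<node_name>/qemu-server/102.conf

# Confirm the option is present, then power the VM back on.
qm config 102
qm start 102

The qm config command simply prints the current configuration, which makes it easy to confirm the new line before powering the VM back on.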
Loading modules After the hotplug option is added and the virtual machine is powered up, it is now time to load two modules into the virtual machine, which will allow hotplugging a virtual disk anytime without rebooting the VM. Securely log in to VM or use the Proxmox GUI console to get into the command prompt of the VM. Then, run the following commands to load the acpiphp and pci_hotplug modules. Do not load these modules to the Proxmox node itself: # sudo modprobe acpiphp # sudo modprobe pci_hotplug   The acpiphp and pci_hotplug modules are two hot plug drivers for the Linux operating system. These drivers allow addition of a virtual disk image or virtual network interface card without shutting down the Linux-based virtual machine. The modules can also be loaded automatically during the virtual machine boot by inserting them in /etc/modules. Simply add acpiphp and pci_hotplug on two separate lines in /etc/modules. Adding virtual disk/vNIC After loading both the acpiphp and pci_hotplug modules, all that remains is adding a new virtual disk or virtual network interface in the virtual machine through a web GUI. On adding a new disk image, check that the virtual machine operating system recognizes the new disk through the following command: #sudo fdisk -l For a virtual network interface, simply add a new virtual interface from a web GUI and the operating system will automatically recognize a new vNIC. After adding the interface, check that the vNIC is recognized through the following command: #sudo ifconfig –a Please note that while the hotplugging option works great with Linux-based virtual machines, it is somewhat problematic on Windows XP/7-based VMs. Hotplug seems to work great with both 32- and 64-bit versions of the Windows Server 2003/2008/2012 VMs. The best practice for a Windows XP/7-based virtual machine is to just power cycle the virtual machine to activate newly added virtual disk images. Forcing the Windows VM to go through hotplugging will cause an unstable operating environment. This is a limitation of the KVM itself. Nested virtual environment In simple terms, a virtual environment inside another virtual environment is known as a nested virtual environment. If the hardware resource permits, a nested virtual environment can open up whole new possibilities for a company. The most common scenario of a nested virtual environment is to set up a fully isolated test environment to test software such as hypervisor, or operating system updates/patches before applying them in a live environment. A nested environment can also be used as a training platform to teach computer and network virtualization, where students can set up their own virtual environment from the ground without breaking the main system. This eliminates the high cost of hardware for each student or for the test environment. When an isolated test platform is needed, it is just a matter of cloning some real virtual machines and giving access to authorized users. A nested virtual environment has the potential to give the network administrator an edge in the real world by allowing cost cutting and just getting things done with limited resources. One very important thing to keep in mind is that a nested virtual environment will have a significantly lower performance than a real virtual environment. If the nested virtual environment also has virtualized storage, performance will degrade significantly. The loss of performance can be offset by creating a nested environment with an SSD storage backend. 
When a nested virtual environment is created, it usually also contains virtualized storage to provide virtual storage for nested virtual machines. This allows for a fully isolated nested environment with its own subnet and virtual firewall. There are many debates about the viability of a nested virtual environment. Both pros and cons can be argued equally. But it will come down to the administrator's grasp on his or her existing virtual environment and good understanding of the nature of requirement. This allowed us to build a fully functional Proxmox cluster from the ground up without using additional hardware. The following screenshot is a side-by-side representation of a nested virtual environment scenario: In the previous comparison, on the right-hand side we have our basic cluster we have been building so far. On the left-hand side we have the actual physical nodes and virtual machines used to create the nested virtual environment. Our nested cluster is completely isolated from the rest of the physical cluster with a separate subnet. Internet connectivity is provided to the nested environment by using a virtualized firewall 1001-scce-fw-01. Like the hotplugging option, nesting is also not enabled in the Proxmox cluster by default. Enabling nesting will allow nested virtual machines to have KVM hardware virtualization, which increases the performance of nested virtual machines. To enable KVM hardware virtualization, we have to edit the modules in /etc/ of the physical Proxmox node and <vmid>.conf of the virtual machine. We can see that the option is disabled for our cloned nested virtual machine in the following screenshot: Enabling KVM hardware virtualization KVM hardware virtualization can be added just by performing the following few additional steps: In each Proxmox node, add the following line in the /etc/modules file: kvm-amd nested=1 Migrate or shut down all virtual machines of Proxmox nodes and then reboot. After the Proxmox nodes reboot, add the following argument in the <vmid>.conf file of the virtual machines used to create a nested virtual environment: args: -enable-nesting Enable KVM hardware virtualization from the virtual machine option menu through GUI. Restart the nested virtual machine. Network virtualization Network virtualization is a software approach to set up and maintain network without physical hardware. Proxmox has great features to virtualize the network for both real and nested virtual environments. By using virtualized networking, management becomes simpler and centralized. Since there is no physical hardware to deal with, the network ability can be extended within a minute's notice. Especially in a nested virtual environment, the use of virtualized network is very prominent. In order to set up a successful nested virtual environment, a better grasp of the Proxmox network feature is required. With the introduction of Open vSwitch (www.openvswitch.org) in Proxmox 3.2 and later, network virtualization is now much more efficient. Backing up a virtual machine A good backup strategy is the last line of defense against disasters, such as hardware failure, environmental damages, accidental deletions, and misconfigurations. In a virtual environment, a backup strategy turns into a daunting task because of the number of machines needed to be backed up. In a busy production environment, virtual machines may be created and discarded whenever needed or not needed. Without a proper backup plan, the entire backup task can go out of control. 
Gone are the days when we had only a few physical servers to deal with and backing them up was an easy task. Today's backup solutions have to deal with several dozen, or possibly several hundred, virtual machines. Depending on the requirements, an administrator may have to back up all the virtual machines regularly instead of just the files inside them. Backing up entire virtual machines takes up a very large amount of space over time, depending on how many previous backups are kept. A granular file backup helps to quickly restore just the file needed, but it is a bad choice if the virtual server is damaged to the point that it becomes inaccessible. Here, we will see the different backup options available in Proxmox, along with their advantages and disadvantages.

Proxmox backup and snapshot options

Proxmox has the following two backup options:

Full backup: This backs up the entire virtual machine.
Snapshot: This only creates a snapshot image of the virtual machine.

Proxmox 3.2 and above can only do a full backup and cannot do any granular file backup from inside a virtual machine. Proxmox also does not use any backup agent.

Backing up a VM with a full backup

A full backup produces a single archive file (.vma for KVM virtual machines, .tar for containers) containing both the configuration file and the virtual disk image. This single file is all you need to restore the virtual machine on any node and on any storage. Full backups can also be scheduled on a daily or weekly basis. Full virtual machine backup files are named based on the following format:

vzdump-qemu-<vm_id>-YYYY_MM_DD-HH_MM_SS.vma.lzo

The following screenshot shows what a typical list of virtual machine backups looks like:

Proxmox 3.2 and above cannot store full backups on LVM and Ceph RBD storage. Full backups can only be stored on local, Ceph FS, and NFS-based storages that are defined with the backup content type during storage creation. Please note that Ceph FS and RBD are not the same type of storage, even though they both coexist on the same Ceph cluster. The following screenshot shows the storage feature through the Proxmox GUI with backup-enabled attached storages:

The backup menu in Proxmox is a true example of simplicity. With only three choices to make, it is as easy as it can get. The following screenshot is an example of the Proxmox backup menu. Just select the backup storage, backup mode, and compression type, and that's it:

Creating a schedule for backup

Schedules can be created from the virtual machine backup option. We will look at each option box in detail in the following sections. The options are shown in the following screenshot:

Node

By default, a backup job applies to all nodes. If you want to apply the backup job to a particular node, select it here. With a node selected, the backup job will be restricted to that node only. If a virtual machine on node 1 was selected for backup and the virtual machine was later moved to node 2, it will not be backed up, since only node 1 was selected for this backup task.

Storage

Select the backup storage destination where all full backups will be stored. Typically, an NFS server is used for backup storage. NFS servers are easy to set up and do not require a lot of upfront investment due to their low performance requirements. Backup servers are much leaner than compute nodes, since they do not have to run any virtual machines. Backups are supported on local, NFS, and Ceph FS storage systems. Ceph FS storages are mounted locally on Proxmox nodes and selected as a local directory. Both Ceph FS and RBD coexist on the same Ceph cluster.
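Scheduled jobs ultimately call the vzdump utility, which can also be run by hand for a one-off backup. A minimal sketch, assuming a backup-enabled storage named nfs-backup and a VM with the ID 101 (both names are examples):

vzdump 101 --storage nfs-backup --mode snapshot --compress lzo

The resulting file lands on the selected storage and follows the naming format shown earlier in this section.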
Day of Week

Select the day or days the backup task applies to. Days are selected from a drop-down menu, and multiple days can be chosen. If the backup task should run daily, select all the days in the list.

Start Time

Unlike Day of Week, only one time slot can be selected. Selecting multiple times to back up at different times of the day is not possible. If the backup must run multiple times a day, create a separate task for each time slot.

Selection mode

The All selection mode selects all the virtual machines in the whole Proxmox cluster. The Exclude selected VMs mode backs up all VMs except the ones selected, while Include selected VMs backs up only the ones selected.

Send email to

Enter a valid e-mail address here so that the Proxmox backup task can send an e-mail when the backup task completes or if there was an issue during the backup. The e-mail includes the entire log of the backup task. It is highly recommended to enter an e-mail address here so that an administrator or backup operator receives backup feedback e-mails. This makes it possible to find out whether there was an issue during the backup, how much time it actually took, and whether any performance problems occurred during the backup. The following screenshot is a sample of a typical e-mail received after a backup task:

Compression

By default, the LZO compression method is selected. LZO (http://en.wikipedia.org/wiki/Lempel–Ziv–Oberhumer) is a lossless data compression algorithm designed with decompression speed in mind: it is capable of fast compression and even faster decompression. GZIP creates smaller backup files at the cost of higher CPU usage to achieve a higher compression ratio; since a higher compression ratio is its main focus, it makes for a slower backup process. Do not select the None compression option, since it creates large backups without compression. With the None method, a 200 GB RAW disk image with 50 GB used will produce a roughly 200 GB backup image; with compression turned on, the backup image size will be around 70-80 GB.

Mode

Typically, backups of running virtual machines occur with the Snapshot option. Do not confuse this Snapshot option with live snapshots of a VM. The Snapshot mode allows live backup while the virtual machine is turned on, while a live snapshot captures the state of the virtual machine at a certain point in time. With the Suspend or Stop mode, the backup task will try to suspend the running virtual machine, or forcefully stop it, prior to commencing the full backup. After the backup is done, Proxmox resumes or powers up the VM. Since Suspend only freezes the VM during the backup, it has less downtime than the Stop mode, because the VM does not need to go through an entire reboot cycle. Both the Suspend and Stop mode backups can be used for VMs that can tolerate partial or full downtime without disrupting regular infrastructure operation, while the Snapshot mode is used for VMs whose downtime would have a significant impact.
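Behind the GUI, each scheduled job is written to /etc/pve/vzdump.cron as an ordinary cron line, which ties together the day, start time, selection, e-mail, compression, and mode options described above. A hypothetical entry (VM IDs, storage name, and e-mail address are examples) for a Saturday 02:30 snapshot-mode backup with LZO compression might look like this:

30 2 * * 6 root vzdump 101 102 --quiet 1 --storage nfs-backup --mode snapshot --compress lzo --mailto admin@example.com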

OpenVZ Container Administration

Packt
11 Nov 2014
11 min read
In this article by Mark Furman, the author of OpenVZ Essentials, we will go over the various aspects of OpenVZ administration. Some of the things we are going to cover in this article are as follows:

Listing the containers that are running on the server
Starting, stopping, suspending, and resuming containers
Destroying, mounting, and unmounting containers
Turning quotas on and off
Creating snapshots of containers in order to back up and restore a container on another server

(For more resources related to this topic, see here.)

Using vzlist

The vzlist command is used to list the containers on a node. When you run vzlist on its own, without any options, it will only list the containers that are currently running on the system:

vzlist

In the previous example, we used the vzlist command to list the containers that are currently running on the server.

Listing all the containers on the server

If you want to list all the containers on the server instead of just the containers that are currently running, add -a after vzlist. This tells vzlist to include every container that has been created on the node in its output:

vzlist -a

In the previous example, we used the vzlist command with the -a flag to tell vzlist that we want to list all of the containers that have been created on the server.

The vzctl command

The next command that we are going to cover is the vzctl command. This is the primary command that you are going to use when you want to perform tasks with the containers on the node. The initial functions of the vzctl command that we will go over are how to start, stop, and restart a container.

Starting a container

We use vzctl to start a container on the node. To start a container, run the following command:

vzctl start 101
Starting Container ...
Setup slm memory limit
Setup slm subgroup (default)
Setting devperms 20002 dev 0x7d00
Adding IP address(es) to pool:
Adding IP address(es): 192.168.2.101
Hostname for Container set: gotham.example.com
Container start in progress...

In the previous example, we used the vzctl command with the start option to start the container 101.

Stopping a container

To stop a container, run the following command:

vzctl stop 101
Stopping container ...
Container was stopped
Container is unmounted

In the previous example, we used the vzctl command with the stop option to stop the container 101.

Restarting a container

To restart a container, run the following command:

vzctl restart 101
Stopping Container ...
Container was stopped
Container is unmounted
Starting Container...

In the previous example, we used the vzctl command with the restart option to restart the container 101.

Using vzctl to suspend and resume a container

The following set of commands uses vzctl to suspend and resume a container. When you use vzctl to suspend a container, it saves the current state of the container to a dump file. You can then use vzctl to resume the container to the saved state it was in before it was suspended.

Suspending a container

To suspend a container, run the following command:

vzctl suspend 101

In the previous example, we used the vzctl command with the suspend option to suspend the container 101.

Resuming a container

To resume a container, run the following command:

vzctl resume 101

In the previous example, we used the vzctl command with the resume option to resume operations on the container 101.
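Since vzctl works on one container ID at a time, routine tasks across many containers are usually scripted around vzlist. A small sketch, assuming a vzlist build that supports the -H (no header) and -o ctid (ID column only) output options, that restarts every running container on the node:

for ctid in $(vzlist -H -o ctid); do
    vzctl restart "$ctid"
done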
In order to get resume or suspend to work, you may need to enable several kernel modules by running the following commands:

modprobe vzcpt
modprobe vzrst

Destroying a container

You can destroy a container that you created by using the destroy argument with vzctl. This removes all the files, including the configuration file and the directories created for the container. In order to destroy a container, you must first stop it from running. To destroy a container, run the following command:

vzctl destroy 101
Destroying container private area: /vz/private/101
Container private area was destroyed.

In the previous example, we used the vzctl command with the destroy option to destroy the container 101.

Using vzctl to mount and unmount a container

You are able to mount and unmount a container's private area, located at /vz/root/ctid, which provides the container with the root filesystem that exists on the server. Mounting and unmounting containers comes in handy when you have trouble accessing the filesystem for your container.

Mounting a container

To mount a container, run the following command:

vzctl mount 101

In the previous example, we used the vzctl command with the mount option to mount the private area for the container 101.

Unmounting a container

To unmount a container, run the following command:

vzctl umount 101

In the previous example, we used the vzctl command with the umount option to unmount the private area for the container 101.

Disk quotas

Disk quotas allow you to define specific limits for your container, including the size of the filesystem and the number of inodes that are available for use.

Setting quotaon and quotaoff for a container

You can manually start and stop the container's disk quota by using the quotaon and quotaoff arguments with vzctl.

Turning on disk quota for a container

To turn on disk quota for a container, run the following command:

vzctl quotaon 101

In the previous example, we used the vzctl command with the quotaon option to turn on disk quota for the container 101.

Turning off disk quota for a container

To turn off disk quota for a container, run the following command:

vzctl quotaoff 101

In the previous example, we used the vzctl command with the quotaoff option to turn off disk quota for the container 101.

Setting disk quotas with vzctl set

You are able to set the disk quotas for your containers using the vzctl set command. With this command, you can set the disk space, disk inodes, and the quota time. To set the disk space for container 101 to 2 GB, use the following command:

vzctl set 101 --diskspace 2000000:2200000 --save

In the previous example, we used the vzctl set command to set the disk space quota to 2 GB with a 2.2 GB barrier. The two values, separated by a : symbol, are the soft limit and the hard limit. The soft limit in the example is 2000000 and the hard limit is 2200000. The soft limit can be exceeded up to the value of the hard limit, while the hard limit can never be exceeded. OpenVZ refers to soft limits as barriers and hard limits as limits.

To set the disk inodes for container 101 to 1 million inodes, use the following command:

vzctl set 101 --diskinodes 1000000:1100000 --save

In the previous example, we used the vzctl set command to set the disk inode limits to a soft limit (barrier) of 1 million inodes and a hard limit (limit) of 1.1 million inodes.
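Multiple quota flags can also be combined into a single vzctl set call, and the effect can be checked from inside the container with vzctl exec. A minimal sketch using the example values above (the df commands simply report the space and inode limits as seen from inside the container):

vzctl set 101 --diskspace 2000000:2200000 --diskinodes 1000000:1100000 --save
vzctl exec 101 df -h
vzctl exec 101 df -i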
To set the quota time, the period of time in seconds during which the container is allowed to exceed the soft limit values of the disk and inode quotas, use the following command:

vzctl set 101 --quotatime 900 --save

In the previous example, we used the vzctl command to set the quota time to 900 seconds, or 15 minutes. This means that once the container's soft limit is exceeded, the container may stay over quota, up to the value of the hard limit, for 15 minutes before the value is reported as over quota.

Further use of vzctl set

The vzctl set command allows you to make modifications to the container's config file without the need to edit the file manually. We are going to go over a few of the options that are essential to administering the node.

--onboot

The --onboot flag allows you to set whether or not the container will be booted when the node boots. To set the onboot option, use the following command:

vzctl set 101 --onboot yes --save

In the previous example, we used the vzctl command with the set option and the --onboot flag to enable the container to boot automatically when the server is rebooted, and then saved the setting to the container's configuration file.

--bootorder

The --bootorder flag allows you to change the boot priority of the container. The higher the value given, the sooner the container will start when the node is booted. To set the bootorder option, use the following command:

vzctl set 101 --bootorder 9 --save

In the previous example, we used the vzctl command with the set option and the --bootorder flag to change the priority of the order in which the container is booted, and then saved the option to the container's configuration file.

--userpasswd

The --userpasswd flag allows you to change the password of a user that belongs to the container. If the user does not exist, the user will be created. To set the userpasswd option, use the following command:

vzctl set 101 --userpasswd admin:changeme

In the previous example, we used the vzctl command with the set option and the --userpasswd flag to change the password for the admin user to changeme.

--name

The --name flag allows you to give the container a name that, once assigned, can be used in place of the CTID value when using vzctl. This makes containers easier to keep track of: instead of remembering the container ID, you only need to remember the container name to access the container. To set the name option, use the following command:

vzctl set 101 --name gotham --save

In the previous example, we used the vzctl command with the set option to set our container 101 to use the name gotham, and then saved the changes to the container's configuration file.

--description

The --description flag allows you to add a description to the container to give an idea of what the container is for. To use the description option, use the following command:

vzctl set 101 --description "Web Development Test Server" --save

In the previous example, we used the vzctl command with the set option and the --description flag to add the description "Web Development Test Server" to the container.

--ipadd

The --ipadd flag allows you to add an IP address to the specified container. To use the ipadd option, use the following command:

vzctl set 101 --ipadd 192.168.2.103 --save

In the previous example, we used the vzctl command with the set option and the --ipadd flag to add the IP address 192.168.2.103 to container 101, and then saved the changes to the container's configuration file.
--ipdel

The --ipdel flag allows you to remove an IP address from the specified container. To use the ipdel option, use the following command:

vzctl set 101 --ipdel 192.168.2.103 --save

In the previous example, we used the vzctl command with the set option and the --ipdel flag to remove the IP address 192.168.2.103 from the container 101, and then saved the changes to the container's configuration file.

--hostname

The --hostname flag allows you to set or change the hostname of your container. To use the hostname option, use the following command:

vzctl set 101 --hostname gotham.example.com --save

In the previous example, we used the vzctl command with the set option and the --hostname flag to change the hostname of the container to gotham.example.com.

--disable

The --disable flag allows you to disable a container's startup. When this option is in place, you will not be able to start the container until the option is removed. To use the disable option, use the following command:

vzctl set 101 --disable yes --save

In the previous example, we used the vzctl command with the set option and the --disable flag to prevent the container 101 from starting, and then saved the changes to the container's configuration file.

--ram

The --ram flag allows you to set the physical page limit of the container, which helps to regulate the amount of memory that is available to the container. To use the ram option, use the following command:

vzctl set 101 --ram 2G --save

In the previous example, we set the physical page limit to 2 GB using the --ram flag.

--swap

The --swap flag allows you to set the amount of swap memory that is available to the container. To use the swap option, use the following command:

vzctl set 101 --swap 1G --save

In the previous example, we set the swap memory limit for the container to 1 GB using the --swap flag.

Summary

In this article, we learned to administer the containers created on a node by using the vzctl command, and to list the containers on the server with the vzlist command. The vzctl command has a broad range of flags that allow you to perform many actions on a container: you can start, stop, restart, create, and destroy a container; suspend and resume the current state of the container; mount and unmount a container; and make changes to the container's config file by using vzctl set.

Resources for Article:

Further resources on this subject:
Basic Concepts of Proxmox Virtual Environment [article]
A Virtual Machine for a Virtual World [article]
Backups in the VMware View Infrastructure [article]

Content Switching using Citrix Security

Packt
10 Apr 2013
8 min read
(For more resources related to this topic, see here.)

Getting ready

We will start with the packet flow of NetScaler and where content switching comes into play. The following diagram is self-explanatory (it is not the entire packet flow to the receiver's endpoint; the focus here is only on CS and LB):

The content switching vserver can be used for the HTTP/HTTPS/TCP and UDP protocols, and it can direct traffic only to another vserver, not to a backend service directly. The content switching vserver does not need an LB vserver bound to it for its status to be UP. Even with nothing bound to the CS vserver, the status will show UP (this comes in handy when you want to blackhole unwanted traffic). Hence, it is always recommended to check whether the load balancing vservers that are bound to the content switching vserver are up and running. If you want to avoid the preceding condition, the following CLI command will help you achieve it (by default, the value is disabled):

root@NetScaler> add cs vserver <name> <serviceType> (<IPAddress>) [-stateupdate ( ENABLED | DISABLED )]

Content switching can be done based on the following client attributes:

Mobile user/PC
Images/videos
Dynamic/static content
Client with/without cookies
Geographical locations
Per VLAN

Similarly, server-side differentiations can also be made based on the following attributes:

Server speed and capacity
Source/destination port
Source/destination IP
SSL/HTTP

Citrix also has an additional feature (starting from NetScaler version 9.3) that dynamically selects the load balancing vserver based on any criteria or condition provided in the CS action/policy:

> add cs action <name> -targetLBVserver <string-expression>
> add cs policy <policyName> -rule <RULEValue> -action <actionName>

The policy is then bound to the CS vserver.

CS vservers can be configured to process URLs in a case-sensitive manner. By default, this option is ON:

> set cs vserver CSVserver -caseSensitive ON

The load balancing vserver bound to the CS vserver need not have any IP address configured unless it is also used for direct access.

How to do it...

We shall focus on a few case studies that we commonly come across and that can be solved with the help of content switching.

Case 1: Customer ABC accesses an online shopping portal and gets redirected to a secure connection at the payment gateway. For this scenario, an HTTP LB vserver is used and is bound to the CS vserver, which is on HTTPS:

The configuration in the preceding screenshot shows that a CS policy as well as a responder policy is bound to the CS vserver named testVserver. The CS policy directs the traffic to the target LB vserver (if there are no CS policies bound at all, traffic goes to the default LB vserver; this default LB vserver should be configured on the CS vserver). The responder policy, if bound to the CS vserver, works on HTTP requests before any CS policy is matched. The configuration is verified by using show cs vserver <vserver name>. A packet capture taken on NetScaler will clearly show the redirect from HTTP to HTTPS as <HTTP 302>. Any traffic that doesn't match a specific bound CS policy uses the default policy. If there is no default policy, the user will get an HTTP/1.1 503 Service Unavailable error message.
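A minimal CLI sketch of the Case 1 layout; all the names, the VIP address, and the /payment path are hypothetical examples rather than the exact configuration behind the screenshot:

> add lb vserver lb_payment_http HTTP 0.0.0.0 0
> add cs vserver cs_shop_ssl SSL 10.10.10.10 443
> add cs action act_to_payment -targetLBVserver lb_payment_http
> add cs policy pol_payment -rule "HTTP.REQ.URL.PATH.STARTSWITH(\"/payment\")" -action act_to_payment
> bind cs vserver cs_shop_ssl -policyName pol_payment -priority 100

The payment services would then be bound to lb_payment_http and a server certificate bound to cs_shop_ssl; the HTTP-to-HTTPS redirect itself comes from the responder policy discussed above.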
Case 2: The customer Star Networks has a single web application that spans two domains, namely www.starnetworks.com and www.starnetworks.com.edu, and has a content switching setup that works fine when accessing www.starnetworks.com but throws an error when accessing www.starnetworks.com.edu. This happens because the preceding domains are not the same; they are different, and the certificate bound to the CS vserver covers www.starnetworks.com only. To resolve this issue, we can bind multiple certificates to the CS vserver with the Server Name Indication (SNI) option enabled. The SNI option can be enabled in the SSL Parameters tab (this tab appears only if the SSL protocol is chosen while creating the vserver). The CLI commands to bind the certificates with SNI are as follows:

> bind ssl vserver star_cs_vserver -certkeyname <certkeyName1> -SNICert
> bind ssl vserver star_cs_vserver -certkeyname <certkeyName2> -SNICert

For each domain added, NetScaler will establish a secure channel between itself and the client. With this solution, you can avoid configuring multiple CS vservers.

Case 3: A customer has a large pool of IP subnets that needs categorizing, and it would be a next-to-impossible task to configure that number of content switching policies. How do they go about deploying this scenario?

The solution is as follows:

A database file should be created that includes the IP address range and the domain:

> shell
# cd /var/netscaler/locdb
# vi test.db

Run the following command to apply the changes made to the database file:

> add locationfile test.db

Bind the CS policy with an expression such as the following:

CLIENT.IP.SRC.MATCHES_LOCATION("star.*.*.*.*.*")

How it works...

In all three preceding scenarios, NetScaler analyzes the incoming traffic directed to the CS VIP and parses through the bound CS policies, if any. If a match is found, the traffic goes to the target LB vserver. If there are other policies bound (for example, a responder policy or a rewrite policy), the responder policy gets executed even before the CS policy is evaluated (since responder policies are usually applied to HTTP requests). However, rewrite policies can be bound either at the CS or the LB level, depending on whether the request or the response needs to be modified. To recap the case studies mentioned before: the first case shows a simple redirect from HTTP to HTTPS using a responder policy bound at the CS level; the second case shows how multiple certificates with the SNI option are used to solve domain differences that would otherwise cause issues; and the final case study shows a basic but handy setting to map IP address ranges to target load balancing vservers. An important thing to note: there are scenarios where the vserver and the services bound to it are on different ports altogether (for example, the HTTP LB VIP might listen on port 80 while the services are on port 8080). In such cases, the redirectPortRewrite feature should be enabled.

There's more...
This section concentrates on tidbits and troubleshooting techniques.

Tips and troubleshooting

We can start by checking the output of show cs vserver and show lb vserver to see whether the services bound to them are up and running:

root@ns> show cs vserver cs_star_vserver
1) cs_star_vserver (IP_ADDRESS_HERE:80) - HTTP Type: CONTENT
State: UP
Client Idle Timeout: 180 sec
Down state flush: ENABLED
Port Rewrite : DISABLED
Default: lb_vserver Content Precedence: RULE
Vserver IP and Port insertion: OFF
Case Sensitivity: OFF

If there are responder and rewrite policies, we can check whether the number of hits on those policies is incrementing. Packet captures (using Wireshark) taken on the server and on NetScaler, and in some cases on the client, will show the packet flow in depth.

The Down state flush feature of NetScaler is useful for admins planning their downtime in advance. This feature is enabled by default at the vserver and service level. When the feature is enabled, connections that are already open and established are terminated and users have to retry their connections; only requests that are already being processed are honored. When the feature is disabled, open and established connections are honored, and no new connections are accepted at that time. If enabled at the vserver level, and if the state of the vserver is DOWN, the vserver flushes both the client-facing and server-facing connections that are linked; otherwise, it terminates only the client-facing connections. At the service level, if the service is marked as DOWN, only the server-facing connections are flushed.

There is another option, on the Advanced tab of the CS/LB vserver, to direct excess traffic to a backup vserver. In cases where the backup vserver also overflows, there is an option to use a redirect URL, which is also found on the Advanced tab of the CS/LB vserver.

Summary

This article has explained the implementation of content switching using Citrix security.

Resources for Article:

Further resources on this subject:
Managing Citrix Policies [Article]
Getting Started with XenApp 6 [Article]
Getting Started with the Citrix Access Gateway Product Family [Article]