
How-To Tutorials - Virtualization

115 Articles

Interactive dashboard with vRealize Operations Manager [Tutorial]

Vijin Boricha
03 Jul 2018
14 min read
Creating a dashboard is a relatively simple exercise; creating a good dashboard requires tuning and tweaking. The trickiest part is placing all the relevant information on a single pane of glass, because the number one goal of a dashboard is to get all the information across at a glance. This is an excerpt from Mastering vRealize Operations Manager - Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater.

Of the 46 widgets vRealize Operations 6.6 has available, we will only use a handful of them regularly. The most commonly used widgets, from experience, are the scoreboard, metric selector, heat map, object list, and metric chart. The rest are generally only used for specific use cases.

There are essentially two types of dashboards we can create: interactive and static. An interactive dashboard is typically used for troubleshooting or similar activities where you expect the user to interact with widgets to get the information they are after. A static or display dashboard typically uses self-providing widgets, such as scoreboards and heatmaps, that are designed for display monitors or other situations where an administrator is keeping an eye on environment changes. Each widget can be a self-provider, which means we set the information we want to display directly in the widget. The other option is to set up interactions and have other widgets provide information based on an object or metric selection in another widget.

In this article, we will focus on the interactive dashboard. We will create a dashboard that looks at vSphere cluster information and shows, at a glance, the overall health and general cluster information an administrator would need. Working through this will give you the knowledge needed to create any type of dashboard, and the dashboard we are about to create shows how to configure the more common widgets in a way that can be replicated on a greater scale. When creating a dashboard, you will generally go through the following steps:

1. Start the New Dashboard wizard from the Actions menu.
2. Configure the general dashboard settings.
3. Add and configure individual widgets.
4. (Optional) Configure widget interactions.
5. (Optional) Configure dashboard navigation.

Creating an interactive dashboard

You can create a dashboard by using the New Dashboard wizard. Alternatively, you can clone an existing dashboard and modify the clone. Perform the following steps to create a new dashboard:

1. Navigate to the Dashboards page, click Actions, and then click Create Dashboard, as shown in the following screenshot.
2. Under Dashboard Configuration, give the dashboard a meaningful name and provide a description. If you click Yes for the Is default setting, the dashboard appears on the homepage when you log in. By default, the Recommendations dashboard appears on the home page when a user logs in; you can change the default dashboard.
3. Click on Widget List to bring up all the available widgets, then click and drag the widgets we need from the left pane to the right. We will use the following:
   - Object List
   - Metric Picker
   - Metric Chart
   - Generic Scoreboard
   - Heat Map

You can arrange widgets in the dashboard by dragging them to the desired column position.
The left pane and the right pane are collapsed so that you have more room for your dashboard workspace, which is the center pane. To edit a widget, click on the little pen icon at the top of the widget.

The Object List

The Object List widget configuration options, as shown in the following screenshot, include some of the more common options, such as Title, Refresh Content, and Refresh Interval. Options also exist that are specific to this widget:

- Mode: You can select Self, Children, or Parent mode. This setting is used by widget interactions.
- Auto Select First Row: This enables you to select whether or not to start with the first row of data.
- Select which tags to filter: This enables you to select objects from an object tree to observe. For example, you can choose to observe information about objects managed by the vCenter Server instance named VCVA01.

You can add different metrics using the Additional Column option during widget configuration. Using the Additional Column pane, you can add metrics that are specific to each object in the data grid columns.

Perform the following steps to edit the Object List widget in our example dashboard: click the pen icon on the Object List widget and the Edit Object List window will appear. Here, edit the Title, select On for Auto Select First Row, and select Cluster Compute Resource under Object Types Tag. We should have something similar to the following screenshot. Click Save to continue. With tag selection, multiple tags can be selected; if this is done, only objects that fall under both tag types will be shown in the widget.

The next thing we want to do is click on Widget Interactions in the left pane; this is where we link the widgets. For example, if we select a virtual machine in an object list widget, any linked widgets change to display the information of that object. We will see a Selected Object(s) drop-down list followed by a green arrow pointing to our widgets, meaning that whatever we select in the drop-down list will be linked to the associated widget. Here our new Cluster List will feed Metric Picker, Scoreboard, and Heatmap, while Metric Picker will feed Metric Chart. Note that a widget like Metric Chart can be fed by more than one widget. Click APPLY INTERACTIONS and we should end up with something similar to the following screenshot:

The Metric Picker

Now, if we select a metric under the Metric Picker widget, it should show the metric in the Metric Chart widget, as displayed in the following screenshot. Metric Picker will contain all the available metrics for the selected object, such as an ESXi host or a virtual machine.

The Heatmap

Next up, we will edit the Heatmap widget. For this example, we will use the Heatmap widget to display the capacity remaining for the datastores attached to the vSphere cluster. This is the best way to see at a glance that none of the datastores are over 90% used or getting close. We need to make the following changes:

- Give the widget a new name describing what we are trying to achieve.
- Change Group By to Cluster Compute Resource - this is what we want the parent container to be.
- Change mode to Instance - this mode type is best used when interacting with other widgets to get its objects.
- Change Object Type to Datastore - this is the object that we want displayed.
- Change Attribute Kind to Disk Space | Capacity Remaining (%) - the metric of the object that we want to use.
- Change Colors to 0 (Min), 20 (Max) - because we really only want to know if a datastore is getting close to the threshold, minimizing the range gives us more granular colors.
- Swap the colors so that red is on the left and green is on the right. This is done by clicking on the little color square at each end and picking a new color. We do this because the metric is capacity remaining, so 0% remaining must be red.

Click Save and we should now have something similar to the following screenshot, with each square representing a datastore. Move the mouse over each box to reveal more detail about the object.

The Scoreboard

Time to modify the last widget. This one will be a little more complicated due to how we display what we want while remaining interactive. When we configured the widget interactions, we noticed that the Scoreboard widget was populated automatically with a bunch of metrics, as shown in the following screenshot.

Now, let's go back to our dashboard creation and edit the Scoreboard widget. We will notice quite a lot of configuration options compared to the others, most of which control how the boxes are laid out, such as the number of columns, box size, and rounding of decimals. What we want to do for this widget is:

- Name the scoreboard something meaningful.
- Round the decimals to 1 - this cuts down the number of decimal places returned on the displayed value.
- Under Metric Configuration, choose the Host-Util file from the drop-down list.

We should now see something similar to the following screenshot. But what about the object selection you may have noticed in the lower half of the Scoreboard widget? These are only used if we make the widget a self-provider, which we can see as an option at the top left of the edit window. We can choose objects and metrics, but they are ignored when Self Provider is set to Off. If we now click Save we should see the new configuration of the scoreboard widget, as shown in the following screenshot. I've also changed the Visual Theme to Original in the scoreboard widget configuration options to change the way the scoreboard visualizes the information.

The scoreboard widget may not always display the information we need. To get the widget to display the information we want while continuing to react to our selections in the Cluster List widget, we have to create a metric configuration (XML) file.

Metric Configuration Files (XML)

Most widgets are edited through the GUI with the objects and metrics we want displayed, but some require a metric configuration file to define what metrics the widget should display. Metric configuration files create a custom set of metrics for the customization of supported widgets with meaningful data. They store the metric attribute keys in XML format.
These widgets support customization using metric configuration files:

- Scoreboard
- Metric Chart
- Property List
- Rolling View Chart
- Sparkline Chart
- Topology Graph

To keep this simple, we will configure four metrics to be displayed:

- CPU usage for the cluster in %
- CPU demand for the cluster
- Memory ballooning
- CPU usage for the cluster in MHz

Perform the following steps to create a metric configuration file. Open a text editor, add the following code, and save it as an XML file; in this case we will call it clusterexample.xml:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<AdapterKinds>
  <AdapterKind adapterKindKey="VMWARE">
    <ResourceKind resourceKindKey="ClusterComputeResource">
      <Metric attrkey="cpu|capacity_usagepct_average" label="CPU" unit="%" yellow="50" orange="75" red="90" />
      <Metric attrkey="cpu|demandPct" label="CPU Demand" unit="%" yellow="50" orange="75" red="90" />
      <Metric attrkey="cpu|usagemhz_average" label="CPU Usage" unit="GHz" yellow="8" orange="16" red="20" />
      <Metric attrkey="mem|vmmemctl_average" label="Balloon Mem" unit="GB" yellow="100" orange="150" red="200" />
    </ResourceKind>
  </AdapterKind>
</AdapterKinds>

Using WinSCP or a similar product, upload this file to the following location on the vRealize Operations virtual appliance:

/usr/lib/vmware-vcops/tomcat-web-app/webapps/vcops-web-ent/WEB-INF/classes/resources/reskndmetrics

In this location, you will notice some built-in sample XML files.
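If you prefer the command line over WinSCP, you can sanity-check and upload the file from a Linux or macOS workstation. The following is a minimal sketch; the appliance hostname vrops.lab.local is a placeholder, and it assumes SSH access to the appliance has been enabled:

# Verify the XML is well formed before uploading (xmllint ships with libxml2)
xmllint --noout clusterexample.xml

# Copy the file to the widget configuration directory on the appliance
scp clusterexample.xml root@vrops.lab.local:/usr/lib/vmware-vcops/tomcat-web-app/webapps/vcops-web-ent/WEB-INF/classes/resources/reskndmetrics/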
Alternatively, you can create the XML file from the vRealize Operations user interface. To do so, navigate to Administration | Configuration, and then Metric Configuration.

Now let's go back to our dashboard creation and edit the Scoreboard widget. Under Metric Configuration, choose the clusterexample.xml file that we just created from the drop-down list. Click Save to save the configuration. We have now completed the new dashboard; click Save on the bottom right to save the dashboard. We can go back and edit this dashboard whenever we need to. The new dashboard will now be available on the home page, as shown in the following screenshot.

For the Scoreboard widget we used an XML file so that the widget displays the metrics we would like to see when an object is selected in another widget. How can we get the correct metric and adapter names to use in this file? Glad you asked. The simplest way is to create a non-interactive dashboard containing the widget we require, populated with all the information we want to display in our interactive one. For example, let's quickly create a temp dashboard with only one Scoreboard widget and populate it by manually selecting the objects and metrics with Self Provider set to On:

1. Create another dashboard and drag and drop a single Scoreboard widget.
2. Edit the Scoreboard widget and configure it with all the information we would like.
3. Search for an object in the middle pane and select the metrics we want in the right pane.
4. Configure the box label and Measurement Unit.

A thing to note here is that we have selected the memory balloon metric, as shown in the following screenshot, but we have given it a label of GB. This is because of a new feature in 6.0: it automatically upscales the metrics shown on a scoreboard. This also goes for datastore GB to TB, CPU MHz to GHz, and network throughput from KBps to MBps. Typically, in 5.x we would create super metrics to make this happen. The downside is that the badge color still has to be set in the metric's base format.

Save this dashboard once we have the metrics we want. Locate it in our dashboard list, select it, click on the little cog, and select Export Dashboards, as shown in the following screenshot. This will automatically download a file called Dashboard<Date>.json. Open this file in a text editor and look through it; we will see all the information we require to write our XML interaction file. First off are our resourceKindKey and adapterKindKey, as shown in the following screenshot. These are pretty self-explanatory: resourceKind is the cluster resource, and the adapter is the one collecting the metrics, in this case the inbuilt vCenter adapter called VMWARE. Next are our resources; as we can see from the following screenshot, we have metricKey, which is the most important one, as well as the color settings, unit, and label. There it is - how we can get the information we require for XML files:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<AdapterKinds>
  <AdapterKind adapterKindKey="VMWARE">
    <ResourceKind resourceKindKey="ClusterComputeResource">
      <Metric attrkey="cpu|capacity_usagepct_average" label="CPU" unit="%" yellow="50" orange="75" red="90" />
      <Metric attrkey="cpu|demandPct" label="CPU Demand" unit="%" yellow="50" orange="75" red="90" />
      <Metric attrkey="cpu|usagemhz_average" label="CPU Usage" unit="GHz" yellow="8" orange="16" red="20" />
      <Metric attrkey="mem|vmmemctl_average" label="Balloon Mem" unit="GB" yellow="100" orange="150" red="200" />
    </ResourceKind>
  </AdapterKind>
</AdapterKinds>

Any widget with the Metric Configuration setting available can use the XML files you create, in the format shown in the preceding code. An XML file can also contain multiple adapter kinds, as you may require metrics from different adapters.

Today, we learned to create an interactive dashboard based on selections made within widgets. You also unraveled the mystery of the metric configuration XML file and how to get the information you require into it. To learn more about super metrics and when to use them, check out the book Mastering vRealize Operations Manager - Second Edition.

What to expect from vSphere 6.7
How to ace managing the Endpoint Operations Management Agent with vROps
Troubleshooting techniques in vRealize Operations components


Chaos Conf 2018 Recap: Chaos engineering hits maturity as community moves towards controlled experimentation

Richard Gall
12 Oct 2018
11 min read
Conferences can sometimes be confusing. Even at the most professional and well-planned conferences, you sometimes just take a minute and think what's actually the point of this? Am I learning anything? Am I meant to be networking? Will anyone notice if I steal extra food for the journey home?

Chaos Conf 2018 was different, however. It had a clear purpose: to take the first step in properly forging a chaos engineering community. After almost a decade somewhat hidden in the corners of particularly innovative teams at Netflix and Amazon, chaos engineering might feel that its time has come. As software infrastructure becomes more complex and less monolithic, and as business and consumer demands expect more of the software systems that have become integral to the very functioning of life, resiliency has never been more important - or more challenging to achieve.

But while it feels like the right time for chaos engineering, it hasn't quite established itself in the mainstream. This is something the conference host, Gremlin, a platform that offers chaos engineering as a service, is acutely aware of. On the one hand it's actively helping push chaos engineering into the hands of businesses, but on the other its growth and success, backed by millions in VC cash (and faith), depend upon chaos engineering becoming a mainstream discipline in the DevOps and SRE worlds.

It's perhaps for this reason that the conference felt so important. It was, according to Gremlin, the first ever public chaos engineering conference. And while it was relatively small in the grand scheme of many of today's festival-esque conferences attended by thousands of delegates (Dreamforce, the Salesforce conference, was also running in San Francisco in the same week), the fact that the conference had quickly sold out all 350 of its tickets - with more people on waiting lists - indicates that this was an event that had been eagerly awaited. And with some big names from the industry - notably Adrian Cockcroft from AWS and Jessie Frazelle from Microsoft - Chaos Conf had the air of an event that had outgrown its insider status before it had even begun. The renovated cinema and bar in San Francisco's Mission District, complete with pinball machines upstairs, was the perfect container for a passionate community that had grown out of the clean corporate environs of Silicon Valley to embrace the chaotic mess that resembles modern software engineering.

Kolton Andrus sets out a vision for the future of Gremlin and chaos engineering

Chaos Conf was quick to deliver big news. The keynote speech, by Gremlin co-founder Kolton Andrus, launched Gremlin's brand new Application Level Fault Injection (ALFI) feature, which makes it possible to run chaos experiments at an application level. Andrus broke the news by building towards it with a story of the progression of chaos engineering. Starting with Chaos Monkey, the tool first developed by Netflix, and moving from infrastructure to network, he showed how, as chaos engineering has evolved, it requires and facilitates different levels of control over and insight into how your software works. "As we're maturing, the host level failures and the network level failures are necessary to building a robust and resilient system, but not sufficient. We need more - we need a finer granularity," Andrus explained. This is where ALFI comes in. By allowing Gremlin users to inject failure at an application level, it gives them more control over the 'blast radius' of their chaos experiments.
The narrative Andrus was setting was clear, and would ultimately inform the ethos of the day: chaos engineering isn't just about chaos, it's about controlled experimentation to ensure resiliency. Doing that requires a level of intelligence - technical and organizational - about how the various components of your software work, and how humans interact with them.

Adrian Cockcroft on the importance of historical context and domain knowledge

Adrian Cockcroft (@adrianco), VP at AWS, followed Andrus' talk. In it he took the opportunity to set the broader context of chaos engineering, highlighting how tackling system failures is often a question of culture - how we approach system failure and think about our software. "Developers love to learn things from first principles," he said. "But some historical context and domain knowledge can help illuminate the path and obstacles." If this sounds like Cockcroft was about to stray into theoretical territory, he certainly didn't: he offered a taxonomy of failure that provides a practical framework for thinking through potential failure at every level. He also touched on how he sees the future of resiliency evolving, focusing on:

- Observability of systems
- Epidemic failure modes
- Automation and continuous chaos

The crucial point Cockcroft makes is that cloud is the big driver for chaos engineering. "As datacenters migrate to the cloud, fragile and manual disaster recovery will be replaced by chaos engineering," read one of his slides. But more than that, the cloud also paves the way for the future of the discipline, one where 'chaos' is simply an automated part of the test and deployment pipeline.

Selling chaos engineering to your boss

Kriss Rochefolle, DevOps engineer and author of one of the best-selling DevOps books in French, delivered a short talk on how engineers can sell chaos to their boss. He took on the assumption that a rational proposal, informed by ROI, is the best way to sell chaos engineering. He suggested instead that engineers need to play into emotions, presenting chaos engineering as a method for tackling and minimizing the fear of (inevitable) failure. Follow Kriss on Twitter: @crochefolle

Walmart and chaos engineering

Vilas Veraraghavan, Walmart's Director of Engineering, was keen to clarify that Walmart doesn't practice chaos. Rather, it practices resiliency - chaos engineering is simply a method the organization uses to achieve that. It was particularly useful to note the entire process that Vilas' team adopts when it comes to resiliency, which has largely developed out of Vilas' own work building his team from scratch. You can learn more about how Walmart is using chaos engineering for software resiliency in this post.

Twitter's Ronnie Chen on diving and planning for failure

Ronnie Chen (@rondoftw) is an engineering manager at Twitter. But she didn't talk about Twitter. In fact, she didn't even talk about engineering. Instead she spoke about her experience as a technical diver. By talking about her experiences, Ronnie was able to make a number of vital points about how to manage and tackle failure as a team. With mortality rates so high in diving, it's a good example of the relationship between complexity and risk. Chen made the point that things don't fail because of a single catalyst. Instead, failures - particularly fatal ones - happen because of a 'failure cascade'.
Chen never made the link explicit, but the comparison is clear: the ultimate outcome (that is, success or failure) is impacted by a whole range of situational and behavioral factors that we can't afford to ignore. Chen also made the point that, in diving, inexperienced people should be at the front of an expedition. "If your inexperienced people are leading, they're learning and growing, and being able to operate with a safety net... when you do this, all kinds of hidden dependencies reveal themselves... every undocumented assumption, every piece of ancient team lore that you didn't even know you were relying on, comes to light."

Charity Majors on the importance of observability

Charity Majors (@mipsytipsy), CEO of Honeycomb, talked in detail about the key differences between monitoring and observability. As with other talks, context was important: a world where architectural complexity has grown rapidly in the space of a decade. Majors made the point that this increase in complexity has taken us from having known unknowns in our architectures to many more unknown unknowns in a distributed system. This means that monitoring is dead - it simply isn't sophisticated enough to deal with the complexities and dependencies within a distributed system. Observability, meanwhile, allows you to understand "what's happening in your systems just by observing it from the outside." Put simply, it lets you understand how your software is functioning from your perspective - almost turning it inside out. Majors then linked the concept of observability to the broader philosophy of chaos engineering - echoing some of the points raised by Adrian Cockcroft in his keynote. But this was her key takeaway: "Software engineers spend too much time looking at code in elaborately falsified environments, and not enough time observing it in the real world." This leads to one conclusion - the importance of testing in production. "Accept no substitute."

Tammy Butow and Ana Medina on making an impact

Tammy Butow (@tammybutow) and Ana Medina (@Ana_M_Medina) from Gremlin took us through how to put chaos engineering into practice - from integrating it into your organizational culture to some practical tests you can run. One of the best examples of putting chaos into practice is Gremlin's concept of 'Failure Fridays', in which chaos testing becomes a valuable step in the product development process - a way to dogfood the product and test how a customer experiences it. Another way Tammy and Ana suggested chaos engineering can be used is to test out new versions of technologies before you properly upgrade in production. To end their talk, they demoed a chaos battle between EKS (Kubernetes on AWS) and AKS (Kubernetes on Azure), running an app container attack, a packet loss attack, and a region failover attack.

Jessie Frazelle on how containers can empower experimentation

Jessie Frazelle (@jessfraz) didn't actually talk that much about chaos engineering. However, like Ronnie Chen's talk, chaos engineering seeped through what she said about bugs and containers. Bugs, for Frazelle, are a way of exploring how things work, and how different parts of a software infrastructure interact with each other: "Bugs are like my favorite thing... some people really hate when they get one of those bugs that turns out to be a rabbit hole and you're kind of debugging it until the end of time... while debugging those bugs I hate them but afterwards, I'm like, that was crazy!"
This was essentially an endorsement of the core concept of chaos engineering: injecting bugs into your software to understand how it reacts. Jessie then went on to talk about containers, joking that they're NOT REAL. This is because they're made up of numerous different component parts, like cgroups, namespaces, and LSMs. She contrasted containers with virtual machines, zones, and jails, which are 'first class concepts' - in other words, real things (Jessie wrote about this in detail last year in this blog post). In practice, this means that whereas containers are like Lego pieces, VMs, zones, and jails are like a pre-assembled Lego set that you don't need to play around with in the same way. From this perspective, it's easy to see how containers are relevant to chaos engineering - they empower a level of experimentation that you simply don't have with other virtualization technologies. "The box says to build the Death Star. But you can build whatever you want."

The chaos ends...

Chaos Conf was undoubtedly a huge success, and a lot of credit has to go to Gremlin for organizing the conference. It's clear that the team care a lot about the chaos engineering community and want it to expand in a way that transcends the success of the Gremlin platform. While chaos engineering might not feel relevant to a lot of people at the moment, it's only a matter of time before its impact is felt. That doesn't mean that everyone will suddenly become a chaos engineer by July 2019, but the cultural ripples will likely be felt across the software engineering landscape. Without Chaos Conf, it would be difficult to see chaos engineering growing as a discipline or set of practices. By sharing ideas and learning how other people work, a more coherent picture of chaos engineering started to emerge, one that can quickly make an impact in ways people wouldn't have expected six months ago. You can watch videos of all the talks from Chaos Conf 2018 on YouTube.


Chaos engineering comes to Kubernetes thanks to Gremlin

Richard Gall
18 Nov 2019
2 min read
Kubernetes causes problems. Just last week Cindy Sridharan wrote on Twitter that while Docker "succeeded... because it was a great developer tool," Kubernetes "decided to be all things tech and not much by way of UX. It was and remains a hostile piece of software to learn, run, operate, maintain." https://twitter.com/copyconstruct/status/1194701905248673792?s=20

That's just a drop in the ocean - you don't have to look hard to find more hot takes, jokes, and memes about how complicated working with Kubernetes can feel. Despite all this, it's certainly here to stay. That makes chaos engineering platform Gremlin's announcement that the platform will offer native support for Kubernetes particularly welcome. Citing container orchestration research done by Datadog in the press release, which indicates the rapid rate of Kubernetes adoption, Gremlin is hoping that it can provide some additional support for users that might be concerned about the platform's complexity. From last year: Gremlin makes chaos engineering with Docker easier with new container discovery feature

Gremlin CTO Matt Fornaciari said: "our goal is to provide SRE and DevOps teams that are building and deploying modern applications with the tools and processes necessary to understand how their systems handle failure, before that failure has the chance to impact customers and business." The new feature is designed to help engineers do exactly that by allowing them "to automate the process of identifying Kubernetes primitives such as nodes and pods," and to select and attack traffic from different services within Kubernetes.

The other important element to all this is that Gremlin wants to make things as straightforward as possible for engineering teams. With a neat and easy-to-use UI, it would seem that, to return to Sridharan's words, the team are eager to make sure their product is "a great developer tool." The tool has already been tried and tested in the wild. Simon Govier, Expedia's Director of Program Management, described how performing chaos experiments on Kubernetes with Gremlin "significantly reduces the amount of time it takes to do fault injection and increases our systems' resilience to failure." Learn more on the Gremlin website.


Creating a VM using VirtualBox - Ubuntu Linux

Packt
15 Apr 2010
12 min read
Packt are due to launch a new Open Source brand, into which future VirtualBox titles will be published. For more information on that launch, look here. In this two-part article by Alfonso V. Romero, author of VirtualBox 3.1: Beginner's Guide, you shall:

- Create your first virtual machine in VirtualBox, using Ubuntu Linux
- Learn about your virtual machine's basic configuration
- Download and install Ubuntu Linux on your virtual machine
- Learn how to start and stop your virtual machine

So let's get on with it...

Getting started

In this article, you'll need to download the Ubuntu Linux Desktop Live CD. It's a pretty big download (approximately 700 MB), so I'd recommend you start downloading it as soon as possible. With a 4 Mbps Internet connection, you'd need approximately 1 hour to download a copy of Ubuntu Linux Desktop. That's why I decided to change the order of some sections in this article. After all, action is what we're looking for, right?

Downloading the Ubuntu Linux Live CD

After finishing the exercise in this section, you can jump straight ahead into the next section while waiting for your Ubuntu Live CD to download. That way, you'll have to wait less time, and your virtual machine will be ready for some action!

Time for action – downloading the Ubuntu Desktop Live CD

In the following exercise I'll show you how to download the Ubuntu Linux Desktop Edition from the official Ubuntu website:

1. Open your web browser, and type http://www.ubuntu.com in the address bar. The Ubuntu Home Page will appear. Click on the Ubuntu Desktop Download link to continue.
2. The Download Ubuntu page will show up next. Ubuntu's most recent version will be selected by default (Ubuntu Desktop 9.10, at the time of this writing). Select a location near you, and click on the Begin download button to continue. The Ubuntu Download page lets you download the 32-bit version automatically. If you want the 64-bit version of Ubuntu 9.10, you'll need to click on the Alternative download options link below the Download locations list box. The exercises in this article use the 32-bit version.
3. You'll be taken to the download page. After a few seconds, your download will start automatically. Select the Save File option in your browser, and click on OK to continue.
4. Now you just have to wait until the download process finishes.

What just happened? I think the exercise pretty much explains itself, so I just want to add that you can also order a free Ubuntu CD or buy one from Ubuntu's website. Just click on the Get Ubuntu link on the front page and follow the instructions. I ordered mine when writing this article to see how long it takes to arrive at my front door. I hope it arrives before I finish the book!

Have a go hero – doing more with the thing

You can try downloading other Ubuntu versions, like Ubuntu 8.04 Hardy Heron. Just click on the Alternative download options link below the Download locations list box, and explore the other options available for download.
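If you'd rather script the download than click through the site, the release images are also served from Ubuntu's release archive. A minimal sketch, assuming the 9.10 32-bit desktop image and its checksum file are still published at releases.ubuntu.com (older releases eventually move to old-releases.ubuntu.com):

# Fetch the Ubuntu 9.10 desktop ISO and its MD5 checksums
wget http://releases.ubuntu.com/9.10/ubuntu-9.10-desktop-i386.iso
wget http://releases.ubuntu.com/9.10/MD5SUMS

# Verify the image against the published checksum before using it
grep ubuntu-9.10-desktop-i386.iso MD5SUMS | md5sum -c -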
Creating your Ubuntu Linux VM

Now that you have installed VirtualBox, it's time to learn how to work with it. I've used other virtualization products such as VMware besides VirtualBox, and in my humble opinion, the user interface in VirtualBox is a delight to work with.

Time for action – creating a virtual machine

At last you have the chance to use Windows and Linux side by side! This is one of the best features VirtualBox has to offer when you want the best of both worlds!

1. Open VirtualBox, and click on the New button (or press Ctrl+N) to create a new virtual machine.
2. The Welcome to the New Virtual Machine Wizard! dialog will show up. Click on Next to continue.
3. Type UbuntuVB in the Name field, select Linux as the Operating System and Ubuntu as the Version in the VM Name and OS Type dialog. Click on Next to continue.
4. You can leave the default 384 MB value in the Memory dialog box or choose a greater amount of RAM, depending on your hardware resources. Click on Next to continue.
5. Leave the default values in the Virtual Hard Disk dialog, and click on Next twice to enter the Create New Virtual Disk wizard.
6. Leave the default Dynamically Expanding Storage option in the Hard Disk Storage Type dialog, and click on Next to continue.
7. Leave the default values chosen by VirtualBox for your Ubuntu Linux machine in the Virtual Disk Location and Size dialog (UbuntuVB and 8.00 GB), and click on Next to continue.
8. A Summary dialog will appear, showing all the parameters you chose for your new virtual hard disk. Click on Finish to exit the Create New Virtual Hard Disk wizard.
9. Now another Summary dialog will appear to show the parameters you chose for your new virtual machine. Click on Finish to exit the wizard and return to the VirtualBox main window. Your new UbuntuVB virtual machine will appear on the VirtualBox main screen, showing all its parameters in the Details tab.

What just happened? Now your Ubuntu Linux virtual machine is ready to run! In this exercise, you created a virtual machine with all the parameters required for a typical Ubuntu Linux distribution. The first parameter to configure was the base memory, or RAM. As you saw in step 4, the recommended size for a typical Ubuntu Linux installation is 384 MB.

Exercise caution when selecting the RAM size for your virtual machine. The golden rule of thumb dictates that, if you have less than 1 GB of RAM, you shouldn't assign more than one half of your physical RAM to a virtual machine, because you'll get into trouble! Your host PC (the one that runs VirtualBox) will start to behave erratically and could crash! With 2 GB, for example, you can easily assign 1 GB to your virtual machine and 1 GB to your physical PC without any problems. Or you could even have three virtual machines with 512 MB of RAM each and leave 512 MB for your host PC. The best way to find the best combination of RAM is to do some experimenting yourself and to know the minimum RAM requirements for your host and guest operating systems.

Don't assume that assigning lots of RAM to your virtual machine will increase its performance. If, for example, you have 2 GB of RAM on your host and you assign 1 GB to an Ubuntu virtual machine, it's very unlikely there will be a bigger performance increase than if you assigned only 512 MB to the same virtual machine. On the contrary, your host PC will work better if you only assign 512 MB to the Ubuntu virtual machine, because it will use some of the extra RAM for disk caching instead of resorting to the physical hard disk. However, it all depends on the guest operating system you plan to use and what applications you need to run.
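The same steps can also be scripted with the VBoxManage command-line tool that ships with VirtualBox, which is handy once you start creating VMs regularly. This is only a rough sketch; exact option names vary between VirtualBox versions, so check VBoxManage --help on your install:

# Register a new VM with the Ubuntu OS type
VBoxManage createvm --name UbuntuVB --ostype Ubuntu --register

# Give it the recommended 384 MB of RAM
VBoxManage modifyvm UbuntuVB --memory 384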
If you look closely at the Base Memory Size setting in the Memory dialog when creating a virtual machine, you'll notice that there are three memory areas below the slider control: the green-colored area indicates the memory range you can safely choose for your virtual machine; the yellow-colored area indicates a risky memory range that you can choose, but nobody knows if your virtual machine and your host will be able to run without any problems; and the red-colored area indicates the memory range that your virtual machine can't use. It's wise to stick with the default values when creating a new virtual machine. Later on, if you need to run a memory-intensive application, you can add more RAM through the Settings button, as we'll see in the following section.

Another setting to consider (besides memory) when creating a virtual machine is the virtual hard drive. Basically, a virtual hard disk is represented as a special file on your host computer's physical hard disk, commonly known as a disk image file. This means that your host computer sees it as a 'large' file on its system, but your virtual machine sees it as a 'real' hard disk connected to it. Since virtual hard drives are completely independent from each other, there is no risk of accidental overwriting, and they can't be larger than the free space available on your real computer's physical hard drive. This is the most common way to handle virtual storage in VirtualBox; later on, we'll see more details about disk image files and the different formats available.

VirtualBox assigns a default value based on the guest operating system you plan to install in your virtual machine. For Ubuntu Linux, the default value is 8 GB. That's enough space to experiment with Ubuntu and learn to use it, but if you really want to do some serious work - desktop publishing or movie production, for example - you should consider assigning your virtual machine more of your hard disk space. You can even get two hard drives on your physical machine, and assign one for your host system and the other one for your virtual machine! Or you can create a new virtual hard disk image and add it as if it were a second hard drive!

Before going to the next section, I'd like to talk about the two virtual hard disk storage types available in VirtualBox: dynamically expanding storage and fixed-size storage. The Hard Disk Storage Type dialog you saw in step 6 of the previous exercise contains a brief description of both types of storage. At first, the dynamically expanding option might seem more attractive because you don't see the space reduction on your hard drive immediately. When using dynamically expanding storage, VirtualBox needs to expand the storage size continuously, and that could mean a slight decrease in speed compared to a fixed-size disk, but most of the time this is unnoticeable. Anyway, when the virtual hard disk is fully expanded, the differences between both types of storage disappear. Most people agree that a dynamically expanding disk is a better choice than a fixed one, since it doesn't take up unnecessary space from your host's hard disk until needed. Personally, when experimenting with a new virtual machine, I use the dynamically expanding option, but when doing some real work, I like to set apart my virtual machine's hard disk space from the beginning, so I choose the fixed-size storage option in these cases.
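On the command line, the storage type choice corresponds to the variant flag when creating the disk image. A minimal sketch, with the same caveat that flag names differ slightly across VirtualBox releases:

# Dynamically expanding 8 GB disk (grows on demand)
VBoxManage createhd --filename UbuntuVB.vdi --size 8192 --variant Standard

# Or pre-allocate the full 8 GB up front
VBoxManage createhd --filename UbuntuVB-fixed.vdi --size 8192 --variant Fixed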
Have a go hero – experimenting with memory and hard disk storage types

When creating a virtual machine, you can specify the amount of RAM to assign to it instead of using the default values suggested by VirtualBox. Create another virtual machine named UbuntuVB2, and try to assign all the available RAM to it. You won't be able to continue until you select a lower value because the Next button will be grayed out, which means you're in the red-colored memory range. Now move back the slider until the Next button is active again; you'll probably be in the yellow-colored memory range. See if your virtual machine can start with that amount of memory and if you can use both your host PC and your VM without any problems. In case you encounter any difficulties, keep moving back the memory range until all problems disappear. Once you're done experimenting with the memory setting, use the UbuntuVB2 virtual machine with the same settings as the one you created in the previous exercise, but this time use a fixed-size hard drive. Just take into account that since VirtualBox must prepare all the storage space at once, this process may take a long time depending on the storage size you selected and the performance of your physical hard disk. Now go and try out different storage sizes with both types of disks: dynamically expanding and fixed-size.

Configuring basic settings for your Ubuntu Linux VM

All right, you created your Ubuntu virtual machine and downloaded a copy of the Ubuntu Desktop Live CD. Can you start installing Ubuntu now? I know you'll hate me, but nope, you can't. We need to tell your virtual machine where to boot the Live CD from, as if we were using a real PC. Follow me, and I'll show you the basic configuration settings for your VM, so you can start the Ubuntu installation ASAP!

Time for action – basic configuration for your VM

In this exercise you'll learn how to adjust some settings for your virtual machine, so you can install Ubuntu Linux on it.

1. Open VirtualBox, select your UbuntuVB virtual machine, and click on the Settings button. The UbuntuVB – Settings dialog will appear, showing all the settings in the General tab.
2. Click on the Storage category in the left panel. Then select the Empty slot located just below the UbuntuVB.vdi hard disk image, under the IDE Controller element inside the Storage Tree panel, and click on the Invoke Virtual Media Manager button.
3. The Virtual Media Manager dialog will appear next. Click on the Add button to add the Ubuntu Linux Live CD ISO image.
4. The Select a CD/DVD-ROM disk image file dialog will show up next. Navigate to the directory where you downloaded the Ubuntu Desktop ISO image, select it, and click on the Open button to continue.
5. The Ubuntu Desktop ISO image will appear selected in the CD/DVD Images tab of the Virtual Media Manager dialog. Click on the Select button to attach the Ubuntu ISO image to your virtual machine's CD/DVD drive.
6. Next, the Ubuntu ISO image file will appear selected in the ISO Image File setting of the UbuntuVB – Settings dialog. Click on OK to continue.

Now you're ready to start your virtual machine and install Ubuntu!
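The equivalent VBoxManage sketch for attaching the Live CD and booting the VM looks something like the following - again, treat the controller name and flags as assumptions to verify against your VirtualBox version:

# Attach the downloaded ISO to the VM's CD/DVD drive on the IDE controller
VBoxManage storageattach UbuntuVB --storagectl "IDE Controller" \
  --port 1 --device 0 --type dvddrive --medium ubuntu-9.10-desktop-i386.iso

# Power on the VM; it should boot from the Live CD
VBoxManage startvm UbuntuVB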


Understanding Proxmox VE and Advanced Installation

Packt
13 Apr 2016
12 min read
In this article by Wasim Ahmed, the author of the book Mastering Proxmox - Second Edition, we will see that virtualization, as we know it today, is a decades-old technology first implemented in the mainframes of the 1960s, where it was a way to logically divide the mainframe's resources for different application processing. With the rise in energy costs, running under-utilized server hardware is no longer a luxury. Virtualization enables us to do more with less, saving energy and money while creating a virtual green data center without geographical boundaries. (For more resources related to this topic, see here.)

A hypervisor is a piece of software, hardware, or firmware that creates and manages virtual machines. It is the underlying platform or foundation that allows a virtual world to be built upon - in a way, the very building block of all virtualization. A bare metal hypervisor acts as a bridge between physical hardware and the virtual machines by creating an abstraction layer. Because of this unique feature, an entire virtual machine can be moved over a vast distance over the Internet and still function exactly the same: a virtual machine does not see the hardware directly; instead, it sees the layer of the hypervisor, which is the same no matter what hardware the hypervisor has been installed on.

The Proxmox Virtual Environment (VE) is a cluster-based hypervisor and one of the best kept secrets in the virtualization world. The reason is simple: it allows you to build an enterprise business-class virtual infrastructure at a small business-class price tag without sacrificing stability, performance, and ease of use. Whether it is a massive data center serving millions of people, a small educational institution, or a home serving important family members, Proxmox can be configured to suit any situation. If you have picked up this article, no doubt you are familiar with virtualization and perhaps well versed in other hypervisors, such as VMware, Xen, and Hyper-V. In this article and upcoming articles, we will see the mighty power of Proxmox from the inside out. We will examine scenarios and create a complex virtual environment. We will tackle some heavy day-to-day issues and show resolutions that might just save the day in a production environment. So strap yourself in, and let's dive into the virtual world with the mighty hypervisor, Proxmox VE.

Understanding Proxmox features

Before we dive in, it is necessary to understand why one should choose Proxmox over the other mainstream hypervisors. Proxmox is not perfect, but it stands out among contenders with some hard-to-beat features. The following are some of the features that make Proxmox a real game changer.

It is free!

Yes, Proxmox is free! To be more accurate, Proxmox has several subscription levels, among which the community edition is completely free. One can simply download the Proxmox ISO at no cost and raise a fully functional cluster without missing a single feature and without paying anything. The main difference between the paid and community subscription levels is that the paid subscription receives updates that go through additional testing and refinement. If you are running a production cluster with a real workload, it is highly recommended that you purchase support and licensing from Proxmox or a Proxmox reseller.

Built-in firewall

Proxmox VE comes with a robust firewall ready to be configured out of the box.
This firewall can be configured to protect the entire Proxmox cluster down to a single virtual machine. The per-VM firewall option gives you the ability to configure each VM individually by creating individualized firewall rules, a prominent feature in a multi-tenant virtual environment.

Open vSwitch

Licensed under the Apache 2.0 license, Open vSwitch is a virtual switch designed to work in a multi-server virtual environment. All hypervisors need a bridge between VMs and the outside network; Open vSwitch enhances the features of the standard Linux bridge in an ever-changing virtual environment. Proxmox fully supports Open vSwitch, allowing you to create an intricate virtual environment while reducing virtual network management overhead. For details on Open vSwitch, refer to http://openvswitch.org/.

The graphical user interface

Proxmox comes with a fully functional graphical user interface, or GUI, out of the box. The GUI allows an administrator to manage and configure almost all aspects of a Proxmox cluster. It has been designed with simplicity in mind, with functions and features separated into menus for easier navigation. The following screenshot shows an example of the Proxmox GUI dashboard:

KVM virtual machines

KVM, or Kernel-based Virtual Machine, is a kernel module that is added to Linux for full virtualization, creating isolated, fully independent virtual machines. KVM VMs are not dependent on the host operating system in any way, but they do require the virtualization feature in the BIOS to be enabled. KVM allows a wide variety of operating systems for virtual machines, such as Linux and Windows. Proxmox provides a very stable environment for KVM-based VMs.

Linux containers or LXC

Introduced in Proxmox VE 4.0, Linux containers allow multiple Linux instances on the same Linux host. All the containers are dependent on the host Linux operating system, and only Linux flavors can be virtualized as containers; there are no containers for the Windows operating system. LXC replaces the prior OpenVZ containers, which were the primary container virtualization method in previous Proxmox versions. If you are not familiar with LXC, refer to https://linuxcontainers.org/ for details.

Storage plugins

Out of the box, Proxmox VE supports a variety of storage systems to store virtual disk images, ISO templates, backups, and so on. All plugins are quite stable and work great with Proxmox. Being able to choose different storage systems gives an administrator the flexibility to leverage the existing storage in the network. As of Proxmox VE 4.0, the following storage plugins are supported:

- Local directory mount points
- iSCSI
- LVM Group
- NFS Share
- GlusterFS
- Ceph RBD
- ZFS

Vibrant culture

Proxmox has a growing community of users who are always helping others to learn Proxmox and troubleshoot various issues. With so many active users around the world, and through the active participation of Proxmox developers, the community has now become a culture of its own. Feature requests are continuously being worked on, and existing features are being strengthened on a regular basis. With so many users supporting Proxmox, it is surely here to stay.

The basic installation of Proxmox

The installation of a Proxmox node is very straightforward: simply accept the default options, select localization, and enter the network information to install Proxmox VE.
We can summarize the installation process in the following steps:

1. Download the ISO from the official Proxmox site (http://proxmox.com/en/downloads) and prepare a disc with the image.
2. Boot the node with the disc and hit Enter to start the installation from the installation GUI. We can also install Proxmox from a USB drive.
3. Progress through the prompts to select options or type in information.
4. After the installation is complete, access the Proxmox GUI dashboard using the IP address, as follows: https://<proxmox_node_ip>:8006

In some cases, it may be necessary to open the firewall to allow access to the GUI over port 8006.

The advanced installation option

Although the basic installation works in all scenarios, there may be times when the advanced installation option is necessary. Only the advanced installation option provides you the ability to customize the main OS drive. A common practice for the operating system drive is to use a mirrored RAID array using a controller interface. This provides drive redundancy if one of the drives fails. The same level of redundancy can also be achieved using a software-based RAID array, such as ZFS. Proxmox now offers options to select ZFS-based arrays for the operating system drive right at the beginning of the installation. If you are not familiar with ZFS, refer to https://en.wikipedia.org/wiki/ZFS for details.

It is a common question why one should choose ZFS software RAID over tried-and-tested hardware-based RAID. The simple answer is flexibility. A hardware RAID array is locked to, or fully dependent on, the hardware RAID controller interface that created it, whereas a ZFS software-based array is not dependent on any hardware, and the array can easily be ported to different hardware nodes. Should a RAID controller failure occur, the entire array created from that controller is lost unless an identical controller interface is available as a replacement. A ZFS array is only lost when all the drives, or more than the maximum tolerable number of drives, are lost from the array. Besides ZFS, we can also select other filesystem types, such as ext3, ext4, or xfs, from the same advanced option. We can also set custom disk or partition sizes through the advanced option.

The following screenshot shows the installation interface with the Target Hard disk selection page. Click on Options, as shown in the preceding screenshot, to open the advanced options for the hard disk. The following screenshot shows the option window after clicking on the Options button. In the preceding screenshot, we selected ZFS RAID1 for mirroring and the two drives, Harddisk 0 and Harddisk 1, respectively, to install Proxmox. If we pick one of the filesystems, such as ext3, ext4, or xfs, instead of ZFS, the Hard disk Option dialog box will show a different set of options. Selecting a filesystem gives us the following advanced options:

- hdsize: The total drive size to be used by the Proxmox installation.
- swapsize: The swap partition size.
- maxroot: The maximum size to be used by the root partition.
- minfree: The minimum free space that should remain after the Proxmox installation.
- maxvz: The maximum size for the data partition, which is usually /var/lib/vz.
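After a ZFS RAID1 installation completes, it is worth confirming that the mirror is healthy before putting the node to work. A minimal sketch, assuming the installer created its root pool under the default name rpool (check your installation if the name differs):

# List the pool and confirm both mirror members are ONLINE
zpool status rpool

# Quick capacity and health summary
zpool list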
Debugging the Proxmox installation

Debugging features are part of any good operating system, and Proxmox has debugging features that will help you during a failed installation. Some common causes of failure are unsupported hardware, conflicts between devices, and ISO image errors. Debug mode logs and displays installation activities in real time. When the standard installation fails, we can start the Proxmox installation in debug mode from the main installation interface, as shown in the following screenshot. The debug installation mode will drop us into a prompt; to start the installation, we press Ctrl + D. When there is an error during the installation, we can simply press Ctrl + C to get back to this console to continue with our investigation. From the console, we can check the installation log using the following command:

# cat /tmp/install.log

From the main installation menu, we can also press e to enter edit mode and change the loader information, as shown in the following screenshot. At times, it may be necessary to edit the loader information when normal booting does not function. This is a common case when Proxmox is unable to show video output due to UEFI or an unsupported resolution; in such cases, the booting process may hang. One way to continue booting is to add the nomodeset argument by editing the loader. The loader will look as follows after editing:

linux/boot/linux26 ro ramdisk_size=16777216 rw quiet nomodeset

Customizing the Proxmox splash screen

When building a custom Proxmox solution, it may be necessary to change the default blue splash screen to something more appealing in order to identify the company or department the server belongs to. In this section, we will see how easily we can integrate any image as the splash screen background. The splash screen image must be in the .tga format and must have a fixed standard size, such as 640 x 480, 800 x 600, or 1024 x 768. If you do not have any image software that supports the .tga format, you can easily convert a jpg, gif, or png image to the .tga format using a free online image converter (http://image.online-convert.com/convert-to-tga). Once the desired image is ready in the .tga format, the following steps will integrate the image as the Proxmox splash screen:

1. Copy the .tga image to the /boot/grub directory on the Proxmox node.
2. Edit the grub file in /etc/default/grub to add the following line, and save it: GRUB_BACKGROUND=/boot/grub/<image_name>.tga
3. Run the following command to update the grub configuration: # update-grub
4. Reboot.

The following screenshot shows an example of how the splash screen may look after we add a custom image to it (picture courtesy of www.techcitynews.com).

We can also change the font color to make it properly visible, depending on the custom image used. To change the font color, edit the debian theme file in /etc/grub.d/05_debian_theme, and find the following line of code:

set_background_image "${GRUB_BACKGROUND}" || set_default_theme

Edit the line to add the font colors, as shown in the following format. In our example, we have changed the font color to black and the highlight color to light blue:

set_background_image "${GRUB_BACKGROUND}" "black/black" "light-blue/black" || set_default_theme

After making the necessary changes, update grub, and reboot to see the changes.
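If you have ImageMagick installed on a Linux machine, you can also do the TGA conversion locally instead of using an online converter. A small sketch, assuming a source image named logo.png (the ! forces the exact 800 x 600 geometry regardless of aspect ratio):

# Convert and resize the source image to a grub-compatible TGA file
convert logo.png -resize '800x600!' splash.tga

# Copy it into place on the Proxmox node
cp splash.tga /boot/grub/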
Summary

In this article, we looked at why Proxmox is a better option as a hypervisor, what advanced installation options are available during an installation, and why we choose software RAID for the operating system drive. We also looked at the cost of Proxmox, storage options, and network flexibility using Open vSwitch, and we learned about the debugging features and the customization options for the Proxmox splash screen. In the next article, we will take a closer look at the Proxmox GUI and see how easy it is to centrally manage a Proxmox cluster from a web browser.

Resources for Article:

Further resources on this subject:
- Proxmox VE Fundamentals [article]
- Basic Concepts of Proxmox Virtual Environment [article]

Getting Started – Understanding Citrix XenDesktop and its Architecture

Packt
14 Sep 2015
6 min read
In this article, written by Gurpinder Singh, author of the book Troubleshooting Citrix XenDesktop, we will learn about the following topics:

- Hosted shared vs hosted virtual desktops
- Citrix FlexCast delivery technology
- Modular framework architecture
- What's new in XenDesktop 7.x

Hosted shared desktops (HSD) vs hosted virtual desktops (HVD)

Before going through the XenDesktop architecture, we would first like to explain the difference between the two desktop delivery platforms, HSD and HVD. Which platform is best suited for an enterprise is a question asked by every system administrator whenever desktop delivery is discussed, and the answer depends on the enterprise's requirements. Some choose Hosted Shared Desktops (HSD), or Server Based Computing (XenApp), over Hosted Virtual Desktops (XenDesktop); here, a single server desktop is shared among multiple users, and the environment is locked down using Active Directory GPOs.

XenApp is the more cost-effective of the two platforms, and many small to mid-sized enterprises prefer it due to its cost benefits and lower complexity. However, this model does pose some risks to the environment, as the same server is shared by multiple users, and a proper design plan is required to configure an HSD or XenApp published desktop environment correctly.

Many enterprises have security and other user-level dependencies for which they prefer a hosted virtual desktop solution. A hosted virtual desktop, or XenDesktop, runs a Windows 7 or Windows 8 desktop as a virtual machine hosted in a data centre. In this model, a single user connects to a single desktop, so there is very little risk of a desktop configuration change impacting all users.

XenDesktop 7.x and later versions now also enable you to deliver server-based desktops (HSD) along with HVD within one product suite. XenDesktop also provides HVD pooled desktops, which work on a shared OS image concept similar to HSD desktops, with the difference of running a desktop operating system instead of a server operating system.

The following pairings provide a fair idea of the requirements and the recommended delivery platform for your enterprise:

- Users work on one or two applications and rarely need to perform updates or installations on their own: Hosted Shared Desktop
- Users work on their own core set of applications, for which they need to change system-level settings, perform installations, and so on: Hosted Virtual Desktop (Dedicated)
- Users work on MS Office and other content creation tools: Hosted Shared Desktop
- Users work on CPU- and graphics-intensive applications that require video rendering: Hosted Virtual Desktop (Blade PCs)
- Users need admin privileges to work on a specific set of applications: Hosted Virtual Desktop (Pooled)

You can always have a mixed set of desktop delivery platforms in your environment, focused on customer needs and requirements.

Citrix FlexCast delivery technology

Citrix FlexCast is a delivery technology that allows a Citrix administrator to personalize virtual desktops to meet the performance, security, and flexibility requirements of end users. There are different types of user requirements; some users need standard desktops with a standard set of apps, while others require high-performance personalized desktops.
Citrix has come up with a solution to meet these demands with the Citrix FlexCast technology. You can deliver any kind of virtualized desktop with FlexCast; the available models fall into five categories:

- Hosted Shared or HSD
- Hosted Virtual Desktop or HVD
- Streamed VHD
- Local VMs
- On-Demand Apps

A detailed discussion of these models is out of scope for this article. To read more about the FlexCast models, please visit http://support.citrix.com/article/CTX139331.

Modular framework architecture

To understand the XenDesktop architecture, it is better to break it down into discrete, independent modules rather than visualizing it as one single, integrated piece. Citrix provides this modularized approach to designing and architecting XenDesktop in order to solve customers' requirements and objectives, by providing a platform that is highly resilient, flexible, and scalable. This reference architecture is based on information gathered by multiple Citrix consultants working on a wide range of XenDesktop implementations. You should be aware of the basic components of the XenDesktop architecture before getting involved with troubleshooting; we won't go through each component of the reference architecture (http://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/xendesktop-deployment-blueprint.pdf) in detail, as this is out of scope for this book.

What's new in XenDesktop 7.x

With the release of Citrix XenDesktop 7, Citrix introduced a lot of improvements over previous releases. With every new product release, a great deal of information is published, and it can be difficult to extract the key facts a system administrator needs in order to understand what has changed and what the key benefits of the new release are. The purpose of this section is to highlight the key features XenDesktop 7.x brings for Citrix administrators; it does not cover every detail, but lists the key points every Citrix administrator should be aware of while administering XenDesktop 7.

Key highlights (a quick PowerShell check follows this list):

- XenApp and XenDesktop are now part of a single setup
- Cloud integration to support desktop deployments in the cloud
- The IMA database no longer exists; IMA is replaced by FMA (FlexCast Management Architecture)
- There are no more zones or ZDCs (Zone Data Collectors)
- Microsoft SQL Server is the only supported database
- Sites are used instead of Farms
- XenApp and XenDesktop now share consoles; Citrix Studio and Desktop Director are used for both products
- The shadowing feature is deprecated; Citrix recommends Microsoft Remote Assistance instead
- Locally installed applications can be integrated with server-based desktops
- HDX and mobility features
- Profile Management is included
- MCS can now be leveraged for both server and desktop OS, and MCS now works with KMS
- StoreFront replaces Web Interface
- Remote PC Access
- No more Citrix Streaming Profile Manager; Citrix recommends Microsoft App-V
- The core component is replaced by a VDA agent
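Since XenDesktop 7.x exposes its FMA services through PowerShell snap-ins, a quick way to confirm some of these changes is sketched below. It assumes you run it on a Delivery Controller where the SDK snap-ins were installed alongside Citrix Studio:

Asnp Citrix*
# The FMA site replaces the old IMA farm concept
Get-BrokerSite
# List the Delivery Controllers registered in the site, including their versions
Get-BrokerController

If both cmdlets return data, the FMA-based broker services are up, and the site name shown by Get-BrokerSite is the Site that replaced the Farm of earlier releases.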
Summary

We should now have a basic understanding of desktop virtualization concepts, the architecture, the new features in XenDesktop 7.x, and the XenDesktop delivery models based on FlexCast technology.

Resources for Article:

Further resources on this subject:
- High Availability, Protection, and Recovery using Microsoft Azure [article]
- Designing a XenDesktop® Site [article]
- XenMobile™ Solutions Bundle [article]

Working with PowerShell

Packt
07 Sep 2015
17 min read
In this article, you will cover:

- Retrieving system information – Configuration Service cmdlets
- Administering hosts and machines – Host and MachineCreation cmdlets
- Managing additional components – StoreFront Admin and Logging cmdlets

Introduction

With hundreds or thousands of hosts to configure and machines to deploy, configuring all the components manually can be difficult. As with previous XenDesktop releases, the XenDesktop 7.6 version includes an integrated set of PowerShell modules. With these, IT technicians are able to reduce the time required for management tasks by creating PowerShell scripts, which can be used to deploy, manage, and troubleshoot the greater part of the XenDesktop components at scale. Working with PowerShell instead of the XenDesktop GUI gives you more flexibility in the kinds of operations you can execute, with a set of additional features to use during the infrastructure creation and configuration phases.

Retrieving system information – Configuration Service cmdlets

In this recipe, we will use and explain a general-purpose group of PowerShell cmdlets: the Configuration Service category. This is used to retrieve general configuration parameters, and to obtain information about the implementation of the XenDesktop Configuration Service.

Getting ready

No preliminary tasks are required. You have already installed the Citrix XenDesktop PowerShell SDK during the installation of the Desktop Controller role machine(s). To be able to run a PowerShell script (in the .ps1 format), you have to enable script execution from the PowerShell prompt in the following way:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force

How to do it…

In this section, we will explain and execute the commands associated with the XenDesktop System and Services configuration area:

1. Connect to one of the Desktop Broker servers, by using a Remote Desktop connection, for instance.
2. Right-click on the PowerShell icon on the Windows taskbar and select the Run as Administrator option.
3. Load the Citrix PowerShell modules by typing the following command, and then press the Enter key:

Asnp Citrix*

As an alternative to the Asnp command, you can use the Add-PSSnapin command.

4. Retrieve the active and configured Desktop Controller features by running the following command:

Get-ConfigEnabledFeature

5. To retrieve the current status of the Config Service, run the following command. The output will be OK in the absence of configuration issues:

Get-ConfigServiceStatus

6. To get the connection string used by the Configuration Service to connect to the XenDesktop database, run the following command:

Get-ConfigDBConnection

7. Starting from the previously received output, it's possible to configure the connection string that lets the Configuration Service use the system DB. For this command, you have to specify the Server, Initial Catalog, and Integrated Security parameters:

Set-ConfigDBConnection -DBConnection "Server=<ServerName\InstanceName>; Initial Catalog=<DatabaseName>; Integrated Security=<True | False>"

8. Starting from an existing Citrix database, you can generate a SQL procedure file to use as a backup to recreate the database.
Run the following command to complete this task, specifying the DatabaseName and ServiceGroupName parameters:

Get-ConfigDBSchema -DatabaseName <DatabaseName> -ServiceGroupName <ServiceGroupName> > Path:\FileName.sql

You need to configure a destination database with the same name as that of the source DB, otherwise the script will fail!

9. To retrieve information about the active Configuration Service objects (Instance, Service, and Service Group), run the following three commands respectively:

Get-ConfigRegisteredServiceInstance
Get-ConfigService
Get-ConfigServiceGroup

10. To test a set of operations that check the status of the Configuration Service, run the following script:

#------------ Script - Configuration Service
#------------ Define Variables
$Server_Conn = "SqlDatabaseServer.xdseven.local\CITRIX,1434"
$Catalog_Conn = "CitrixXD7-Site-First"
#------------
Write-Host "XenDesktop - Configuration Service CmdLets"
#---------- Clear the existing Configuration Service DB connection
$Clear = Set-ConfigDBConnection -DBConnection $null
Write-Host "Clearing any previous DB connection - Status: " $Clear
#---------- Set the Configuration Service DB connection string
$New_Conn = Set-ConfigDBConnection -DBConnection "Server=$Server_Conn; Initial Catalog=$Catalog_Conn; Integrated Security=$true"
Write-Host "Configuring the DB string connection - Status: " $New_Conn
$Configured_String = Get-ConfigDBConnection
Write-Host "The new configured DB connection string is: " $Configured_String

You have to save this script with the .ps1 extension in order to invoke it with PowerShell. Be sure to change the parameters specific to your infrastructure, so that you are able to run the script.

How it works...

The Configuration Service cmdlets of XenDesktop PowerShell permit the management of the Configuration Service and its related information: the metadata for the entire XenDesktop infrastructure, the service instances registered within the VDI architecture, and the collections of these services, called Service Groups. This set of commands offers the ability to retrieve and check the DB connection string used to contact the configured XenDesktop SQL Server database. These operations are performed with the Get-ConfigDBConnection command (to retrieve the current configuration) and the Set-ConfigDBConnection command (to configure the DB connection string); both commands use the DB server name with the instance name, the DB name, and Integrated Security as information fields.

In the attached script, we regenerated a database connection string. To be sure we could recreate it, first of all we cleared any existing connection, setting it to null (see the command associated with the $Clear variable); then we defined the $New_Conn variable using the Set-ConfigDBConnection command. All the parameters are defined at the top of the script, in the form of variables. Use the Write-Host command to echo results to the standard output.

There's more...

In some cases, you may need to retrieve the state of the registered services in order to verify their availability. You can use the Test-ConfigServiceInstanceAvailability cmdlet, which reports whether each service is responding and its response time. Run the following example to test this command:

Get-ConfigRegisteredServiceInstance | Test-ConfigServiceInstanceAvailability | more

Use the -ForceWaitForOneOfEachType parameter to stop the check for a service category when one of its services responds.
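Building on these cmdlets, the following is a minimal, hedged health-check sketch that loads the snap-ins, verifies the overall service status, and probes every registered service instance. It assumes it runs on a Delivery Controller with the SDK installed; the Status property name and the "Responding" value are assumptions to verify with Get-Member on your SDK version:

Asnp Citrix*
# Overall state of the Configuration Service; expect OK
Get-ConfigServiceStatus
# Probe every registered service instance and keep only the ones not responding
Get-ConfigRegisteredServiceInstance |
    Test-ConfigServiceInstanceAvailability |
    Where-Object { $_.Status -ne "Responding" }

An empty result from the last pipeline is the healthy case; anything returned deserves a closer look before you continue with configuration work.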
Administering hosts and machines – Host and MachineCreation cmdlets

In this recipe, we will describe how to create the connection between the hypervisor and the XenDesktop servers, and the way to generate machines to assign to end users, all by using Citrix PowerShell.

Getting ready

No preliminary tasks are required. You have already installed the Citrix XenDesktop PowerShell SDK during the installation of the Desktop Controller role machine(s). To be sure you are able to run a PowerShell script (in the .ps1 format), you have to enable script execution from the PowerShell prompt in this way:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force

How to do it…

In this section, we will discuss the PowerShell commands used to connect XenDesktop with the supported hypervisors, plus the creation of machines from the command line:

1. Connect to one of the Desktop Broker servers.
2. Click on the PowerShell icon on the Windows taskbar.
3. Load the Citrix PowerShell modules by typing the following command, and then press the Enter key:

Asnp Citrix*

4. To list the available hypervisor types, execute this task:

Get-HypHypervisorPlugin -AdminAddress <BrokerAddress>

5. To list the configured properties for the XenDesktop root-level location (XDHyp:), execute the following command:

Get-ChildItem XDHyp:\HostingUnits

Please refer to the PSPath, Storage, and PersonalvDiskStorage output fields to retrieve information on the storage configuration.

6. Execute the following cmdlet to add a storage resource to the XenDesktop Controller host:

Add-HypHostingUnitStorage -LiteralPath <HostPathLocation> -StoragePath <StoragePath> -StorageType <OSStorage|PersonalvDiskStorage> -AdminAddress <BrokerAddress>

7. To generate a snapshot for an existing VM, perform the following task:

New-HypVMSnapshot -LiteralPath <HostPathLocation> -SnapshotDescription <Description>

Use the Get-HypVMMacAddress -LiteralPath <HostPathLocation> command to list the MAC addresses of specified desktop VMs.

8. To provision machine instances starting from the desktop base image template, run the following command:

New-ProvScheme -ProvisioningSchemeName <SchemeName> -HostingUnitName <HypervisorServer> -IdentityPoolName <PoolName> -MasterImageVM <BaseImageTemplatePath> -VMMemoryMB <MemoryAssigned> -VMCpuCount <NumberofCPU>

To specify the creation of instances with the Personal vDisk technology, use the -UsePersonalVDiskStorage option.

9. After the creation process, retrieve the provisioning scheme information by running the following command:

Get-ProvScheme -ProvisioningSchemeName <SchemeName>

10. To modify the resources assigned to desktop instances in a provisioning scheme, use the Set-ProvScheme cmdlet. The permitted parameters are -ProvisioningSchemeName, -VMCpuCount, and -VMMemoryMB.

11. To update the desktop instances to the latest version of the desktop base image template, run the following cmdlet:

Publish-ProvMasterVmImage -ProvisioningSchemeName <SchemeName> -MasterImageVM <BaseImageTemplatePath>

If you do not want to keep the pre-update instance version as a restore checkpoint, use the -DoNotStoreOldImage option.

12. To create machine instances based on the previously configured provisioning scheme for an MCS architecture, run this command:

New-ProvVM -ProvisioningSchemeName <SchemeName> -ADAccountName "Domain\MachineAccount"

Use the -FastBuild option to make the machine creation process faster. On the other hand, you cannot start up the machines until the process has been completed.
13. Retrieve the configured desktop instances by using the next cmdlet:

Get-ProvVM -ProvisioningSchemeName <SchemeName> -VMName <MachineName>

14. To remove an existing virtual desktop, use the following command:

Remove-ProvVM -ProvisioningSchemeName <SchemeName> -VMName <MachineName> -AdminAddress <BrokerAddress>

The next script combines the use of some of the commands listed in this recipe:

#------------ Script - Hosting + MCS
#-----------------------------------
#------------ Define Variables
$LitPath = "XDHyp:\HostingUnits\VMware01"
$StorPath = "XDHyp:\HostingUnits\VMware01\datastore1.storage"
$Controller_Address = "192.168.110.30"
$HostUnitName = "Vmware01"
$IDPool = $(Get-AcctIdentityPool -IdentityPoolName VDI-DESKTOP)
$BaseVMPath = "XDHyp:\HostingUnits\VMware01\VMXD7-W8MCS-01.vm"
#------------ Creating a storage location
Add-HypHostingUnitStorage -LiteralPath $LitPath -StoragePath $StorPath -StorageType OSStorage -AdminAddress $Controller_Address
#---------- Creating a Provisioning Scheme
New-ProvScheme -ProvisioningSchemeName Deploy_01 -HostingUnitName $HostUnitName -IdentityPoolName $IDPool.IdentityPoolName -MasterImageVM $BaseVMPath\T0-Post.snapshot -VMMemoryMB 4096 -VMCpuCount 2 -CleanOnBoot
#---------- List the VMs configured on the Hypervisor Host
dir $LitPath\*.vm
exit

How it works...

The Host and MachineCreation cmdlet groups manage the interfacing with the hypervisor hosts, in terms of machines and storage resources. This allows you to create the desktop instances to assign to end users, starting from an existing and mapped desktop virtual machine.

The Get-HypHypervisorPlugin command retrieves and lists the available hypervisors to use to deploy virtual desktops and to configure the storage types. You need to configure an operating system storage area or a Personal vDisk storage zone. The way to map an existing storage location from the hypervisor to the XenDesktop Controller is by running the Add-HypHostingUnitStorage cmdlet. In this case, you have to specify the destination path on which the storage object will be created (LiteralPath), the source storage path on the hypervisor machine(s) (StoragePath), and the StorageType previously discussed. The storage types take the form XDHyp:\HostingUnits\<UnitName>. To list all the configured storage objects, execute the following command:

dir XDHyp:\HostingUnits\<UnitName>\*.storage

After configuring the storage area, we discussed the Machine Creation Services (MCS) architecture. This cmdlet collection provides commands to generate VM snapshots from which to deploy desktop instances (New-HypVMSnapshot), specifying a name and a description for the generated disk snapshot. Starting from the available disk image, the New-ProvScheme command permits you to create a resource provisioning scheme, on which you specify the desktop base image and the resources to assign to the desktop instances (in terms of CPU and RAM, via -VMCpuCount and -VMMemoryMB), whether to generate these virtual desktops in a non-persistent mode (the -CleanOnBoot option), and whether to use the Personal vDisk technology (-UsePersonalVDiskStorage). It's possible to update the deployed instances to the latest base image update through the use of the Publish-ProvMasterVmImage command.
In the generated script, we located all the main storage locations (the LitPath and StorPath variables) needed to create a provisioning scheme; then we implemented a provisioning procedure for a desktop based on an existing base image snapshot, with two vCPUs and 4 GB of RAM for the delivered instances, which are cleaned every time they stop and start (by using the -CleanOnBoot option). You can navigate the local and remote storage paths configured with the XenDesktop Broker machine; to list an object category (such as VM or Snapshot), you can execute this command:

dir XDHyp:\HostingUnits\<UnitName>\*.<category>

There's more...

The discussed cmdlets also offer a technique to protect a virtual desktop from accidental deletion or unauthorized use. Within the MachineCreation cmdlet group, a particular command allows you to lock critical desktops: Lock-ProvVM. This cmdlet requires as parameters the name of the scheme to which the VM refers (-ProvisioningSchemeName) and the ID of the virtual desktop to lock (-VMID). You can retrieve the virtual machine ID by running the Get-ProvVM command discussed previously. To revert the machine lock, and free the desktop instance from accidental deletion or improper use, you have to execute the Unlock-ProvVM cmdlet, using the same parameters shown for the lock procedure.
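As an illustration, the following hedged sketch locks every instance of a given provisioning scheme in one pass. The VMId property name used to feed Lock-ProvVM is an assumption; verify it with Get-ProvVM | Get-Member on your SDK version:

# Lock all desktops of the Deploy_01 scheme against accidental deletion
Get-ProvVM -ProvisioningSchemeName Deploy_01 | ForEach-Object {
    Lock-ProvVM -ProvisioningSchemeName Deploy_01 -VMID $_.VMId
}

Running the same loop with Unlock-ProvVM reverses the operation once the maintenance window is over.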
Managing additional components – StoreFront admin and Logging cmdlets

In this recipe, we will explain how to manage and configure the StoreFront component by using the available Citrix PowerShell cmdlets. Moreover, we will explain how to manage and check the configuration of the system logging activities.

Getting ready

No preliminary tasks are required. You have already installed the Citrix XenDesktop PowerShell SDK during the installation of the Desktop Controller role machine(s). To be able to run a PowerShell script (in the .ps1 format), you have to enable script execution from the PowerShell prompt in this way:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force

How to do it…

In this section, we will explain and execute the commands associated with the Citrix StoreFront system:

1. Connect to one of the Desktop Broker servers.
2. Click on the PowerShell icon on the Windows taskbar.
3. Load the Citrix PowerShell modules by typing the following command, and then press the Enter key:

Asnp Citrix*

4. Retrieve the currently existing StoreFront service instances by running the following command:

Get-SfService

To limit the number of rows in the output, you can add the -MaxRecordCount <value> parameter.

5. To list detailed information about the StoreFront service(s) currently configured, execute the following command:

Get-SfServiceInstance -AdminAddress <ControllerAddress>

6. The status of the currently active StoreFront instances can be retrieved by using the Get-SfServiceStatus command. The OK output confirms correct service execution.

7. To list the task history associated with the configured StoreFront instances, run the following command:

Get-SfTask

You can filter for the ID of the desired task (-taskid) and sort the results with the -sortby parameter.

8. To retrieve the installed database schema versions, you can execute the following command:

Get-SfInstalledDBVersion

By applying the -Upgrade and -Downgrade filters, you will receive, respectively, the schemas to which the database version can be updated or reverted.

9. To modify the StoreFront configuration to register its state on a different database, you can use the following command:

Set-SfDBConnection -DBConnection <DBConnectionString> -AdminAddress <ControllerAddress>

Be careful when you specify the database connection string; if it is not specified, the existing database connections and configurations will be cleared!

10. To check that the database connection has been correctly configured, the following command is available:

Test-SfDBConnection -DBConnection <DBConnectionString> -AdminAddress <ControllerAddress>

11. The second discussed cmdlet group, Logging, allows you to retrieve information about the current status of the logging service; run the following command:

Get-LogServiceStatus

12. To verify the language used and whether the logging service has been enabled, run the following command:

Get-LogSite

The available configurable locales are en, ja, zh-CN, de, es, and fr. The available states are Enabled, Disabled, NotSupported, and Mandatory. The NotSupported state indicates an incorrect configuration of the listed parameters.

13. To retrieve detailed information about the running logging service, use the following command:

Get-LogService

As discussed earlier for the StoreFront commands, you can filter the output by applying the -MaxRecordCount <value> parameter.

14. In order to get all the operations logged within a specified time range, run the following command; this returns the global operations count:

Get-LogSummary -StartDateRange <StartDate> -EndDateRange <EndDate>

The date format must be the following: YYYY-MM-DD HH:MM:SS.

15. To list the collected operations per day in the specified time period, run the previous command in the following way:

Get-LogSummary -StartDateRange <StartDate> -EndDateRange <EndDate> -intervalSeconds 86400

The value 86400 is the number of seconds in a day.

16. To retrieve the connection string information about the database on which logging data is stored, execute the following command:

Get-LogDataStore

17. To retrieve detailed information about the high level operations performed on the XenDesktop infrastructure, run the following command:

Get-LogHighLevelOperation -Text <TextIncludedInTheOperation> -StartTime <FormattedDateAndTime> -EndTime <FormattedDateAndTime> -IsSuccessful <true | false> -User <DomainUserName> -OperationType <AdminActivity | ConfigurationChange>

The indicated filters are not mandatory. If you do not apply any filters, all the logged operations will be returned, which could be a very long output.

18. The same information can be retrieved for the low level system operations in the following way:

Get-LogLowLevelOperation -StartTime <FormattedDateAndTime> -EndTime <FormattedDateAndTime> -IsSuccessful <true | false> -User <DomainUserName> -OperationType <AdminActivity | ConfigurationChange>

In the How it works section, we will explain the difference between the high and low level operations.

19. To log when a high level operation starts and stops, respectively, use the following two commands:

Start-LogHighLevelOperation -Text <OperationDescriptionText> -Source <OperationSource> -StartTime <FormattedDateAndTime> -OperationType <AdminActivity | ConfigurationChange>

Stop-LogHighLevelOperation -HighLevelOperationId <OperationID> -IsSuccessful <true | false>

The Stop-LogHighLevelOperation must relate to an existing started high level operation, because they are related tasks.
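To tie these together, here is a hedged sketch that brackets a configuration change between Start-LogHighLevelOperation and Stop-LogHighLevelOperation, so the change shows up as a single auditable entry. The Id property name on the returned object is an assumption to verify on your SDK version:

# Open a high level operation before making the change
$op = Start-LogHighLevelOperation -Text "Raise desktop memory" -Source "Admin script" -OperationType ConfigurationChange
try {
    Set-ProvScheme -ProvisioningSchemeName Deploy_01 -VMMemoryMB 8192
    Stop-LogHighLevelOperation -HighLevelOperationId $op.Id -IsSuccessful $true
}
catch {
    Stop-LogHighLevelOperation -HighLevelOperationId $op.Id -IsSuccessful $false
}

Wrapping the change in try/catch ensures the operation is always closed, and the IsSuccessful flag makes failed changes easy to find later with Get-LogHighLevelOperation.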
How it works...

Here we have discussed two further PowerShell command collections for XenDesktop 7: the cmdlets related to the StoreFront platform, and the Logging set of commands. The first collection is quite limited in terms of operations compared with the other cmdlets discussed. In fact, the only actions permitted with the StoreFront PowerShell set of commands are retrieving configurations and settings about the configured stores and the linked database. More activities can be performed regarding the modification of existing StoreFront clusters, by using the Get-SfCluster, Add-SfServerToCluster, New-SfCluster, and Set-SfCluster set of operations.

More interesting is the PowerShell Logging collection. In this case, you can retrieve all the system-logged data, split into two principal categories:

- High-level operations: These group all the system configuration changes that are executed by using Desktop Studio, Desktop Director, or Citrix PowerShell.
- Low-level operations: This category relates to all the system configuration changes that are executed by a service, and not by using the system software's consoles.

With the low level operations command, you can filter for a specific high level operation to which the low level one refers, by specifying the -HighLevelOperationId parameter. This cmdlet category also gives you the ability to track the start and stop of a high level operation, through the use of Start-LogHighLevelOperation and Stop-LogHighLevelOperation. In the second case, you have to specify the previously started high level operation.

There's more...

If there is too much information in the log store, you have the ability to clear all of it. To purge the log entries, we use the following command:

Remove-LogOperation -UserName <DBAdministrativeCredentials> -Password <DBUserPassword> -StartDateRange <StartDate> -EndDateRange <EndDate>

The unencrypted -Password parameter can be substituted with -SecurePassword, the password indicated in secure string form. The credentials must be database administrative credentials, with delete permissions on the destination database. This is a non-reversible operation, so ensure that you really want to delete the logs in the specified time range, or verify that you have some form of data backup.

Resources for Article:

Further resources on this subject:
- Working with Virtual Machines [article]
- Virtualization [article]
- Upgrading VMware Virtual Infrastructure Setups [article]

OpenStack Networking in a Nutshell

Packt
22 Sep 2016
13 min read
Information technology (IT) applications are rapidly moving from dedicated infrastructure to cloud-based infrastructure. This move to the cloud started with server virtualization, where a hardware server ran as a virtual machine on a hypervisor. The adoption of cloud-based applications has accelerated due to factors such as globalization and outsourcing, where diverse teams need to collaborate in real time. Server hardware connects to network switches using Ethernet and IP to establish network connectivity. However, as servers move from physical to virtual, the network boundary also moves from the physical network to the virtual network. Traditionally, applications, servers, and networking were tightly integrated, but modern enterprises and IT infrastructure demand flexibility in order to support complex applications. The flexibility of cloud infrastructure requires networking to be dynamic and scalable. Software Defined Networking (SDN) and Network Functions Virtualization (NFV) play a critical role in data centers in order to deliver the flexibility and agility demanded by cloud-based applications. By providing practical management tools and abstractions that hide the underlying physical network's complexity, SDN allows operators to build complex networking capabilities on demand.

OpenStack is an open source cloud platform that helps build public and private clouds at scale. Within OpenStack, the name of the OpenStack Networking project is Neutron. The functionality of Neutron can be classified as core and service. This article by Sriram Subramanian and Sreenivas Voruganti, authors of the book Software Defined Networking (SDN) with OpenStack, aims to provide a short introduction to OpenStack Networking. We will cover the following topics in this article:

- Traffic flows between virtual and physical networks
- Neutron entities that support Layer 2 (L2) networking
- Layer 3 (L3) routing between OpenStack networks
- Securing OpenStack network traffic
- Advanced networking services in OpenStack
- OpenStack and SDN

The terms Neutron and OpenStack Networking are used interchangeably throughout this article.

Virtual and physical networking

Server virtualization led to the adoption of virtualized applications and workloads running inside physical servers. While physical servers are connected to the physical network equipment, modern networking has pushed the boundary of networking into the virtual domain as well. Virtual switches, firewalls, and routers play a critical role in the flexibility provided by cloud infrastructure.

Figure 1: Networking components for server virtualization

The preceding figure describes a typical virtualized server and its various networking components. The virtual machines are connected to a virtual switch inside the compute node (or server). The traffic is secured using virtual routers and firewalls. The compute node is connected to a physical switch, which is the entry point into the physical network.

Let us now walk through different traffic flow scenarios using the preceding figure as the background. In Figure 2, traffic from one VM to another on the same compute node is forwarded by the virtual switch itself; it does not reach the physical network. You can even apply firewall rules to traffic between the two virtual machines.

Figure 2: Traffic flow between two virtual machines on the same server

Next, let us have a look at how traffic flows between virtual machines across two compute nodes.
In Figure 3, the traffic comes out of the first compute node and reaches the physical switch. The physical switch forwards the traffic to the second compute node, and the virtual switch within the second compute node steers the traffic to the appropriate VM.

Figure 3: Traffic flow between two virtual machines on different servers

Finally, here is the depiction of traffic flow when a virtual machine sends or receives traffic from the Internet. The physical switch forwards the traffic to the physical router and firewall, which is presumed to be connected to the Internet.

Figure 4: Traffic flow from a virtual machine to an external network

As seen in the preceding diagrams, the physical and virtual network components work together to provide connectivity to virtual machines and applications.

Tenant isolation

As a cloud platform, OpenStack supports multiple users grouped into tenants. One of the key requirements of a multi-tenant cloud is to isolate the data traffic belonging to one tenant from the rest of the tenants that use the same infrastructure. OpenStack supports different ways of achieving isolation, and it is the responsibility of the virtual switch to implement it.

Layer 2 (L2) capabilities in OpenStack

Connectivity to a physical or virtual switch is also known as Layer 2 (L2) connectivity in networking terminology. Layer 2 connectivity is the most fundamental form of network connectivity needed for virtual machines. As mentioned earlier, OpenStack supports core and service functionality. The L2 connectivity for virtual machines falls under the core capability of OpenStack Networking, whereas Router, Firewall, and so on fall under the service category. The L2 connectivity in OpenStack is realized using two constructs, called Network and Subnet. Operators can use the OpenStack CLI or the web interface to create Networks and Subnets, and as virtual machines are instantiated, the operators can associate them with the appropriate Networks.

Creating a network using the OpenStack CLI

A Network defines the Layer 2 (L2) boundary for all the instances that are associated with it. All the virtual machines within a Network are part of the same L2 broadcast domain. The Liberty release introduced a new OpenStack CLI (command-line interface) for the different services; a sketch of the network and subnet commands follows the next subsection.

Creating a Subnet using the OpenStack CLI

A Subnet is a range of IP addresses that are assigned to virtual machines on the associated Network. OpenStack Neutron configures a DHCP server with this IP address range, and it starts one DHCP server instance per Network, by default. Note that, unlike for a Network, in the Liberty release we need to use the regular neutron CLI command to create a Subnet.
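As a hedged sketch of these two steps with Liberty-era tooling (the names Net1 and Subnet1 and the CIDR are placeholders, and exact flags can vary between releases):

# Create the L2 network using the new openstack CLI
openstack network create Net1

# Create a subnet on that network using the legacy neutron CLI
neutron subnet-create Net1 192.168.1.0/24 --name Subnet1

Once the subnet exists, Neutron spawns the per-network DHCP server described above and hands out addresses from the 192.168.1.0/24 range to instances attached to Net1.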
Associating a Network and Subnet with a virtual machine

To give a complete perspective, we will create a virtual machine using the OpenStack web interface and show you how to associate a Network and Subnet with it. In your OpenStack web interface, navigate to Project | Compute | Instances and click on the Launch Instances action. In the resulting window, enter the name for your instance and how you want to boot it. To associate a Network and a Subnet with the instance, click on the Networking tab. If you have more than one tenant network, you will be able to choose the network you want to associate with the instance; if you have exactly one network, the web interface will select it automatically.

As mentioned earlier, providing isolation for tenant network traffic is a key requirement for any cloud. OpenStack Neutron uses Network and Subnet to define the boundaries and to isolate data traffic between different tenants. Depending on the Neutron configuration, the actual isolation of traffic is accomplished by the virtual switches. VLAN and VXLAN are common networking technologies used to isolate traffic.

Layer 3 (L3) capabilities in OpenStack

Once L2 connectivity is established, the virtual machines within one Network can send and receive traffic between themselves. However, two virtual machines belonging to two different Networks will not be able to communicate with each other automatically. This is done to provide privacy and isolation for tenant networks. In order to allow traffic from one Network to reach another Network, OpenStack Networking supports an entity called a Router. The default implementation of OpenStack uses Linux network namespaces to support L3 routing capabilities.

Creating a Router using the OpenStack CLI

Operators can create Routers using the OpenStack CLI or the web interface. They can then add one or more Subnets as interfaces to the Router, which allows the Networks associated with the Router to exchange traffic with one another. A single CLI call creates a Router with the specified name (see the sketch after this section).

Associating a Subnet with a Router

Once a Router is created, the next step is to associate one or more Subnets with it. After the association command shown in the sketch below, the Subnet represented by subnet1 is associated with the Router router1.

Securing network traffic in OpenStack

The security of network traffic is critical, and OpenStack supports two mechanisms to secure it. Security groups allow traffic within a tenant's network to be secured; Linux iptables rules on the compute nodes are used to implement OpenStack security groups. Traffic that goes outside of a tenant's network, to another Network or to the Internet, is secured using the OpenStack Firewall Service functionality. Like routing, Firewall is a service within Neutron. The Firewall Service also uses iptables, but the scope of its iptables rules is limited to the OpenStack Router used as part of the Firewall Service.

Using security groups to secure traffic within a network

In order to secure traffic going from one VM to another within a given Network, we must create a security group. The next step is to create one or more rules within the security group; as an example, we can create a rule which allows only incoming UDP traffic on port 8080 from any source IP address. The final step is to associate this security group and its rules with a virtual machine instance, using the nova boot command. All three steps are sketched after this section. Once the virtual machine instance has a security group associated with it, the incoming traffic will be monitored, and depending upon the rules inside the security group, data traffic may be blocked or permitted to reach the virtual machine. Note that it is possible to block either ingress or egress traffic using security groups.
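A hedged reconstruction of those CLI steps is shown below, using the classic Liberty-era neutron and nova clients; router1, subnet1, sg1, vm1, and the image/flavor names are placeholders:

# Create a router and attach a subnet to it
neutron router-create router1
neutron router-interface-add router1 subnet1

# Create a security group and a rule allowing inbound UDP 8080 from anywhere
neutron security-group-create sg1
neutron security-group-rule-create --direction ingress --protocol udp --port-range-min 8080 --port-range-max 8080 --remote-ip-prefix 0.0.0.0/0 sg1

# Boot an instance with the security group attached
nova boot --image cirros --flavor m1.tiny --nic net-id=<network-uuid> --security-groups sg1 vm1

After the router-interface-add call, instances on subnet1 can reach any other subnet attached to router1, while the security group keeps vm1 closed except for UDP port 8080.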
Using the Firewall Service to secure traffic

We have seen that security groups provide fine-grained control over what traffic is allowed to and from a virtual machine instance. Another layer of security supported by OpenStack is Firewall as a Service (FWaaS). FWaaS enforces security at the Router level, whereas security groups enforce security at the virtual machine interface level.

The main use case of FWaaS is to protect all virtual machine instances within a Network from threats and attacks originating outside the Network. The attacker could be a virtual machine that is part of another Network in the same OpenStack cloud, or some entity on the Internet trying to gain unauthorized access. Let us now see how FWaaS is used in OpenStack.

In FWaaS, a set of firewall rules is grouped into a firewall policy, and then a firewall is created that implements one policy at a time. This firewall is then associated with a Router. A firewall rule is created using the neutron firewall-rule-create command; as an example, a rule that blocks the ICMP protocol means applications like ping will be blocked by the firewall. The next step is to create a firewall policy. In real-world scenarios, security administrators define several rules and consolidate them under a single policy; for example, all rules that block various types of traffic can be combined into a single policy. The final step is to create a firewall and associate it with a router; the three commands are sketched after this section. If we do not specify any Routers, the OpenStack behavior is to associate the firewall (and, in turn, the policy and rules) with all the Routers available for that tenant. The neutron firewall-create command supports an option to pick a specific Router as well.
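Assuming the FWaaS extension is enabled in Neutron, those three steps could look as follows; the rule, policy, and firewall names are placeholders:

# Block ICMP (ping) with a deny rule
neutron firewall-rule-create --protocol icmp --action deny --name deny-icmp

# Group one or more rules into a policy
neutron firewall-policy-create --firewall-rules "deny-icmp" block-policy

# Create the firewall; with no router specified, it attaches to all tenant routers
neutron firewall-create block-policy --name tenant-fw

Because the firewall is enforced on the router, traffic between two VMs on the same subnet is unaffected; only traffic crossing the router is filtered.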
Advanced networking services

Besides routing and firewall, there are a few other commonly used networking technologies supported by OpenStack. Let's take a quick look at these, without delving deep into the respective commands.

Load Balancing as a Service (LBaaS)

Virtual machine instances created in OpenStack are used to run applications, and most applications are required to support redundancy and concurrent access. For example, a web server may be accessed by a large number of users at the same time. One of the common strategies to handle scale and redundancy is to implement load balancing for incoming requests. In this approach, a load balancer distributes incoming service requests across a pool of servers, which process the requests, thus providing higher throughput. If one of the servers in the pool fails, the load balancer removes it from the pool, and the subsequent service requests are distributed among the remaining servers. Users of the application use the IP address of the load balancer to access the application and are unaware of the pool of servers. OpenStack implements its load balancer using the HAProxy software and a Linux network namespace.

Virtual Private Network as a Service (VPNaaS)

As mentioned earlier, tenant isolation requires data traffic to be segregated and secured within an OpenStack cloud. However, there are times when external entities need to be part of the same Network without removing the firewall-based security. This can be accomplished using a Virtual Private Network (VPN). A VPN connects two endpoints on different networks over a public Internet connection, in such a way that the endpoints appear to be directly connected to each other. VPNs also provide confidentiality and integrity of transmitted data. Neutron provides a service plugin that enables OpenStack users to connect two networks using a VPN. The reference implementation of the VPN plugin in Neutron uses Openswan to create an IPsec-based VPN. IPsec is a suite of protocols that provides a secure connection between two endpoints by encrypting each IP packet transferred between them.

OpenStack and SDN context

So far in this article, we have seen the different networking capabilities provided by OpenStack. Let us now look at two capabilities in OpenStack that enable SDN to be leveraged effectively.

Choice of technology

Being an open source platform, OpenStack bundles open source networking solutions as the default implementation of these networking capabilities. For example, routing is supported using network namespaces, security using iptables, and load balancing using HAProxy. Historically, these networking capabilities were implemented using customized hardware and software, most of them proprietary solutions. These custom solutions are capable of much higher performance and are well supported by their vendors, and hence they have a place in the OpenStack and SDN ecosystem. From its initial releases, OpenStack has been designed for extensibility. Vendors can write their own extensions and then easily configure OpenStack to use them instead of the default solutions. This allows operators to deploy the networking technology of their choice.

OpenStack API for networking

One of the most powerful capabilities of OpenStack is its extensive support for APIs. All services of OpenStack interact with one another using well-defined RESTful APIs, which allows custom implementations and pluggable components to provide powerful enhancements for practical cloud implementations. For example, when a Network is created using the OpenStack web interface, a RESTful request is sent to the Horizon service. This in turn invokes a RESTful API to validate the user against the Keystone service. Once the user is validated, Horizon sends another RESTful API request to Neutron to actually create the Network.

Summary

As seen in this article, OpenStack supports a wide variety of networking functionality right out of the box. The importance of isolating tenant traffic, and the need to allow customized solutions, require OpenStack to support flexible configuration. We also highlighted some key aspects of OpenStack that will play a key role in deploying Software Defined Networking in data centers, thereby supporting powerful cloud architectures and solutions.

Resources for Article:

Further resources on this subject:
- Setting Up a Network Backup Server with Bacula [article]
- Jenkins 2.0: The impetus for DevOps Movement [article]
- Integrating Accumulo into Various Cloud Platforms [article]

Storage Practices and Migration to Hyper-V 2016

Packt
01 Dec 2016
17 min read
In this article by Romain Serre, the author of Hyper-V 2016 Best Practices, we will learn why Hyper-V projects fail, get an overview of the failover cluster, and cover Storage Replica, Microsoft System Center, migrating VMware virtual machines, and upgrading single Hyper-V hosts.

Why Hyper-V projects fail

Before you start deploying your first production Hyper-V host, make sure that you have completed a detailed planning phase. I have been called in to many Hyper-V projects to assist in repairing what a specialist has implemented. Most of the time, I start by correcting the design, because the biggest failures happen there but are only discovered later, during implementation. I remember many projects in which I was called in to assist with installations and configurations during the implementation phases, because these were considered the project phases where a real expert was needed. However, based on experience, this notion is wrong. Two things are most critical about the design phase: that it happens at all, which is rare, and that someone with technological and organizational Hyper-V experience is involved. If you don't have the latter, look for a Microsoft Partner with a Gold Competency called Management and Virtualization on Microsoft Pinpoint (http://pinpoint.microsoft.com) and take a quick look at the reviews written by customers for successful Hyper-V projects. If you think it's expensive to hire a professional, wait until you hire an amateur. Having an expert in the design phase is the best way to accelerate your Hyper-V project.

Overview of the failover cluster

Before you start your first deployment in production, make sure you have defined the aim of the project and its SMART criteria, and have done a thorough analysis of the current state. After this, you should be able to plan the necessary steps to reach the target state, including a pilot phase.

In a Hyper-V Failover Cluster, the failure of a cluster node is instantly detected. The virtual machines running on that particular node are powered off immediately because of the hardware failure on their compute node. The remaining cluster nodes then immediately take over these VMs in an unplanned failover process and start them on their own hardware. The virtual machines will be back up and running after a successful boot of their operating systems and applications in just a few minutes.

Hyper-V Failover Clusters work under the condition that all compute nodes have access to a shared storage instance holding the virtual machine configuration data and its virtual hard disks. In the case of a planned failover, for example when patching compute nodes, it's possible to move running virtual machines from one cluster node to another without interrupting the VM. All cluster nodes can run virtual machines at the same time, as long as there is enough failover capacity to run all services when a node goes down. Even though a Hyper-V cluster is still called a Failover Cluster, utilizing the Windows Server Failover Clustering feature, it is indeed capable of running as an active/active cluster. Ensuring that all these capabilities of a Failover Cluster actually work demands an accurate planning and implementation process.

Storage Replica

Storage Replica is a new feature in Windows Server 2016 that provides block-level replication at the storage level, for a data recovery plan or for a stretched cluster.
Storage Replica can be used in the following scenarios:

- Server-to-server storage replication
- Storage replication in a stretch cluster
- Cluster-to-cluster storage replication
- Server-to-itself, to replicate between volumes

Depending on the scenario, and on the bandwidth and latency of the inter-site link, you can choose between synchronous and asynchronous replication. For further information about Storage Replica, you can read about this topic at http://bit.ly/2albebS.

The SMB3 protocol is used for Storage Replica traffic. You can leverage TCP/IP or RDMA on the network; I recommend implementing RDMA where possible, to reduce latency and CPU workload and to increase throughput. Compared to Hyper-V Replica, the Storage Replica feature replicates all the virtual machines stored in a volume at the block level. Moreover, Storage Replica can replicate in synchronous mode, while Hyper-V Replica is always asynchronous. Finally, with Hyper-V Replica you have to specify the failover IP address, because replication happens at the VM level; with Storage Replica, you don't need to specify a failover IP address, but in the case of replication between two clusters in two different rooms, the VM networks must be configured in the destination room.

The choice between Hyper-V Replica and Storage Replica depends on the disaster recovery plan you need. If you want to protect selected application workloads, you can use Hyper-V Replica. On the other hand, if you have a passive room ready to restart in case of issues in the active room, Storage Replica can be a great solution, because all the VMs in a volume will already be replicated.

To deploy replication between two clusters, you need two sets of storage based on iSCSI, SAS JBOD, Fibre Channel SAN, or Shared VHDX. For better performance, I recommend implementing SSDs to hold the Storage Replica logs.
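As a hedged illustration of the server-to-server scenario, the StorageReplica PowerShell module in Windows Server 2016 provides cmdlets such as Test-SRTopology and New-SRPartnership. The computer names, volume letters, and replication group names below are placeholders, and the parameter set should be checked against your build:

# Validate the proposed replication topology first (here, a 30-minute test run)
Test-SRTopology -SourceComputerName SR-SRV01 -SourceVolumeName D: -SourceLogVolumeName E: -DestinationComputerName SR-SRV02 -DestinationVolumeName D: -DestinationLogVolumeName E: -DurationInMinutes 30 -ResultPath C:\Temp

# Create the partnership; synchronous mode requires a low-latency inter-site link
New-SRPartnership -SourceComputerName SR-SRV01 -SourceRGName RG01 -SourceVolumeName D: -SourceLogVolumeName E: -DestinationComputerName SR-SRV02 -DestinationRGName RG02 -DestinationVolumeName D: -DestinationLogVolumeName E:

Running Test-SRTopology before creating the partnership gives you an HTML report on whether the link and the log volumes (ideally on SSD, as recommended above) can sustain the replication workload.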
Microsoft System Center

Microsoft System Center 2016 is Microsoft's solution for advanced management of Windows Server and its components, along with their dependencies, such as various hardware and software products. It consists of various components that support every stage of your IT services, from planning, through operating and backup, to automation. System Center has existed since 1994 and has evolved continuously; it now offers a great set of tools for very efficient management of server and client infrastructures. It also offers the ability to create and operate whole clouds, run in your own data center or in a public cloud data center such as Microsoft Azure. Today, it's your choice whether to run your workloads on-premises or off-premises; System Center provides a standardized set of tools for a unique and consistent Cloud OS management experience. System Center does not add any new features to Hyper-V, but it does offer great ways to make the most out of it, and it ensures streamlined operating processes after its implementation. System Center is licensed via the same model as Windows Server, leveraging Standard and Datacenter editions at the physical host level. While every System Center component offers great value in itself, binding multiple components into a single workflow offers even more advantages.

When do you need System Center? There is no right or wrong answer to this, and the answer most often given by IT consultants around the world is, "It depends". System Center adds value to any IT environment, starting with only a few systems. In my experience, a Hyper-V environment with up to three hosts and 15 VMs can be managed efficiently without the use of System Center. If you plan to use more hosts or virtual machines, System Center will definitely be a great solution for you. Let's take a look at the components of System Center.

Migrating VMware virtual machines

If you are running virtual machines on VMware ESXi hosts, there are really good options available for moving them to Hyper-V. There are different approaches to converting a VMware virtual machine to Hyper-V: converting from the inside of the VM at the guest level, running cold conversions with the VM powered off at the host level, running hot conversions on a running VM, and so on. I will give you a short overview of the tools currently available in the market.

System Center VMM

SCVMM should not be the first tool of your choice; take a look at MVMC combined with MAT to get equivalent functionality from a better working tool. Earlier versions of SCVMM allowed online or offline conversions of VMs; the current version, 2016, allows only offline conversions. Select a powered-off VM on a VMware host, or from the SCVMM library share, to start the conversion. The VM conversion will convert VMware-hosted virtual machines through vCenter and ensure that the entire configuration, such as memory, virtual processors, and other machine settings, is also migrated from the initial source. The tool also adds virtual NICs to the deployed virtual machine on Hyper-V. The VMware Tools must be uninstalled before the conversion, because you won't be able to remove them once the VM is no longer running on a VMware host. SCVMM 2016 supports ESXi hosts running 4.1 and 5.1, but not the latest ESXi version, 5.5. SCVMM conversions are great to automate through their integrated PowerShell support, and it's very easy to install upgraded Hyper-V Integration Services as part of the setup, or to add any kind of automation through PowerShell or System Center Orchestrator. Besides the manual removal of the VMware Tools, using SCVMM is an end-to-end solution in the migration process. You can find some PowerShell examples for SCVMM-powered V2V conversion scripts at http://bit.ly/Y4bGp8. I don't recommend the use of this tool anymore, because Microsoft no longer invests time in it.

Microsoft Virtual Machine Converter

Microsoft released the first version of its free solution accelerator, Microsoft Virtual Machine Converter (MVMC), in 2013, and it should be available in version 3.1 by the release of this book. MVMC provides a small and easy option for migrating selected virtual machines to Hyper-V. It takes a very similar approach to the conversion as SCVMM does: the conversion happens at the host level and offers a fully integrated end-to-end solution. MVMC supports all recent versions of VMware vSphere. It will even uninstall the VMware Tools and install the Hyper-V Integration Services. MVMC 2.0 works with all supported Hyper-V guest operating systems, including Linux. MVMC comes with a full GUI wizard as well as a fully scriptable command-line interface (CLI). Besides being a free tool, it is fully supported by Microsoft in case you experience any problems during the migration process. MVMC should be the first tool of your choice if you do not know which tool to use.
Like most other conversion tools, MVMC performs the actual conversion on the MVMC server itself and therefore needs enough local disk space to host both the original VMware virtual disk and the converted Hyper-V disk. MVMC even offers an add-on for VMware vCenter Servers so that conversions can be started directly from the vSphere console. The current release of MVMC is freely available at its official download site at http://bit.ly/1HbRIg7.

Download MVMC to the conversion system and run the click-through setup. After the installation finishes, start the MVMC GUI by executing Mvmc.Gui.exe. The wizard guides you through a few choices; MVMC is not only capable of migrating to Hyper-V, but also allows you to move virtual machines to Microsoft Azure. Follow these few steps to convert a VMware VM:

1. Select Hyper-V as a target.
2. Enter the name of the Hyper-V host you want this VM to run on, and specify a file share to use and the format of the disks you want to create. Choosing dynamically expanding disks should be the best option most of the time.
3. Enter the name of the ESXi server you want to use as a source, as well as valid credentials.
4. Select the virtual machine to convert. Make sure it has VMware Tools installed. The VM can be either powered on or off.
5. Enter a workspace folder to store the converted disk.
6. Wait for the process to finish.

There is some additional guidance available at http://bit.ly/1vBqj0U. This is a great and easy way to migrate a single virtual machine. Repeat the steps for every other virtual machine you have, or use some automation; a disk-level scripting example follows below.
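For the automation route, recent MVMC releases also ship a PowerShell module that can convert individual VMware disks to Hyper-V format. The following is a minimal sketch; the module path is the default installation location, and the file names are illustrative assumptions:

# Load the MVMC cmdlets from the default installation path (adjust if installed elsewhere)
Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

# Convert a single VMDK to a dynamically expanding VHDX
ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath 'D:\Source\web01.vmdk' `
    -DestinationLiteralPath 'D:\Converted\web01.vhdx' `
    -VhdType DynamicHardDisk -VhdFormat Vhdx

Note that this converts only the disk; you would still create a VM shell on the Hyper-V host (for example, with New-VM) and attach the converted disk to it.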
Upgrading single Hyper-V hosts

If you are currently running a single host with an older version of Hyper-V and now want to upgrade this host on the same hardware, there is a limited set of decisions to be made. You want to upgrade the host with the least amount of downtime and without losing any data from your virtual machines. Before you start the upgrade process, make sure all components of your infrastructure are compatible with the new version of Hyper-V. Then it's time to prepare your hardware by upgrading all firmware to the latest available version and downloading the necessary drivers for Windows Server 2016 with Hyper-V, along with its installation media.

One of the most crucial questions in this update scenario is whether you should use the integrated installation option called in-place upgrade, where the existing operating system is transformed into the recent version of Hyper-V, or delete the current operating system and perform a clean installation. While the installation experience of in-place upgrades works well when only the Hyper-V role is installed, experience shows that some upgraded systems are more likely to suffer problems: numbers pulled from the Elanity support database show about 15 percent more support cases on systems upgraded from Windows Server 2008 R2 than on clean installations. Remember how fast and easy it is nowadays to do a clean install of Hyper-V; this is why it is highly recommended over upgrading existing installations. If you are currently using Windows Server 2012 R2 and want to upgrade to Windows Server 2016, note that we have not yet seen any difference in the number of support cases between the two installation methods. However, with clean installations of Hyper-V being so fast and easy, I barely use in-place upgrades. Before starting any type of upgrade scenario, make sure you have current backups of all affected virtual machines.

Nonetheless, if you want to use the in-place upgrade, insert the Windows Server 2016 installation media and run this command from your current operating system:

Setup.exe /auto:upgrade

If it fails, it's most likely due to an incompatible application installed on the older operating system. Start the setup without the parameter to find out which applications need to be removed before executing the unattended setup. If you upgrade from Windows Server 2012 R2, no additional preparation is needed; if you upgrade from older operating systems, make sure to remove all snapshots from your virtual machines.

Importing virtual machines

If you choose to do a clean installation of the operating system, you do not necessarily have to export the virtual machines first; just make sure all VMs are powered off and are stored on a different partition than your Hyper-V host OS. If you are using a SAN, disconnect all LUNs before the installation and reconnect them afterwards to ensure their integrity through the installation process. After the installation, just reconnect the LUNs and set the disks online in diskpart or in Disk Management at Control Panel | Computer Management. If you are using local disks, make sure not to reformat the partition holding your virtual machines. If you do need to reformat, export the VMs to another location first and import them back afterwards; this requires more effort but is safer. Set the partition online and then reimport the virtual machines.

Before you start the reimport process, make sure all dependencies of your virtual machines are available, especially vSwitches. To import a single Hyper-V VM, use the following PowerShell cmdlet:

Import-VM -Path 'D:\VMs\VM01\Virtual Machines\2D5EECDA-8ECC-4FFC-ACEE-66DAB72C8754.xml'

To import all virtual machines from a specific folder, use this command:

Get-ChildItem D:\VMs -Recurse -Filter "Virtual Machines" | %{Get-ChildItem $_.FullName -Filter *.xml} | %{Import-VM $_.FullName -Register}

After that, all VMs are registered and ready for use on your new Hyper-V host. Make sure to update the Hyper-V Integration Services of all virtual machines before going back into production. If you still have virtual disks in the old .vhd format, it's now time to convert them to .vhdx files. Use this PowerShell cmdlet on powered-off VMs or standalone virtual disks to convert a single .vhd file:

Convert-VHD -Path D:\VMs\testvhd.vhd -DestinationPath D:\VMs\testvhdx.vhdx

If you want to convert the disks of all your VMs, fellow MVPs Aidan Finn and Didier van Hoye have provided a great end-to-end solution to achieve this, which can be found at http://bit.ly/1omOagi; a simplified sketch follows at the end of this section.

I often hear from customers that they don't want to upgrade their disks so as to be able to revert to older versions of Hyper-V when needed. First, you should know that I have never met a customer who has actually done that, because there really is no technical reason why anyone should. Second, even if you did make this backwards move, running virtual machines on older Hyper-V hosts is not supported once they have been deployed on more modern versions of Hyper-V. The reason for this is very simple: Hyper-V does not offer a way to downgrade the Hyper-V Integration Services. The only way to move a virtual machine back to an older Hyper-V host is by restoring a backup of the VM made before the upgrade process.
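As a simplified alternative to the full solution linked above, a bulk conversion could look like the following sketch. It assumes all VMs are powered off and that D:\VMs holds the disks; after converting, you would still re-point each VM's disk (for example, with Set-VMHardDiskDrive) before removing the old .vhd files:

# Find every legacy .vhd under the VM root and convert it to .vhdx alongside the original
Get-ChildItem D:\VMs -Recurse -Filter *.vhd | ForEach-Object {
    $destination = [System.IO.Path]::ChangeExtension($_.FullName, 'vhdx')
    Convert-VHD -Path $_.FullName -DestinationPath $destination
}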
Exporting virtual machines

If you want to use another physical system running a newer version of Hyper-V, you have multiple options. They are as follows:

- When using a SAN as shared storage, make sure all your virtual machines, including their virtual disks, are located on LUNs other than the one holding the host operating system. Disconnect all LUNs hosting virtual machines from the source host, connect them to the target host, and bulk import the VMs from the specified folders.
- When using SMB3 shared storage from Scale-Out File Servers, make sure to switch access to the shares hosting the VMs over to the new Hyper-V hosts.
- When using local hard drives and upgrading from Windows Server 2008 SP2 or Windows Server 2008 R2 with Hyper-V, it's necessary to export the virtual machines to a storage location reachable from the new host. Hyper-V servers running legacy versions of the OS (prior to 2012 R2) need the VMs powered off before an export can occur.

To export a virtual machine from a host, use the following PowerShell cmdlet:

Export-VM -Name VM -Path D:\

To export all virtual machines to folders underneath the specified root, use the following command:

Get-VM | Export-VM -Path D:\

In most cases, it is also possible to just copy the virtual machine folders containing the virtual hard disks and configuration files to the target location and import them on Windows Server 2016 Hyper-V hosts. However, the export method is more reliable and should be preferred.

A good alternative to moving virtual machines can be recreating them. If you have another host up and running with a recent version of Hyper-V, it may be a good opportunity to also upgrade some guest OSes. For instance, Windows Server 2003 and 2003 R2 have been out of extended support since July 2015. Depending on your applications, it may now be the right choice to create new virtual machines with Windows Server 2016 as the guest operating system and migrate your existing workloads from the older VMs to these new machines.

Summary

In this article, we learned why Hyper-V projects fail, how to migrate VMware virtual machines, and how to upgrade single Hyper-V hosts. The article also gave an overview of failover clustering and Storage Replica.

Resources for Article:

Further resources on this subject:

Hyper-V Basics [article]
The importance of Hyper-V Security [article]
Hyper-V building blocks for creating your Microsoft virtualization platform [article]
Log monitoring tools for continuous security monitoring policy [Tutorial]

Fatema Patrawala
13 Jul 2018
How many times have we heard of an organization's entire database being breached and downloaded by hackers? The irony is that the organization often isn't aware of anything until, a few months later, the hacker is selling the database details on the dark web. Even organizations that implement decent security controls frequently lack a continuous security monitoring policy, and this gap is most common in startups and mid-sized organizations. In this article, we will show how to choose the right log monitoring tool to implement a continuous security monitoring policy. You are reading an excerpt from the book Enterprise Cloud Security and Governance, written by Zeal Vora.

Log monitoring is a must in security

Log monitoring is considered part of the de facto list of things that need to be implemented in an organization. It gives us visibility of various events through a single central solution, so we don't have to run less or tail against every log file on every server. In the following screenshot, we have performed a new search with the keyword not authorized to perform, and the log monitoring solution has shown us such events in a nice graphical way, along with the actual logs, which span across days:

Thus, if we want to see how many permission denied events occurred last Wednesday, this becomes a two-minute job if we have a central log monitoring solution with search functionality. This makes life much easier and allows us to detect anomalies and attacks much faster than the traditional approach.

Choosing the right log monitoring tool

This is a very important decision that needs to be taken by the organization. There are both commercial and open source offerings available today, but the amount of effort each of them requires varies a lot. I have seen commercial offerings such as Splunk and ArcSight being used in large enterprises, including national-level banks. On the other side, there are open source offerings, such as the ELK Stack, that are gaining popularity, especially after Filebeat was introduced. At a personal level, I really like Splunk, but it gets very expensive when you generate a lot of data. This is one of the reasons why many startups and mid-sized organizations combine a commercial offering with an open source one such as the ELK Stack. Having said that, if you decide to go with the ELK Stack and have a large amount of data, you will ideally need a dedicated person to manage it. Just to mention, AWS also has a basic level of log monitoring capability available with the help of CloudWatch.

Let's get started with logging and monitoring

There will always be many sources from which we need to monitor logs. Since it will be difficult to cover every individual source, we will talk about two primary ones, in sequence:

- VPC flow logs
- AWS Config

VPC flow logs

VPC Flow Logs is a feature that allows us to capture information about the IP traffic that goes to and from network interfaces within a VPC. Flow logs help both in troubleshooting why certain traffic is not reaching an EC2 instance and in understanding which traffic is accepted and which is rejected. Flow logs can be enabled at the level of an individual network interface of an EC2 instance, which allows us to monitor, for example, how many packets are accepted or rejected on a specific EC2 instance running in the DMZ. If you prefer scripting to the console walkthrough that follows, see the sketch below.
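As a scripted alternative, flow logs can also be enabled with the AWS Tools for PowerShell. This is a minimal sketch under the assumption that the module is installed and credentials are configured; the VPC ID, log group name, and IAM role ARN are illustrative placeholders:

# Create the CloudWatch log group that will receive the flow log records
New-CWLLogGroup -LogGroupName 'vpc-flow-logs'

# Enable flow logs for the whole VPC, capturing both accepted and rejected traffic
New-EC2FlowLog -ResourceId 'vpc-0123456789abcdef0' -ResourceType VPC `
    -TrafficType ALL -LogGroupName 'vpc-flow-logs' `
    -DeliverLogsPermissionArn 'arn:aws:iam::111122223333:role/FlowLogsRole'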
By default, VPC flow logs are not enabled, so we will go ahead and enable them for our VPC:

1. Enabling flow logs for the VPC: In our environment, we have two VPCs, named Development and Production. In this case, we will enable flow logs for the Development VPC. To do that, click on the Development VPC and select the Flow Logs tab. This will give you a button named Create Flow Log; click on it and we can go ahead with the configuration procedure. Since the VPC flow log data will be sent to CloudWatch, we need to select an IAM Role that grants these permissions. Before we create our first flow log, we also need to create the CloudWatch log group that the flow log data will go into. To do this, go to CloudWatch, select the Logs tab, name the log group according to your needs, and click on Create log group. Once we have created our log group, we can fill in the Destination Log Group field with its name and click on the Create Flow Log button. Once created, you will see the new flow log details under the VPC subtab.

2. Create a test setup to check the flow: In order to test that everything is working as intended, we will start our test OpenVPN instance and, in the security group section, allow inbound connections on port 443 and ICMP (ping). This gives attackers a perfect opportunity to detect our instance and run a plethora of attacks against it.

3. Analyze flow logs in CloudWatch: Before analyzing the flow logs, I went for a small walk so that a decent number of logs would accumulate; when I returned, I began analyzing the flow log data. If we observe the data, we see plenty of packets ending in REJECT OK as well as ACCEPT OK. Flow logs are captured per network interface attached to an EC2 instance, so in order to check them, we go to CloudWatch, select the Log Groups tab, open the log group that we created, and then select the interface; in our case, the interface of the OpenVPN instance we had started. CloudWatch gives us the capability to filter packets based on expressions: we can find all the rejected packets by simply searching for REJECT OK in the search bar, and CloudWatch will show all the traffic that was rejected, as shown in the following image. (A scripted version of this filter appears at the end of this section.)

4. Viewing the logs in a GUI: Plain text data is good, but it's not very appealing and does not give you deep insight into what exactly is happening. It's always preferable to send these logs to a log monitoring tool that can provide that insight. In my case, I have used Splunk to give us an overview of the logs in our environment. When we look into the VPC flow logs, we see that Splunk presents great detail in a very nice GUI and also maps the IP addresses to the locations the traffic is coming from. The following screenshots show VPC flow log data sent to the Splunk dashboard for analyzing traffic patterns: the VPC flow log traffic rate and location-related data, and the top rejected destinations and source IP addresses.
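The same REJECT OK filter can be run from the AWS Tools for PowerShell, which is useful for quick checks or scheduled reports. A minimal sketch, assuming the module is installed and the log group is named as in the steps above:

# Fetch matching flow log events; in recent module versions the cmdlet emits the
# events themselves -- if yours returns the raw response, expand its Events property
Get-CWLFilteredLogEvent -LogGroupName 'vpc-flow-logs' -FilterPattern 'REJECT OK' |
    ForEach-Object { $_.Message }   # each Message holds one raw flow log record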
AWS Config

AWS Config is a great service that allows us to continuously assess and audit the configuration of AWS resources. With AWS Config, we can see exactly what configuration has changed between last week and today for services such as EC2, security groups, and many more.

One interesting feature of Config is compliance testing, as shown in the following screenshots. We see that there is one rule that is failing and is considered non-compliant: the CloudTrail check. There are two important features that the Config service provides:

- Evaluating changes in resources over a timeline
- Compliance checks

Once the service is enabled and you have associated Config rules accordingly, you will see a dashboard similar to the following screenshot. On the left-hand side, Config gives details of the resources present in your AWS account; the right-hand column shows whether those resources are compliant or non-compliant according to the rules that have been set.

Configuring the AWS Config service

Let's look at how we can get started with the AWS Config service and obtain the dashboards and compliance checks we saw in the previous screenshot:

1. Enabling the Config service: The first time we start working with Config, we need to select the resources we want to evaluate. In our case, we will select both the region-specific resources and global resources such as IAM.

2. Configure S3 and IAM: Once we decide to include all the resources, the next step is to create an Amazon S3 bucket where AWS Config will store the configuration and snapshot files. We also need to select an IAM role that allows Config to put these files into the S3 bucket.

3. Select Config rules: Configuration rules are checks that can be run against your AWS resources, with the results feeding into the compliance standard. For example, the root-account-mfa-enabled rule checks whether the root account has MFA enabled, and Config then gives you a nice graphical overview of the outcome of the checks. Currently, there are 38 AWS-managed rules that we can select and use; we can also add custom rules at any time. For our case, I will use five specific rules, which are as follows:

- cloudtrail-enabled
- iam-password-policy
- restricted-common-ports
- restricted-ssh
- root-account-mfa-enabled

4. Config initialization: With the Config rules selected, we can click on Finish, and AWS Config will start checking resources against their associated rules. You should see a dashboard similar to the following screenshot, which shows the available resources as well as rule compliance graphs.

Let's analyze the functionality

For demo purposes, I decided to disable the CloudTrail service; if we then look at the Config dashboard, it says that one rule check has failed. Instead of graphs, Config can also show the rules in a tabular manner if we want to inspect them along with their associated names, as illustrated in the following diagram. A scripted compliance query is sketched below.
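The same compliance state can be queried from the AWS Tools for PowerShell, which is handy when feeding the results into other tooling. A minimal sketch, assuming the module is installed and Config has been set up as above; verify the exact property names against your module version:

# List each Config rule together with its current compliance state
Get-CFGComplianceByConfigRule | ForEach-Object {
    '{0}: {1}' -f $_.ConfigRuleName, $_.Compliance.ComplianceType
}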
Evaluating changes to resources

AWS Config allows us to evaluate the configuration changes that have been made to resources. This is a great feature that lets us see how a resource looked a day, a week, or even months back. It is particularly useful during incident investigations, when you might want to see what exactly changed before the incident took place; it helps things move much faster. In order to evaluate the changes, we need to perform the following steps:

1. Go to AWS Config | Resources. This opens the Resource inventory page, in which you can search for resources based on either the resource type or tags. For our use case, I am searching for a tag value for an EC2 instance whose name is OpenVPN.

2. When we go into the Config timeline, we see the overall changes that have been made to the resource. In the following screenshot, we see that a few changes were made, and Config also shows us the times at which they were made.

3. When we click on Changes, Config shows exactly what was changed. In our case, it is a new network interface that was attached to the EC2 instance; Config displays the network interface ID and description, along with the IP address and the security group attached to that network interface.

When we integrate AWS services with Splunk or similar monitoring tools, we get great graphs that help us evaluate things faster. Alongside this, we always have the CloudTrail logs if we want to see the changes that occurred in detail.

We covered log monitoring and how to choose the right log monitoring tool for a continuous security monitoring policy. Check out the book Enterprise Cloud Security and Governance to build resilient cloud architectures for tackling data disasters with ease.

Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)
Monitoring, Logging, and Troubleshooting
Analyzing CloudTrail Logs using Amazon Elasticsearch
How to ace managing the Endpoint Operations Management Agent with vROps

Vijin Boricha
30 May 2018
In this tutorial, you will learn the functionality of the Endpoint Operations Management Agent and how to manage it effectively. You can install the Endpoint Operations Management Agent from a tar.gz or .zip archive, or from an operating system-specific installer for Windows or for Linux-like systems that support RPM. This is an excerpt from Mastering vRealize Operations Manager - Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater. The agent can come bundled with or without a JRE.

Installing the Agent

Before you start, make sure you have vRealize Operations user credentials with enough permissions to deploy the agent. Let's see how we can install the agent on both Linux and Windows machines.

Manually installing the Agent on a Linux Endpoint

Perform the following steps to install the Agent on a Linux machine:

1. Download the appropriate distributable package from https://my.vmware.com and copy it over to the machine. The Linux .rpm file should be installed with root credentials.

2. Log in to the machine and run the following command to install the agent (in this example, we are installing on CentOS):

[root]# rpm -Uvh <DistributableFileName>

Only when we start the service will it generate a token and ask for the server's details.

3. Start the Endpoint Operations Management Agent service by running:

service epops-agent start

The previous command will prompt you to enter the following information about the vRealize Operations instance:

- vRealize Operations server hostname or IP: If the server is on the same machine as the agent, you can enter localhost. If a firewall is blocking traffic from the agent to the server, allow the traffic to that address through the firewall.
- Default port: Specify the SSL port on the vRealize Operations server to which the agent must connect. The default port is 443.
- Certificate to trust: You can configure Endpoint Operations Management agents to trust the root or intermediate certificate, to avoid having to reconfigure all agents if the certificate on the analytics nodes and remote collectors is modified.
- vRealize Operations credentials: Enter the name of a vRealize Operations user with agent manager permissions.

The agent token is generated by the Endpoint Operations Management Agent. It is only created the first time the endpoint is registered to the server. The name of the token is random and unique, and is based on the name and IP of the machine. If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually. Provide the necessary information, as shown in the following screenshot:

Go into the vRealize Operations UI, navigate to Administration | Configuration | Endpoint Operations, and select the Agents tab. After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can also verify the collection status of the agent for your virtual machine on the Inventory Explorer page. If you encounter issues during installation and configuration, you can check the /opt/vmware/epops-agent/log/agent.log log file for more information. We will examine this log file in more detail later in this book.
Manually installing the agent on a Windows Endpoint

On a Windows machine, the Endpoint Operations Management Agent is installed as a service. The following steps outline how to install the agent via the .zip archive bundle, but you can also install it via an executable file. Perform the following steps to install the Agent on a Windows machine:

1. Download the agent .zip file. Make sure the file version matches your Microsoft Windows OS.

2. Open Command Prompt and navigate to the bin folder within the extracted folder.

3. Run the following command to install the agent:

ep-agent.bat install

4. After the install is complete, start the agent by executing:

ep-agent.bat start

If this is the first time you are installing the agent, a setup process will be initiated. If you have specified configuration values in the agent properties file, those will be used. Upon starting the service, it will prompt you for the same values as it did when we installed it on a Linux machine. Enter the following information about the vRealize Operations instance:

- vRealize Operations server hostname or IP
- Default port
- Certificate to trust
- vRealize Operations credentials

If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually. After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can verify the collection status of the agent for your virtual machine on the Inventory Explorer page; because the agent is collecting data, the collection status of the agent is green. Alternatively, you can verify the collection status by navigating to Administration | Configuration | Endpoint Operations and selecting the Agents tab.

Automated agent installation using vRealize Automation

In a typical environment, the Endpoint Operations Management Agent will not be installed on all, or even most, virtual machines or physical servers. Usually, only those servers holding critical applications or services will be monitored using the agent and, as long as the number of such servers is small, manually installing and updating the agent can be done with reasonable administrative effort.

If we have an environment where we need to install the agent on many VMs, and we are using some kind of provisioning and automation engine with application-provisioning capabilities, such as Microsoft System Center Configuration Manager or VMware vRealize Automation, it is not recommended to install the agent in the VM template that will be cloned. If for whatever reason you have to do it, do not start the Endpoint Operations Management Agent, or else remove the Endpoint Operations Management token and data before the cloning takes place. If the agent is started before the cloning, a client token is created and all clones will appear as the same object in vRealize Operations.

In the following example, we will show how to install the agent via vRealize Automation as part of the provisioning process. I will not go into too much detail on how to create blueprints in vRealize Automation, but I will give you the basic steps to create a software component that installs the agent and add it to a blueprint. Perform the following steps:

1. Open the vRealize Automation portal page, navigate to Design and the Software Components tab, and click New to create a new software component.

2. On the General tab, fill in the information.
3. Click Properties and then click New. Add the following six properties; all of them are of type String and are marked Overridable, but not Encrypted, Required, or Computed:

- RPMNAME: The name of the agent RPM package
- SERVERIP: The IP of the vRealize Operations server/cluster
- SERVERLOGIN: A username with enough permissions in vRealize Operations
- SERVERPWORD: The password for that user
- SRVCRTTHUMB: The thumbprint of the vRealize Operations certificate
- SSLPORT: The port on which to communicate with vRealize Operations

These values will be used to configure the agent once it is installed. The following screenshot illustrates some example values.

4. Click Actions and configure the Install, Configure, and Start actions. All three are Bash scripts, with Reboot left unchecked:

Install:

rpm -i http://<FQDNOfTheServerWhereTheRPMFileIsLocated>/$RPMNAME

Configure:

sed -i "s/#agent.setup.serverIP=localhost/agent.setup.serverIP=$SERVERIP/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverSSLPort=443/agent.setup.serverSSLPort=$SSLPORT/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverLogin=username/agent.setup.serverLogin=$SERVERLOGIN/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverPword=password/agent.setup.serverPword=$SERVERPWORD/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverCertificateThumbprint=/agent.setup.serverCertificateThumbprint=$SRVCRTTHUMB/" /opt/vmware/epops-agent/conf/agent.properties

Start:

/sbin/service epops-agent start
/sbin/chkconfig epops-agent on

The following screenshot illustrates some example values.

5. Click Ready to Complete and click Finish.

6. Edit the blueprint where you want the agent to be installed. From Categories, select Software Components. Select the Endpoint Operations Management software component and drag and drop it onto the machine type (virtual machine) where you want the agent to be installed.

7. Click Save to save the blueprint.

This completes the necessary steps to automate the Endpoint Operations Management Agent installation as a software component in vRealize Automation. With every deployment of the blueprint, the agent will be installed after cloning and will be uniquely identified to vRealize Operations. A comparable unattended approach for Windows endpoints is sketched below.
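For Windows endpoints provisioned outside vRealize Automation, the same pre-seeding idea can be expressed in PowerShell: write the agent.properties keys that the Configure action above sets with sed, then install and start the service. This is a minimal sketch; the extraction path and server values are illustrative assumptions:

# Pre-seed the setup answers in the agent properties file
$conf = 'C:\epops-agent\conf\agent.properties'
(Get-Content $conf) `
    -replace '#agent.setup.serverIP=localhost', 'agent.setup.serverIP=vrops.company.local' `
    -replace '#agent.setup.serverSSLPort=443', 'agent.setup.serverSSLPort=443' `
    -replace '#agent.setup.serverLogin=username', 'agent.setup.serverLogin=svc-epops' `
    -replace '#agent.setup.serverPword=password', 'agent.setup.serverPword=VMware1!' |
    Set-Content $conf

# Install and start the agent service
& 'C:\epops-agent\bin\ep-agent.bat' install
& 'C:\epops-agent\bin\ep-agent.bat' start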
Reinstalling the agent

When reinstalling the agent, various elements are affected, such as:

- Already-collected metrics
- The identification token that enables a reinstalled agent to report on the previously discovered objects on the server

The following locations contain agent-related data that remains when the agent is uninstalled:

- The data folder, containing the keystore
- The epops-token platform token file, which is created before agent registration

Data continuity is not affected when the data folder is deleted. This is not the case with the epops-token file: if you delete the token file, the agent will not be matched with the previously discovered objects upon a new installation.

Reinstalling the agent on a Linux Endpoint

Perform the following steps to reinstall the agent on a Linux machine:

1. If you are removing the agent completely and are not going to reinstall it, delete the data directory by running:

$ rm -rf /opt/epops-agent/data

2. If you are completely removing the client and are not going to reinstall or upgrade it, delete the epops-token file:

$ rm /etc/vmware/epops-token

3. Run the following command to uninstall the agent:

$ yum remove <AgentFullName>

4. If you are completely removing the client and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations.

You can now install the client again, the same way as we did earlier in this book.

Reinstalling the Agent on a Windows Endpoint

Perform the following steps to reinstall the Agent on a Windows machine:

1. From the CLI, change the directory to the agent bin directory:

cd C:\epops-agent\bin

2. Stop the agent by running:

C:\epops-agent\bin> ep-agent.bat stop

3. Remove the agent service by running:

C:\epops-agent\bin> ep-agent.bat remove

4. If you are completely removing the client and are not going to reinstall it, delete the data directory by running:

C:\epops-agent> rd /s data

5. If you are completely removing the client and are not going to reinstall it, delete the epops-token file using the CLI or Windows Explorer:

C:\epops-agent> del "C:\ProgramData\VMware\EP Ops agent\epops-token"

6. Uninstall the agent from the Control Panel.

7. If you are completely removing the client and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations.

You can now install the client again, the same way as we did earlier in this book.

You've become familiar with the Endpoint Operations Management Agent and what it can do. We also discussed how to install and reinstall it on both Linux and Windows operating systems. To know more about vSphere and vRealize Automation workload placement, check out the book Mastering vRealize Operations Manager - Second Edition.

VMware vSphere storage, datastores, snapshots
What to expect from vSphere 6.7
KVM Networking with libvirt
An Overview of Horizon View Architecture and its Components

Packt
27 Mar 2015
In this article by Peter von Oven and Barry Coombs, authors of the book Mastering VMware Horizon 6, we will introduce you to the architecture and the architectural components that make up the core VMware Horizon solution, concentrating on the virtual desktop elements of Horizon with Horizon View Standard: the brokering of virtual desktop machines hosted on the VMware vSphere platform. We will discuss the role of each of the Horizon View components, explain how they fit into the overall infrastructure and the benefits they bring, and then take a deep dive into how Horizon View works.

Introducing the key Horizon components

To start with, we are going to introduce, at a high level, the main components that make up the Horizon View product. All of the VMware Horizon components described are included as part of the licensed product, and the features available to you depend on whether you have the View Standard Edition, the Advanced Edition, or the Enterprise Edition. Horizon licensing also includes ESXi and vCenter licensing to support the deployment of the core hosting infrastructure; you can deploy as many ESXi hosts and vCenter Servers as you require to host the desktop infrastructure. The key elements of Horizon View are outlined in the following diagram:

In the next section, we are going to start drilling down deeper into the architecture of how these high-level components fit together and how they work.

A high-level architectural overview

The Horizon View architecture is fairly straightforward to understand, as its foundations lie in the standard VMware vSphere products (ESXi and vCenter). So, if you already have the skills and experience of working with this platform, then you are halfway there. Horizon View builds on the vSphere infrastructure, taking advantage of some of the features of the ESXi hypervisor and vCenter Server, and requires adding a number of virtual machines to perform the various View roles and functions. An overview of the View architecture is shown in the following diagram:

View components run as applications that are installed on the Microsoft Windows Server operating system, so they could actually run on physical hardware as well. However, there are a great number of benefits to running them as virtual machines, such as delivering HA and DR, as well as the typical cost savings that can be achieved through virtualization. The following sections will cover each of these roles/components of the View architecture in greater detail.

The Horizon View Connection Server

The Horizon View Connection Server, sometimes referred to as the Connection Broker or View Manager, is the central component of the View infrastructure. Its primary role is to connect a user to their virtual desktop by performing user authentication and then delivering the appropriate desktop resources based on the user's profile and entitlement. When logging on to your virtual desktop, it is the Connection Server that you are communicating with.

How does the Connection Server work?

A user typically connects to their virtual desktop from their device by launching the View Client.
Once the View Client has launched, the user enters the address of the View Connection Server, which in turn responds by asking them to provide their network login details (their Active Directory (AD) domain username and password). It's worth noting that Horizon View now supports several AD functional levels, which are detailed in the following screenshot:

These credentials are authenticated against AD and, if successful, the user is able to continue the logon process. Depending on their entitlements, the user could then see a launch screen that displays a number of different desktop shortcuts available for login. These desktops represent the desktop pools the user has been entitled to use. A pool is basically a collection of virtual desktops; for example, it could be a pool for the marketing department, where the desktops contain specific applications/software for that department.

Once authenticated, View Manager makes a call to the vCenter Server to create a virtual desktop machine, and vCenter in turn makes a call to View Composer (if you are using linked clones) to start the build process if a desktop is not already available. Once built, the virtual desktop is displayed/delivered within the View Client window, using the chosen display protocol (PCoIP or RDP). The process is described in detail in the following diagram. (A command-line example of launching this connection from the client side is shown at the end of this section.)

There are other ways to deploy VDI solutions that do not require a connection broker and allow a user to connect directly to a virtual desktop; in fact, there might be a specific use case for doing this, such as having a large number of branches, where having local infrastructure allows trading to continue in the event of a WAN outage or poor network communication with the branch. VMware has a solution for what's referred to as a "brokerless View": the VMware Horizon View Agent Direct-Connection Plugin. However, don't forget that, in a Horizon View environment, the View Connection Server provides greater functionality and does much more than just connecting users to desktops.

The Horizon View Connection Server runs as an application on a Windows Server which, in turn, could be either a physical or a virtual machine. Running as a virtual machine has many advantages; for example, it means that you can easily add high-availability features, which are key, as you could potentially have hundreds of virtual user desktops running on a single host server. Along with managing the connections for the users, the Connection Server also works with vCenter Server to manage the virtual desktop machines. For example, when using linked clones and powering on virtual desktops, these tasks might be initiated by the Connection Server, but they are executed at the vCenter Server level.

Minimum requirements for the Connection Server

To install the View Connection Server on physical or virtual machines, you need to meet the following minimum requirements:

Hardware requirements: The following screenshot shows the hardware required:

Supported operating systems: The View Connection Server must be installed on one of the supported Windows Server operating systems, shown in the following screenshot.
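As a practical illustration of the login flow described above, the Windows Horizon View Client can be scripted so that it connects straight to a given pool. This is a minimal sketch; the installation path, server URL, account, and desktop pool name are illustrative assumptions, and the supported switches may vary by client version:

# Launch the View Client and connect to the 'Marketing' pool in one step
& 'C:\Program Files (x86)\VMware\VMware Horizon View Client\vmware-view.exe' `
    -serverURL 'https://view.company.local' `
    -userName jsmith `
    -domainName CORP `
    -desktopName 'Marketing'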
The Horizon View Security Server

The Horizon View Security Server is another instance of the View Connection Server, but this time it sits within your DMZ so that end users can securely connect to their virtual desktop machines from an external network or the Internet. You cannot install the View Security Server on the same machine that is already running as a View Connection Server or any of the other Horizon View components.

How does the Security Server work?

The user login process starts out the same as when using a View Connection Server for internal access, but we have now added an extra security layer with the Security Server. The idea is that users can access their desktops externally without first needing a VPN connection into the network. The process is described in detail in the following diagram:

The Security Server is paired with a View Connection Server, configured by the use of a one-time password during installation; it's a bit like pairing your phone's Bluetooth with the hands-free kit in your car. When the user logs in from the View Client, they access the View Connection Server, which in turn authenticates the user against AD. If the View Connection Server is configured as a PCoIP gateway, it passes the connection and addressing information to the View Client. This connection information allows the View Client to connect to the View Security Server using PCoIP, as shown in the diagram by the green arrow (1). The View Security Server then forwards the PCoIP connection to the virtual desktop machine (2), creating the connection for the user. The virtual desktop machine is displayed/delivered within the View Client window (3), using the chosen display protocol (PCoIP or RDP).

The Horizon View Replica Server

The Horizon View Replica Server, as the name suggests, is a replica or copy of a View Connection Server that is used to bring high availability to your Horizon View environment. Having a replica of your View Connection Server means that, if the Connection Server fails, users are still able to connect to their virtual desktop machines. You will need to change the IP address or update the DNS record to match this server if you are not using a load balancer.

How does the Replica Server work?

So, the first question is: what actually gets replicated? The View Connection Server stores all its information relating to the end users, desktop pools, virtual desktop machines, and other View-related objects in an Active Directory Application Mode (ADAM) database. This View information is then copied from the original View Connection Server to the Replica Server using the Lightweight Directory Access Protocol (LDAP), with a replication method similar to that used by AD. As the Connection Server and the Replica Server are now identical, if your Connection Server fails, you essentially have a backup that steps in and takes over, so end users can continue to connect to their virtual desktop machines. Just as with the other components, you cannot install the Replica Server role on a machine that is already running a View Connection Server or any of the other Horizon View components.

Persistent or nonpersistent desktops

In this section, we are going to talk about the different types of desktop assignment that can be deployed with Horizon View; these can have an impact on storage requirements, as well as on the way desktops are provisioned to end users. One of the questions that always gets asked is whether to use a dedicated (persistent) or a floating (nonpersistent) desktop assignment.
Desktops can either be individual virtual machines that are dedicated to a user on a 1:1 basis (as in a physical desktop deployment, where each user effectively has their own desktop), or a user can receive a new, vanilla desktop, chosen at random from a pool of available desktops, that gets provisioned, personalized, and assigned at login time. The two options are described in more detail as follows:

- Persistent desktop: Users are allocated a desktop that retains all of their documents, applications, and settings between sessions. The desktop is statically assigned the first time the user connects and is then used for all subsequent sessions. No other user is permitted access to the desktop.
- Nonpersistent desktop: Users might be connected to different desktops from the pool each time they connect. Environmental and user data does not persist between sessions; it is delivered as the user logs on to their desktop, and the desktop is refreshed or reset when the user logs off.

In most use cases, a nonpersistent configuration is the best option. The key reason is that, in this model, you don't need to build all the desktops upfront for each user: you only need to power on a virtual desktop as and when it's required. All users start with the same basic desktop, which then gets personalized before delivery. This helps with concurrency rates. For example, you might have 5,000 people in your organization, but only 2,000 ever log in at the same time; therefore, you only need to have 2,000 virtual desktops available. Otherwise, you would have to build a desktop for every one of the 5,000 users who might ever log in, resulting in more server infrastructure and certainly a lot more storage capacity. We will talk about storage in the next section.

The other thing that we often see some confusion over is the difference between dedicated and floating desktops, and how linked clones fit in. Just to make it clear, linked clones and full clones are not what we are talking about when we refer to dedicated and floating desktops. Cloning operations refer to how a desktop is built, whereas the terms persistent and nonpersistent refer to how a desktop is assigned to a user. Dedicated and floating desktops are purely about user assignment and whether a user has a dedicated desktop or one allocated on demand from a pool. Linked clones and full clones are features of Horizon View, which uses View Composer to create a desktop image for each user from a master or parent image. This means that, regardless of having a floating or dedicated desktop assignment, the virtual desktop machine could be either a linked or a full clone.

So, here's a summary of the benefits of the nonpersistent model:

- It is operationally efficient: all users start from a single desktop image, or a small number of them, so organizations reduce the amount of image and patch management.
- It is storage-efficient: the amount of storage required to host the nonpersistent desktop images is smaller than keeping separate instances of unique user desktops.

In the next section, we are going to cover an in-depth overview of Horizon View Composer and linked clones, and the advantages the technology delivers.

Horizon View Composer and linked clones

One of the main reasons a virtual desktop project fails to deliver, or doesn't even get out of the starting blocks, is heavy infrastructure cost, often down to the storage requirements.
The storage requirements are often seen as a huge cost burden, which can be attributed to the fact that people approach this in the same way they would approach a physical desktop environment's requirements: each user gets their own dedicated virtual desktop and the hard disk space that comes with it, albeit a virtual disk, and this then gets scaled out for the entire user population. Let's take an example: if you had 1,000 users and allocated 250 GB per user's desktop, you would need 1,000 x 250 GB = 250 TB for the virtual desktop environment. That's a lot of storage just for desktops, and the cost of deploying that much capacity in the data center could render the project cost-ineffective compared to a physical desktop deployment.

A new approach to deploying storage for a virtual desktop environment is needed, and this is where linked clone technology comes into play. In a nutshell, linked clones are designed to reduce the amount of disk space required, and to simplify the deployment and management of images to multiple virtual desktop machines in a centralized and much easier process.

Linked clone technology

Starting at a high level, a clone is a copy of an existing or parent virtual machine. This parent virtual machine (VM) is typically your gold build, from which you want to create new virtual desktop machines. When a clone is created, it becomes a separate, new virtual desktop machine with its own unique identity. This process is not unique to Horizon View; it's actually a function of vSphere and vCenter, and in the case of Horizon View, we add another component, View Composer, to manage the desktop images. There are two types of clones that we can deploy, a full clone or a linked clone, and we will explain the difference in the next sections.

Full clones

As the name implies, a full clone disk is an exact, full-sized copy of the parent machine. Once the clone has been created, the virtual desktop machine is unique, with its own identity, and has no links back to the parent virtual machine from which it was cloned. It can operate as a fully independent virtual desktop in its own right and is not reliant on its parent. However, as it is a full-sized copy, be aware that it will take up the same amount of storage as its parent virtual machine, which leads back to our earlier discussion about storage capacity requirements: using full clones will require larger amounts of storage capacity and will likely lead to higher infrastructure costs.

Before you completely dismiss the idea of using full clone virtual desktop machines, there are some use cases that rely on this model. For example, if you use VMware Mirage to deliver a base layer or application layer, it only works today with full-clone, dedicated Horizon View virtual desktop machines. If you have software developers, they probably need to install specialist tools and code onto a desktop and therefore need to "own" their desktop. Or perhaps the applications that you run in your environment need a dedicated desktop because of the way they are licensed.

Linked clones

Having now discussed full clones, we are going to talk about deploying virtual desktop machines with linked clones.
In a linked clone deployment, a delta disk is created and then used by the virtual desktop machine to store the data differences between its own operating system and that of its parent virtual desktop machine. Unlike the full clone method, the linked clone is not a full copy of the virtual disk. The term linked clone refers to the fact that the clone always looks to its parent in order to operate, as it continues to read from the replica disk; the replica is basically a copy of a snapshot of the parent virtual desktop machine. The linked clone itself could potentially grow to the same size as the replica disk if you allow it to. However, you can set limits on how big it can grow and, should it start to get too big, you can refresh the virtual desktops that are linked to it, which essentially restarts the cloning process from the initial snapshot.

Immediately after a linked clone virtual desktop is deployed, the difference between the parent virtual machine and the newly created virtual desktop machine is extremely small, which greatly reduces the storage capacity requirements compared to a full clone; this is how linked clones are more space-efficient than their full clone brothers (a worked capacity comparison follows at the end of this section). The underlying technology behind linked clones is more like a snapshot than a clone, but with one key difference: View Composer. With View Composer, you can have more than one active snapshot linked to the parent virtual machine disk, which allows you to create multiple virtual desktop images from just one parent. Best practice would be to deploy an environment with linked clones so as to reduce the storage requirements. However, as previously mentioned, there are some use cases where you will need full clones.

One thing to be aware of, still relating to storage, is that we are now talking about performance rather than capacity. All linked clone virtual desktops will be reading from one replica and will therefore drive a high number of Input/Output Operations Per Second (IOPS) on the storage where the replica lives. Depending on your desktop pool design, you are fairly likely to have more than one replica, as you would typically have more than one datastore; this in turn depends on the number of users, who will drive the design of the solution. In Horizon View, you are able to choose the location where the replica lives, and one recommendation is that the replica should sit on fast storage such as a local SSD.

An alternative is to deploy some form of storage acceleration technology to drive the IOPS. Horizon View has its own integrated solution, called View Storage Accelerator (VSA) or Content Based Read Cache (CBRC). This feature allows you to allocate up to 2 GB of memory from the underlying ESXi host server to be used as a cache for the most commonly read blocks; as booting desktop operating systems request the same blocks over and over, retrieving them from memory accelerates the process. Another option is View Composer Array Integration (VCAI), which allows the process of building linked clones to be offloaded to the storage array and its native snapshot mechanism, rather than taking CPU cycles from the host server. There are also a number of third-party solutions that address the storage performance bottleneck, such as Atlantis Computing with their ILIO product, Nutanix, Nimble, and Tintri, to name a few.
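To put the capacity saving into rough numbers, the following sketch reuses the 1,000-user example from earlier in this article. The per-clone delta size is an assumption based on the "few hundred MB" figure quoted later for a typical linked clone disk, and a single replica on one datastore is assumed:

# Back-of-the-envelope storage comparison: full clones versus linked clones
$users       = 1000
$fullCloneGB = 250    # per-desktop allocation from the earlier example
$deltaGB     = 0.5    # assumed delta disk per linked clone (a few hundred MB)
$replicaGB   = 250    # one read-only replica (one datastore assumed)

# Using decimal terabytes (1 TB = 1,000 GB) for simplicity
'Full clones  : {0:N0} TB' -f ($users * $fullCloneGB / 1000)
'Linked clones: {0:N1} TB' -f (($users * $deltaGB + $replicaGB) / 1000)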
In the next section, we will take a deeper look at how linked clones work.

How do linked clones work?

The first step is to create your master virtual desktop machine image, which should contain not only the operating system, core applications, and settings, but also the Horizon View Agent components. This virtual desktop machine will become your parent VM, or gold image, and can then be used as a template from which to create any new virtual desktop machines. Note that the gold or parent image cannot be a VM template. An overview of the linked clone process is shown in the following diagram:

Once you have created the parent virtual desktop or gold image (1), you then take a snapshot (2). When you create your desktop pool, this snapshot is selected, becomes the replica (3), and is set to read-only. Each virtual desktop is linked back to this replica; hence the term linked clone. When you start creating your virtual desktops, you create linked clones that are unique copies for each user. Try not to create too many snapshots for your parent VM; I would recommend having just a handful, otherwise this could impact the performance of your desktops and make it a little harder to know which snapshot is which. (A PowerCLI sketch of the snapshot step appears at the end of this section.)

What does View Composer build?

During the image building process, once the replica disk has been created, View Composer creates a number of other virtual disks, including the linked clone (operating system) disk itself. These are described in the following sections.

Linked clone disk

Not wanting to state the obvious, the main disk that gets created is the linked clone disk itself. This is basically an empty virtual disk container that is attached to the virtual desktop machine as the user logs in and the desktop starts up. The disk starts off small but grows over time, depending on the block changes requested from the replica disk by the virtual desktop machine's operating system. These block changes are stored in the linked clone disk, which is why it is sometimes referred to as the delta disk, or differential disk, as it stores all the delta changes that the desktop operating system requests from the parent VM. As mentioned before, the linked clone disk can grow to a maximum size equal to the parent VM but, following best practice, you would never let this happen; typically, you can expect the linked clone disk to grow only to a few hundred MB. We will cover this in the Understanding the linked clone process section later.

The replica disk is set as read-only and is used as the primary disk; any writes and block changes requested by the virtual desktop are written to and read from the linked clone disk. It is a recommended best practice to allocate tier-1 storage, such as local SSD drives, to host the replica, as all virtual desktops in the cluster will be referencing this single read-only VMDK file as their base image. Keeping it high in the stack improves performance by reducing the overall storage IOPS required by a VDI workload. As we mentioned at the start of this section, storage costs are seen as being expensive for VDI; linked clones reduce the capacity burden, but they do drive the requirement to deliver a huge number of IOPS from a single LUN.
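As a practical aside, the parent snapshot from step (2) above can be taken with VMware PowerCLI rather than the vSphere client. A minimal sketch, assuming PowerCLI is installed and that the vCenter and VM names (which are illustrative here) resolve in your environment:

# Connect to vCenter and snapshot the gold image that View Composer will use
Connect-VIServer -Server vcenter.company.local

$gold = Get-VM -Name 'Win7-Gold'
New-Snapshot -VM $gold -Name 'Base-v1' `
    -Description 'Baseline for View Composer linked clone pool'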
Persistent disk or user data disk
The persistent disk feature of View Composer allows you to configure a separate disk that contains just the user data and user settings, and not the operating system. This allows any user data to be preserved when you update or make changes to the operating system disk, such as during a recompose action. It's worth noting that the persistent disk is referenced by the VM name and not the username, so bear this in mind if you want to attach the disk to another VM. This disk is also used to store the user's profile, so you need to size it accordingly, ensuring that it is large enough to hold user profile data.

Disposable disk
With the disposable disk option, Horizon View creates what is effectively a temporary disk that gets deleted every time the user powers off their virtual desktop machine. If you think about how the Windows desktop operating system operates, there are several files that are only used on a temporary basis, such as Temporary Internet Files or the Windows pagefile. As these are only temporary files, why would you want to keep them? With Horizon View, these types of files are redirected to the disposable disk and then deleted when the VM is powered off. The disposable disk therefore contains temporary system files that you don't want stored on the main operating system disk, where they would consume unnecessary disk space: for example, the pagefile, Windows system temporary files, and VMware log files.

Note that here we are talking about temporary system files and not user files. A user's temporary files are still stored on the user data disk so that they can be preserved; many applications use the Windows temp folder to store installation CAB files, which can be referenced post-installation. Having said that, you might want to delete temporary user data to reduce the desktop image size, in which case you could ensure that the user's temporary files are directed to the disposable disk.

Internal disk
Finally, we have the internal disk. The internal disk is used to store important configuration information, such as the computer account password, that would be needed to join the virtual desktop machine back to the domain if you refreshed the linked clones. It is also used to store Sysprep and Quickprep configuration details. In terms of disk space, the internal disk is relatively small, averaging around 20 MB. By default, the user will not see this disk in Windows Explorer, as it contains important configuration information that you wouldn't want them to delete.

Understanding the linked clone process
There are several complex steps performed by View Composer and View Manager that occur when a user launches a virtual desktop session. So, what's the process to build a linked clone desktop, and what goes on behind the scenes? When a user logs into Horizon View and requests a desktop, View Manager, using vCenter and View Composer, creates a virtual desktop machine. This process is described in the following sections.

Creating and provisioning a new desktop
An entry for the virtual desktop machine is created in the Active Directory Application Mode (ADAM) database before it is put into provisioning mode:

1. The linked clone virtual desktop machine is created by View Composer.
2. A machine account is created in AD with a randomly generated password.
3. View Composer checks for a replica disk and creates one if one does not already exist.
4. A linked clone is created by a vCenter Server API call from View Composer.
5. An internal disk is created to store the configuration information and the machine account password.

Customizing the desktop
Now that you have a newly created linked clone virtual desktop machine, the next phase is to customize it. The customization steps are as follows:

1. The virtual desktop machine is switched to customization mode.
2. The virtual desktop machine is customized by vCenter Server using the customizeVM_Task command and is joined to the domain with the information you entered in the View Manager console.
3. The linked clone virtual desktop is powered on.
4. The View Composer Agent on the linked clone virtual desktop machine starts up for the first time and joins the machine to the domain, using the NetJoinDomain command and the machine account password that was created on the internal disk.
5. The linked clone virtual desktop machine is now Sysprep'd. Once complete, View Composer tells View Agent that customization has finished, and View Agent tells View Manager that the customization process has finished.
6. The linked clone virtual desktop machine is powered off and a snapshot is taken.
7. The linked clone virtual desktop machine is marked as provisioned and is now available for use.

When a linked clone virtual desktop machine is powered on with the View Composer Agent running, the agent tracks any changes that are made to the machine account password; any changes are updated and stored on the internal disk. In many AD environments, the machine account password is changed periodically. If the View Composer Agent detects a password change, it updates the machine account password on the internal disk that was created with the linked clone. This is important because, during a refresh operation, the linked clone virtual desktop machine is reverted to the snapshot taken after customization; with the latest password stored on the internal disk, the agent is able to reset the machine account password to the current one. The linked clone process is depicted in the following diagram:

Additional features and functions of linked clones
There are a number of other management functions that you can perform on a linked clone from View Composer; these are outlined in this section and are needed in order to deliver the ongoing management of the virtual desktop machines.

Recomposing a linked clone
Recomposing a linked clone virtual desktop machine or desktop pool allows you to perform updates to the operating system disk, such as applying the latest patches or software updates. You can only perform updates on the same version of an operating system, so you cannot use the recompose feature to migrate from one operating system to another, such as going from Windows XP to Windows 7. As we covered in the What does View Composer build? section, there are separate disks for items such as user data; these disks are not affected during a recompose operation, so all user-specific data on them is preserved.

When you initiate the recompose operation, View Composer essentially starts the linked clone building process over again: a new operating system disk is created, which then gets customized, and a snapshot, as in the steps shown in the preceding sections, is taken. During the recompose operation, the MAC address of the network interface and the Windows SID are not preserved. There are some management tools and security-type solutions that might not work due to this change; however, the UUID will remain the same.
The recompose process is described in the following steps:

1. View Manager puts the linked clone into maintenance mode.
2. View Manager calls the View Composer resync API for the linked clones being recomposed, directing View Composer to use the new base image and snapshot.
3. If there isn't yet a replica for the base image and snapshot in the target datastore for the linked clone, View Composer creates the replica in the target datastore (unless a separate datastore is being used for replicas, in which case the replica is created in the replica datastore).
4. View Composer destroys the current OS disk for the linked clone and creates a new OS disk linked to the new replica.
5. The rest of the recompose cycle is identical to the customization phase of the provisioning and customization cycles.

The following diagram shows a graphical representation of the recompose process. Before the process begins, the first thing you need to do is update your gold image (1) with the patches or new applications you want to deploy to the virtual desktops. As described in the preceding steps, a snapshot is then taken (2) to create the new replica, Replica V2 (3). The existing OS disk is destroyed, but the user data disk (4) is maintained during the recompose process.

Refreshing a linked clone
By carrying out a refresh of a linked clone virtual desktop, you are effectively reverting it to its initial state, as of the original snapshot taken after it completed the customization phase. This process only applies to the operating system disk; no other disks are affected. An example use case for refresh operations would be refreshing a nonpersistent desktop two hours after logoff, to return it to its original state and make it available for the next user. The refresh process performs the following tasks:

1. The linked clone virtual desktop is switched into maintenance mode.
2. View Manager reverts the linked clone virtual desktop to the snapshot taken after customization was completed (vdm-initial-checkpoint).
3. The linked clone virtual desktop starts up, and the View Composer Agent detects whether the machine account password needs to be updated. If the password on the internal disk is newer than the one in the registry, the agent updates the machine account password using the one on the internal disk.

One of the reasons why you would perform a refresh operation is if the linked clone OS disk starts to become bloated. As we previously discussed, the OS linked clone disk could grow to the full size of its parent image, meaning it would take up more disk space than is really necessary, which rather defeats the objective of linked clones. The refresh operation effectively resets the linked clone to a small delta between it and its parent image. The following diagram shows a representation of the refresh operation:

The linked clone on the left-hand side of the diagram (1) has started to grow in size. Refreshing reverts it back to the snapshot, as if it were a new virtual desktop, as shown on the right-hand side of the diagram (2).

Rebalancing operations with View Composer
The rebalance operation in View Composer is used to evenly distribute the linked clone virtual desktop machines across multiple datastores in your environment. You would perform this task in the event that one of your datastores was becoming full while others have ample free space. It might also help with the performance of that particular datastore.
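Before initiating a rebalance, it can be useful to see how unevenly the linked clone datastores are filled. A quick PowerCLI sketch, where the datastore name filter is a placeholder for your own naming convention:

Get-Datastore -Name VDI-LC-* | Sort-Object FreeSpaceGB | Format-Table Name, CapacityGB, FreeSpaceGB -AutoSize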
For example, if you had 10 virtual desktop machines in one datastore and only two in another, running a rebalance operation would potentially even this out and leave you with six virtual desktop machines per datastore. You must use the View Administrator console to initiate the rebalance operation in View Composer; if you simply migrate any of your virtual desktop machines yourself, View Composer will not be able to keep track of them. On the other hand, if you have six virtual desktop machines on one datastore and seven on another, it is highly likely that initiating a rebalance operation will have no effect and no virtual desktop machines will be moved, as doing so has no benefit. A virtual desktop machine will only be moved to another datastore if the target datastore has significantly more spare capacity than the source. The rebalance process is described in the following steps:

1. The linked clone is switched to maintenance mode.
2. Virtual machines to be moved are identified, based on the free space in the available datastores.
3. The operating system disk and persistent disk are disconnected from the virtual desktop machine.
4. The detached operating system disk and persistent disk are moved to the target datastore.
5. The virtual desktop machine is moved to the target datastore.
6. The operating system disk and persistent disk are reconnected to the linked clone virtual desktop machine.
7. View Composer resynchronizes the linked clone virtual desktop machines.
8. View Composer checks for the replica disk in the datastore and creates one if one does not already exist, as per the provisioning steps covered earlier in this article. As with the recompose operation, the operating system disk for the linked clone is deleted, and a new one is created and then customized.

The following diagram shows the rebalance operation:

Summary
In this article, we discussed the Horizon View architecture and the different components that make up the complete solution. We covered the key technologies, such as how linked clones work to optimize storage.

Virtual Machine Concepts

Packt
19 Mar 2014
The multiple instances of Windows or Linux systems that are running on an ESXi host are commonly referred to as virtual machines (VMs). Any reference to a guest operating system (OS) means an instance of Linux, Windows, or any other supported operating system installed on a VM.

vSphere virtual machines
At the heart of virtualization lies the virtual machine. A virtual machine is a set of virtual hardware whose characteristics are determined by a set of files; it is this virtual hardware that a guest operating system is installed on. A virtual machine runs an operating system and a set of applications, just like a physical server. A virtual machine comprises a set of configuration files and is backed by the physical resources of an ESXi host, which is the physical server that has the VMware hypervisor, ESXi, installed. Each virtual machine is equipped with virtual hardware and devices that provide the same functionality as physical hardware.

Virtual machines are created within a virtualization layer, such as ESXi running on a physical server. This virtualization layer manages requests from the virtual machine for resources such as CPU or memory, and is responsible for translating these requests to the underlying physical hardware. Each virtual machine is granted a portion of the physical hardware. All VMs have their own virtual hardware; the most important devices are the primary four: CPU, memory, disk, and network. Each VM is isolated from the others, and each interacts with the underlying hardware through a thin software layer known as the hypervisor. This differs from a physical architecture, in which the installed operating system interacts with the installed hardware directly. Virtualization brings many benefits in relation to portability, security, and manageability that aren't available in a traditional physical infrastructure; however, once provisioned, virtual machines follow many of the same principles that apply to physical servers.

The preceding diagram demonstrates the differences between the traditional physical architecture (left) and a virtual architecture (right). Notice that the physical architecture typically has a single application and a single operating system using the physical resources, while the virtual architecture has multiple virtual machines running on a single physical server, accessing the hardware through the thin hypervisor layer.

Virtual machine components
When a virtual machine is created, a default set of virtual hardware is assigned to it. VMware provides devices and resources that can be added and configured on the virtual machine. Not every virtual hardware device will be available to every virtual machine; both the physical hardware of the ESXi host and the VM's guest OS must support these configurations. For example, a virtual machine cannot be configured with more vCPUs than the ESXi host has logical CPU cores. The virtual hardware available includes the following (a short PowerCLI sketch for inspecting these devices on a VM follows the list):

- BIOS: Phoenix Technologies 6.00, which functions like a physical server BIOS. Virtual machine administrators are able to enable/disable I/O devices, configure the boot order, and so on.
- DVD/CD-ROM: NEC VMware IDE CDR10, installed by default in new virtual machines created in vSphere. The DVD/CD-ROM can be configured to connect to the client workstation's DVD/CD-ROM, an ESXi host DVD/CD-ROM, or an .iso file located on a datastore. DVD/CD-ROM devices can be added to or removed from a virtual machine.
- Floppy drive: Installed by default with new virtual machines created in vSphere. The floppy drive can be configured to connect to the client device's floppy drive, a floppy device located on the ESXi host, or a floppy image (.flp) located on a datastore. Floppy devices can be added to or removed from a virtual machine.
- Hard disk: Stores the guest operating system, program files, and any other data associated with a virtual machine. The virtual disk is a large file, or potentially a set of files, that can be easily copied, moved, and backed up.
- IDE controller: Intel 82371 AB/EB PCI Bus Master IDE Controller, which presents two Integrated Drive Electronics (IDE) interfaces to the virtual machine by default. This IDE controller is a standard way for storage devices, such as floppy drives and CD-ROM drives, to connect to the virtual machine.
- Keyboard: Mirrors the keyboard that is first connected to the virtual machine console upon initial console connection.
- Memory: The virtual memory size configured for the virtual machine, which determines the guest operating system's memory size.
- Motherboard/Chipset: The motherboard uses VMware proprietary devices based on the following chips: Intel 440BX AGPset 82443BX Host Bridge/Controller; Intel 82093 AA I/O Advanced Programmable Interrupt Controller; Intel 82371 AB (PIIX4) PCI ISA IDE Xcelerator; National Semiconductor PC87338 ACPI 1.0 and PC98/99 Compliant Super I/O.
- Network adapter: ESXi networking features provide communication between virtual machines residing on the same ESXi host, between VMs residing on different ESXi hosts, and between VMs and physical machines. When configuring a VM, network adapters (NICs) can be added and the adapter type specified.
- Parallel port: An interface for connecting peripherals to the virtual machine. Virtual parallel ports can be added to or removed from the virtual machine.
- PCI controller: A bus located on the virtual machine motherboard that communicates with components such as a hard disk. A single PCI controller is presented to the virtual machine; it cannot be configured or removed.
- PCI device: DirectPath devices can be added to a virtual machine. The devices must be reserved for PCI pass-through on the ESXi host that the virtual machine runs on. Keep in mind that snapshots are not supported with the DirectPath I/O pass-through device configuration. For more information on virtual machine snapshots, see http://vmware.com/kb/1015180.
- Pointing device: Mirrors the pointing device that is first connected to the virtual machine console upon initial console connection.
- Processor: Specifies the number of sockets and cores for the virtual processor. This will appear as AMD or Intel to the virtual machine guest operating system, depending on the physical hardware.
- Serial port: An interface for connecting peripherals to the virtual machine. The virtual machine can be configured to connect to a physical serial port, a file on the host, or over the network; the serial port can also be used to establish a direct connection between two VMs. Virtual serial ports can be added to or removed from the virtual machine.
- SCSI controller: Provides access to virtual disks. The virtual SCSI controller may appear as one of several different types of controllers to a virtual machine, depending on the guest operating system of the VM. Editing the VM configuration can modify the SCSI controller type, a SCSI controller can be added, and a virtual controller can be configured for bus sharing.
- SCSI device: A SCSI device interface is available to the virtual machine by default. This interface is a typical way to connect storage devices (hard drives, floppy drives, CD-ROMs, and so on) to a VM. SCSI devices can be added to or removed from a virtual machine.
- SIO controller: The Super I/O controller provides serial and parallel ports and floppy devices, and performs system management activities. A single SIO controller is presented to the virtual machine; it cannot be configured or removed.
- USB controller: Provides USB functionality to the USB ports it manages. The virtual USB controller is a software virtualization of the USB host controller function in a VM.
- USB device: Multiple USB devices, such as mass storage devices or security dongles, may be added to a virtual machine. The USB devices can be connected to a client workstation or to an ESXi host.
- Video controller: A VMware Standard VGA II Graphics Adapter with 128 MB of video memory.
- VMCI: The Virtual Machine Communication Interface provides high-speed communication between the hypervisor and a virtual machine; it can also be enabled for communication between VMs. VMCI devices cannot be added or removed.
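As promised above, here is a short PowerCLI sketch that enumerates the main virtual devices of a VM; the VM name is a placeholder, and the hardware version property name varies between PowerCLI releases:

$vm = Get-VM -Name app01
# Basic virtual hardware: vCPU count, configured memory, hardware version
$vm | Select-Object Name, NumCpu, MemoryGB, Version
Get-NetworkAdapter -VM $vm    # vNICs and their adapter types
Get-HardDisk -VM $vm          # virtual disks (VMDK files)
Get-CDDrive -VM $vm           # virtual DVD/CD-ROM drives
Get-FloppyDrive -VM $vm       # virtual floppy drives
Get-UsbDevice -VM $vm         # attached USB devices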
Uses of virtual machines
In any infrastructure, there are many business processes with applications supporting them. These applications typically have certain requirements, such as security or performance requirements, which may limit the application to being the only thing installed on a given machine. Without virtualization, there is typically a 1:1:1 ratio of server hardware to operating system to application. This type of architecture is inflexible and inefficient: many applications use only a small percentage of the physical resources dedicated to them, leaving the physical servers vastly underutilized. As hardware continues to improve, the gap between abundant resources and often small application requirements widens. Also, consider the overhead needed to support the entire infrastructure, such as power, cooling, cabling, manpower, and provisioning time; a large server sprawl costs more money for the space and power needed to house and cool those systems.

Virtual infrastructures are able to do more with less: fewer physical servers are needed due to higher consolidation ratios. Virtualization provides a safe way of putting more than one operating system (or virtual machine) on a single piece of server hardware by isolating each VM running on the ESXi host from any other. Migrating physical servers to virtual machines and consolidating onto far fewer physical servers lowers monthly power and cooling costs in the datacenter. Fewer physical servers can also reduce the datacenter footprint: fewer servers means less networking equipment, fewer server racks, and eventually less datacenter floor space required. Virtualization also changes the way a server is provisioned. Where it once took hours to rack, cable, and install the OS, it now takes only seconds to deploy a new virtual machine using templates and cloning.

VMware offers a number of advanced features that aren't found in a strictly physical infrastructure. These features, such as High Availability, Fault Tolerance, and Distributed Resource Scheduler, help with increased uptime and overall availability; these technologies keep the VMs running or provide the ability to quickly recover from unplanned outages. The ability to quickly and easily relocate a VM from one ESXi host to another is one of the greatest benefits of using vSphere virtual machines. In the end, virtualizing the infrastructure and using virtual machines will help save time, space, and money. However, keep in mind that there are some upfront costs to be aware of. Server hardware may need to be upgraded, or new hardware purchased, to ensure compliance with the VMware Hardware Compatibility List (HCL). Another cost to take into account is licensing for VMware and the guest operating systems; each tier of licensing allows for more features, but drives up the price to license all of the server hardware.

The primary virtual machine resources
Virtualization decouples physical hardware from an operating system. Each virtual machine contains a set of its own virtual hardware, and there are four primary resources that a virtual machine needs in order to function correctly: CPU, memory, network, and hard disk. These four resources look like physical hardware to the guest operating system and applications. The virtual machine is granted access to a portion of the resources at creation and can be reconfigured at any time thereafter. If a virtual machine experiences constraint, one of the four primary resources is generally where the bottleneck will occur.

In a traditional architecture, without virtualization, the operating system interacts directly with the server's physical hardware: it allocates memory to applications, schedules processes to run, reads from and writes to attached storage, and sends and receives data on the network. With a virtualized architecture, the virtual machine guest operating system still performs these tasks, but it interacts with virtual hardware presented by the hypervisor; in this case, the hypervisor is ESXi. The hypervisor allows the VM to function with a degree of independence from the underlying physical hardware, and this independence is what enables vMotion and Storage vMotion functionality. The following diagram demonstrates a virtual machine and its four primary resources:

This section will provide an overview of each of the "primary four" resources.

CPU
The virtualization layer runs CPU instructions so that the virtual machines run as though they were accessing the physical processor on the ESXi host. Performance is paramount for CPU virtualization, so the ESXi host's physical resources are used whenever possible. The following image displays a representation of a virtual machine's CPU:

A virtual machine can be configured with up to 64 virtual CPUs (vCPUs) as of vSphere 5.5. The maximum number of vCPUs that can be allocated depends on the number of logical cores the physical hardware has; another factor is the tier of vSphere licensing, as only Enterprise Plus licensing allows for 64 vCPUs. The VMkernel includes a CPU scheduler that dynamically schedules vCPUs on the ESXi host's physical processors.
The VMkernel scheduler, when making scheduling decisions, considers socket-core-thread topology. A socket is a single integrated circuit package with one or more physical processor cores; each core has one or more logical processors, also known as threads. If hyperthreading is enabled on the host, ESXi is capable of executing two threads, or sets of instructions, simultaneously. Effectively, hyperthreading provides more logical CPUs on which vCPUs can be scheduled, providing more scheduler throughput; however, keep in mind that hyperthreading does not double a core's power. During times of CPU contention, when VMs are competing for resources, the VMkernel timeslices the physical processors across all virtual machines to ensure that each VM runs as if it had the specified number of vCPUs.

VMware vSphere Virtual Symmetric Multiprocessing (SMP) is what allows virtual machines to be configured with up to 64 virtual CPUs, which allows larger CPU workloads to run on an ESXi host. Though most supported guest operating systems are multiprocessor aware, many guest OSes and applications do not need, and are not enhanced by, multiple vCPUs. Check vendor documentation for operating system and application requirements before configuring SMP virtual machines.

Memory
In a physical architecture, an operating system assumes that it owns all the physical memory in the server, which is a correct assumption. A guest operating system in a virtual architecture also makes this assumption, but it does not, in fact, own all of the physical memory. A guest operating system in a virtual machine uses a contiguous virtual address space, created by ESXi, as its configured memory. The following image displays a representation of a virtual machine's memory:

Virtual memory is a well-known technique that creates this contiguous virtual address space, allowing the hardware and operating system to handle the address translation between the physical and virtual address spaces. Since each virtual machine has its own contiguous virtual address space, ESXi can run more than one virtual machine at the same time, with each virtual machine's memory protected against access from other virtual machines. This effectively results in three layers of memory in ESXi: host physical memory, guest operating system physical memory, and guest operating system virtual memory. The VMkernel presents a portion of host physical memory to the virtual machine as its guest operating system physical memory, and the guest operating system presents virtual memory to the applications.

The virtual machine is configured with a memory size; this is the amount that the guest OS is told it has available. A virtual machine will not necessarily use its entire configured memory size; it only uses what the guest OS and applications need at the time. However, a VM cannot access more memory than its configured memory size. A default memory size is provided by vSphere when creating the virtual machine. It is important to know the memory needs of the application and guest operating system being virtualized so that the virtual machine's memory can be sized accordingly.
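Because vCPU count and configured memory are both just settings on the virtual machine, resizing is straightforward. A minimal PowerCLI sketch, where the name and sizes are placeholders; the VM generally needs to be powered off unless hot-add is enabled in the guest:

$vm = Get-VM -Name app01
Stop-VM -VM $vm -Confirm:$false
# Resize to 4 vCPUs and 8 GB of configured memory
Set-VM -VM $vm -NumCpu 4 -MemoryGB 8 -Confirm:$false
Start-VM -VM $vm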
Network
There are two key components in virtual networking: the virtual switch and the virtual Ethernet adapter. A virtual machine can be configured with up to ten virtual Ethernet adapters, called vNICs. The following image displays a representation of a virtual machine's vNIC:

Virtual network switching happens in software, between virtual machines at the vSwitch level, until the frames hit an uplink (a physical adapter), exit the ESXi host, and enter the physical network. Virtual networks exist for virtual devices: all communication between the virtual machines and the external world (the physical network) goes through vNetwork standard switches or vNetwork distributed switches. Virtual networks operate at layer 2, the data link layer, of the OSI model.

A virtual switch is similar to a physical Ethernet switch in many ways. For example, virtual switches support the standard VLAN (802.1Q) implementation and have a forwarding table, like a physical switch. An ESXi host may contain more than one virtual switch. Each virtual switch is capable of binding multiple vmnics together in a network interface card (NIC) team, which offers greater availability to the virtual machines using that virtual switch.

There are two connection types available on a virtual switch: port groups and VMkernel ports. Virtual machines are connected to port groups on a virtual switch, allowing access to network resources. VMkernel ports provide network services to the ESXi host, including IP storage, management, vMotion, and so on; each VMkernel port must be configured with its own IP address and network mask. The port groups and VMkernel ports reside on a virtual switch and connect to the physical network through the physical Ethernet adapters, known as vmnics. If uplinks (vmnics) are associated with a virtual switch, the virtual machines connected to a port group on that virtual switch will be able to access the physical network.
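To make this concrete, the following PowerCLI sketch creates a standard vSwitch with two uplinks and a VLAN-tagged port group for virtual machines; the host name, vmnic numbers, and VLAN ID are placeholders:

$esx = Get-VMHost -Name esxi01.lab.local
# Create a standard vSwitch backed by two physical uplinks (a NIC team)
$vs = New-VirtualSwitch -VMHost $esx -Name vSwitch1 -Nic vmnic2,vmnic3
# Add a port group for virtual machines on VLAN 20
New-VirtualPortGroup -VirtualSwitch $vs -Name VM-Network-VLAN20 -VLanId 20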
Disk
In a non-virtualized environment, physical servers connect directly to storage, either to an external storage array or to internal hard disk arrays in the server chassis. The issue with this configuration is that a single server expects total ownership of the physical device, tying an entire disk drive to one server. Sharing storage resources in non-virtualized environments can require complex filesystems or migration to file-based Network Attached Storage (NAS) or Storage Area Networks (SANs). The following image displays a representation of a virtual disk:

Shared storage is a foundational technology that enables many things in a virtual environment (High Availability, Distributed Resource Scheduler, and so on). Virtual machines are encapsulated in a set of discrete files stored on a datastore; this encapsulation makes the VMs portable and easy to clone or back up. For each virtual machine, there is a directory on the datastore that contains all of the VM's files. A datastore is a generic term for a container that holds files, including .iso images and floppy images. It can be formatted with VMware's Virtual Machine File System (VMFS) or can use NFS, and both datastore types can be accessed across multiple ESXi hosts.

VMFS is a high-performance, clustered filesystem devised for virtual machines, which allows multiple physical servers in a virtualization-based architecture to read from and write to the same storage simultaneously. VMFS is designed, constructed, and optimized for virtualization. The newest version, VMFS-5, exclusively uses a 1 MB block size, which is good for large files, while also having an 8 KB sub-block allocation for writing small files such as logs. VMFS-5 can have datastores as large as 64 TB. The ESXi hosts use a locking mechanism to prevent other ESXi hosts accessing the same storage from writing to a VM's files; this helps prevent corruption. Several storage protocols can be used to access and interface with VMFS datastores, including Fibre Channel, Fibre Channel over Ethernet, iSCSI, and direct attached storage; NFS can also be used to create a datastore. A VMFS datastore can be dynamically expanded, allowing the shared storage pool to grow with no downtime.

vSphere significantly simplifies access to storage from the guest OS of the VM. The virtual hardware presented to the guest operating system includes a set of familiar SCSI and IDE controllers, so the guest OS sees a simple physical disk attached via a common controller. Presenting a virtualized storage view to the virtual machine's guest OS has advantages such as expanded support and access, improved efficiency, and easier storage management.
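As a closing example touching the disk resource, the following PowerCLI sketch adds a thin-provisioned virtual disk to a VM on a shared VMFS datastore; the names and size are placeholders:

$vm = Get-VM -Name app01
$ds = Get-Datastore -Name VMFS-Shared-01
# Add a 40 GB thin-provisioned VMDK on the shared datastore
New-HardDisk -VM $vm -CapacityGB 40 -Datastore $ds -StorageFormat Thin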

Optimizing NetScaler Traffic

Packt
14 Oct 2015
In this article by Marius Sandbu, author of the book Implementing NetScaler VPX™ - Second Edition, we look at optimizing NetScaler traffic. The purpose of NetScaler is to act as a logistics department: it serves content to different endpoints, using different protocols, across different types of media, and it can be either a physical device or a device running on top of a hypervisor within a private cloud infrastructure. Since many factors are at play here, there is room for tuning and improvement. The topics we will go through in this article are as follows:

- Tuning for virtual environments
- Tuning TCP traffic

Tuning for virtual environments
When setting up a NetScaler in a virtual environment, we need to keep in mind that many factors influence how it will perform, for instance, the underlying CPU of the virtual host, NIC throughput and capabilities, vCPU overallocation, NIC teaming, MTU size, and so on. It is therefore always important to remember the hardware requirements when setting up a NetScaler VPX on a virtualization host.

Another important factor to keep in mind when setting up a NetScaler VPX is the concept of packet engines. By default, when we set up or import a NetScaler, it is configured with two vCPUs. The first of these is dedicated to management purposes, and the second vCPU is dedicated to all the packet processing, such as content switching, SSL offloading, ICA proxy, and so on. It is important to note that the second vCPU might be shown as 100% utilized in the hypervisor's performance monitoring tools; the correct way to check its utilization is with the CLI command stat system.

Now, by default, VPX 10 and VPX 200 only support one packet engine. This is because, given their bandwidth limitations, they do not require more packet engine CPUs to process the packets going through the system. On the other hand, VPX 1000 and VPX 3000 support up to three packet engines; in most cases, this is needed to process all the packets going through the system if the bandwidth is to be fully utilized. In order to add a new packet engine, we need to assign more vCPUs and more memory to the VPX. Packet engines also have the benefit of load balancing the processing between them, so instead of having a single vCPU that is 100% utilized, we can even out the load between multiple vCPUs and get better performance and bandwidth. The following chart shows the different editions and their support for multiple packet engines:

License/Memory | 2 GB | 4 GB | 6 GB | 8 GB | 10 GB | 12 GB
VPX 10         |  1   |  1   |  1   |  1   |  1    |  1
VPX 200        |  1   |  1   |  1   |  1   |  1    |  1
VPX 1000       |  1   |  2   |  3   |  3   |  3    |  3
VPX 3000       |  1   |  2   |  3   |  3   |  3    |  3

It is important to remember that multiple packet engines are only available on VMware, XenServer, and Hyper-V, but not on KVM.
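As a minimal sketch, the following NetScaler CLI commands show how busy the management CPU and packet engines actually are; the exact output columns vary slightly between firmware builds:

stat system     # overall system statistics, including packet CPU usage
stat cpu        # per-CPU utilization, one line per packet engine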
If we plan on using NIC teaming on the underlying virtualization host, there are some important aspects to consider. Most vendors have guidelines describing the kinds of load balancing techniques available in their hypervisor; for instance, Microsoft has a guide that describes their features, which you can find at http://www.microsoft.com/en-us/download/details.aspx?id=30160. One of the NIC teaming options, called Switch Independent Dynamic Mode, has an interesting side effect: it replaces the source MAC address of the virtual machine with that of the primary NIC on the host, so we might experience packet loss on a VPX. Therefore, it is recommended in most cases to use LACP/LAG or, in the case of Hyper-V, the Hyper-V port distribution mode instead. Note that features such as SR-IOV or PCI pass-through are not supported for NetScaler VPX.

NetScaler 11 also introduced support for jumbo frames on the VPX, which allows for a much higher payload in an Ethernet frame: instead of the traditional 1,500 bytes, we can scale up to 9,000 bytes of payload. This allows for much lower overhead, since each frame contains more data. It requires that the underlying NIC on the hypervisor supports the feature and has it enabled, and in most cases it will only work for communication with backend resources, not for users accessing public resources, because most routers and ISPs block such a high MTU. This feature is configured at the interface level in NetScaler, under System | Network | Interface; choose Select Interface and click on Edit. Here, we have an option called Maximum Transmission Unit, which can be adjusted up to 9,216 bytes. It is important to note that NetScaler can communicate with backend resources using jumbo frames and then adjust the MTU when communicating back with clients; it can also communicate with jumbo frames on both paths, in case the NetScaler is set up as a backend load balancer. NetScaler only supports jumbo frame load balancing for the following protocols:

- TCP
- TCP-based protocols, such as HTTP
- SIP
- RADIUS
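The same Maximum Transmission Unit change can also be made from the CLI instead of the GUI. A minimal sketch, where the interface number is a placeholder, and where the hypervisor NIC and the physical switch path must support jumbo frames as well:

set interface 10/1 -mtu 9216
show interface 10/1     # verify that the new MTU is in effect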
TCP tuning
Much of the traffic going through NetScaler is based on the TCP protocol, whether it is ICA proxy, HTTP, and so on. TCP is a protocol that provides reliable, error-checked delivery of packets back and forth, ensuring that data is successfully transferred before being processed further. TCP has many features for adjusting bandwidth during transfer, congestion checking, adjusting segment sizes, and so on. We will delve a little into all of these features in this section.

We can adjust the way NetScaler uses TCP with TCP profiles. By default, all services and vServers created on the NetScaler use the default TCP profile, nstcp_default_profile. Note that these profiles can be found by navigating to System | Profiles | TCP Profiles. Make sure not to alter the default TCP profile without properly consulting the network team, as this affects the way TCP works for all default services on the NetScaler.

The default profile has most of the TCP features turned off; this is to ensure compatibility with most infrastructures, and the profile has not been adjusted much since it was first added to NetScaler. Citrix also provides a number of other profiles for specific use cases, so we are going to look a bit closer at the options we have here. For instance, the profile nstcp_default_XA_XD_profile, which is intended for ICA proxy traffic, differs from the default profile in the following respects:

- Window scaling
- Selective acknowledgement
- Forward acknowledgement
- Use of Nagle's algorithm

Window scaling is a TCP option that allows the receiving endpoint to accept more data than the TCP RFC's default window size of 65,536 bytes before an acknowledgement is required; with window scaling enabled, the window size is effectively bit-shifted upward. This option needs to be enabled on both endpoints in order to be used, and it is only negotiated during the initial three-way handshake.

Selective acknowledgement (SACK) is a TCP option that allows better handling of TCP retransmission. Consider two hosts communicating without SACK, where one host suddenly drops out of the network and loses some packets; when it comes back online and receives more packets, it will only ACK from the last packet it received before it dropped out. With SACK enabled, it notifies the other host of the last packet it received before dropping out, as well as the packets it received after coming back online. This allows faster recovery of the communication, since the other host does not need to resend all the packets.

Forward acknowledgement (FACK) is a TCP option that works in conjunction with SACK and helps avoid TCP congestion by measuring the total number of data bytes outstanding in the network. Using the information from SACK, it can more precisely calculate how much data it can retransmit.

Nagle's algorithm is a TCP feature that tries to cope with the small packet problem. Applications like Telnet often send each keystroke in its own packet, creating multiple small packets containing only 1 byte of data, which results in a 41-byte packet for one keystroke. The algorithm works by combining a number of small outgoing messages into one message, thus avoiding the overhead.

Since ICA is a protocol that operates with many small packets, which might create congestion, Nagle is enabled in this TCP profile. Also, since many users will be connecting over 3G or Wi-Fi, which can be unreliable when changing channels, we need options that allow clients to re-establish a connection quickly, hence the use of SACK and FACK. Note that Nagle might perform poorly with applications that have their own buffering mechanism and operate inside a LAN. If we look at another profile, such as nstcp_default_lan, we can see that FACK is disabled; this is because the resources needed to calculate the amount of outstanding data in a high-speed network might be too great.

Another important aspect of these profiles is the TCP congestion algorithm. For instance, nstcp_default_mobile uses the Westwood congestion algorithm, because it is much better at handling large bandwidth-delay paths, such as wireless links. The following congestion algorithms are available in NetScaler:

- Default (based on TCP Reno)
- Westwood (based on TCP Westwood+)
- BIC
- CUBIC
- Nile (based on TCP Illinois)

Westwood is aimed at 3G/4G connections and other slow wireless connections. BIC is aimed at high-bandwidth connections with high latency, such as WAN connections. CUBIC is similar to BIC, but not as aggressive when it comes to fast-ramp and retransmissions; it is worth noting that CUBIC is the default TCP algorithm in Linux kernels 2.6.19 through 3.1. Nile is a newer algorithm created by Citrix, based on TCP Illinois and targeted at high-speed, long-distance networks; it achieves higher throughput than standard TCP while remaining compatible with it.

So, here we can choose the algorithm that is better suited for a service. For instance, if we have a vServer that serves content to mobile devices, we could use the nstcp_default_mobile TCP profile.
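From the CLI, creating a custom profile and binding it to a vServer might look like the following minimal sketch; the profile name, vServer name, and chosen options are placeholders rather than recommendations:

add ns tcpProfile tcp_mobile_app -WS ENABLED -SACK ENABLED -nagle ENABLED
set ns tcpProfile tcp_mobile_app -flavor Westwood    # congestion algorithm
set lb vserver lb_vs_mobile -tcpProfileName tcp_mobile_app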
There are also some other parameters that are important to keep in mind when working with TCP profiles. One of these is multipath TCP. This feature allows an endpoint that has multiple paths to a service, typically a mobile device with both WLAN and 3G capabilities, to communicate with a service on the NetScaler using both channels at the same time. This requires that the device supports communication over both paths and that the service or application on the device supports multipath TCP.

So, let's take an example of what a TCP profile might look like. Suppose we have a vServer on NetScaler that serves an application to mobile devices, meaning that users will most commonly access this service over 3G or Wi-Fi. The web service has its own buffering mechanism, meaning it tries not to send small packets over the link, and the application is multipath TCP aware. In this scenario, we can leverage the nstcp_default_mobile profile, since it has most of the defaults for a mobile scenario, but also enable multipath TCP, save this as a new profile, and bind it to the vServer. In order to bind a TCP profile to a vServer, we go to the particular vServer | Edit | Profiles | TCP Profiles, as shown in the following screenshot:

Note that AOL did a presentation on their own TCP customization of NetScaler; the presentation can be found at http://www.slideshare.net/masonke/net-scaler-tcpperformancetuningintheaolnetwork. It is also important to note that TCP tuning should always be done in cooperation with the network team.

Summary
In this article, we learned about tuning for virtual environments and TCP tuning.

Load balancing MSSQL

Packt
18 Apr 2014
NetScaler is the only certified load balancer that can load balance MySQL and MSSQL services. It can be quite complex, and there are many requirements that need to be in place in order to set up a properly load-balanced SQL server. Let us go through how to set up a load-balanced Microsoft SQL Server running on 2008 R2.

It is important to remember that using load balancing between the end clients and SQL Server requires that the databases on the SQL servers are synchronized; this ensures that the content the user requests is available on all the backend servers. Microsoft SQL Server supports different types of availability solutions, such as replication; you can read more about it at http://technet.microsoft.com/en-us/library/ms152565(v=sql.105).aspx. Using transactional replication is recommended, as it replicates changes to the different SQL servers as they occur.

As of now, the load balancing solution for MSSQL, also called DataStream, supports only certain versions of SQL Server; they can be viewed at http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-dbproxy-reference-protocol-con.html. Also, only certain authentication methods are supported; as of now, only SQL authentication is supported for MSSQL. The steps to set up load balancing are as follows:

1. We need to add the backend SQL servers to the list of servers.
2. Next, we need to create a custom monitor to use against the backend servers. Before we create the monitor, we can create a custom database within SQL Server that NetScaler can query.
3. Open Object Explorer in SQL Server Management Studio, right-click on the Databases folder, and select New Database, as shown in the following screenshot:
4. We can name it ns, leave the rest at the default values, and click on OK.
5. After that is done, go to the ns database in Object Explorer, right-click on Tables, and click on New Table. Here, we need to enter a column name (for example, test) and choose nchar(10) as the data type. Then, click on Save Table; we are presented with a dialog box that gives us the option to change the table name, where we can type test again.

We have now created a database called ns with a table called test, which contains a column also called test. This is an empty database that NetScaler will query to verify connectivity to the SQL server.
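If you prefer to script this instead of using the Management Studio GUI, the same empty probe database can be created with a few lines of T-SQL (a minimal sketch):

CREATE DATABASE ns;
GO
USE ns;
GO
-- The empty table NetScaler will query; the monitor expects zero rows back
CREATE TABLE dbo.test (test nchar(10));
GO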
Now, we can go back to NetScaler and continue with the setup. First, we need to add a DBA user. This can be done by going to System | User Administration | Database Users and clicking on Add. Here, we need to enter the username and password of a SQL user who is allowed to log in and query the database.

After that is done, we can create the monitor. Go to Traffic Management | Load Balancing | Monitors and click on Add. As the type, choose MSSQL-ECV, and then go to the Special Parameters pane. Here, we need to enter the following information:

- Database: The database to query; ns in this example.
- Query: The SQL query to run against the database; in our example, select * from test.
- User Name: The name of the DBA user we created earlier; in my case, sa.
- Rule: An expression that defines how NetScaler verifies whether the SQL server is up or not. In our example, it is MSSQL.RES.ATLEAST_ROWS_COUNT(0), which means that when NetScaler runs the query against the database, it should return zero rows from that table.
- Protocol Version: Here, we need to choose the version that matches the SQL Server version we are running; in my case, I have SQL Server 2012.

The monitor now looks like the following screenshot:

It is important that the database we created earlier is created on all the SQL servers we are going to load balance using NetScaler. Now that we are done with the monitor, we can bind it to a service. When setting up the services for the SQL servers, remember to choose MSSQL as the protocol and 1433 as the port, and then bind the custom-made monitor to each service.

After that, we need to create a virtual load-balanced server. An important point to note here is that we again choose MSSQL as the protocol and use the same port as before, 1433. We can also use NetScaler to proxy connections between different versions of SQL Server: if the clients are not set up to connect to the SQL 2012 version on the backend, we can present the vServer as a 2008 server. For example, if we have an application that runs against SQL Server 2008, we can make some custom changes to the vServer. On the load-balanced vServer, go to Advanced | MSSQL | Server Options. Here, we can choose between different versions, as shown in the following screenshot:

After we are done with the creation of the vServer, we can test it by opening a connection to the VIP address using SQL Server Management Studio. We can verify whether the connection is load balancing properly by running the following CLI command:

stat lb vserver nameofvserver
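For reference, the equivalent configuration can also be created from the NetScaler CLI. The following is a minimal sketch in which the service names, IP addresses, and password are placeholders:

add db user sa -password Password1
add lb monitor mon_mssql_ecv MSSQL-ECV -userName sa -database ns -sqlQuery "select * from test" -evalRule "MSSQL.RES.ATLEAST_ROWS_COUNT(0)"
add service svc_sql01 10.0.0.11 MSSQL 1433
add service svc_sql02 10.0.0.12 MSSQL 1433
bind service svc_sql01 -monitorName mon_mssql_ecv
bind service svc_sql02 -monitorName mon_mssql_ecv
add lb vserver lb_vs_mssql MSSQL 10.0.0.100 1433
bind lb vserver lb_vs_mssql svc_sql01
bind lb vserver lb_vs_mssql svc_sql02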
Summary
In this article, we followed the steps for setting up a load-balanced Microsoft SQL Server running on 2008 R2, remembering that load balancing between the end clients and SQL Server requires the databases on the SQL servers to be synchronized, so that the content the user requests is available on all the backend servers.