Planning for desktop virtualization requires understanding the building blocks of Virtual Desktop Infrastructure, commonly referred to as VDI. This entails not only understanding the technical components of VDI, but also the business drivers and how VDI fits into your overall environment. Mapping your business objectives to the proper technology should be the ultimate goal of any VDI project.
In this chapter, you will learn about the following:
The building blocks of VDI
VDI layers
How to determine the right fit for your environment
The road map to success
Managing your project
The first step in understanding Virtual Desktop Infrastructure (VDI) is to identify what VDI means to your environment. VDI has become an all-encompassing term for most desktop virtualization projects. For clarity, this book will use the definitions given in the following sections.
Hosted Virtual Desktop is a machine running a single-user operating system such as Windows 7 or Windows 8, sometimes called a desktop OS, which is hosted on a virtual platform within the data center. Users remotely access a desktop that may or may not be dedicated but runs with isolated resources. This is typically a Citrix XenDesktop virtual desktop, as shown in the following figure:

Hosted Virtual Desktop model; each user has dedicated resources
Hosted Shared Desktop is a machine running a multiuser operating system such as Windows Server 2008 or Windows Server 2012, sometimes called a server OS, possibly hosted on a virtual platform within the data center. Users remotely access a desktop whose resources are shared among multiple users. This has historically been a Citrix XenApp published desktop, as demonstrated in the following figure:

Hosted Shared Desktop model; each user shares the desktop server resources
With Session-based Computing, users remotely access applications or other resources on a server running in the data center. These are typically client/server applications. This server may or may not be virtualized. This is a multiuser environment, but the users do not access the underlying operating system directly. This will typically be a Citrix XenApp hosted application, as shown in the following figure:

Session-based Computing model; each user accesses applications remotely, but shares resources
In application virtualization, applications are centrally managed and distributed, but they are locally executed. This may be in conjunction with, or separate from, the other options mentioned previously. Application virtualization typically involves application isolation, allowing the applications to operate independently of any other software. Examples include Citrix XenApp offline applications, Citrix profiled applications, Microsoft App-V application packages, and VMware ThinApp packages. Have a look at the following figure:

Application virtualization model; the application packages execute locally
The preceding list is not a definitive list of options, but it serves to highlight the most commonly used elements of VDI. Other options include client-side hypervisors for local execution of a virtual desktop, hosted physical desktops, and cloud-based applications. Depending on the environment, all of these components can be relevant.
Before embarking on a virtual desktop solution, the key question is, "What do you need?" As a virtualization architect, I have been involved in countless design and implementation projects, ranging from simple 200-user proof-of-concept projects to global migrations involving 30,000 users. I have seen too many projects fail simply because the right questions were never asked.
One of the first items to determine is which flavor or flavors of VDI to use. Will traditional session-based computing (for example, hosted applications only) suffice, or do you need to provide a full desktop? Will users need dedicated resources, or can they share resources? Which applications will be available within the VDI space: all of them, or just the most critical? How tightly controlled or locked down will you want this new environment to be? As you can imagine, there is no single, simple answer. In most environments, the answer is a mix of these solutions.
When considering VDI, there are many factors to weigh, all of which will impact the design decisions. These factors include application compatibility (first and foremost), performance, manageability, scalability, storage, upfront capital costs, and long-term operating costs. Additional factors are reliability, ease of use, mobility, flexibility, recoverability, fault tolerance, and security. IT departments cannot work in a vacuum; the driving forces must be what is good for the business and what empowers the application users.
Note
Any technology solution should be there to support the business; the business should not be there to support the chosen technology.
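One lightweight way to keep these competing factors visible during design discussions is a weighted scoring matrix: weight each factor by its importance to the business, score each candidate delivery model against it, and compare the totals. The following minimal Python sketch illustrates the idea; every weight and score in it is an invented placeholder rather than a recommendation, and the factor list can be extended to cover reliability, mobility, and the rest.

# Weighted scoring matrix for comparing VDI delivery models.
# All weights (1-5, business importance) and scores (1-5, model fit)
# are illustrative placeholders; substitute values from your own assessment.

factors = {
    "application compatibility": 5,
    "performance": 4,
    "manageability": 4,
    "scalability": 3,
    "upfront capital cost": 3,
    "long-term operating cost": 4,
    "security": 4,
}

models = {
    "hosted shared desktop": {
        "application compatibility": 3, "performance": 4, "manageability": 5,
        "scalability": 5, "upfront capital cost": 4,
        "long-term operating cost": 5, "security": 4,
    },
    "hosted virtual desktop": {
        "application compatibility": 5, "performance": 4, "manageability": 4,
        "scalability": 3, "upfront capital cost": 2,
        "long-term operating cost": 3, "security": 4,
    },
    "session-based computing": {
        "application compatibility": 3, "performance": 4, "manageability": 5,
        "scalability": 5, "upfront capital cost": 5,
        "long-term operating cost": 5, "security": 3,
    },
}

def weighted_score(scores):
    # Higher totals indicate a better overall fit across the weighted factors.
    return sum(factors[f] * scores[f] for f in factors)

for name, scores in sorted(models.items(), key=lambda m: -weighted_score(m[1])):
    print(f"{name}: {weighted_score(scores)}")

The totals are only as meaningful as the inputs, so the weights should come from business stakeholders rather than being assigned by IT in a vacuum.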
Your choice of VDI solution should be based on your business needs. To fully understand your needs and how they relate to VDI, your entire computing environment must be analyzed. The factors to understand when preparing for a virtual desktop solution include user data, personalization, application management, image management, and device management. These are business drivers that illustrate how users work in the current environment, and they must be understood for successful adaptation in any new environment.
User data includes personal documents, application data, and shared corporate data, all of which must be identified and managed. Home drive assignment, folder redirection, profile management, exclusions, and file synchronization are all viable methods to manage user data in a VDI environment. In order to gain the full benefits of mobility and flexibility within VDI, user data should be managed as its own layer to keep it separate from the operating system, as shown in the following figure:

Virtual machine layers
User personalization settings are commonly known as profiles. Profiles typically include mission-critical elements such as core application settings and non-critical items such as favorites, backgrounds, and pictures. Although the non-critical elements may seem mundane, they are often necessary to ensure end-user satisfaction and acceptance. Profile management (or the lack thereof) can greatly affect logon and logoff times, when profiles are loaded and unloaded. Profile management is also essential to enable smooth roaming capabilities. Organizations will differ on how much personalization is allowed (from none to virtually everything); it is important to identify what to allow and then optimize its management. We'll cover this in more detail in Chapter 2, Defining Your Desktop Virtualization Environment.
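As a back-of-envelope illustration of why profile size matters, the following hypothetical Python sketch models logon time as a fixed baseline plus the time needed to copy a roaming profile across the network. Both constants are assumptions made purely for illustration; measure real logon times in your own environment before drawing conclusions.

# Rough logon-time estimate: fixed baseline (OS startup, policy processing)
# plus the time to copy a roaming profile. Constants are illustrative
# assumptions, not benchmarks.

def estimated_logon_seconds(profile_mb,
                            base_logon_s=20.0,     # assumed fixed overhead
                            copy_mb_per_s=25.0):   # assumed copy throughput
    return base_logon_s + profile_mb / copy_mb_per_s

for size_mb in (50, 250, 1000):
    print(f"{size_mb:>5} MB profile -> ~{estimated_logon_seconds(size_mb):.0f} s logon")

Even under these generous assumptions, a bloated profile adds a noticeable wait at every logon, which is why exclusions and profile streaming techniques exist.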
Application management involves understanding not only which applications are installed, but also what and how they are used. Usage includes data requirements, compute resource consumption, companion applications, network bandwidth utilization, and access patterns (for example, are there midmorning or afternoon spikes, is the application only used at certain times such as during month-end batch processing, do users run the application consistently all day long, and so on). All of these considerations are used to build an application profile. Properly gauging application profiles is important to scale your environment with the proper amount of resources. Underpowered systems will become sluggish and hamper implementation, while overpowered solutions might unnecessarily consume resources, driving up the project costs.
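To show how application profiles feed directly into scaling decisions, here is a rough Python sizing sketch that turns hypothetical per-user CPU and memory figures into a host-count estimate. Every number, including the 80 percent utilization headroom, is an illustrative assumption; real inputs should come from the monitoring data gathered during your assessment.

# Rough host-count estimate derived from per-user application profiles.
# All figures are hypothetical placeholders.

from math import ceil

app_profiles = {  # application -> (avg vCPU, avg RAM in MB) per active user
    "office suite": (0.10, 600),
    "line-of-business app": (0.25, 1200),
    "web browser": (0.15, 800),
}

concurrent_users = 500
host_vcpus = 32
host_ram_mb = 256 * 1024
headroom = 0.8  # plan to 80% utilization to absorb usage spikes

cpu_per_user = sum(cpu for cpu, _ in app_profiles.values())
ram_per_user = sum(ram for _, ram in app_profiles.values())

hosts_for_cpu = ceil(concurrent_users * cpu_per_user / (host_vcpus * headroom))
hosts_for_ram = ceil(concurrent_users * ram_per_user / (host_ram_mb * headroom))

print(f"Hosts required: {max(hosts_for_cpu, hosts_for_ram)} "
      f"(CPU-bound: {hosts_for_cpu}, RAM-bound: {hosts_for_ram})")

Whichever resource runs out first dictates the host count, which is why both under- and over-provisioning show up quickly in a calculation like this.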
Application delivery identifies how applications reach the end user. This is often driven first by application compatibility and interoperability. Some applications may need to be locally installed as part of the base image, others may be streamed as part of virtualization, and some may be hosted on application servers. Other determining factors include maintenance schedules, such as update and patch frequency. Determining how and where applications are delivered may impact the overall solution. This will be covered in more detail in Chapter 5, Designing Your Application Delivery Layer.
Image management is used to control the delivery and changes to base operating system images. This includes the initial base image design (operating system, core applications, and common utilities), patch management, antivirus configurations, application delivery, and version controls. Factors to consider are provisioning methods and finding a balance between common and unique elements. Chapter 6, Designing Your Virtual Image Delivery, deals with image management in more detail.
Device management is often an afterthought in many virtualization projects, but it should be considered upfront. It is not enough to consider whether you will use mobile devices; you should also identify which mobile platforms you will support. Other considerations are thin clients, laptops, repurposed desktops, kiosks, multimedia stations, and so on. Along with the device types, peripherals must be understood. Are there any specialty peripherals or add-ons required for your environment, such as scanners, badge readers, or custom printers? Determine which types of endpoint devices might impose functional requirements.
Note
Understanding application workloads and user requirements is the biggest piece of the VDI puzzle. Choosing the right VDI technology is reliant upon completely understanding your environment and business objectives.
Defining your business use cases helps map users, devices, and requirements into a usable format. Business cases will vary in scope and detail; each case has its own usage and delivery requirements that might be unique. There is no one-size-fits-all VDI solution; the design should retain as much flexibility as possible. We will examine use cases more in Chapter 2, Defining Your Desktop Virtualization Environment.
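In practice, a usable format can be as simple as one structured record per use case that can be sorted, compared, and mapped to a delivery model. The following Python sketch shows one possible shape; the fields and the two sample entries are hypothetical and should be extended to match the business drivers identified during your assessment.

# A structured record per business use case. Fields and sample data are
# hypothetical; extend them to match your own environment.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    user_count: int
    delivery_model: str                 # e.g., "hosted shared desktop"
    devices: list[str] = field(default_factory=list)
    critical_apps: list[str] = field(default_factory=list)
    dedicated_resources: bool = False   # does this group need isolation?

use_cases = [
    UseCase("call center", 800, "hosted shared desktop",
            devices=["thin client"], critical_apps=["CRM"]),
    UseCase("design engineers", 40, "hosted virtual desktop",
            devices=["laptop"], critical_apps=["CAD suite"],
            dedicated_resources=True),
]

for uc in use_cases:
    print(f"{uc.name}: {uc.user_count} users -> {uc.delivery_model}")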
The entire virtual desktop solution will still need physical infrastructure to support operations. This infrastructure will need to be designed for cost, scalability, and reliability. This includes analyzing your current capabilities to determine whether you can grow your current infrastructure or if you need to create a brand new design. Some organizations will choose new environments as part of a capital project budget. This aids in design and deployment since it becomes a parallel effort to existing operations. Chapter 3, Designing Your Infrastructure, will explore infrastructure design in detail. Have a look at the following figure:

VDI layers
With so many layers and so many options from Citrix (as well as other vendors), the challenge becomes determining the right fit for your environment. There is no easy answer to this conundrum since each organization is different, with diverse goals and objectives.
The following are multiple real-world examples from consulting engagements. These may help you decide which types of VDI are the right fit for you:
XenApp for scalability: A Fortune 500 insurance company was designing a new Bring Your Own Device (BYOD) initiative. This organization had well-defined use cases and a strong team providing central management, so they decided everything could run on a hosted shared desktop model on physical servers. This allowed them the greatest possible user density by leveraging shared resources among all users, thus reducing the total cost of ownership (a rough density-versus-cost sketch follows this list of examples).
XenApp as a proven technology: A global food-services organization was considering a secure computing environment for offshore contractors. When looking at VDI options, they felt many vendors and products were capable of delivering the necessary applications and performance. The company ultimately decided to focus on server-hosted applications to provide the utmost flexibility with the lowest overhead. They went with XenApp because it was a proven technology and the market leader, with strong support both internally and externally.
XenDesktop for application compatibility: A leading personal credit lending organization was migrating to a centralized data center model with the added goal of using lightweight thin clients for data entry. This initiative was started in order to better manage secure access to their data and provide workforce flexibility for their call centers. A major concern was that their primary line-of-business application was supported only on Windows desktop operating systems. In order to meet all requirements, a XenDesktop solution was deemed necessary. Since all users ran the same applications, with no variance, the company was able to achieve a company-wide solution with limited design constraints.
XenApp for application hosting: A healthcare software development firm needed a mature product to deliver their custom application suite to subscribers in the home-health field. The platform required secure remote access to the patients' data applications within a centralized database. Their business model required a scalable and mature product set using session-based computing for the hosted application, with high levels of fault tolerance.
XenDesktop for peripheral support: A medical school was already using thin clients to deliver hosted XenApp applications from within their data center. Through a green initiative, they needed to deploy digital radiology and eliminate X-rays developed on film. This would speed up the X-ray viewing process and reduce the cost and chemicals associated with film development. The new equipment required enhanced USB support and 32-bit color graphics to achieve the proper resolution.
XenApp as a desktop replacement: A regional university needed to have a highly scalable and secure desktop replacement for all classrooms and student labs. They needed a solution to replace managing high-risk workstations containing local applications. The solution was a two-tiered XenApp environment: one collection of session hosts provided a published desktop with primary applications locally installed and the second collection of session hosts provided specialty applications on demand.
XenDesktop for resource isolation: A major landscape management company was facing resource issues with their primary route planning and mapping software. They were leveraging XenApp for all applications in a hosted shared environment. When the route planners used the geographical information software to plan the drivers' routes, the intense calculation consumed the bulk of the server's shared CPU and memory resources, degrading performance for other users. By moving the geographical and routing software packages into a desktop image, the customer was able to dedicate and isolate resources so that other users were not affected by these processes.
XenDesktop for enhanced graphics: A global manufacturing client needed to provide detailed 3D graphics for its computer-aided design systems supporting engineers working remotely. Instead of investing in expensive laptops, the client chose blade PCs for XenDesktop with advanced graphics cards. This allowed the facility to centrally control the images and data, while still meeting the performance and graphical requirements of the design engineers.
XenApp for consolidation: A national food services company was in the process of acquiring additional companies and consolidating dispersed operations. As part of this initiative, they needed to migrate over 100 different line-of-business applications spread across five different data centers. To accomplish this, a new XenApp environment was designed and deployed based on a new consolidated server image.
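To make the density argument from the BYOD example above concrete, the following Python sketch compares hardware cost per user for a high-density hosted shared model against a lower-density hosted virtual desktop model. The densities and host price are made-up placeholders; substitute figures from your own capacity testing before using a comparison like this in a business case.

# Rough cost-per-user comparison between delivery models.
# Densities and host cost are hypothetical placeholders.

from math import ceil

users = 5000
cost_per_host = 15000  # assumed fully loaded hardware cost per host

densities = {
    "hosted shared desktop": 150,   # many sessions per server OS
    "hosted virtual desktop": 60,   # one VM per user, lower density
}

for model, users_per_host in densities.items():
    hosts = ceil(users / users_per_host)
    print(f"{model}: {hosts} hosts, ~${hosts * cost_per_host / users:,.0f} per user")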
"There is no right or wrong answer when deciding between a XenDesktop or XenApp solution as either one works in most use-case scenarios. In evaluating the technical criteria and value of each option, the final decision often comes down to comfort and familiarity." – Dan Feller, Lead Architect, Citrix Systems
Just like there is no one solution to VDI, there is no magic bullet when it comes to a successful deployment. However, there are some tried and true elements, demonstrated in the following figure, which will help you succeed:

Basic project methodology
The basic methodology of any IT project should follow something like this:
Assess: Assess your environment to determine what you currently have and what you need. This is one of the most critical elements, since it includes developing your business case and establishing the criteria for success.
Discover: Discover your existing infrastructure. This is ultimately an extension of the assessment phase, but it is focused more on technical capabilities.
Design: Design a new environment or enhance an existing environment. This design should be a comprehensive architectural plan and will typically take numerous iterations to finalize. The design plan can be used as a build guide and should be revised as changes are implemented. All design plans should include items such as system architecture, scalability, risk identification, and disaster recovery planning.
Build: Build the environment. Most environments start with a proof-of-concept build to validate the design and technological components. The build phase may induce changes to the overall design, so all baselines should be updated. The build process should include iterative testing as components are brought online.
Test: Test the environment to ensure functionality. This includes unit-level testing to ensure the components operate as designed, which is generally part of the build process. It also includes user acceptance testing. This will be discussed in more detail in Chapter 9, Implementing Your XenApp® Solution.
Deploy: Deploy the environment to end users. Start with a small pilot deployment with a limited number of power users. Once the pilot is complete, assuming success, plan a phased deployment approach for production; a simple wave-planning sketch follows this list. This will ensure full acceptance by users and limit the impact of any previously unknown issues. Monitoring is a continuation of deployment, helping validate that the environment reaches a steady state of operations.
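As a simple illustration of phased deployment, the following Python sketch splits a user population into rollout waves that grow with each phase, keeping early support load small while momentum builds. The starting wave size and growth factor are arbitrary assumptions; tune them to your support capacity.

# Split users into rollout waves that double in size each phase.
# Wave sizing here is an illustrative assumption.

def rollout_waves(users, first_wave=25, growth=2.0):
    waves, start, size = [], 0, float(first_wave)
    while start < len(users):
        waves.append(users[start:start + int(size)])
        start += int(size)
        size *= growth
    return waves

all_users = [f"user{i:04d}" for i in range(1, 1001)]
for number, wave in enumerate(rollout_waves(all_users), start=1):
    print(f"Wave {number}: {len(wave)} users")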
I use a slightly different model when deploying virtual desktop solutions. The first and foremost phase is assessment and discovery. The focus should be on high-level strategy and business drivers during this phase. This is the most critical element since all future decisions will hinge on this analysis. Once all the requirements and expectations are defined, determining the best solution for your environment can proceed.
Once the base analysis is complete, the project moves to a design phase. During design, the results of analysis and business requirements are translated into a high-level technical architecture. This includes determining the hardware, software, and all infrastructure components. Once approved, this high-level architecture becomes the design plan.
During the build phase, all of the technology and infrastructure is put in place. This might include building out the data center presence or simply creating the VDI components on top of the existing infrastructure. Once a base build is complete, the environment is ready to test and validate.
The testing phase should include base functionality testing, capacity testing, application integration testing, and user acceptance testing. Testing results are used not only to validate functionality and performance, but also to validate scalability and design decisions. If testing reveals a change in the baseline, the design should be modified as well. Testing is an iterative process that must be repeated with each change to ensure optimal quality and project success.
In smaller environments, or when timelines are tight, the design, build, and testing phases can be consolidated into a single effort (building and testing while designing). However, this is risky and can lead to delays or overruns.
The pilot phase should be integrated as part of the overall project plan. This may be part of the testing and validation phase, or it may occur once the initial testing is complete. Successful pilot programs are phased in to increase server loads and user counts, and they should encompass multiple use case scenarios. A pilot should mimic production just on a smaller scale. Pilot testing results may lead to baseline or design changes, and subsequent testing cycles may be necessary. However, note that an extensive pilot program is critical to organizational acceptance and project success. You are better off identifying critical issues during a small pilot phase than during a major production rollout.
The last step, of course, is production rollout. Production rollout should proceed in phases to keep support manageable and to monitor the impact on the infrastructure and overall system performance. An often overlooked key component of production rollout is communication: properly setting management and end-user expectations, and training users. Open communication will also ease any concerns users may have over the state of their desktop. The time spent properly communicating, or even over-communicating, is quickly recouped through reduced help desk calls.
The following diagram represents the iterative process of IT project management. Notice the weight on analysis, as well as the iterative processes, checkpoints, and the phased rollout:

Enhanced project methodology
Communication is critical not just for customer satisfaction, but also for managing the project as a whole, including identifying any changes in scope, timelines, and budget. There are six key factors that ensure your project is successful:
Managing the scope (what is being done)
Managing the schedule (timelines)
Managing the budget (avoiding cost overruns)
Ensuring quality (everything works as planned)
Managing risk factors (avoiding the big pitfalls)
Ensuring customer satisfaction (did you meet the project goals?)
The following figure represents the six components for successful project management:

Project management components
Customer satisfaction is critical. This is often overlooked in the IT world because our customers are commonly our coworkers. I was involved in one project that was on time, on budget, and worked. However, the project was a failure because the end users were never properly assessed, and what was delivered was not what was needed or wanted.
According to a 2012 study of large IT projects by McKinsey & Company:
Projects run, on average, 45 percent over budget
Projects run, on average, 7 percent over time
Projects deliver, on average, 56 percent less value than predicted
17 percent of projects go so badly that they threaten the company's very existence
In this chapter, we explored the building blocks of a virtual desktop infrastructure. We looked at different models to deliver virtual desktops, including hosted virtual desktops, hosted shared desktops, session-based computing, and application virtualization. We also discussed the various layers of desktop virtualization and we looked at scenarios to help determine the right fit for your environment. You may find that you will need a mix of models and solutions, which is not uncommon.
In addition to looking at the various virtual desktop components, we also discussed building a road map to success. It is not good enough to have a proper design; you must be able to deliver the design successfully. To do so requires some project planning and project management skills, the most notable of which is communication.
In the next chapter, we will look at further defining our virtual desktop environment, including understanding our users and applications in order to build our use cases. We will also look at assessing our current environment and fine-tuning our strategy for our new environment.