In this chapter, you'll learn the concepts and best practices of Microsoft Windows imaging techniques and, in doing so, pick up the terminology associated with deployment. You will also become familiar with the different approaches to imaging and when each approach is generally regarded as the best fit for a given scenario. Finally, you'll learn some history on how imaging has changed from the old Windows XP style of deployment through Windows 7, Windows 8, and now Windows 10. Microsoft's solution accelerator, the Microsoft Deployment Toolkit (MDT), addresses many of the problems facing deployment projects and will be the focus of this book.
In the beginning there was DOS, and it was good. But then there was a need for more, and Windows came into being. At first, it was acceptable to pop the floppy disks containing Windows for Workgroups into each computer individually, even in an enterprise environment. But soon, businesses started asking for ways to deploy Windows en masse, with consistent configuration settings.
Sysdiff.exe and other fun things were created: the intrepid NT 3.5 admin could build a machine, tweak it, and run Sysdiff to create a template that subsequent installations could follow and end up more or less identical. Later, as things progressed, the need grew for a way to truly clone machines!
And so, in the distant past (10+ years ago), the world of imaging and deploying the Windows client came to be ruled by disk sector duplication. This process was fairly involved: a technician would install a copy of Windows XP, patch it, install updated drivers, configure Windows XP's look and feel, install applications, patch the applications, and finally configure the applications. After that was done (a process that could take a day or more), the disk was captured with a tool, sector by sector, into a file for later deployment over the network or from media, again sector by sector. The technician would then have an image for a single model of computer, with a single set of applications.
So imagine an enterprise-level environment with, say, 10 models of computers (I've seen some with over 100 models, so 10 is a conservative example) and 1-3 sets of applications installed per model. Now the technician (more likely a team of technicians at this point) is patching and managing roughly 10-30 images in our conservatively estimated enterprise environment. And we haven't even thrown 32-bit versus 64-bit into the equation.
So this poses a few problems for deployment projects that may not be readily apparent:
Each image is, say, 15-20 GB in size after compression. Particularly in computing ages past, maintaining a library of images of this size was a daunting proposition.
Each image needs to be updated on a semi-regular basis to take into account service packs, OS patches, application patches, driver updates, and random configuration tweaks requested by management and marketing departments. Not doing so increases the deployment time as all the work of applying updates and patches then occurs at every deployment process instead of once before capture.
Each machine had the same security identifier (SID), because it was in fact a clone of another machine. So when you joined both to the same Windows domain (even with different names), hilarity ensued. Tools were created, such as NewSID and Sysprep's /generalize switch, which helped get around this.
But around 2006, with the release of Windows Vista, things changed. There was a new paradigm in image deployment that would change everything: the Windows Imaging (WIM) format. The WIM format is essentially a container for an image. With it, and some tools from the Assessment and Deployment Kit (ADK), one can service the Windows image offline, which allows us to add patches and drivers, and remove components such as games, from our image, all without having to install it first on bare-metal hardware.
An example of this is using the Deployment Image Servicing and Management (DISM) command (in an elevated command prompt) to remove a hotfix from your running system:
DISM /online /remove-package /packagename:Package_for_KB2868623~31bf3856ad364e35~amd64~~18.104.22.168
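The same kind of servicing can also be done offline, against a WIM file that has never been deployed. The following is a minimal sketch using the DISM PowerShell cmdlets that ship with the ADK; the image path, index, and update/driver folders are placeholder values you would adjust for your own environment:

# Mount index 1 of the image to a working folder (paths are examples only)
Mount-WindowsImage -ImagePath "D:\Images\install.wim" -Index 1 -Path "D:\Mount"

# Inject an update package and a folder of drivers into the offline image
Add-WindowsPackage -Path "D:\Mount" -PackagePath "D:\Updates\example-update.msu"
Add-WindowsDriver -Path "D:\Mount" -Driver "D:\Drivers" -Recurse

# Save the changes back into the WIM and unmount the working folder
Dismount-WindowsImage -Path "D:\Mount" -Save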
Around this same time, a tool known as BDD entered the picture. The Business Desktop Deployment (BDD) toolkit was a set of scripts that could be used to customize, configure, and deploy the Windows image in the enterprise environment. BDD 2.5 was released in August 2005, prior to the RTM of Vista.
BDD had several iterations and even had a Microsoft Certified Professional Exam created for one of its versions. These iterations were each an improvement upon the last until finally, in November 2007, the MDT was released.
Fast forward to the present: MDT 2013 Update 2 is current at the time of writing. At this point, MDT is essentially System Center Configuration Manager (SCCM) "lite". You can back it with a database, put a web frontend on it, perform dynamic actions based on hardware make and model, reinstall previously installed applications, and much more.
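As a small illustration of those dynamic, model-based actions, MDT's rules engine (the CustomSettings.ini file in the deployment share) can branch on the model string reported by the hardware. This is only a sketch; the model name and driver group path are hypothetical:

[Settings]
Priority=Model, Default

[Latitude E7470]
; Hypothetical model section: only machines reporting this model receive these values
DriverGroup001=Windows 10 x64\Dell\Latitude E7470

[Default]
OSInstall=Y
SkipBDDWelcome=YES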
This tool, the MDT, will be the focus of this book. There are other (typically more expensive) solutions out there, to be sure, but anyone preparing to perform deployments at scale should look at MDT: it can automate a great deal of otherwise manual work and, while it costs nothing, it is supported by Microsoft Support.
When we look at utilizing the WIM format and MDT, there are essentially three schools of thought in building what is commonly termed a golden image in deployment. These are the thick, thin, and hybrid images. They each have their merits and rather than adhere to a single one, I tend to view each as a tool in the deployment toolbox. So depending on the situation and customer needs, I would recommend one over another:
Thick Image: A thick image is an image that contains a patched operating system plus all applications used in the environment. It is large, sometimes problematic to deploy, and has some interesting licensing implications as well in that every deployed system has every piece of software installed.
Sometimes a thick image is the best option due to logistics. Imagine you need to deploy Windows to systems on a submarine or a cruise ship. Sending media containing a thick image by freight/helicopter might be an answer versus deployment from a share.
Thin Image: A thin image is (as one might assume) an image that contains nothing except a patched operating system. It is quick to deploy, but customization post-deployment can take quite some time, even with automated scripts. This is a minimalist approach, but it has merit when you need an image of the smallest possible size or when only a few applications diverge from the golden base image.
Hybrid Image: A hybrid image is an image that contains a patched operating system and core business applications, typically applications for which the business has a site license. Typically, some limited customizations occur post deployment with these images as part of a task sequence.
Applications, drivers, and packages are three components that can be included in the image, depending on the type of image. These are defined clearly in the MDT documentation and UI, but need an introduction here (a PowerShell sketch for importing each follows this list):
Applications: Applications are usually software installation packages one wants to place into the image or deploy as part of the task sequence itself. Sometimes driver packages fall into this category as well. The Hewlett-Packard ProLiant Support Pack is a great example of a bundled offering of driver and firmware updates that works best when run as an installation (an application in MDT) rather than as a Plug and Play (PnP) operation. Many Bluetooth driver stacks, network teaming software, and video graphics driver packs fall into this grouping too. They may install via PnP, but they do not behave properly unless run as a packaged installation. Generally, this is because the installer checks and updates firmware as part of the installation, whereas PnP just adds the driver and moves on.
Drivers: Drivers are components usually provided by the hardware manufacturer (hopefully in consolidated CAB files for ease of deployment; we will discuss this later). These drivers can (and usually should) be provisioned using mandatory driver profiles, but for small-scale or single-model deployments, the natural PnP feature of Windows can be used to select and install drivers from MDT.
Packages: Packages are updates from Microsoft to address a problem or defect in the operating system. Typically, these are pulled from the Microsoft Update Catalog and then imported into the MDT console for application to Windows PE or the image itself.
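All three can be brought into the deployment share through the Deployment Workbench wizards, or scripted with the MDT PowerShell snap-in. The following is a rough sketch; the deployment share path, source paths, application name, and command line are assumptions for illustration only:

# Load the MDT snap-in and map the deployment share as a PowerShell drive
Add-PSSnapin Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "D:\DeploymentShare"

# Application: installed via its command line during the task sequence
Import-MDTApplication -Path "DS001:\Applications" -Enable "True" -Name "7-Zip x64" `
  -ShortName "7-Zip" -CommandLine "msiexec /i 7z-x64.msi /qn" `
  -WorkingDirectory ".\Applications\7-Zip" -ApplicationSourcePath "D:\Source\7-Zip" `
  -DestinationFolder "7-Zip"

# Drivers: imported for PnP matching or profile-based injection
Import-MDTDriver -Path "DS001:\Out-of-Box Drivers" -SourcePath "D:\Source\Drivers"

# Packages: MSU/CAB updates applied to Windows PE or the image itself
Import-MDTPackage -Path "DS001:\Packages" -SourcePath "D:\Source\Updates"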
The following tools and terms are used in imaging:
MDT: The toolset covered in this book. MDT is a collection of Visual Basic and PowerShell scripts for different deployment tasks, wrapped together in a management console UI and a sequencing engine that calls the scripts in stages to deploy Windows or perform other tasks related to Windows imaging (such as patching or servicing a current installation, capturing an image for later deployment, or modifying an image in some manner).
Task Sequence: A task sequence is a series of commands executed by MDT's task sequencer. This is the heart of MDT, where the administrator configures the steps of a deployment: capturing the user state for later migration, servicing and patching, and other tasks.
Task Sequencer: The name of the process MDT uses to manage its tasks. It is almost analogous to a computer virus in that the task sequencer, depending on the commands being performed, can modify the boot environment, boot over a network, collect additional task sequence commands from a central remote share, and boot off of media. It keeps track of a task's progress in a central store known as variables.dat and logs to a set of log files for troubleshooting and audit purposes.
variables.dat: A flat-file database format used to store data for an executing task sequence. It contains metadata such as the chassis type of the machine the task is executing on, how much RAM is installed, and many other variables queried from the hardware, the PnP bus, and the BIOS/firmware.
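To make the variable store a little less abstract, the following sketch shows how a script running inside an MDT Lite Touch task sequence can read and write these variables through the Microsoft.SMS.TSEnvironment COM object. Model and IsLaptop are standard MDT properties; NeedsVPNClient is a made-up custom variable used purely for illustration:

# Only available while a task sequence is actually running on the target machine
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

# Read values gathered by MDT (model, chassis type, RAM, and so on)
$model    = $tsenv.Value("Model")
$isLaptop = $tsenv.Value("IsLaptop")
Write-Output "Deploying to model '$model' (laptop: $isLaptop)"

# Write a custom variable that later task sequence steps can act on
$tsenv.Value("NeedsVPNClient") = $isLaptop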
For most Windows users, the setup process is something of a black box. You run setup, stuff happens, and then voilà, you have a Windows installation. For the deployment engineer, however, the setup process is where the magic happens. MDT manipulates setup by supplying variables along the way to customize the resulting installation for the target machine.
MDT does this by inserting variables into the Unattend.xml file for Windows setup. Some of these variables can even be provided dynamically, based on queries, using a technique known as UserExit scripts. These are used to determine a variable's value based on something such as the organizational unit (OU) of a user account, the location of the machine on the network (usually determined by the default gateway value), or a hardware query such as chassis type=laptop, which specifies that the machine is a laptop and therefore needs a VPN client installed.
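A minimal sketch of the UserExit mechanism follows. CustomSettings.ini would reference the script with a line such as UserExit=SetMachineType.vbs and consume the function with MachineType=#SetMachineType()#; ZTIGather then replaces the #...# token with the function's return value. The script name and the MachineType property are hypothetical, while IsLaptop is a standard MDT property:

' SetMachineType.vbs - hypothetical UserExit script referenced from CustomSettings.ini
' ZTIGather calls the UserExit function; returning Success lets processing continue.
Function UserExit(sType, sWhen, sDetail, bSkip)
    UserExit = Success
End Function

' Returns the value substituted for #SetMachineType()# in CustomSettings.ini
Function SetMachineType()
    If oEnvironment.Item("IsLaptop") = "True" Then
        SetMachineType = "Laptop"   ' could later trigger a VPN client install
    Else
        SetMachineType = "Desktop"
    End If
End Function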
The options available to the engineer are detailed in depth in the MDT technical documentation (Word documents) available from the Microsoft download site at https://technet.microsoft.com/en-us/library/dn781292.aspx. Some are documented in further detail on MSDN as well.
Troubleshooting setup isn't generally considered an easy thing to work on in IT. MDT makes it somewhat more straightforward for engineers by centralizing a logging directory for the administrator. A master smsts.log file records the activity of the task sequencer and indicates which sub-log to review for additional information, if needed.
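When a deployment misbehaves, a quick way to triage is to search that log for failures before digging into the sub-logs. The path below is an assumption; smsts.log moves between Windows PE, C:\MININT, and the final log location (or the share defined by the SLShare property) depending on the deployment phase:

# Adjust the path to wherever your logs currently live
$log = "C:\MININT\SMSOSD\OSDLOGS\smsts.log"

# Pull out the most recent error and failure lines from the task sequencer log
Select-String -Path $log -Pattern "FAILURE|Error" | Select-Object -Last 20

# Or watch the log live while the task sequence runs
Get-Content -Path $log -Wait -Tail 50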
By now, you should have a grasp of what imaging is about and why it is needed. In addition, you can see the history of why we are where we are in the deployment technology space. Chapter 2, Setting Up Your Environment, will walk you through building out your deployment system using the MDT and the Windows ADK. You'll learn some best practices for setting up your deployment share and imaging, and get some configuration guidance on modifying the ADK/MDT scripts.