This chapter covers the basics of the vSphere architecture and virtualization, introduces hypervisors, and describes the virtual infrastructure.
We will cover the following topics:
Understanding the need for, and use of, vSphere and how it differs from other common hypervisors
Understanding ESXi and modes of management
History of ESXi
Different types of vSphere installations: Auto Deploy, fresh installation, and upgrade
Introduction to virtual infrastructure: virtual machines, disks, CPUs, memory, switches, network, and storage
Let us start with understanding the virtualization philosophy. Virtualization is the separation of a resource or request for a service from the underlying physical delivery of that service. In other words, it's an abstraction of the operating system from hardware resources present on your server.
Virtualization provides a layer of abstraction between the computing, storage, and network hardware and the applications running on it. This implementation of virtualization is invisible to the end user, as there is very little change to the end user experience.
The key benefit of server virtualization is the ability to run multiple operating systems on a single physical server and share the same underlying hardware resources. Virtualization has been a part of the IT industry for decades, but in 1998, VMware, Inc. delivered the benefits of virtualization to industry standard x86-based platforms.
There are two approaches to server virtualization:
Hosted approach: A hosted approach provides partitioning services (virtualization) on top of a standard operating system (for example, Microsoft Windows 7) and supports the broadest range of hardware configurations, as it uses the drivers of the underlying operating system. This design is also called a Type 2 hypervisor. Popular products in this category are VMware Workstation and VMware Player. Comparable competitor products include Oracle VirtualBox and Parallels Desktop.
Bare-metal (hypervisor) architecture: A hypervisor architecture is the first layer of software installed on a clean x86-based system. Hence, it is often referred to as the bare-metal approach. Since it has direct access to the hardware resources, a hypervisor is more efficient than hosted architectures. This enables greater scalability, robustness, and performance. VMware vSphere (ESXi) is one of the pioneers in bare-metal architecture. Bare-metal hypervisors are also called Type 1 hypervisors.
The following image shows differences between the two major virtualization approaches:
Let us understand why a company would buy VMware vSphere. VMware vSphere consists of ESXi (the bare-metal hypervisor) and a management product named vCenter Server. We will be covering vCenter Server in the later chapters of the book. For now, let us focus on vSphere ESXi – the ultimate hypervisor.
The following image shows how we can eliminate the disadvantages of conventional physical setups with virtualized infrastructures.
VMware ESXi is a Type 1 hypervisor that runs directly on the host server's physical hardware, without requiring an additional underlying operating system. The basic server requires some form of persistent storage (such as a hard disk or flash memory) that stores the hypervisor and supporting files. ESXi can also run directly from the server's RAM; such setups are called Auto Deploy, and we will have a deeper look at them in later chapters of the book.
VMware ESXi provides the foundation for building a more reliable and dynamic IT infrastructure. This market-leading, production-proven hypervisor abstracts processor, memory, network, and storage resources into multiple virtual machines. Each virtual machine is capable of running both an operating system and its applications, as they would on physical hardware. The following image explains major parts of the ESXi architecture.
Multiple ESXi hosts can be deployed and managed more efficiently with vCenter Server. This enables centralized management and a better quality of service to data center applications and enterprise desktops.
VMware ESXi installs directly to the server hardware, inserting a robust virtualization layer between the hardware and the operating system. VMware ESXi partitions a physical server into multiple secure and portable virtual machines that can run side by side on the same physical server. Each virtual machine represents a complete system with processors, memory, networking, storage, and BIOS. As a result, an operating system and software applications can be installed inside a virtual machine without any modification. The virtualization layer completely isolates virtual machines from each other, thus preventing a crash or configuration error in one virtual machine from affecting the others.
Sharing the physical server resources amongst a number of virtual machines not only increases hardware utilization, but also decreases capital costs. The bare-metal architecture gives VMware ESXi complete control over the server resources allocated to each virtual machine. It also provides near-native virtual machine performance and enterprise-class scalability. VMware ESXi provides virtual machines with built-in high availability, resource management, and security features, delivering improved service levels to software applications more efficiently than static physical environments.
With the help of an example, we'll understand how a company will benefit from implementing a vSphere environment. Let's assume a company buys a server for 5,000 USD. Apart from buying the physical server, the company needs to invest in cooling (because servers generate a lot of heat), power consumption, real estate (like data center rooms), and personnel to manage that server. So all of a sudden, that physical server will cost around 6,000-7,000 USD, or maybe even more.
Price is just one aspect of it; let us understand the other. After putting in all this money, the server utilization generally doesn't go beyond 10-15 percent annually. So, at the end of the day, the capital expenditure for the company was around 6,000-7,000 USD while utilization is only 10-15 percent. Thus, it's a loss for the company.
Now, with virtualization coming into the picture, we can run multiple virtual machines on top of a single server. For example, an administrator is able to run 6 VMs on top of a physical server, with each virtual machine generating about 10 percent of resource utilization. In total, all the virtual machines combined will generate 60 percent resource utilization, which is far better than the 10-15 percent in the previous scenario. And since virtualization helps in consolidating servers, a company can save a lot by cutting down on the cost of procuring new servers.
So, cost saving and increased resource utilization are major advantages of virtualization.
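The back-of-the-envelope arithmetic from the example above can be sketched in a few lines of Python. All of the figures are the illustrative assumptions from the example, not real pricing:

```python
# Illustrative consolidation arithmetic; all figures are assumptions from the
# example above, not real pricing.
SERVER_TCO_USD = 7000   # hardware plus cooling, power, real estate, personnel
UTIL_PERCENT = 10       # a typical workload keeps a server roughly 10% busy

def physical_setup(workloads):
    """One physical server per workload: cost scales up, utilization stays low."""
    return workloads * SERVER_TCO_USD, UTIL_PERCENT

def virtualized_setup(workloads):
    """All workloads consolidated as VMs on a single server."""
    return SERVER_TCO_USD, workloads * UTIL_PERCENT

# Six workloads: (total cost in USD, utilization in percent)
print(physical_setup(6))     # (42000, 10)
print(virtualized_setup(6))  # (7000, 60)
```

The same hardware budget serves six workloads instead of one, and the server actually earns its cost by staying busy.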
VMware vSphere ESXi is by far the most advanced hypervisor in the virtualization market. There are other players in the market, including Citrix, Microsoft, and Red Hat, however, VMware ESXi is the most prominent and the most feature-rich hypervisor. Let us look at how it is better and how it differs from other hypervisors present in the market:
Hyper-V: Microsoft has its own server virtualization platform known as Hyper-V. It is mostly used by small and medium businesses (SMBs) because of lower license costs, but it is gaining market share as well. It has features such as live migration, quick migration, and dynamic memory, along with some others. Microsoft Hyper-V is basically free, but as a customer you will have to buy the required Windows Server operating system. Moreover, you can buy the Microsoft System Center software suite to manage a Hyper-V environment from a centralized location, but it will cost extra money.
XenServer: Citrix has a virtualization platform named XenServer. XenServer used to be the most frequently used hypervisor on Linux-based systems, but lost some market segments to more efficient alternatives such as KVM. XenServer is based on Xen, a free hypervisor that is also part of many Linux distributions. XenServer offers additional tools for easier infrastructure management (XenCenter). If a company is already using Citrix products, it should consider XenServer, as it already has expertise with this vendor. XenServer is often used along with Citrix XenDesktop in Virtual Desktop Infrastructure (VDI) environments.
KVM: KVM is a Linux-based open source hypervisor. First introduced into the Linux kernel in February 2007, it is now a mature hypervisor and is probably the most widely deployed open source hypervisor. KVM is used in many products, such as Red Hat Enterprise Virtualization (RHEV).
The choice of hypervisor depends on the requirements. If you need a simple virtualization platform, you can just get a free version of Hyper-V or ESXi. These two are the most popular hypervisor platforms and have professional support, while also constantly being extended with new features. XenServer also has its advantages for those experienced with the Linux operating system.
There are different ways to access and work with the ESXi environment. The following topics will give us a better understanding of ESXi and its different modes of management.
Prior to ESXi, VMware's hypervisor was called ESX. In comparison with ESXi, ESX had some significant differences. The most noticeable is that ESX's core was a highly customized Linux kernel based on Red Hat Enterprise Linux. Using a Linux user environment named the Service Console, administrators gained privileged access to the ESX kernel. It was possible to customize ESX servers by installing additional drivers and software agents (for example, backup and monitoring) on the Service Console. The following screenshot displays the boot process of the ESX Service Console.
In 2008, the first ESXi version, 3.5, was released. Versions 3.5 and 4.x were the only ones available in both ESX and ESXi variants. Beginning with vSphere 5.0, ESX was dropped by VMware to focus on ESXi. One of the disadvantages of the legacy ESX design was its big footprint. The ESXi design took a different approach, simplifying management and backup functionality by moving it out of the kernel. As a result, ESXi is more efficient and has a much smaller footprint than ESX (about 144 MB), enabling more dynamic vSphere environments with technologies like Auto Deploy. Many current hardware vendors build cost-effective SD cards into their servers. Because of the low footprint, ESXi fits well on those cards, making more expensive hard drives unnecessary for the ESXi operating system.
ESXi provides a virtualization layer that abstracts the processor, memory, storage, and networking resources of the physical host into multiple virtual machines. ESXi is a bare-metal hypervisor that creates a foundation for a dynamic and automated data center.
ESXi has a very small disk footprint (about 144 MB), which adds more security, as the attack surface is very small.
A virtual machine monitor (VMM) process that runs inside the ESXi kernel (VMkernel). It is responsible for virtualizing the guest operating system and for memory management. It also handles storage and network I/O traffic between the VMkernel and the virtual machine executable process (VMX). There is at least one VMM per virtual machine, with one for each virtual CPU.
A mouse/keyboard/screen (MKS) process that offers mouse and keyboard input along with video output. Any compatible vSphere Client can connect to a VM's MKS process to control the VM console, just as on a physical computer.
A free version of ESXi, called VMware vSphere Hypervisor, can be downloaded from the VMware website (https://www.vmware.com/go/download-vsphere), or a licensed version of vSphere can be purchased. ESXi can be installed on a hard disk, USB device, or SD card. It can also be loaded onto a diskless host (directly into memory) with a feature called vSphere Auto Deploy.
ESXi is supported on Intel processors, such as Xeon or newer, and AMD Opteron processors. ESXi includes a 64-bit VMkernel; hosts with 32-bit-only processors are not supported anymore. The last version with 32-bit support was ESX 3.5. ESXi offers support for both 32-bit and 64-bit guest operating systems.
There are two graphical user interfaces in ESXi and vCenter Server that can be used to interact with the vSphere environment: the VMware vSphere Web Client and the VMware vSphere Client. In the future, VMware will use the modern Web Client for administration, and the legacy vSphere Client (often also called the C# client) will cease to exist in future vSphere releases. To help customers become accustomed to the new tool, both clients are supported in vSphere 5.x and 6.0. Customers who are new to VMware virtualization should start with the vSphere Web Client.
The VMware vSphere Web Client is a browser-based, fully extensible, platform-independent administration tool for the vSphere platform. It is based on the Adobe Flex framework, and all operations necessary on VMware ESXi and vCenter Server can be carried out with it.
The vSphere client was present in the previous versions of vSphere and is still available in vSphere 5.5 and 6.0. The vSphere legacy client is used to directly connect to ESXi hosts and also vCenter servers.
In vSphere 5.5, all new features are available only through the vSphere web client. The traditional vSphere client will continue to operate, supporting the same feature set as vSphere 5.0, but not exposing any of the new features in vSphere 5.5.
Now we will look at different clients, which are used to manage ESXi individually.
The vSphere client is one of the interfaces for managing the vSphere environment. It provides console access to virtual machines. The vSphere client is used to connect remotely to ESXi hosts and vCenter servers from a Windows system. The following screenshot shows the vSphere legacy client, connected to an ESXi host.
To log in to the vCenter Server system with the same username and password that was used to start the Windows session, we can optionally select the Use Windows session credentials check box.
The vSphere Web Client is accessed from a browser, which can run on any operating system. However, as the Web Client still requires Adobe Flash, and Adobe dropped Flash support for Linux-based systems, customers need to choose between Microsoft Windows and Mac OS X for managing their virtual infrastructure. The vSphere Web Client is served by vCenter Server directly: the application server runs an Adobe Flex client, which pulls information from the Inventory Service running on the vCenter Server and displays it in the browser on the user's device.
The vSphere Web Client is accessed by using a web browser. The administrator, by using the server FQDN or IP address, needs to navigate to
https://<FQDN or IP address>:9443/vsphere-client/.
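As a trivial illustration, the address can be assembled like this (the port 9443 and path come straight from the URL above; the hostname is a placeholder):

```python
def web_client_url(host, port=9443):
    """Build the vSphere Web Client address for a vCenter host (FQDN or IP)."""
    return f"https://{host}:{port}/vsphere-client/"

# Placeholder hostname for illustration only.
print(web_client_url("vcenter.example.com"))
# https://vcenter.example.com:9443/vsphere-client/
```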
To access virtual machine consoles from the vSphere Web Client, the user needs to install a browser plugin named the VMware Client Integration Plugin. The following screenshot shows the vSphere Web Client overview.
Besides the two major clients mentioned previously, there are also certain command-line tools to manage the vSphere environment from a remote location. These are:
vCLI: vSphere Command-Line Interface (vCLI) is an application that provides a set of commands that allow us to manage ESXi hosts and vCenter servers. These commands are equivalent to those available on the ESXi shell when managing ESXi hosts using SSH or Direct Console User Interface (DCUI). When a vCLI command connects through the vCenter Server system, authentication is done through the vCenter Server users and roles. Details of this will be dealt with in the later part of the book.
vSphere Management Assistant (vMA): The VMware vSphere Management Assistant provides a platform for running commands and scripts for the vSphere environment. vMA is deployed as a virtual appliance that is built on SUSE Linux Enterprise Server for VMware. A virtual appliance includes one or more virtual machines that are packaged together and managed as a single unit. vMA comes with all the necessary tools for managing ESXi hosts and vCenter servers, including the vSphere CLI and the vSphere SDK for Perl.
vSphere PowerCLI: vSphere PowerCLI is a powerful command-line tool that lets us automate all aspects of vSphere management, including network, storage, and virtual machines. It is a snap-in for Microsoft Windows PowerShell.
A virtual machine running on VMware vSphere is based on a particular virtual hardware version (also called virtual HW or VMHW). This version defines the list of virtual hardware and features that are available to the VM, and also defines guest operating system support. Newer vSphere versions include newer VMHW generations to let customers benefit from newly introduced features. VMs created on older vSphere releases can also be started and customized on newer versions, but it is not possible to run newer VMHW versions on older vSphere releases. It is also possible to upgrade the VMHW version of a particular VM to make use of newer features.
Let's have a look at some major changes between the last four VMHW versions:
Version / Feature
The list of differences between particular virtual hardware versions is quite long. You can look at the differences between the particular vSphere releases in the Configuration Maximums documents. These documents list differences between various products and technologies, including virtual machines, ESXi, and vCenter Server, per vSphere release:
VMware vSphere is available in various editions to match customers' individual requirements. In addition, there are also two kits for smaller companies that are looking to get started with virtualization.
The available vSphere editions differ in their feature sets. The following table shows some of the major differences between the particular variants:
Feature / Edition
Required vCenter license
vCenter Server Standard
Yes (including Long Distance vMotion)
Yes (2 vCPUs)
Yes (4 vCPUs)
Big data extensions
Distributed resource scheduler (DRS) / Distributed power management (DPM)
Storage / Network I/O Control
Host profiles and Auto Deploy
Common features in all editions include:
Data protection technology
Hot add (adding virtual hardware resources without stopping the virtual machine; needs to be supported by the guest operating system as well)
We will have a deeper look at these features in later chapters of this book. Enterprise Plus is the most advanced, but also the most expensive, vSphere edition.
Customers just beginning with vSphere technology might want to have a look at the more cost-effective kits - Essentials and Essentials Plus. Both kits apply to infrastructure setups with a maximum of 3 ESXi hosts, with up to 2 physical CPUs each. While the Essentials kit only includes basic virtualization functionality, the Essentials Plus kit also includes some advanced features like the following:
Both kits are compatible with vCenter Server Essentials. Note that the other vCenter server releases are not compatible with Essentials kits. For example, it is not possible to include an Essentials remote office in vCenter Server Standard.
VMware ESXi requires a 64-bit server, for example with AMD Opteron or Intel Xeon processors, and a minimum of 4 GB of memory. For vSphere 5.5, a host can have up to 320 logical CPUs (cores or threads), up to 4 TB of memory, and up to 4,096 virtual CPUs. These limits vary depending on the vSphere release; they are listed in detail in the Configuration Maximums document for the particular software release.
ESXi can be installed on flash cards, USB storage, and SATA, SCSI, or SAS disk drives.
Insert the ESXi CD/DVD into the CD/DVD drive or attach the installer USB flash drive.
Restart the machine.
Set the BIOS to boot from the CD-ROM or USB.
On the Select a disk page, select the drive on which ESXi has to be installed and press Enter.
Press F1 to view the information of the selected disk.
Specify a root password for the ESXi host.
After specifying the password, ESXi will get installed on to the system. Other necessary information like the hostname, IP address, and so on, is provided after the installation, using the Direct Console User Interface (DCUI).
If booting from SAN, select the raw LUN on which ESXi is to be installed, in step 4.
Auto Deploy is a method that enables automatic deployment of ESXi hosts, which in turn increases the scalability of the vSphere environment. The reason is that the administrator need not install the ESXi hypervisor on each physical host. Instead, vCenter Server loads an ESXi image directly onto the physical host, along with optional configuration data for the ESXi host, which is also pushed by vCenter Server. If the physical server is shut down or rebooted, the current state of the ESXi host is lost, but the ESXi image and the configuration data are pushed back again as soon as the server restarts. vCenter Server stores and manages ESXi updates and patches through an image profile and, optionally, the host configuration through a host profile. This setup is especially effective if you maintain a large number of ESXi hosts: instead of patching those hosts individually, you just need to reboot them to get the most recent ESXi image.
Upgrading an ESXi host requires VMware vSphere Update Manager, or CD-ROM or USB key installation media. vSphere Update Manager can be used to upgrade multiple ESXi hosts more efficiently in unattended mode. Before upgrading any ESXi host, make sure to create a backup of your ESXi host configuration. It is possible to do a cross-platform upgrade, for example from ESX 4.x to ESXi 5.x. Before upgrading your environment, make sure to check the VMware Product Interoperability Matrix on the VMware website at http://www.vmware.com/resources/compatibility/sim/interop_matrix.php.
We will have a deeper look at the vSphere Update Manager in Chapter 11, Securing and updating vSphere.
Let us get a better understanding of what a virtual machine is, and how ESXi interacts with the four major components: CPU, memory, network, and storage.
A VM is also a set of discrete files. The following are some of the files that make up a virtual machine; except for the log files, all the VM files start with the VM's name:
A configuration file (.vmx).
One or more virtual disk files. The first virtual disk has the files <VM_name>.vmdk and <VM_name>-flat.vmdk.
A file containing the virtual machine's BIOS state and configuration (.nvram).
A VM's current log file (vmware.log) and a set of files used to archive old log entries.
Swap files (.vswp) used to reclaim memory during periods of contention.
A snapshot description file (.vmsd). This file is empty if the virtual machine has no snapshots.
If the virtual machine is converted into a template, a virtual machine template configuration file (.vmtx) replaces the virtual machine configuration file (.vmx).
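As a quick illustration of the naming convention, the sketch below lists the files you would typically expect for a VM. The extensions are the standard ones; the exact file set varies with the VM's configuration and power state:

```python
def expected_vm_files(vm_name):
    """Typical base file set for a VM named vm_name (one disk, no snapshots)."""
    return [
        f"{vm_name}.vmx",        # configuration file
        f"{vm_name}.vmdk",       # virtual disk descriptor
        f"{vm_name}-flat.vmdk",  # virtual disk data
        f"{vm_name}.nvram",      # BIOS state and configuration
        f"{vm_name}.vswp",       # swap file (present while powered on)
        f"{vm_name}.vmsd",       # snapshot descriptions (empty if none taken)
        "vmware.log",            # current log file (does not carry the VM name)
    ]

for filename in expected_vm_files("webserver01"):
    print(filename)
```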
A virtual machine can have other files, for example, if one or more snapshots were taken or if raw device mappings (RDMs) were added. A virtual machine has an additional lock file if it resides on an NFS, iSCSI, or Fibre Channel datastore. This mechanism is very important when running a cluster, to avoid VM reboots in case of network failures. In such a case, a special heartbeat connection is implemented on a datastore basis. A virtual machine has a Changed Block Tracking (CBT) file (<disk_name>-ctk.vmdk), if it is backed up with the VMware vSphere Data Protection (VDP) appliance or compatible products. Using this technology, it is possible to detect the changed virtual hard disk blocks and include only those in the backup process. This speeds up backups dramatically, while also reducing the storage consumption of backup archives.
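The idea behind CBT can be sketched with a toy model: remember which blocks changed since the last backup, and copy only those. This is a deliberate simplification of what the real -ctk.vmdk metadata enables, not VMware's implementation:

```python
class ChangedBlockTracker:
    """Toy changed-block tracking: remember dirty block numbers between backups."""

    def __init__(self, total_blocks):
        self.total_blocks = total_blocks
        self.dirty = set()

    def write(self, block_no):
        # A guest write marks the block as changed since the last backup.
        self.dirty.add(block_no)

    def incremental_backup(self):
        """Return only the changed blocks and reset the tracking state."""
        changed = sorted(self.dirty)
        self.dirty.clear()
        return changed

disk = ChangedBlockTracker(total_blocks=1_000_000)
disk.write(42)
disk.write(43)
print(disk.incremental_backup())  # [42, 43] instead of a million blocks
print(disk.incremental_backup())  # [] - nothing changed since the last backup
```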
CPU virtualization emphasizes performance: instructions run directly on the available physical CPUs whenever possible, and the virtualization layer intervenes only when needed to make virtual machines operate as if they were running directly on a physical machine. When multiple virtual machines run on one ESXi host, the physical resources are shared equally by default. We will later have a look at more complex resource setups, which enable prioritization on a per-VM basis.
vSphere's virtualization architecture is invisible to the guest operating system; there is no need to customize the guest to support being virtualized. Virtual CPUs can be configured on a socket and core basis (for example, 2 virtual sockets with 2 cores each, resulting in 4 virtual CPUs available).
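The per-VM prioritization mentioned above is typically expressed with shares: when VMs contend for CPU, each gets a slice of the host proportional to its share value. This is a minimal sketch of that proportional split; the share values are illustrative, not vSphere defaults:

```python
def allocate_cpu_mhz(capacity_mhz, vm_shares):
    """Split host CPU capacity between contending VMs in proportion to shares."""
    total_shares = sum(vm_shares.values())
    return {vm: capacity_mhz * shares // total_shares
            for vm, shares in vm_shares.items()}

# Two equal VMs split the host 50/50; doubling one VM's shares shifts the split.
print(allocate_cpu_mhz(10_000, {"vm1": 1000, "vm2": 1000}))
# {'vm1': 5000, 'vm2': 5000}
print(allocate_cpu_mhz(10_000, {"vm1": 2000, "vm2": 1000}))
# {'vm1': 6666, 'vm2': 3333}
```

Note that shares only matter under contention; an idle host gives each VM whatever it asks for.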
In a non-virtual environment, the operating system assumes that it owns all physical memory in the system. When an application starts, it uses the interfaces provided by the operating system to allocate or release virtual memory pages during the execution.
In vSphere, the ESXi kernel (VMkernel) owns all the memory resources and manages them efficiently. The VMkernel reserves some of the memory for itself; the remaining memory is available to virtual machines (with some overhead for each VM). Virtual memory is grouped into pages that are mapped to physical memory, or to datastore resources if the ESXi host runs out of physical memory. To avoid this scenario, vSphere offers the ballooning functionality, which forces powered-on virtual machines to free up unneeded memory (for example, memory caches).
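A toy model of the reclaim-before-swap idea described above. The numbers and structures are invented for illustration; the real balloon driver negotiates with each guest operating system:

```python
def balloon_reclaim(host_free_mb, demand_mb, vm_reclaimable_mb):
    """If demand exceeds free memory, reclaim unneeded guest memory via ballooning.

    vm_reclaimable_mb maps a VM name to the memory its guest could give up
    (caches and other unneeded pages). Returns (reclaimed_per_vm, shortfall_mb).
    """
    shortfall = max(0, demand_mb - host_free_mb)
    reclaimed = {}
    for vm, reclaimable in vm_reclaimable_mb.items():
        if shortfall == 0:
            break
        take = min(reclaimable, shortfall)
        reclaimed[vm] = take   # balloon driver inflates inside this guest
        shortfall -= take
    return reclaimed, shortfall  # any leftover shortfall would go to swap

print(balloon_reclaim(1024, 3072, {"vm1": 1500, "vm2": 1000}))
# ({'vm1': 1500, 'vm2': 548}, 0)
```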
We will later have a deeper look at advanced memory resource management, including reservations and limitations.
The key virtual networking components in virtual architecture are virtual Ethernet adapters and virtual switches. A virtual machine can be configured with one or more virtual Ethernet adapters. Virtual switches allow virtual machines on the same ESXi host to communicate with each other using the same protocols that would be used over physical switches, without the need for additional hardware.
Virtual switches also support VLANs that are compatible with standard VLAN implementations from other vendors, such as Cisco. To enable more complex setups, VMware also offers distributed virtual switches that enable advanced features like Link Layer Discovery Protocol (LLDP) and 40 Gb NIC support. Distributed switches are managed centrally by vCenter Server, which reduces the time and effort needed for network configuration.
We will have a look at more advanced network setups in later chapters.
Conventional file systems allow only one server to have read-write access to a file at a given time. VMware vSphere VMFS enables a distributed storage architecture that allows multiple ESXi hosts concurrent read and write access to the same shared storage resources. VMFS is a high-performance cluster file system designed for virtual machines. VMFS uses distributed journaling of its file system metadata changes, so it can easily recover data in the event of a system failure. VMFS allows virtual disks of up to 62 TB capacity and also allows storage capacity to be expanded online, without downtime.
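The on-disk locking that makes this concurrent access safe can be illustrated with a toy model: a host holding a lock keeps refreshing a heartbeat timestamp, and other hosts may break the lock only once the heartbeat goes stale. The timeout value and data structures here are invented for illustration:

```python
STALE_AFTER = 15  # seconds without a heartbeat before a lock is considered stale

class DatastoreLock:
    """Toy per-file lock with heartbeats, illustrating a cluster file system."""

    def __init__(self):
        self.owner = None
        self.last_heartbeat = None

    def acquire(self, host, now):
        stale = (self.last_heartbeat is not None
                 and now - self.last_heartbeat > STALE_AFTER)
        if self.owner is None or self.owner == host or stale:
            self.owner, self.last_heartbeat = host, now
            return True
        return False  # another host holds a live lock

    def heartbeat(self, host, now):
        # The owning host periodically refreshes its heartbeat timestamp.
        if self.owner == host:
            self.last_heartbeat = now

lock = DatastoreLock()
print(lock.acquire("esxi-01", now=0))   # True  - first host takes the lock
print(lock.acquire("esxi-02", now=5))   # False - lock is live, VM stays put
print(lock.acquire("esxi-02", now=30))  # True  - heartbeat stale, takeover allowed
```

This is why a brief network failure does not cause spurious VM takeovers: as long as the owning host keeps heartbeating via the datastore, its locks stay live.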
In this chapter, we understood the core concept of virtualization and both the need for and use of vSphere in infrastructure virtualization. We also had a look at the major differences between vSphere and other common hypervisors. To sum it up, virtualization is basically the abstraction of an operating system from the hardware resources present on your server. In other words, it lets you install multiple operating systems on the same server, enabling the server administrator to utilize a server more effectively and efficiently. vSphere, the data center product of VMware, provides effective measures and features to create a virtual infrastructure. It can be installed on your server in three different ways: Auto Deploy, fresh installation from scratch, and upgrading over a current vSphere installation. A virtual infrastructure is made up of various components such as virtual machines, disks, CPUs, memory, switches, network, and storage. It is just like a regular physical infrastructure, but is managed and controlled with the help of vSphere.
In the next chapter, we will cover vCenter Server and how to import, start, and configure the vCenter Server Appliance. We will also get to know how to configure vCenter Server and how to use it to manage the server's inventory, ESXi hosts, virtual machines, and other infrastructure components. Licensing of vCenter Server and backup will also be covered in the next chapter.