Understanding the VMware Cloud on AWS high-level architecture

This section will describe the high-level architecture of the main components that comprise VMware Cloud on AWS.

VMware Cloud on AWS is integrated into VMware’s Cloud Services Platform (CSP). The CSP console allows customers to manage their organization’s billing and identity and to grant access to VMware Cloud services. You can leverage the VMware Cloud Tech Zone Getting Started resource (https://vmc.techzone.vmware.com/getting-started-vmware-cloud-aws) to get familiar with the process of setting up an organization and configuring access in the CSP console.

You will use the VMware CSP console to deploy VMware Cloud on AWS, and once the service is deployed, you will continue to use it to manage the SDDC.
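For readers who prefer automation over the console UI, the following is a minimal sketch, assuming the public VMware Cloud on AWS REST API: it exchanges a CSP API token for an access token and then requests a new SDDC. The organization ID, SDDC name, region, and host count are placeholders, and the exact request fields may vary between API versions.

```python
import requests

CSP_TOKEN_URL = "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize"
VMC_API = "https://vmc.vmware.com/vmc/api"

def get_access_token(api_token: str) -> str:
    # Exchange the CSP API (refresh) token generated in the CSP console for an access token.
    resp = requests.post(CSP_TOKEN_URL, data={"refresh_token": api_token})
    resp.raise_for_status()
    return resp.json()["access_token"]

def deploy_sddc(access_token: str, org_id: str) -> str:
    # Request a new single-AZ SDDC; the API returns a task that can be polled for progress.
    body = {
        "name": "blueprint-sddc-01",   # placeholder SDDC name
        "provider": "AWS",
        "num_hosts": 3,                # standard three-host starting cluster
        "region": "US_WEST_2",         # placeholder AWS Region
    }
    resp = requests.post(
        f"{VMC_API}/orgs/{org_id}/sddcs",
        headers={"csp-auth-token": access_token},
        json=body,
    )
    resp.raise_for_status()
    return resp.json()["id"]           # task ID to poll for completion

if __name__ == "__main__":
    token = get_access_token("YOUR_CSP_API_TOKEN")
    task_id = deploy_sddc(token, "YOUR_ORG_ID")
    print(f"SDDC deployment task started: {task_id}")
```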

The following figure shows the high-level design of the VMware Cloud on AWS architecture, showing both a VMware Cloud customer organization running the VMware Cloud services alongside an AWS-native organization running AWS services:

Figure 1.3 – High-level architecture of VMware Cloud on AWS

Now, let us switch to the Tanzu Kubernetes service available with VMware Cloud on AWS.

Tanzu Kubernetes with VMware Cloud on AWS

VMware Cloud on AWS includes VMware Tanzu Kubernetes Grid as a service. VMware currently offers several Tanzu Kubernetes Grid (TKG) flavors for running Kubernetes:

  • vSphere with Tanzu or the TKG service: This solution turns vSphere into a platform that can run Kubernetes workloads directly on the hypervisor layer. It can be enabled on a vSphere cluster, allowing Kubernetes workloads to run directly on ESXi hosts, and it can also create upstream Kubernetes clusters in dedicated resource pools. This flavor is integrated into the VMware Cloud on AWS platform, providing Containers as a Service (CaaS), and is included in the base service pricing.
  • Tanzu Kubernetes Grid Multi-Cloud (TKGm): This is an installer-driven wizard that sets up Kubernetes environments for use across public cloud environments and on-premises SDDCs. This flavor is supported but not included in VMware Cloud on AWS’s base service pricing; it can be consumed with a separate license.
  • Tanzu Kubernetes Grid Integrated Edition: VMware Tanzu Kubernetes Grid Integrated Edition (TKGI, previously known as VMware Enterprise PKS) is a Kubernetes-based container solution that includes advanced networking, a private registry, and life cycle management. It is beyond the scope of this book.

VMware Tanzu Mission Control (TMC) is a SaaS offering for multi-cloud Kubernetes cluster management and can be accessed through the VMware CSP console. It provides the following:

  • Kubernetes cluster deployment and management on a centralized platform across multiple clouds
  • The ability to centralize operations and management
  • A policy engine that automates access control policies across multiple clusters
  • The ability to centralize authorization and authentication with federated identity

The following figure presents a high-level architecture of the services available between the on-premises environment and the VMware Cloud solution to provide hybrid operations:

Figure 1.4 – Hybrid operation components connecting on-premises to VMware Cloud

SDDC cluster design

A VMware Cloud on AWS SDDC includes compute (vSphere), storage (vSAN), and networking (NSX) resources grouped together into one or more clusters managed by a single VMware vCenter Server instance.

Host types

VMware Cloud on AWS runs on dedicated bare-metal Amazon EC2 instances. When deploying an SDDC, VMware ESXi software is installed directly on the physical host without nested virtualization. In contrast to the pricing structure for other Amazon EC2 instances running on the AWS Nitro System (which generally follows a pay-per-usage model per running EC2 instance), VMware Cloud on AWS is priced for the entire bare-metal instance, regardless of the number of virtual machines running on it.

Multiple host types are available for you when designing an SDDC. Each host has different data storage or performance specifications. Depending on the workload and use case, customers can mix multiple host types within different clusters of an SDDC to provide better performance and economics, as depicted in the following figure:

Figure 1.5 – VMware Cloud SDDC with two clusters, one each of i3.metal and i3en.metal host types

At the time of writing this book (2023), three different host types can be used to provision an SDDC.

i3.metal

The i3.metal type is VMware Cloud on AWS’s first host type. i3 hosts are ideal for general-purpose workloads. This host instance type may be used in any cluster, including single-, two-, or three-node clusters and stretched cluster deployments. The i3.metal host specification can be found in the following table:

Figure 1.6 – i3.metal host specification

This instance type has a dual-socket Intel Broadwell CPU, with 36 cores per host.

As with all hosts in the VMware Cloud on AWS service, it boots from an attached 12 GB EBS volume.

The host vSAN configuration comprises eight 1.74 TB disks split across two disk groups; one disk per disk group (two in total) is allocated to the caching tier and is not counted as part of the raw capacity pool.

It is important to note that hyperthreading is disabled on this instance type and that both deduplication and compression are enabled on the vSAN storage side. As VMware moves toward newer host types, it’s anticipated that use cases for i3.metal will become rare.

i3en.metal

The i3en hosts are designed to support data-intensive workloads. They can be used for storage-heavy or general-purpose workload requirements that cannot be met by the standard i3.metal instance. It makes economic sense in storage-heavy clusters because of the significantly higher storage capacity as compared to the i3.metal host: it has four times as much raw storage space at a lower price per GB.

This host instance type may be used in stretched cluster deployments and regular cluster deployments (two-node and above).

The i3en.metal host specification can be found in the following table:

Figure 1.7 – i3en.metal host specification

The i3en.metal type comes with hyperthreading enabled by default, providing 96 logical cores and 768 GB of memory.

The host vSAN configuration comprises eight 7.5 TB physical disks using NVMe namespaces. Each physical disk is broken up into 4 virtual disk namespaces, creating a total of 32 NVMe namespaces. Four namespaces per host are allocated for the caching tier and are not counted as raw capacity.

This host type offers significantly larger storage, with more RAM and CPU cores. Additionally, network traffic is encrypted at the NIC level, and only compression is enabled on the vSAN storage side; deduplication is disabled.

Note

VMware Cloud on AWS customer-facing vSAN storage information is provided in TiB units, not TB units. This may cause confusion when performing storage sizing.
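As a rough illustration of this note, the following sketch converts the capacity-tier figures quoted above for the i3.metal and i3en.metal hosts into tebibytes. It assumes the disk sizes above are decimal terabytes; the usable capacity actually reported by the service will differ once vSAN overheads are taken into account.

```python
# Illustrative sizing calculation based on the disk counts quoted in the text.
TB = 10**12    # decimal terabyte
TIB = 2**40    # binary tebibyte

def capacity_tier_tib(capacity_devices: int, device_size_tb: float) -> float:
    """Convert the capacity-tier devices (cache devices excluded) into TiB."""
    return capacity_devices * device_size_tb * TB / TIB

# i3.metal: eight 1.74 TB disks, two of which are dedicated to the caching tier
print(f"i3.metal capacity tier:   {capacity_tier_tib(8 - 2, 1.74):.1f} TiB")

# i3en.metal: 32 NVMe namespaces (8 x 7.5 TB disks split into 4 each), 4 for caching
print(f"i3en.metal capacity tier: {capacity_tier_tib(32 - 4, 7.5 / 4):.1f} TiB")
```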

i4i.metal

VMware and AWS announced the availability of a brand new instance type in September 2022 – i4i.metal. With this new hardware platform, customers can now benefit from the latest Intel CPU architecture (Ice Lake), increased memory size and speed, and twice as much storage capacity compared to i3. The host specification can be found in the following table:

Figure 1.8 – i4i.metal host specification

Based on a recent performance study (https://blogs.vmware.com/performance/2022/11/sql-performance-vmware-cloud-on-aws-i3-i3en-i4i.html) using a Microsoft SQL Server workload, i4i outperforms i3.metal by roughly a factor of three.

In the next section, we will evaluate how the VMware Cloud on AWS SDDC is mapped to AWS Availability Zones.

AWS Availability Zones

The following figure describes the relationship between a Region and Availability Zones in AWS:

Figure 1.9 – Architecture of a Region and Availability Zones

Each AWS Region is made up of multiple Availability Zones, which are physically isolated data centers. High-speed, low-latency connections link Availability Zones within the same Region. Availability Zones are located in separate floodplains and are equipped with uninterruptible power supplies and on-site backup generators.

If available, they can be connected to different power grids or utility companies. Each Availability Zone has redundant connections to multiple ISPs. By default, an SDDC is deployed on a single Availability Zone.

The following figure describes the essential building blocks of VMware Cloud on AWS SDDC clusters, which are, in turn, built from compute hosts:

Figure 1.10 – Architecture of a VMware Cloud on AWS SDDC, with clusters and hosts

A cluster is built from a minimum of two hosts and can have a host added or removed at will from the VMware Cloud on AWS SDDC console.
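As a companion to the console workflow, the following is a minimal sketch, assuming the same public VMC API as in the earlier deployment example, of adding or removing a host from a cluster programmatically. The esxs endpoint and the action=remove query parameter reflect the documented API at the time of writing; the access token, organization ID, and SDDC ID are placeholders.

```python
import requests

VMC_API = "https://vmc.vmware.com/vmc/api"

def scale_cluster(access_token: str, org_id: str, sddc_id: str,
                  num_hosts: int = 1, remove: bool = False) -> str:
    # Omitting the action parameter adds hosts; action=remove scales the cluster in.
    params = {"action": "remove"} if remove else {}
    resp = requests.post(
        f"{VMC_API}/orgs/{org_id}/sddcs/{sddc_id}/esxs",
        headers={"csp-auth-token": access_token},
        params=params,
        json={"num_hosts": num_hosts},
    )
    resp.raise_for_status()
    return resp.json()["id"]   # task ID for the host add/remove operation

# Example: add one host to the SDDC's default cluster
# scale_cluster(token, "YOUR_ORG_ID", "YOUR_SDDC_ID", num_hosts=1)
```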

Cluster types and sizes

VMware Cloud on AWS supports many different types of clusters, accommodating use cases from Proof of Concept (PoC) deployments to business-critical applications. There are three types of standard (single Availability Zone) clusters in an SDDC.

Single-host SDDC

A cluster refers to a compute pool of multiple hosts; a single-host SDDC is an exception to that rule, as it provides a fully functional SDDC with VMware vSAN, NSX, and vSphere on a single host instead of multiple hosts. This option allows customers to experiment with VMware Cloud on AWS for a low price.

Information

Customers need to know that single-host SDDC clusters can’t be patched or updated with new software within their 60-day lifespan. These clusters are automatically terminated after 60 days, and all virtual machines and data are deleted. VMware doesn’t back up the data, and in the case of a host failure, there will be data loss.

An SLA does not cover single-host SDDCs, and they should not be used for production purposes. Customers can choose to convert a single-host SDDC into a 2-host SDDC cluster at any time during their 60-day operational period. Once converted, the 2-host SDDC cluster will be ready for production workloads. All the data will be migrated to both hosts on the 2-host SDDC cluster.

VMware will manage the multi-host production cluster and keep it up to date with the latest software updates and security patches. This can be the path from PoC to production.

Two-host SDDC clusters

The 2-host SDDC cluster allows for a fully redundant data replica suitable for entry-level production use cases. This deployment is good for customers beginning their public cloud journey. It is also suitable for DR pilot light deployments that are part of VCDR services, which will be covered later in the book.

The 2-host SDDC cluster has no time restrictions and is SLA-eligible. VMware will patch and upgrade all the hosts in the SDDC cluster with zero downtime, similar to how a multi-host SDDC cluster running production workloads is patched or updated.

The two-host cluster leverages a virtual EC2 m5.2xlarge instance as a vSAN witness to store and update the witness metadata, providing resiliency in case of a hardware failure in any one of the hosts. When the cluster is scaled up to three hosts, the metadata witness is terminated; conversely, the witness instance is recreated when the cluster is scaled back down from three hosts to two.

Note

At launch time, a three-host cluster couldn’t be scaled down to two hosts; however, this limitation was addressed in early 2022. There are still limitations on the number of virtual machines that can be powered on concurrently (36), and support for large VM vMotion through HCX is limited with i3 host types.

More information can be found at https://vmc.techzone.vmware.com/resource/entry-level-clusters-vmware-cloud-aws.

Three-host SDDC clusters

A three-host cluster can scale up and down between 3 and 16 hosts, without the previously described limitations, and is recommended for larger production environments.

Multi-cluster SDDC

There can be up to 20 clusters in an SDDC. The management appliances will always be in cluster number 1. Each cluster needs to contain the same type of hosts, but different clusters can operate different host types.

The following figure describes at a high level how, in a single customer organization, there can be multiple SDDCs, and within the SDDC, multiple clusters, with each SDDC having its own management appliances residing in cluster 1:

Figure 1.11 – Detailed view of multiple SDDCs and clusters in an organization

So far, we have gone through a single Availability Zone cluster type. Next, let’s explore how customers can architect their SDDC resiliency to withstand a full Availability Zone (AZ) failure leveraging stretched clusters.

Stretched clusters

When designing application resiliency across AZs with native AWS services, resiliency must be achieved via application-level availability or services such as AWS RDS. Traditional vSphere-based applications need to be refactored to enjoy those resiliency capabilities.

In contrast, VMware Cloud on AWS offers AZ failure resilience via vSAN stretched clusters.

Cross-AZ availability is achieved by extending the vSphere cluster across two AWS AZs. The stretched vSAN cluster makes use of vSAN’s synchronous writes across AZs. It offers zero RPO and near-zero RTO, depending on the vSphere HA reboot time. This functionality is transparent to applications running inside a VM.

Figure 1.12 – High-level architecture of VMware Cloud on AWS standard cluster

Customers can select the two AZs that they wish their SDDC to be stretched across in the stretched cluster deployment. VMware automation will then pick the correct third AZ in which to deploy your witness node. The VMware Cloud on AWS service covers the cost of provisioning the witness node, which runs on an Amazon EC2 instance created from an Amazon Machine Image (AMI) converted from a VMware OVA. The stretched cluster and witness deployments are fully automated once the initial parameters have been set, just like a single-AZ SDDC installation.

Stretched clusters can span AZs within the same AWS Region and require a minimum of four hosts. vSphere HA is turned on, and the host isolation response is set to power off and restart virtual machines. In an AZ failure, a vSphere HA event is triggered. vSAN fault domains are used to maintain vSAN data integrity.

The SLA for a standard cluster includes an availability commitment of 99.9% uptime. A stretched cluster has an availability commitment of 99.99% uptime if it utilizes 3+3 hosts or more across the two AZs, while a 2+2 stretched cluster has an availability commitment of 99.9% uptime. This enables continuous operations if an AWS AZ fails.

The VMware SLA for VMware Cloud on AWS is available here: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/support/vmw-cloud-aws-service-level-agreement.pdf.
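To put these commitments in perspective, here is a quick worked calculation (illustrative only, not part of the SLA itself) of the downtime each availability level allows in a 30-day month.

```python
# Convert the availability commitments above into allowed downtime per 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200 minutes

commitments = {
    "Standard cluster / 2+2 stretched cluster (99.9%)": 0.999,
    "Stretched cluster with 3+3 hosts or more (99.99%)": 0.9999,
}

for label, availability in commitments.items():
    allowed_downtime = (1 - availability) * MINUTES_PER_MONTH
    print(f"{label}: up to {allowed_downtime:.1f} minutes of downtime per month")
```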

Elastic Distributed Resource Scheduler

The Elastic DRS system employs an algorithm designed to uphold an optimal count of provisioned hosts, ensuring high cluster utilization while meeting specified CPU, memory, and storage performance criteria. Elastic DRS continually assesses the current demand within your SDDC and utilizes its algorithm to propose either scaling in or scaling out of the cluster. When a scale-out recommendation is received, a decision engine acts by provisioning a new host into the cluster. Conversely, when a scale-in recommendation is generated, the least-utilized host is removed from the cluster.

It’s important to note that Elastic DRS is not compatible with single-host starter SDDCs. To implement Elastic DRS, a minimum of three hosts is required for a single-AZ SDDC and six hosts for a multi-AZ SDDC. Upon the initiation of a scale-out by the Elastic DRS algorithm, all users within the organization receive notifications both in the VMware Cloud Console and via email.

Figure 1.13 – Elastic DRS cluster threshold monitoring and adjustments

You can control the Elastic DRS configuration through Elastic DRS policies. The default Elastic DRS baseline policy is always active and is configured to monitor the utilization of the vSAN datastore exclusively. Once the utilization reaches 80%, Elastic DRS will initiate the host addition process. Customers can opt to use different Elastic DRS policies depending on the use cases and requirements. The following policies are available:

  • Optimize for best performance (recommended when hosting mission-critical applications): When using this policy, Elastic DRS monitors CPU, memory, and storage resources. It uses aggressive (high) thresholds when generating scale-out recommendations and moderate (low) thresholds when generating scale-in recommendations.
  • Optimize for the lowest cost (recommended when running general-purpose workloads with costs factoring over performance): This policy, as opposed to the previous one, has more aggressive low thresholds and is configured to tolerate longer spikes of high utilization. Using this policy might lead to overcommitting compute resources and performance drops, but it helps to maintain the lowest number of hosts within a cluster.
  • Optimize for rapid scaling (recommended for DR, VDI, or any workloads that have predictable spike characteristics): When opting for this policy, you can define how many hosts will be added to the cluster in parallel. While the default setting is 2 hosts, you can select up to 16 hosts in a batch. With this policy, you can address the demand of workloads with high spikes in resource utilization – for example, VDI desktops starting up on Monday morning. Also, use this policy with VCDR to achieve low cost and high readiness of the environment for a DR situation.

The resource (storage, CPU, and memory) thresholds will vary depending on the preceding policies.

Elastic DRS Policy            | Storage Thresholds              | CPU Thresholds                  | Memory Thresholds
Baseline Policy               | Scale-out: 80% (storage only)   | –                               | –
Optimize for Best Performance | Scale-out: 80%                  | Scale-out: 90%                  | Scale-out: 80%
Optimize for the Lowest Cost  | Scale-out: 80% / Scale-in: 40%  | Scale-out: 90% / Scale-in: 60%  | Scale-out: 80% / Scale-in: 60%
Rapid Scaling                 | Scale-out: 80% / Scale-in: 40%  | Scale-out: 80% / Scale-in: 50%  | Scale-out: 80% / Scale-in: 50%

Table 1.1 – Elastic DRS policy default thresholds
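To make the policy behavior concrete, the following is a simplified, illustrative model of how the thresholds in Table 1.1 could drive a scale-in or scale-out recommendation. This is not VMware's implementation; the policy dictionary simply encodes the Optimize for the Lowest Cost defaults from the table.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Thresholds:
    scale_out: float             # utilization above this triggers a scale-out
    scale_in: Optional[float]    # utilization below this (for all resources) permits a scale-in

# Default thresholds for the "Optimize for the Lowest Cost" policy from Table 1.1
OPTIMIZE_FOR_LOWEST_COST: Dict[str, Thresholds] = {
    "storage": Thresholds(scale_out=0.80, scale_in=0.40),
    "cpu":     Thresholds(scale_out=0.90, scale_in=0.60),
    "memory":  Thresholds(scale_out=0.80, scale_in=0.60),
}

def recommend(utilization: Dict[str, float], policy: Dict[str, Thresholds]) -> str:
    # Scale out if ANY resource exceeds its scale-out threshold.
    if any(utilization[r] > t.scale_out for r, t in policy.items()):
        return "scale-out"
    # Scale in only if ALL resources sit below their scale-in thresholds.
    if all(t.scale_in is not None and utilization[r] < t.scale_in
           for r, t in policy.items()):
        return "scale-in"
    return "no-change"

# Example: storage at 85% pushes the cluster over its 80% scale-out threshold.
print(recommend({"storage": 0.85, "cpu": 0.55, "memory": 0.60}, OPTIMIZE_FOR_LOWEST_COST))
```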

Note

VMware will automatically add hosts to the cluster if storage utilization exceeds 80%. This is because the baseline Elastic DRS policy is, by default, enabled on all SDDC clusters; it cannot be disabled. This is a preventative measure to ensure that vSAN has enough “slack” storage to support applications and workloads.

Automatic cluster remediation

One of the ultimate benefits of running the VMware SDDC on the AWS public cloud is access to elastic resource capacity. It not only helps to address resource demands (see the preceding Elastic DRS section) but also enables quick recovery from a hardware failure.

The auto-remediation service monitors ESXi hosts for different types of hardware failures. Once a failure is detected, the auto-remediation service triggers the autoscaler mechanism to add a host to the cluster and place the failed host into maintenance mode, if possible. Depending on the severity of the failure, vSphere DRS will automatically migrate virtual machines, or vSphere HA will restart the affected virtual machines. vSAN will then synchronize the Virtual Machine Disk (VMDK) files. Once this process is complete, the auto-remediation service will initiate the removal of the failed host.
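This sequence can be summarized in the following pseudocode-style sketch. The objects and method names are purely hypothetical and simply mirror the workflow described above; the real remediation logic is internal to the VMware Cloud on AWS service.

```python
def remediate_host_failure(cluster, failed_host):
    # 1. The autoscaler adds a replacement host so cluster capacity is restored first.
    cluster.add_host()
    # 2. If the failed host is still responsive, it is placed into maintenance mode
    #    and vSphere DRS migrates its virtual machines away; otherwise, vSphere HA
    #    restarts the affected virtual machines on the surviving hosts.
    if failed_host.is_responsive():
        failed_host.enter_maintenance_mode()
    else:
        cluster.restart_vms_with_ha(failed_host)
    # 3. vSAN resynchronizes the affected VMDK components onto healthy hosts.
    cluster.vsan.resync()
    # 4. Only after the resync completes is the failed host removed from the cluster.
    cluster.remove_host(failed_host)
```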

The following diagram describes how the autoscaler service monitors for alerts in the SDDC and makes remediation actions accordingly:

Figure 1.14 – Autoscaler service high-level architecture

All these operations are transparent to customers and do not incur additional costs – the newly added host is non-billable for the duration of the remediation.
