An Introduction to AWS Outposts
After prevailing over the initial hype cycle, the cloud has evolved into the de facto platform of choice for running IT services. Yet, for a long time, an apparently insurmountable gap separated customer premises from the cloud, and there have been several attempts to bridge these two worlds.
The term hybrid was initially coined as a reference to solutions that purported to operate as a cloud would in the customer data center. Since then, several such denominations in the Information Technology realm have evolved and consolidated into what is now referred to as the edge. Amazon Web Services (AWS) now delivers managed cloud infrastructure in the form of AWS Outposts, which lives at this very edge.
This chapter explores the concept of the edge and then transitions into exploring the key concepts and terminology of AWS Outposts. Finally, we will wrap up with the use cases for this product.
In this chapter, you will cover the following:
- Identifying the edge space in the Information Technology domain
- Understanding the purpose of AWS Outposts
- Identifying how Outposts fits into the edge space
- Understanding what business problems Outposts solves
Defining hybrid, edge, and rugged edge IT spaces
Amazon as an enterprise evolved rapidly out of the challenges it endured while running large-scale applications, challenges that, at the time, could not be addressed either technically or economically with traditional infrastructure. This history might lead us to conclude that AWS was unlikely to develop a product resembling a traditional server rack you could find in any regular data center.
As with any market or industry, things change. New technologies arise, paradigms shift, and new trends pose new challenges requiring new solutions. The way enterprises consume corporate IT services was no exception. In the past couple of years, the hybrid phenomenon gained a lot of momentum and became one of the preferred ways for enterprises to run their business.
This is not strange by any means. You have the start-up sector, which is cloud-native and certainly does not see any reason to have a physical infrastructure of its own. Start-up companies only need a solid connection to the internet and personal equipment to carry out the development work and the administrative work with cloud providers.
At the other end of the spectrum, we had companies over the past few decades doing IT the traditional way, operating fully on-premises. But in recent years, the market has developed the perception that it does not need to be one way or the other.
Back then, your options to run IT infrastructure outside your own local data center relied on offerings from third-party specialized data center providers. Offerings such as hosting and co-location were extremely popular at the time and are still available today. If you could procure a good leased line to connect your site(s) with the provider's site, any of these options would be available to you.
At best, you could have one of these providers supplying and managing all the necessary equipment to run your business while leveraging the OPEX financial model. Your IT team would take care of services and Line-Of-Business (LOB) applications and you would be in business. For some companies, the CAPEX model made sense as purchased equipment became assets for the company and added value to the balance sheets.
Times change and the advent of the cloud challenged the constraints and limitations of traditional data centers. Andrew Jassy, currently the president and CEO of Amazon, in an interview for the site TechCrunch (https://techcrunch.com/2016/07/02/andy-jassys-brief-history-of-the-genesis-of-aws/), described how AWS was conceived to be the Operating System for the internet at its inception, designed to reliably run applications and services at massive scale.
When AWS came to life in 2006, it wasn't at all clear that it would become what it is today. From humble beginnings to cloud behemoth in just 15 years, AWS and the cloud started out as yet another technology trend that had yet to prove itself reliable and solid. The early adopters pioneered the new cloud paradigm and got their feet wet with infrastructure and services that existed beyond their physical reach, where they could not even schedule a visit to the data center.
Adopting cloud services was an exercise in designing a dual IT landscape: any given service lived either on-premises or in the cloud. The connection between the two existed basically to exchange data for migration or backup and, eventually, for very simple interaction between systems with multiple components. It was difficult to consider a three-tier architecture where one tier would sit in the cloud and the others on-premises. Internet bandwidth was scarce, connections were not very reliable, and you often had to resort to VPNs for security because it was challenging to procure dedicated links that connected directly to cloud providers at the time.
As the cloud trend reached critical mass and established itself as a valid path, businesses faced a new reality: the cloud had to be considered within their technology plans, and a thorough assessment of the IT landscape was necessary to devise a strategy that could seriously encompass cloud offerings. A vague statement that the cloud was just hype was no longer acceptable to business owners – it was here to stay.
This new way of consuming corporate IT services was dubbed hybrid cloud and described as a combination of cloud services running alongside the traditional on-premises data center solutions. Not surprisingly, the point of view of this model was oriented from the data center out into the outside world, stretching toward the cloud, because it was primarily articulated by on-premises infrastructure providers whose vision centered around the traditional model.
The possibility of a business going all in with the cloud while shutting down all traditional data centers was somewhat far-fetched, but it was delineated as a real alternative. While it is clear that not all workloads will be a fit for the cloud and some may remain on-premises, a significant shift of IT infrastructure to the cloud can realistically be envisioned.
Further developments in this trend revealed that one piece of the puzzle was missing. If considered as a binary choice, an on-premises data center versus the cloud, any move could be a significant risk because there was no middle ground. IT teams were facing an all-or-nothing situation where systems with multiple components would have to be moved as a whole, likely in one go.
Evaluating how a system would perform when running on the cloud was complex because tests had to be carried out at production size and capacity without close contact with all other surrounding systems and services. Even with extensive tests, a cutover date was an event of high significance, full of anxiety, and likely to require a long maintenance window. Clearly, an intermediary infrastructure bridging both worlds would be beneficial.
Initial attempts to fill this gap were made by traditional software providers, offering solutions to be run on-premises that used the type of technologies and solutions offered by cloud providers. This was the private cloud – one attempt to bring the cloud operational model to customer on-premises data centers. Running on their own infrastructure at their data centers or co-location sites, the promise was to leverage cloud-like services and technologies at your facility or closer to you.
It was a good approach and made good sense. IT teams could become familiar with cloud technologies and how system operations are carried out in the cloud while relatively comfortable at home with their own equipment, learning at their own pace. As IT professionals became familiar with the cloud model, the transition to a cloud provider could be facilitated as the value and challenges became clearer.
Even with a good portion of the market leveraging the private cloud offering, there was still the inescapable fact that on-premises, you could not leverage the cloud-specific services and technologies. Moreover, you would never benefit from the scalability and economies of scale offered by cloud providers. It was you running cloud-like services and still managing the necessary infrastructure.
Cloud adoption has gained significant momentum in recent years, and we can now see how start-up companies are said to be born in the cloud, or cloud-native. These businesses would never have considered creating their products and applications on on-premises infrastructure. Such offerings would not be possible if they were conceived within the limitations and paradigms of traditional technologies.
Systems have become increasingly complex, made up of many moving parts as opposed to the monolithic approach of yesterday. Technologies favored distributed systems and highly specialized and smaller microservices. This movement highly favored the appeal of the cloud, built on top of pay-per-use, faster innovation, elasticity, and scale. For more information, refer to this video (https://www.youtube.com/watch?v=yMJ75k9X5_8), The Six Main Benefits of Cloud Computing.
Fast forward to today and considering the latest world developments, the cloud has completely solidified its position and, to be fair, has exploded in adoption, which was significantly accelerated because of the challenges imposed by recent events such as the pandemic. The cloud model was battle-tested and made it through, to the point that it became the de facto standard model to be considered the foundation of technology.
While the future of the cloud seems to be clear skies, there is another fact that still holds: the vast majority of IT spending is still on traditional infrastructure and data centers. While this seems to be a wonderful opportunity to thrive in a market where the largest chunk of business is yet to be conquered, it also means that the missing key piece to act as the catalyst for the widespread adoption of the cloud is more crucial than ever.
As the next step toward blurring the boundaries between the cloud and the so-called physical world, the concept of a hybrid has been redefined. Hybrid is considered to be this enabler, the indistinguishable middle ground where on-premises and the cloud live together in a harmonic symbiosis where both parties benefit from each other. To amplify that notion, the term edge was added to the vernacular.
What we are now seeing is the original hybrid concept in reverse. Now, it originates in the cloud and branches out to the world in the form of edge nodes, where any given data center is considered to be one of these nodes. Effectively, the cloud aims to be everywhere, encompassing all kinds of businesses and places, powered by the recent advancements in high-speed wireless connectivity through 5G networks and IoT devices and sensors.
To make it clearer, an edge node is considered to be anywhere you could run some form of computing, be it large, small, or tiny. Naturally, a family house, a hospital, a restaurant, a crop field, an underground mine, and a cargo ship are significantly different places in nature. Suitability to accommodate electronic components and connectivity conditions change radically and the mileage of the IT equipment running will vary.
To describe these components better when deployed in harmful and aggressive environments, this space is conceptualized as the rugged edge, where equipment must withstand harsh usage conditions and must incorporate design characteristics and features that allow prolonged, normal operation under those circumstances. Equipment built for this purpose boasts specs that allow for severe thermal, mechanical, and environmental conditions.
Today, cloud companies are challenging themselves to create technologies that will propel the ultra-connected world where technology is pervasive, data is collected massively everywhere, and information is nearly real-time. Hybrid solutions play a fundamental role in this game, paving the way for cloud providers to extend all over the world and become the infrastructure, not one infrastructure.
What is AWS Outposts?
For years, AWS was clear on its messaging that customers should stop spending money on undifferentiated heavy lifting. This is AWS verbatim, as can be seen in the design principles for the Cost Optimization pillar of the AWS Well-Architected Framework (https://docs.aws.amazon.com/wellarchitected/latest/framework/cost-dp.html). As it says, racking, stacking, and powering servers fall into this category, with customers advised to explore managed services to focus on business objectives rather than IT infrastructure.
From that statement, it would be reasonable to conclude that AWS would hardly give customers an offering that could resemble the dreaded kind of equipment that needs power, racking, and stacking. The early strides of AWS bringing physical equipment to customers were in the form of the AWS Snow family: AWS Snowball Edge devices and their variants (computing, data transfer, and storage).
Snowball Edge sports the title of the first product able to run AWS compute technology on customer premises, delivering compute through specific Amazon Elastic Compute Cloud (EC2) instance types and running AWS Lambda functions locally, powered by AWS IoT Greengrass. Despite this fact, it was advertised as a migration device that enabled customers to move large local datasets to and from the cloud, while supporting independent local workloads in remote locations.
In addition, Snowball Edge devices can be clustered together to locally grow or shrink storage and compute on demand. AWS Snowball Edge supports a subset of Amazon Simple Storage Service (S3) APIs for data transfer. Being able to create users and generate AWS Identity and Access Management (IAM) keys locally, it can run in disconnected environments and offers Graphics Processing Unit (GPU) options.
Launched in 2015, the first generation was called AWS Snowball and did not have compute capabilities; those appeared in 2016 when the product was rebranded as Snowball Edge. Today, AWS Snowball refers to the overall service. The specs are impressive, with 100 Gbps network options and the ability to cluster up to 400 TB of S3-compatible storage. SBE-C instances are no less impressive, featuring 52 vCPUs and 208 GB of memory.
AWS invested a great deal to make the cloud not only appealing but also accessible. Remove that scary thought of having to change something drastically and radically, that awful sensation of having to rebuild the IT infrastructure on top of a completely different platform. AWS even gave various customers a soft landing and easy path to AWS when they announced (https://aws.amazon.com/blogs/aws/in-the-works-vmware-cloud-on-aws/) their joint work with VMware in 2016 to bring its capabilities to the cloud, which debuted in 2017 (https://aws.amazon.com/blogs/aws/vmware-cloud-on-aws-now-available/).
With these capabilities and Edge appended to the service name, it seemed that moving forward, the path was set with Snowball. It was not without surprise that AWS Outposts was announced in November 2018 during Andy Jassy’s keynote at re:Invent. On stage, it was shown as a conceptualized model, but one could clearly see it had the shape and form of a server rack.
AWS Outposts debuted on video in 2019 (https://youtu.be/Q6OgRawyjIQ), introduced by Anthony Liguori, a VP and distinguished engineer at AWS. By that time, it was clear that a server rack was in the making inside AWS and that it was targeting the traditional data center realm. Yet this seemed to go against the AWS philosophy of asking customers to stop spending money on traditional infrastructure – anyone staring at an AWS Outposts rack could be intrigued.
At re:Invent in 2019, Andy Jassy revealed the use case for Outposts during his keynote. He started by acknowledging that some workloads would have to remain on-premises: even companies that had been strong advocates of cloud adoption had struggled at times to move certain workloads, finding them very challenging and eventually stumbling along the way.
Outposts was characterized as a solution to run AWS infrastructure on-premises for a truly consistent hybrid experience. The feature set was enticing: the same hardware that AWS runs in its data centers, seamlessly connecting to all AWS services, with the same APIs, control plane, functionality, and tools as used when operating in the Region. On top of it, it is fully managed by AWS. On the same occasion, he showcased a specific Outposts variant for VMware, a bold move for a cloud company that advocated against investing in data centers.
That was not the only announcement targeting the edge space. At that same event, AWS Local Zones and AWS Wavelength were announced. While these offerings fall beyond the scope of this book, it's worth noting that they weave together into an array of capabilities that address the requirements and gaps of the edge space and give AWS a strong foothold in it. It suffices to say that AWS Local Zones are built using slightly modified (multi-tenant) AWS Outposts racks.
Now, we have finally set the stage to introduce AWS Outposts. Let us begin with the product landing page (https://aws.amazon.com/outposts/). At the time of writing, it is now dubbed Outposts Family, due to the introduction of two new form factors at re:Invent in 2021. The 42U Rack version, the first to be launched, is now called an AWS Outposts rack. The new 1U and 2U versions are called AWS Outposts servers.
Regardless of form factor, three outstanding statements hold across the family and strongly establish the value proposition of this offering:
- Fully managed infrastructure: Operated, monitored, patched, and serviced by AWS
- Run AWS Services on-premises: The same infrastructure as used in AWS data centers, built on top of the AWS Nitro System
- Truly consistent hybrid experience: The same APIs and tools used in the region, a single pane of management for a seamless experience
Let us cover each in detail.
One of the key aspects of positioning AWS Outposts in customer conversations revolves around explaining how AWS Outposts is different from ordering commodity hardware from traditional hardware vendors. That is exactly where these three statements come into play, highlighting differentiators that cannot be matched by competing offerings.
AWS Outposts is fully managed by AWS. While others may claim their products are also fully managed, AWS takes it to the ultimate level: it is an AWS product end to end. The hardware is AWS, purchase and delivery are managed and conducted by AWS, product requirements are strongly enforced by AWS, and site survey, installation, and servicing are conducted by AWS. No third parties are involved – the customer’s point of contact is AWS.
AWS Outposts enables customers to run a subset of AWS services on-premises and allows applications running on Outposts to seamlessly integrate with AWS products in the Region. The first part of that statement alone knocks out traditional hardware – you can't run EC2 on it, for example. To strengthen the case, while applications running on traditional hardware can interact with AWS via API calls, AWS Outposts once again takes it to a whole new level, extending an AWS Availability Zone of a given Region to the confines of an Outposts rack and allowing workloads to operate as if they lived in that Region.
Customers are extremely sensitive to consistent processes. The use of multiple tools, multiple management consoles, and various scripting languages is cumbersome and error-prone. When you craft a solution where multiple parts come from multiple vendors and are all assembled, that is what ends up happening.
You will need to use a myriad of tools, interfaces, and scripts to configure and make it work. Long and complex setup processes, multiple vendors involved in troubleshooting errors, and multiple teams conducting various stages of the process lead to inefficiency, inconsistency, security problems, and significant delays in being ready for production.
IT professionals normally try to avoid this pitfall by pursuing a solution provided by a single vendor, even with the risk of the infamous single vendor lock-in. However, one hardware provider hardly ever designs and manufactures all the constituent technologies involved, such as the compute, storage, networking, power, cooling, and rack structures. More often than not, the OEM of some of the components is a third-party vendor, if not a third-party brand itself. In the end, these solutions are a collection of individual parts with some degree of consistency.
Here is another significant differentiator of AWS Outposts, which is a thoroughbred AWS solution. AWS Outposts employs the same technology used in AWS data centers whose hardware designs and solutions have undergone significant advancements over time and have been battle-tested in production for several years. With this level of integration and control, AWS can explore and tweak the components for highly specialized tasks, as opposed to the more general-purpose approach of commodity hardware.
AWS developed a technology called the AWS Nitro System (https://aws.amazon.com/ec2/nitro/), which is a set of custom application-specific integrated circuits (ASICs) dedicated to handling very specialized functions. AWS Outposts uses the same technology, standing in line to receive any of the latest and greatest advancements AWS can bring into the hardware technology space. Being such a uniform and purpose-built solution, it benefits from a fully automated, zero-touch deployment for maximum frictionless operations.
Now, we are equipped to broadly understand the AWS Outposts offering: a stepping stone deployed outside the AWS cloud, with strong network connection requirements to an AWS Region, capable of running a subset of AWS services and capabilities, and conceived and designed by AWS with its own DNA.
AWS Outposts is not a hardware sell, it is not a general-purpose infrastructure to deploy traditional software solutions, and it is not meant to run disconnected from an AWS Region. AWS Outposts is a cloud adoption decision because you are running your workloads not in a cloud-like infrastructure but rather in a downscaled cloud infrastructure. This is evident because, during the due-diligence phase, an AWS Outposts opportunity can be disqualified by the field teams if the customer workloads are capable of running in an AWS Region. AWS believes in the philosophy that if workloads are capable of running in an AWS Region, they should run in an AWS Region.
Basically, AWS is asking what the use cases and business requirements are that prevent certain workloads from operating in the cloud, something that could defy common sense. Does that mean AWS is trying to discourage the customers from taking the Outposts route in favor of bringing them from the edge to the core Region?
Very much the reverse – AWS wants to make sure customers are making informed decisions. It wants them to understand the use cases for Outposts. Fundamentally, they understand they are effectively setting foot in the cloud with Outposts being the enabler to galvanize cloud adoption and the catalyst for companies to upskill their teams to build a cloud operations model and become trained in AWS technologies and services.
At this point, you should be able to identify the edge IT space, the gap between the cloud and the on-premises data center, and also understand the historical challenges associated with operating infrastructure spanning these significantly different domains.
As the initial solutions to address this problem were not good enough, AWS developed Outposts to be the answer to seamlessly bridging these two worlds. Now, it is time to frame AWS Outposts in this edge space to see how it handles the assignment.
Hybrid architecture tenets
Now that we understand AWS Outposts and its purpose, it is time to show how it fits in the so-called hybrid space. As we outlined in the first section, this concept has been redefined since it was initially coined with a vision originating from the data center toward the outside world, where the cloud infrastructure was out there somewhere. From this perspective, the edge was anything outside the data center, with cloud providers being just one of the alternatives for running compute.
As the cloud gained traction, solidified, and became the de facto standard IT choice for any business to run its workloads and applications, the movement now originates in the cloud and reaches out into the world, where the edge nodes are. Any given data center now represents just one of these nodes where compute processing takes place.
When designing a product to address specific use cases or be fit for certain purposes, one of the strategies is to define the tenets of the architecture. Simply put, the tenets express a belief about what is important or guide us on how to make decisions, which is vital in helping teams to remain focused on what is most important and move quickly as we scale and deliver on our promises. Most good tenets tell you the how and not the what.
If correctly defined, tenets help to resolve critical decisions by lowering the cognitive load, promoting consistent decision-making over time and across teams, and effectively educating all involved personnel on the thinking and approach to a problem, which, in turn, produces richer feedback. They are a relieving mechanism to gain velocity in delivering a product and a guiding structure to keep the process on track, avoiding derailment.
To define some tenets for the hybrid space, enterprises started thinking about the challenges involved in this space and how to address them. Any solution has to be a mechanism driving hybrid adoption and must demonstrate how it meets business demands. It needs to have a clear focus on the business problems that it tries to solve and uniquely describe the use cases it is best suited for.
Modern hybrid cloud infrastructure is starting to coalesce around some tenets. Here is a list of commonly identified beliefs for a hybrid cloud architecture:
- Inherently secure
- Reliable and available
- Simplicity of use
- Build once and deploy anywhere
- Leverage existing skill sets and tools
- Same pace of innovation as running in the cloud
Here, we can see the power of the tenets in driving decision-making. For example, if we are confronted with a security decision with multiple options on how to address the requirements, the tenet on simplicity of use comes into play to help rule out solutions with a high level of complexity. This can be a tremendous advantage within technical debates.
Tenets are fundamental to narrowing down our choices by creating these soft boundaries on how to select, in this example, candidate security solutions and approaches. Only then can the debate focus on what to do to utilize any selected choices. Without tenets, there could be turmoil within the process of deciding the ways of doing something, where any decision brings the risk of discomfort and disagreement from some unfavored party.
Now that we have our bedrock, let’s frame Outposts into the hybrid space to see how it fits:
- Inherently secure: AWS says security is paramount and that holds for every AWS product. Any AWS product is secure by default, and it does not grant full, unlimited, or public access unless strictly told to do so. AWS Outposts is no different.
Let's showcase some of the features that make AWS Outposts, like any AWS solution, fully furnished with security capabilities.
On the physical side, AWS Outposts features built-in tamper detection, powered by the AWS Nitro System. The Nitro System is a set of Peripheral Component Interconnect Express (PCIe) cards purpose-built to offload security, network, and storage tasks to dedicated ASICs, freeing up CPU cycles.
Outposts comes in an enclosed rack with a lockable door and features the Nitro Security Key (NSK), a removable physical device whose destruction cryptographically shreds the data stored on the Outpost – the equivalent of physical data destruction.
In the data protection realm, AWS Outposts comes with encryption at rest enabled by default. Amazon Elastic Block Store (EBS) volumes are encrypted using AWS Key Management Service (AWS KMS), where customers store their customer master keys (CMKs). For Outpost servers, the Amazon EC2 instance store is encrypted by default. Moreover, encryption is required. You cannot create unencrypted EBS volumes.
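Since encryption is mandatory, any EBS volume created on an Outpost must carry the `Encrypted` flag. The following sketch builds the keyword arguments one would pass to the EC2 `create_volume` API via boto3; the Outpost ARN shown is a placeholder, and the actual API call is left as a comment since it requires credentials:

```python
# Sketch: building a create_volume request for an Outpost, where EBS
# encryption at rest is enforced. The ARN below is a placeholder.

def build_outpost_volume_request(size_gib, az, outpost_arn, kms_key_id=None):
    """Return keyword arguments for ec2.create_volume targeting an Outpost."""
    params = {
        "AvailabilityZone": az,     # the AZ the Outpost is anchored to
        "Size": size_gib,           # volume size in GiB
        "OutpostArn": outpost_arn,  # routes the volume to the Outpost
        "Encrypted": True,          # encryption is mandatory on Outposts
    }
    if kms_key_id:                  # omit to use the default aws/ebs KMS key
        params["KmsKeyId"] = kms_key_id
    return params

params = build_outpost_volume_request(
    100,
    "us-east-1a",
    "arn:aws:outposts:us-east-1:123456789012:outpost/op-0123456789abcdef0",
)
# With credentials in place, one would then call:
#   boto3.client("ec2").create_volume(**params)
```

Note that omitting `KmsKeyId` falls back to the AWS-managed `aws/ebs` key; a customer-managed key is only needed when you want full control over key rotation and policy.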
All data traffic is protected by enforcing encryption in transit. Any communications between an Outpost and its AWS Region use an encrypted set of VPN connections called a service link. User data, management, and application traffic fall under the responsibility of the customer, but AWS strongly advises using an encryption protocol such as Transport Layer Security (TLS) to encrypt sensitive data in transit through the local gateway to the local network.
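For the application traffic that is the customer's responsibility, a minimal sketch of the advised approach using Python's standard `ssl` module follows; the hostname in the comment is a hypothetical on-premises endpoint:

```python
import ssl

# Sketch: a client-side TLS context for application traffic traversing
# the local gateway, following the advice to encrypt data in transit.
ctx = ssl.create_default_context()        # verifies certificates by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# ctx.wrap_socket(sock, server_hostname="app.example.internal") would then
# produce an encrypted channel to an on-premises endpoint.
```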
When an Amazon EC2 instance is stopped or terminated, any memory space allocated to it is scrubbed (as in, set to zero) before it can be reallocated to any other instance. The data blocks of storage are also reset.
In the access management realm, AWS Outposts leverages the capabilities of the IAM service, which handles authentication and authorization. By default, IAM users don't have permissions for AWS Outposts resources and operations; any permissions must be explicitly granted by attaching policies to the IAM users or groups that require them.
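As an illustration, a read-only identity-based policy for Outposts might look like the following sketch; the action names follow the `outposts:` namespace documented by AWS, and the policy is deliberately scoped to `"Resource": "*"` for simplicity:

```python
import json

# Sketch of an identity-based policy granting read-only Outposts access.
# Without an attached policy like this, IAM identities have no Outposts
# permissions at all.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "outposts:GetOutpost",
                "outposts:GetOutpostInstanceTypes",
                "outposts:ListOutposts",
                "outposts:ListSites",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```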
Being a managed service, AWS Outposts operates under the strict AWS global network security procedures described in the Overview of Security Processes whitepaper, available at https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf. API calls to AWS Outposts are encrypted with TLS, and all requests must be signed using an access key ID and a secret access key associated with an IAM principal.
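That request signing follows the AWS Signature Version 4 process. The core of it is a chain of HMAC-SHA256 operations that derives a short-lived signing key from the secret access key; the sketch below reproduces that derivation chain using only the standard library, with the well-known example secret from the AWS documentation and a made-up date:

```python
import hashlib
import hmac

# Sketch of the AWS Signature Version 4 signing-key derivation chain.
# Every signed request to the Outposts API uses a key derived this way
# from the secret access key, scoped to a date, Region, and service.

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date)  # e.g. "20240115"
    k_region = sign(k_date, region)                             # e.g. "us-east-1"
    k_service = sign(k_region, service)                         # e.g. "outposts"
    return sign(k_service, "aws4_request")                      # final signing key

key = derive_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                         "20240115", "us-east-1", "outposts")
```

In practice, SDKs such as boto3 perform this derivation (plus canonical request hashing) automatically; the sketch only shows why leaked secret keys are so dangerous – they are the root of every signature.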
- Reliable and available: Reliable, in short, characterizes something that can be depended and relied upon – something trustworthy. Available describes something ready to produce the desired effect when called upon. Using these dictionary definitions as lenses, let's glance at how AWS Outposts takes shape.
The rack comes with all the expected bells and whistles for high availability: redundancy can be identified across power components and networking gear. When it comes to compute, the strategy is derived from customer availability requirements. AWS recommends allocating additional capacity, especially to operate mission-critical applications. One strategy is to order the rack with built-in capacity to support N+1 instances for each instance family, enabling recovery and failover if there is an underlying host issue.
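The N+1 sizing above is simple arithmetic, but it is worth making explicit because it applies per instance family, not per rack. The hypothetical helper below (names are illustrative, not an AWS API) turns a planned instance count per family into an order quantity:

```python
# Hypothetical helper illustrating N+1 sizing on an Outpost: for each
# instance family you plan to run, order capacity for one extra instance
# so a failed host can be evacuated without losing capacity.

def n_plus_one(required: dict) -> dict:
    """Map each instance family to the capacity to order (N + 1)."""
    return {family: count + 1 for family, count in required.items()}

order = n_plus_one({"m5.xlarge": 4, "c5.2xlarge": 2})
# order -> {"m5.xlarge": 5, "c5.2xlarge": 3}
```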
When you create an Outpost, it is tied to a single Availability Zone in a given Region. The control plane lives in the Region and is fully managed by AWS. When designing for failure, to benefit from the multiple Availability Zones that exist in any AWS Region, the strategy should involve associating individual racks with distinct Availability Zones.
This approach can be stretched even further. Designs may take different power grids into consideration, and AWS recommends at least dual power sources in a given location. Designs may also spread racks across different physical locations, different buildings, or a metro area. Certainly, multiple network connections should be strictly enforced, optimally using different providers and ensuring they ride on different circuits to deliver communication ports at the customer site.
Lastly, you can leverage EC2 placement groups to make sure compute instances are placed onto distinct hosts in a given rack or onto distinct racks. This capability works in the same manner as in the Region, supporting cluster, partition, or spread placement groups, where a cluster strategy requires a single rack, and partition and spread strategies require multiple racks.
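As an illustration of the guarantee a spread strategy provides, the following is a toy model – not AWS code – of the placement logic: every instance lands on a distinct host, and placement fails once distinct hosts run out, analogous to hitting insufficient capacity on a real Outpost.

```python
def spread_place(instances: list, hosts: list) -> dict:
    """Toy model of a *spread* placement strategy: assign each instance to a
    distinct host, failing when there are more instances than hosts."""
    if len(instances) > len(hosts):
        # Analogous to an insufficient-capacity error on real hardware
        raise RuntimeError("InsufficientCapacity: more instances than distinct hosts")
    return {inst: host for inst, host in zip(instances, hosts)}


placement = spread_place(["web-1", "web-2", "web-3"],
                         ["host-a", "host-b", "host-c", "host-d"])
print(placement)  # every instance mapped to a different host
```

This is why AWS recommends ordering spare capacity: a spread group can only honor its distinct-host promise while unused hosts remain available.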
Naturally, the application architecture must offer some high availability or fault tolerance feature – otherwise, it will rely solely on server hardware redundancies and on the Mean Time Between Failures (MTBF) of mechanical and electronic components. Some third-party solutions promote the ability to replicate operating systems and their changes using tracking features such as Changed Block Tracking (CBT), but those must be tested and validated by the customer.
- The simplicity of use: The AWS Outposts experience is designed to be as frictionless and streamlined as possible. Starting with its standing as a managed service, customers don’t have to touch the hardware, they don’t need to configure anything on the rack, they don’t have to connect to serial ports or management modules, and they don’t have to replace spare parts. AWS is fully responsible for carrying out these tasks. Customers order the rack, and once it is deployed and paired with its parent Region, they begin consuming its resources. Period.
This may appear to be a simple concept, but it is huge. If you have ever used traditional hardware in your IT department, you certainly know everything that comes with it. The selection process, while comparable to the fun exercise of choosing food from a restaurant menu, frequently turns into a nightmare of comparing long lists of specifications and features, because procurement and purchasing departments always want apples-to-apples comparisons.
Okay, let’s say you survived the quest to find the best price for commoditized hardware that should, ideally, use the same components and deliver the same performance with a similar set of technical specifications – so much so that the options could be considered interchangeable. You order it, you get your estimated delivery date, it arrives on site, and now the fun begins.
The task of receiving boxes, unpacking and seating racks, racking individual components, setting up wires to bring power and cables to enable the network, and starting up, configuring, and provisioning the services is all yours unless you buy vendor services for that purpose. Again, let’s say it all goes well. Now, it is time to put it to the test, starting by carving out resources and interoperating with other components. All good – it worked. End of story? No, not at all!
Now, it is time to maintain the infrastructure. The significant overhead of patching and upgrading the equipment and managing against a complex compatibility matrix across various hardware and software components is risky, potentially disruptive, time-consuming, and often nerve-racking. As time goes on, the cycle continues, only to start all over again when the next hardware refresh period kicks in.
None of this is an issue with AWS Outposts, as you are always dealing with AWS. Procurement is tremendously simplified – the business has decided to move to the cloud using AWS technology, so there is nothing to compare or match. Delivery, installation, power-up, and startup are operated and supervised by AWS.
All the necessary information, preparation, and site readiness checks are organized in advance to make sure everything works neatly. Technicians are sent on site to verify that network configurations are correct and automated processes are working as expected. Service readiness can be achieved in a matter of hours once the Outpost is brought to life. By the time the bootstrap is finished and the logical Outpost ID is up and running, you can jump into the AWS Console and begin using your AWS Outposts rack. Don’t worry: AWS takes care of maintenance.
- Build once, deploy anywhere: We are now using Outposts. How does this solution measure against this tenet? As mentioned before, it is not AWS-like infrastructure – it is actual AWS infrastructure, downscaled. Its design, components, technical solutions, and capabilities are the same as those used inside AWS Regions. This is mind-boggling – by ordering AWS Outposts, you are effectively bringing a piece of AWS to your site, for your use, at your disposal.
That translates into the power of consistency. If you have ever used AWS in a Region, it is the same experience with AWS Outposts – the same console, the same concepts, the same configurations. If you provision EC2 in the Region, it is the same EC2 provisioned in the rack. If you leverage EC2 service capabilities, they will likely be available for use in the rack. A service that is available on Outposts has very similar capabilities to the same service operating in a Region.
While, understandably, a service running on Outposts will rarely have the full feature set available in the Region, it is truly the same foundation underneath. The only difference is the natural constraint of operating within the limits of the Outposts hardware. Within a Region, we have that sense of virtually unlimited hardware, with hundreds of AWS services available and connected over gigantic network pipes. Within the confines of an Outposts rack, other restrictions and considerations may apply.
Physical limitations aside, it is powerful to be able to build applications using AWS technologies and solutions present on Outposts and deploy them seamlessly to any rack on any location. No adaptations, changes, conversions, or adjustments are needed. It runs on a given Outposts SKU and it will run on any similar Outposts, anywhere, anytime. It runs on Outposts, so it will run even more comfortably in the Region.
- Leverage existing skill sets and tools: This tenet relates to productivity and the DevOps community. There are challenges associated with using different APIs and tools to build apps for the cloud and for on-premises environments, and then having to re-architect them to work in other environments. The question is how to build applications once, run them on-premises or in the cloud using the same APIs and tools, and use the wide range of popular services and APIs from the cloud for applications that run on-premises.
Enter AWS Outposts. You build on Outposts the same way you build in the Region. You use the same AWS Console to view and manage your resources, whether those resources and services are in the AWS cloud or on-premises. You can use the same AWS CLI and SDKs to run and deploy applications, and the same API endpoints to send requests to applications running in the AWS cloud.
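A small sketch can make the "same API" point tangible: launching an instance on an Outpost uses the very same EC2 RunInstances request as in the Region – the only change is which subnet you target. The AMI and subnet IDs below are placeholders, not real resources.

```python
def run_instances_params(subnet_id: str, instance_type: str = "m5.xlarge",
                         count: int = 1) -> dict:
    """Build a RunInstances request. The request shape is identical whether
    the subnet lives in a Regional Availability Zone or on an Outpost --
    only the SubnetId you point it at differs."""
    return {
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "SubnetId": subnet_id,
    }


# Same request for a Regional subnet and an Outpost subnet (both IDs hypothetical):
regional = run_instances_params("subnet-aaaa1111")
on_outpost = run_instances_params("subnet-bbbb2222")  # subnet created on the Outpost
# You would pass either dict unchanged to the SDK,
# e.g. boto3.client("ec2").run_instances(**on_outpost)
```

The symmetry is the point: no Outposts-specific API surface for the developer to learn, just a different subnet in the same VPC.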
AWS services commonly used by applications running in the cloud are also available when these are deployed on Outposts. Foundational components such as IAM policies and permissions, VPCs, security groups, and access control lists work the same way as in a Region.
API calls are automatically logged via AWS CloudTrail, and tools such as AWS CloudFormation, Amazon CloudWatch, AWS Elastic Beanstalk, and others can be used to run and manage applications on-premises just as they are used for cloud workloads today. If you already use CloudFormation, existing templates will also work with minor tweaks.
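To give a sense of how minor those tweaks are, here is a hypothetical CloudFormation fragment: the only Outposts-specific change is creating the subnet with the Outpost’s ARN, while the EC2 instance resource is written exactly as it would be for a Regional deployment. Resource names, the AMI ID, and the parameter are illustrative assumptions.

```yaml
# Hypothetical fragment -- assumes MyVpc and OutpostArnParam are defined elsewhere
Resources:
  OutpostSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVpc
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: us-east-1a      # the AZ the Outpost is anchored to
      OutpostArn: !Ref OutpostArnParam  # the only Outposts-specific tweak
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: m5.xlarge
      ImageId: ami-0123456789abcdef0    # placeholder AMI ID
      SubnetId: !Ref OutpostSubnet
```

Remove the OutpostArn property and the same template deploys into a Regional Availability Zone unchanged.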
Businesses are very sensitive to anything that impacts productivity. Time to market is vital, as the pace of innovation is tremendous. Competition is fierce, and developers must constantly launch new features and improve applications. Development cycles have shortened: multiple commits are made per day, resulting in multiple daily build cycles – often on the order of tens, but stretching to hundreds or even thousands.
Outposts can rightfully make the case for increased development productivity because it is a genuine portion of AWS technology reaching out to the customer premises for their private use – not a venture of multiple suppliers and vendors compounding a solution delivered to act as a bridge between both locations.
- Same pace of innovation as in the cloud: Here, we consider the challenges associated with supporting the business. Inefficiency in IT results in poor Return on Investment (ROI) for technology investments and slower business growth. As a result, on-premises environments lag behind the cloud when considering the pace of innovation. In a fast-paced, ever-changing environment, this can ultimately appear to be the death knell for a company.
How do we leverage new technology innovations for better and faster deployment of services and deeper business insights? How do we enable innovation in on-premises environments using services that exist in the cloud?
The answer can only be a solution at the edge that evolves in close parity with its parent cloud. AWS Outposts represents a breakthrough here: it is the only solution that employs the same foundations as its parent cloud and can evolve in lockstep with AWS’s latest data center advancements.
- A truly consistent hybrid experience: This is the statement coalescing the AWS vision for hybrid environments, which ultimately led to the development of AWS Outposts – and it excels in this aspect. If you look at other cloud providers and their proposed solutions to conquer the edge, consider how many can claim to be as seamless, comprehensive, and integrated as Outposts.
Are they designing and manufacturing their own hardware infrastructure? Do they use the exact same technologies as used in their data centers? Outposts can rightfully make these claims, and they hold on both counts. To date, AWS Outposts is an unmatched product when it comes to fulfilling the promise of the everywhere cloud.
Now that we have explored the concept of tenets and how they can help drive, among other things, the development of a product, let’s go ahead and see how AWS Outposts as a solution matches the tenets for the hybrid space with the help of some use cases.
Use cases for AWS Outposts
AWS has a method for product development called working backward. This is their approach to innovation and the stepping stones to creating a new solution. There is an excellent talk recorded during re:Invent 2020 about this mechanism, available at https://www.youtube.com/watch?v=aFdpBqmDpzM.
One step of this mechanism involves asking five questions about the customer, as follows:
- Who is the customer?
- What is the customer’s problem or opportunity?
- What is the most important customer benefit?
- How do you know what your customer needs or wants?
- How does the experience look?
The process is composed of several steps to assess the opportunity, propose solutions, validate with stakeholders, and finally, build the product roadmap. Permeating all this process is the concept of a use case. Simply put, this consists of exercising a hypothetical scenario to determine how a user interacting with the product can achieve a specific goal.
Use cases are so important because they are the North Star guiding product development. A product must be tailored to address the use case, meaning it will validate the scenario and effectively achieve the pursued goal. A very complex and elaborate product created without a clear purpose can be a display of technical prowess and craftsmanship, but it also carries the risk of failure because of the inability to describe what it is good for.
For those positioning AWS Outposts as a solution, this is one of the significant challenges. The first callout is that showing pictures of the products – either the rack or the server – with no detailed explanation beforehand will unequivocally trigger, in the minds of IT professionals peeking at AWS Outposts for the first time, the thought that “AWS is now selling hardware!” This is not the case, and it is a very common pitfall.
This is the first opening to state that AWS is not selling hardware. Start by saying the hardware does not belong to the customer – owning it is not an option. Their minds will then switch to thinking it is a hardware rental or leasing contract. This is the time to pull the fully managed service card: AWS takes care of absolutely everything, and the customer does not touch the hardware.
This is the part where customers switch to thinking about the legacy hosting model, believing that AWS is now supplying Hardware as a Service (HaaS). Certainly, there is a taste of this model in AWS Outposts, but the trump card to be played is simply this: you can’t just run your platform of choice on it – you only run AWS services. No commercial hypervisors and no bare metal servers on which to install your preferred operating system. It runs the AWS platform.
At this point, you have paved the way to go full throttle into what AWS Outposts aims for at its core: taking the AWS Outposts route effectively means you have decided to move to the cloud with AWS. You have opened the door for AWS to establish an embassy in your territory and work in close cooperation with you, and the crucial reason for this decision should be that you are already looking forward to using AWS services.
If, long term, AWS is not in your equation and you are just looking at AWS Outposts as a stopgap until the next business cycle – when you will re-evaluate cloud providers, look at their similar offerings, and try to make a strong price argument to justify migrating everything running on Outposts over to that other solution – then you are treating IT infrastructure as an item in a reverse auction. The cloud provider with the lowest bid wins.
As natural as it may sound – this is ultimately just market forces in action – it may also cast a shadow on these IT departments, implying that, from their point of view, cloud providers are all the same, with no real difference, and can therefore be treated as commodities. This view could not be more naïve; choosing a cloud provider is a decision requiring thorough consideration and an extensive amount of work assessing and evaluating services and capabilities, combined with a long-term view.
From this aspect, AWS does an excellent job at communicating the value proposition of Outposts and helping customers make an informed decision. AWS believes in working backward from the customer’s requirements and wants to be absolutely sure it understands who its customers are, what the customer’s problem is, and what the benefit is for the customer. However, the real deal here is that it goes both ways: AWS also wants to make sure the customer thoroughly understands what selecting AWS Outposts as their answer to the hybrid challenge means. Working backward in this way, AWS identified three primary use cases for Outposts:
- Latency-sensitive applications
- Local data processing
- Data residency requirements
Let’s examine each one in detail.
Latency-sensitive applications
The term latency refers to the time that elapses between a user request and the completion of that request. When a user, application, or system requests information from another system, data packets are sent over the network to a server or system for processing. Once a packet reaches the destination, it is processed, and a response is formed and transmitted back, completing the reply. This process happens many times over, even for a simple operation such as loading a web page in a browser.
There might be several network components involved in completing this process, and each one adds a tiny delay while forwarding the data packet. Depending on the number of simultaneous transmissions and user requests, traffic mounts up to the point where these delays become perceptible to the user in the form of wait times. This effect is even worse when the data packets need to traverse long geographical distances.
For the end user requesting information from a website, this translates into a long wait until the web page finally loads. However, some applications simply rely on low-latency networks to work predictably and smoothly – for them, this characteristic becomes a requirement. Some applications may even require ultra-low latency, measured in nanoseconds, whereas low latency is measured in milliseconds. Other factors to take into consideration are latency jitter (the variation in latency) and network congestion.
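These two quantities – latency and jitter – are easy to compute from measured round-trip samples. The following illustrative sketch, using only the Python standard library, summarizes a set of samples; the sample values are made up for the example.

```python
import statistics


def latency_profile(samples_ms: list) -> dict:
    """Summarize round-trip latency samples (milliseconds): the mean tells
    you how slow, the max shows the worst case, and the jitter (standard
    deviation of the samples) tells you how unpredictable."""
    return {
        "mean_ms": statistics.mean(samples_ms),
        "max_ms": max(samples_ms),
        "jitter_ms": statistics.pstdev(samples_ms),
    }


# Hypothetical samples: one congested outlier inflates both mean and jitter
profile = latency_profile([2.1, 2.2, 2.0, 9.8, 2.3])
print(profile)
```

For a latency-sensitive workload, the jitter figure often matters as much as the mean: a control loop tuned for 2 ms responses can misbehave badly on an occasional 10 ms spike.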
Good examples of applications and use cases that require low latency can be found across various industries: life sciences and healthcare, manufacturing automation, and media and entertainment. Use cases encompass content creation, real-time gaming, financial trading platforms, electronic design automation, and machine learning inference at the edge. Let’s cite a few:
- Healthcare: Surgical devices, Computerized Tomography (CT) scanners, and Linear Accelerators (LINACs)
- Life sciences: Molecular modeling applications such as GROMACS (https://www.gromacs.org), and 3D analysis software for life sciences and biomedical data
- Manufacturing: Medical device manufacturing, pharmaceutical and over-the-counter (OTC) manufacturing, integrations with IoT, a digital twin strategy (https://aws.amazon.com/iot-twinmaker/faqs/), Supervisory Control and Data Acquisition (SCADA), Distributed Control Systems (DCSs), Manufacturing Execution Systems (MESs), and engineering workstations
- Media and entertainment: Content creation and media distribution (streaming)
- Financial services: Next-generation trading and exchange platforms
Local data processing
Some use cases end up generating large datasets that need to be processed locally. Because of their size, migrating them to the cloud may be unfeasible: the back-and-forth of pre- and post-processing data between the cloud and the site can generate significant egress charges and can also lead to packet loss, resulting in data integrity problems.
Moreover, the time it would take may be unrealistic for the use case, effectively defeating its purpose. Additionally, customer requirements may dictate processing data on-premises, with the ability to easily move data to the cloud for long-term archiving, or workloads that need to remain available during a network outage.
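A quick back-of-envelope calculation shows why moving such datasets is often impractical. The link speed, dataset size, and $0.09/GB egress rate below are hypothetical figures chosen for illustration; actual pricing and throughput vary.

```python
def transfer_estimate(dataset_tb: float, link_gbps: float,
                      egress_usd_per_gb: float) -> tuple:
    """Back-of-envelope estimate of moving a dataset over a network link:
    returns (days_to_transfer, egress_cost_usd). Assumes the link is fully
    and exclusively utilized -- real transfers are slower."""
    gigabytes = dataset_tb * 1000            # using decimal TB -> GB
    seconds = (gigabytes * 8) / link_gbps    # GB -> gigabits, then / Gbps
    days = seconds / 86_400
    cost = gigabytes * egress_usd_per_gb
    return days, cost


# 500 TB over a dedicated 1 Gbps link at a hypothetical $0.09/GB egress rate:
days, cost = transfer_estimate(500, 1.0, 0.09)
print(f"{days:.1f} days, ${cost:,.0f}")  # → 46.3 days, $45,000
```

Over six weeks of continuous transfer and tens of thousands of dollars in egress for a single round trip makes the case for local processing on its own.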
The same types of industries mentioned before have use cases with this requirement:
- Healthcare: Remote surgery robots, computer vision (for medical image analysis), Picture Archiving and Communication Systems (PACS), Vendor Neutral Archiving (VNA) solutions, and taking emergency actions on patients carrying wearable devices capable of making decisions using inference at the edge
- Life sciences: Cryo-electron microscopes, genomic sequencers, molecular modeling with 3D visualization (requires GPUs), and Research and Development (R&D)
- Manufacturing: Smart manufacturing (https://aws.amazon.com/manufacturing/smart-factory/), site optimization, and predictive maintenance
Data residency requirements
Here, let’s briefly examine some of the terminology involved as well. Data residency is the requirement that all customer content be processed and stored in an IT system that remains within a specific locality’s borders. Data sovereignty is the control and governance over who can and cannot have legal access to data, its location, and its usage. Data residency requirements typically stem from the following drivers:
- The obligation to meet legal and regulatory demands, including data locality laws. This requirement can affect, for example, financial services, healthcare, oil and gas, and other highly regulated industries having to store all user and transaction data within the country’s borders, or public entities may be subject to a requirement that data produced by local and national government needs to be stored and processed in that country.
- The organization’s business and operating model, where the majority of activities take place within a certain country’s geography. In this scenario, the company falls under the financial rules of a national entity, which may require storing or processing some or all of its data within that nation state.
- There may be contractual requirements to store data in a particular country as well. Businesses may have to agree to keep the data of specific customers in a given jurisdiction to meet the data residency requirements of those clients.
- Lastly, it could be mandated for business or public sector entities that certain data must be stored or processed in a specified location due to corporate policy. This mandate could be partially or fully derived from one of the previous drivers.
As some use cases for storing sensitive data on AWS Outposts, we can cite patient records, medical device intellectual property (IP) – that is, copyrights, trademarks, and patents – government records, genomic data, and proprietary manufacturing information.
Beyond these, there are further ways to use the product that can propel, expedite, or catalyze the cloud as an option to run a workload, helping businesses strengthen their arguments for building a hybrid cloud with more strategic use cases. Let us look at some of them:
- Application migration to the cloud
- Application modernization
- Data center extension
- Edge computing
Let’s examine each one in detail.
Application migration to the cloud
This may not be immediately perceived as a potential use case, but it turns out to be a powerful one. Large migrations from on-premises data centers to AWS may involve a myriad of applications and can take several years. The risk involved is tremendous if the environments are significantly different, not to mention the operational burden to use multiple management tools, APIs, and interfaces.
AWS Outposts can significantly mitigate, if not eliminate, this problem. Because it is a portion of AWS, it provides a consistent operational environment across the hybrid cloud while migrating applications to the cloud, ensuring business continuity. Your workloads will not need tweaks and adjustments – if they run on Outposts, they will run in the Region just as well. The only point of attention is the strength and sensitivity of their ties to on-premises services.
This is achieved by employing a strategy called two-step migration. Instead of having to migrate applications and critical dependencies all at once, AWS Outposts offers a safe haven where you can begin migrating in steps to Outposts while keeping close contact with the on-premises applications. Customers can gradually move individual components onto Outposts and, once they are all together, easily move them to the Region.
Still in the migration realm, AWS offers a tool to expedite migrations called CloudEndure (https://www.cloudendure.com/). While it is also a disaster recovery tool, CloudEndure allows all migration paths: from on-premises servers (whether physical or virtual) to AWS Outposts, from AWS Regions to Outposts, from other clouds to Outposts, and even from Outposts to Outposts. Recently, AWS launched a new service for migrations called AWS Application Migration Service (https://aws.amazon.com/application-migration-service/), the next generation of CloudEndure Migration – which itself will remain available until the end of 2022.
Moreover, there is an Outposts flavor that runs VMware Cloud on AWS. VMware customers can easily and seamlessly interoperate and migrate their existing VMware vSphere workloads while benefiting from leveraging their investments on the VMware platform.
Application modernization
Modernizing while you are still on-premises may be the best approach for some workloads that are tightly coupled to the existing infrastructure. There are many opportunities in this area, such as moving legacy monolithic workloads to containers, modernizing mainframe applications, and enabling CI/CD and a DevSecOps approach. AWS Outposts offers the ability to run Amazon ECS or Amazon EKS on-premises to power this transformation.
Modernization with AWS Outposts can be the first step towards the bold objective of re-invention. At this stage, customers have AWS Lambda at their disposal and can explore serverless containers with AWS Fargate for both Amazon ECS and Amazon EKS.
Mainframe modernization stands out from the crowd because of the powerful driving forces behind it. Cost savings is the first and most obvious; the obsolescence of the platform and the business risk it represents are also in play; and the ever-growing shortage of skilled professionals to support this legacy is well known – and the reason for some amusing stories.
One particular driving force that normally falls off the radar is the constraint mainframes impose on businesses, preventing them from using modern technologies. Keeping the business locked into the limitations imposed by mainframes can be the poison that holds companies back from unlocking their market potential.
Data center extension
In this realm, the infrastructure of your cloud provider is treated as an extension of your on-premises infrastructure. This gives you the ability to support applications that need to run in your data center. There are four broad use cases:
- Cloud bursting: In this application deployment model, the workload runs primarily in on-premises infrastructure. If the demand for capacity increases, you branch out and AWS resources are utilized. There are two main reasons triggering the need for cloud bursting:
- Bursting for compute resources: You consume burst compute capacity on AWS through Amazon EC2 and the managed container services Amazon ECS, Amazon EKS, and AWS Fargate.
- Bursting for storage: In this case, you can integrate your applications with Amazon S3 APIs and leverage AWS Storage Gateway. This offering enables on-premises workloads to use AWS cloud storage, exposed to on-premises systems as network file shares (File Gateway) for file storage, iSCSI targets (Volume Gateway) for block storage, or virtual tape libraries (Tape Gateway).
- Backup and disaster recovery: Customers can leverage the power of object storage with Amazon S3 and the data bridging strategies presented by AWS Storage Gateway, back up their applications with AWS Backup, and move or synchronize data between sites and AWS with AWS DataSync. For disaster recovery strategies based on file data hosted on-premises that needs to be transferred to the AWS cloud, you can leverage AWS Transfer for Secure File Transfer Protocol (SFTP).
- Distributed data processing: Certain applications can be deployed with functionality split between on-premises data centers and the AWS cloud. In this scenario, we normally expect the low-latency or local data processing components to stay close to the local network on-premises and other components delivering additional functionality to reside on AWS. In the cloud portion, you can benefit from a myriad of services such as massive asynchronous data processing, analytics, compliance, long-term archiving, and machine learning-based inference. These capabilities are powered by services such as AWS Storage Gateway, AWS Backup, AWS DataSync, AWS Transfer Family, Amazon Kinesis Data Firehose, and Amazon Managed Streaming for Apache Kafka (Amazon MSK), which act as enablers to use the imported data as the source for analytics, machine learning, serverless, and containers.
- Geographic expansion: AWS is constantly expanding and evaluating the feasibility of deploying new Regions across the globe, but it’s unrealistic to expect Regions to be deployed to the tune of hundreds or thousands of locations. You may need to deploy an application in a place where you are still unable to leverage an AWS Region. There might also be reasons why workloads need to stay close to your end users, such as low latency, data sovereignty, local data processing, or compliance. Traditional approaches such as deploying your own physical infrastructure can be challenging, costly, or constrained by legal requirements and local laws, but AWS Outposts can be instrumental in fulfilling this use case if it is available in that geography. This information is easily accessible on the product FAQ page (https://aws.amazon.com/outposts/rack/faqs/).
Edge computing
Certain environments such as factories, mines, ships, and windmills may have edge computing needs. Outposts addresses this use case with its smallest form factor, Outposts servers – these scenarios are unlikely to be addressed with the Outposts rack. However, the requirement of connectivity to a parent Region is still there. When the requirements specifically involve harsh conditions, disconnected operation, or air-gapped environments, customers can use AWS Snowball Edge devices. These ruggedized devices are capable of operating while fully disconnected and use Amazon EC2 compute resources to perform analytics and machine learning and to run traditional IT workloads at the edge. Data can be preprocessed locally and then transferred to the AWS cloud for subsequent advanced analysis and durable retention.
Another edge computing offering is AWS IoT Greengrass, which you can run on Outposts servers. Edge applications generate data that may need to be consumed locally to identify events and trigger a near real-time response from onsite equipment and devices. With AWS IoT Greengrass, you can deploy Lambda functions to core devices using resources such as cameras, serial ports, or GPUs. Applications on these devices can quickly retrieve and process local data while remaining operational, withstanding fluctuations in connectivity to the cloud. You can also optimize the cost of running apps deployed at the edge by using AWS IoT Greengrass to analyze data locally before forwarding it to the cloud.
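To make the local-first pattern concrete, here is a hypothetical Lambda-style handler of the kind you might deploy to a Greengrass core device: it acts locally on anomalous sensor readings and forwards only a compact summary toward the cloud. The event shape, field names, and threshold are assumptions for the sketch, not a Greengrass API contract.

```python
import json


def edge_handler(event: dict) -> dict:
    """Hypothetical edge handler: respond locally to anomalies in near real
    time, and keep the cloud-bound payload small."""
    readings = event["readings"]          # e.g. temperatures in degrees C
    anomalies = [r for r in readings if r > event["threshold"]]
    return {
        # Local action taken immediately, even if the cloud link is down
        "local_action": "shutdown" if anomalies else "none",
        # Compact summary forwarded when connectivity allows
        "forward_to_cloud": json.dumps({
            "count": len(readings),
            "anomalies": anomalies,
        }),
    }


result = edge_handler({"readings": [21.5, 22.0, 98.7], "threshold": 80.0})
print(result["local_action"])  # → shutdown
```

The design choice – decide locally, summarize for the cloud – is what keeps both response times and egress traffic low at the edge.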
Closing this use cases section, it is worth highlighting the uniqueness of AWS Outposts. This is a product designed with one clear statement in mind: it must be, as much as possible, a portion of an AWS data center stretching out to a customer facility. This paradigm drove product development and will drive product evolution. Anyone using AWS Outposts expects nothing other than AWS technology and this expertise being applied to the product so it can become increasingly valuable.
If we look at the pace of the innovation of AWS, how innovative and visionary their teams are, and how resolute AWS is in advancing with speed and strength without being careless or resting on its laurels, we can safely say that the best is yet to come.
In this chapter, you learned about the rise of the hybrid IT space and how it transitioned from an approach expanding out from the data center toward the cloud to a movement where the cloud is an expanding stronghold with precedence over data centers.
Next, you learned about AWS Outposts and how it was born to seamlessly bridge the AWS cloud with customer data centers. Then, you were introduced to the concept of tenets, and we defined the tenets for a hybrid cloud solution.
We contrasted AWS Outposts against these tenets to assess how it compares and highlighted use cases for this product. Now, it’s time to take a peek at the real thing with a tour of an AWS Outposts rack. With the knowledge you gained in this chapter, you have the foundation to understand why AWS Outposts was designed and engineered the way it was, which you will see in greater detail in the next chapter.