Introduction to Cloud Transformation
Innovation, efficiency, and profitability are among the main tenets that enable businesses to adapt to the changing needs of the world. Amazon, Microsoft, and Apple are organizations that continue to strive for innovation and have managed to reinvent themselves at multiple turning points in their journeys. Netflix reinvented its business and made the streaming experience more enjoyable by building platforms that can withstand disruptions. Technology plays a crucial role in helping organizations increase their innovation and agility.
Cloud transformation has become a top priority for organizations that want to explore, improve their day-to-day operations, and succeed in their businesses in these constantly changing times. Businesses around the world are embracing the cloud to supercharge their growth and to innovate, run, scale, deliver, and optimize quickly and efficiently while mitigating business risks. Cloud transformation often poses barriers that are difficult to break down, and it requires a clear vision of where to start.
In this chapter, we will cover the following topics:
- Introduction to the cloud
- Key characteristics of cloud computing
- Motivators for cloud adoption
- Cloud service providers at a glance – AWS, GCP, Azure, and more
- Service models (IaaS, PaaS, and SaaS)
- Exploring the deployment models (private, public, hybrid, multi, and community)
Introduction to the cloud
Many aspects of our everyday life have been transformed by ever-evolving digital solutions. Technology is changing rapidly, and industries are adapting at an equally rapid pace. The cloud has become the dominant term in technology in the past few years, and its impact on businesses has been resounding. Before we learn more about cloud transformation, let’s look at the cloud and what cloud computing is.
Cloud transformation is the step-by-step process of moving your workloads from local servers to the cloud. It is a process that brings technology and organizational processes together to accelerate the development, implementation, and delivery of new services.
The cloud refers to a collection of software, servers, storage, databases, networking, analytics, and intelligence services that are accessed via the internet instead of residing locally on your computer or device. These services are delivered through data centers located across the globe and linked through the internet.
The following diagram depicts the use of cloud computing and the accessibility of various devices via the cloud:
Figure 1.1 – Cloud computing
The origins of cloud computing
The history of cloud computing began almost 70 years ago, when corporations and large organizations began exploring computers and mainframe systems. In the 1950s and 1960s, these were a reality only for organizations with sufficient financial resources. Computers were large, expensive machines that required human operators to interact with them through mainframe terminals to process complex data.
These early terminals had little computing power of their own; the bulk of the processing was done on the shared physical mainframe. This time-sharing model of computing is the predecessor of cloud computing.
In the 1970s, IBM introduced hardware-assisted virtualization, which allowed organizations to run many virtual servers on a single physical server at a given time. This was a milestone for mainframe owners, as each virtual machine could run its own operating system. Virtualization has come a long way since then, and virtual machines remain a deployment option for many organizations building and deploying applications. Today’s cloud computing model would not have been possible without the concept of virtualization.
The concept of virtualization evolved further with the internet in the 1990s, as businesses started providing virtual private networks as a paid service. This momentum led to the development of a foundational block for modern cloud computing.
The term cloud computing signifies that the boundaries of computing follow economic rationale rather than technical limits alone.
Virtualization is the process of running a virtual instance by creating an abstraction layer over dedicated amounts of CPU, memory, and storage that are borrowed from a physical host computer.
In the following diagram, each VM runs an operating system (OS) of choice with its own software, libraries, and so on that are needed for its applications. These VM silos run on a hypervisor, which in turn runs on top of the bare-metal environment:
Figure 1.2 – Virtualization
This virtualization technique forms the foundational component of cloud computing, where a hypervisor runs on a real machine and creates virtual operating systems on that particular machine.
In 2006, Amazon launched Amazon Web Services (AWS), the first cloud provider to offer its infrastructure as online services to external customers. In 2007, IBM, Google, and several other interested parties such as Carnegie Mellon University, MIT, Stanford University, the University of Maryland, and the University of California at Berkeley joined forces and developed research projects. Through these projects, they realized that computer experiments could be conducted faster and more cheaply by renting virtual computers than by using their own hardware, programs, and applications. The same year also saw the birth of Netflix’s video streaming service, which uses the cloud and has revolutionized the practice of binge-watching.
AWS states that “Cloud computing is the on-demand delivery of IT resources over the internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis from a cloud provider”.
This completes our introduction to the cloud, the history of cloud computing, and its evolution. In the next section, we will look at the key characteristics of cloud computing to understand how it benefits businesses in this new computing era.
Key characteristics of cloud computing
The key characteristics of cloud computing are as follows:
- On-demand self-service
- Wide range of network access
- Multi-tenant model and resource pooling
- Rapid elasticity
- PAYG model
- Measured service and reporting
In traditional enterprise IT settings, companies had to build the required infrastructure to run their applications locally; that is, on-premises. This meant that enterprises had to set up server hardware, software licenses, integration capabilities, and IT staff to support and manage these infrastructure components. Because the software resides within an organization’s premises, enterprises are responsible for the security of their data and for vulnerability management, which entails training IT staff to be aware of security vulnerabilities and to install updates regularly and promptly.
Cloud computing is different from traditional IT hosting services in that consumers don’t have to own the required infrastructure to run their applications. With the cloud, a third-party provider hosts and maintains all of this for you. Provisioning, configuring, and managing the infrastructure is automated in the cloud, which reduces overhead and lets teams make decisions about capacity and performance in real time.
Cloud automation is the process of automating tasks such as discovering, provisioning, configuring, scaling, deploying, monitoring, and backing up every component within the cloud infrastructure in real time. This involves streamlining tasks without human interaction and caters to the changing needs of your business.
On-demand self-service
The on-demand characteristic makes it possible for consumers to use cloud resources as and when required. The cloud provider caters to demand in real time, enabling consumers to decide when and how much to subscribe to. Consumers retain full control over this to help meet their evolving needs.
The self-service aspect allows customers to procure and access the services they want instantaneously. Cloud providers facilitate this via simple user portals to make this quick and easy. For example, a cloud consumer can request a new virtual machine and expect it to be provisioned and running within a few minutes. On-premises procurement of the same typically takes 90-120 days and also requires accurate forecasting to purchase the required RAM specifications and the associated hardware for a given business use case.
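The self-service request flow can be sketched as a simple model. This is an illustrative simulation only, with made-up names; real providers expose this capability through web consoles, CLIs, and SDKs:

```python
import time
import uuid

def provision_vm(cpu_cores: int, memory_gb: int) -> dict:
    """Simulate an on-demand self-service VM request.

    A real cloud API call returns within seconds, and the instance is
    typically running within minutes -- versus the 90-120 day
    on-premises procurement cycle described above.
    """
    return {
        "id": f"vm-{uuid.uuid4().hex[:8]}",
        "cpu_cores": cpu_cores,
        "memory_gb": memory_gb,
        "state": "running",
        "requested_at": time.time(),
    }

# The consumer decides when and how much to provision, with no
# human operator in the loop on the provider side.
vm = provision_vm(cpu_cores=2, memory_gb=8)
```

The key point the sketch illustrates is that the consumer, not the provider's staff, initiates and sizes the request.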
Wide range of network access
Global reach is an essential tenet that makes cloud computing accessible and convenient. Consumers can access the cloud resources they need from anywhere, and from any device, over the network through standard mechanisms such as authentication and authorization. The availability of these resources from thin or thick client platforms such as tablets, PCs, smartphones, netbooks, personal digital assistants, laptops, and more helps the cloud reach every possible end user.
Multi-tenant model and resource pooling
Multi-tenancy is one of the foundational aspects that makes cloud services practical. To understand multi-tenancy, think of the safe-deposit boxes located in banks, which are used to store your valuable possessions and documents. These assets are stored in isolated and secure vaults, even though they’re stored in the same location. Bank customers don’t have access to each other’s deposit boxes and are not even aware of one another. Customers rent these boxes for as long as they need them and use security mechanisms for identification and access. In cloud computing, the term multi-tenancy has a broader meaning, where a single instance of a piece of software runs on a server and serves multiple tenants.
Multi-tenancy is a software architecture in which a single instance of a piece of software runs on a server and serves multiple, distinct tenants. It is closely related to shared hosting, where server resources are divided among end users.
Figure 1.3 – Single-tenancy versus multi-tenancy
As an example of a multi-tenancy model, imagine an end user uploading content to social media application(s) from multiple devices.
Building on the multi-tenant model, cloud resources are shared through resource pooling. The intention behind resource pooling is to give consumers a way to draw from what appears to be an infinite pool of resources on demand. This creates a sense of immediate availability, without consumers being bound by any of the limitations of physical or virtual dependencies.
Resource pooling is a strategy in which cloud-based applications dynamically provision, scale, and adjust resources drawn from a shared pool.
Resource pooling can be used for services that support data, storage, compute, and many other processing technologies, thereby facilitating dynamic provisioning and scaling. This enables on-demand self-service, where consumers can use these services and change their level of usage as their needs evolve. Resource pooling, coupled with automation, replaces the following mechanisms:
- Traditional mechanisms
- Labor-intensive mechanisms
With new strategies that rely on increasingly powerful virtual networks and data handling technologies, cloud providers can provide an abstraction for resource administration, thereby enhancing the consumer experience of leveraging cloud resources.
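A minimal sketch of resource pooling: tenants draw capacity from, and return it to, a shared pool, which gives each tenant the impression of on-demand availability while the provider keeps overall utilization high. All names and numbers here are hypothetical:

```python
class ResourcePool:
    """Shared pool of compute units divided among multiple tenants."""

    def __init__(self, total_units: int):
        self.total_units = total_units
        self.allocations: dict[str, int] = {}

    def available(self) -> int:
        return self.total_units - sum(self.allocations.values())

    def allocate(self, tenant: str, units: int) -> bool:
        # Grant the request only if the shared pool can cover it.
        if units > self.available():
            return False
        self.allocations[tenant] = self.allocations.get(tenant, 0) + units
        return True

    def release(self, tenant: str) -> None:
        # Returned capacity immediately becomes available to other tenants.
        self.allocations.pop(tenant, None)

pool = ResourcePool(total_units=100)
pool.allocate("tenant-a", 60)
pool.allocate("tenant-b", 30)
```

In a real provider the pool is vastly larger than any single tenant's demand, which is what sustains the "infinite pool" illusion described above.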
Rapid elasticity
Elasticity is one of the most important factors, and experts cite it as the major selling point for businesses migrating away from their local data centers. End users can take advantage of seamless provisioning because of this capability in the cloud.
Elasticity is the ability of a cloud system to provision and release resources automatically, so that the available capacity matches the current demand as closely as possible.
Another term that is used in the cloud is scalability. Let’s look at what it is and how it differs from cloud elasticity. Although the fundamental theme of both concepts is adaptability, they differ in terms of their functions.
Scalability versus Elasticity
Scalability is a strategic resource allocation operation, whereas elasticity is a tactical resource allocation operation. Elasticity is a fundamental characteristic of cloud computing and involves taking advantage of the scalable nature of a specific system.
For example, take an online retail website that is experiencing a sudden burst of popularity and whose transaction volume is peaking. To handle the workload, the website can leverage the cloud’s rapid elasticity by adding resources to meet the transaction spikes. When the workload subsides, the services can be taken down just as quickly as they were added. You only pay for the services that you use at any given point.
Automatically commissioning and decommissioning resources is inherent to cloud elasticity and can be used to meet the scale-out and scale-in demands of businesses, thereby helping them manage and maintain their operating expenditure (OpEx) without having to put in any upfront capital expenditure (CapEx) or being locked into long-term contracts.
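The commission/decommission cycle can be sketched as a toy autoscaling rule: add instances when demand exceeds the fleet's capacity and remove them when it falls. The capacity figure and thresholds are illustrative, not any provider's actual policy:

```python
import math

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float = 100.0,
                      min_instances: int = 1) -> int:
    """Return the instance count needed to serve the current load."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    # Never drop below a floor that keeps the service reachable.
    return max(needed, min_instances)

# A traffic spike scales the fleet out...
peak = desired_instances(950)
# ...and when the spike subsides, the fleet scales back in, so you
# only pay for what is actually running at each point in time.
quiet = desired_instances(120)
```

Real autoscalers add damping (cooldown periods, step policies) so the fleet does not thrash, but the core idea is this simple feedback loop.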
PAYG model
The pay-per-use or Pay As You Go (PAYG) pricing model is a major highlight that is geared toward an economic model for organizations and end users. The per-second billing plans provided by cloud providers make it easy for businesses to shift from CapEx to OpEx. This means businesses no longer need to worry about the upfront capital spent on on-premises infrastructure and the capacity planning needed to meet ongoing demand. Traditional self-provisioning processes are often prone to extreme inefficiency and waste due to a complex supply chain model that requires close coordination between decision-makers and stakeholders.
However, cloud-based architectures and their inherent design models allow you to scale up your applications on the cloud during peak traffic and scale back down during periods where they’re not needed as much, without having to worry about annual contracts or long-term license termination fees.
What are CapEx and OpEx?
CapEx involves funds that have been incurred by businesses to acquire and upgrade a company’s fixed assets. This includes expenditures toward setting up the technology, the required hardware and software to run the services, and more.
OpEx involves the expenses that have been incurred by businesses through the course of their normal business operations. Such expenses include property maintenance, inventory costs, funds allocated for research and development, and more.
Businesses traditionally witness heavy OpEx when it comes to service and software procurement and management, tasks that are often expensive and inefficient. That model also leads to complex payment structures and makes it difficult for businesses to vary their usage. With the PAYG model, you pay charges for the resources and user-based services you consume, rather than for an entire infrastructure. Once you stop using a service, there is typically no termination fee, and billing for that service stops immediately.
Let’s look at an example of how the PAYG model is applied to cloud resources. A user provisioning a cloud compute instance is generally billed for the time that the instance is used. You can add or remove the compute capacity based on your application’s demands and only pay for what you used by the second, depending on the cloud provider you chose.
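The per-second billing in the example above can be worked out directly. The hourly rate here is made up for illustration; actual prices vary by provider, instance type, and region:

```python
def compute_charge(seconds_used: int, rate_per_hour: float) -> float:
    """Bill only for the seconds an instance actually ran."""
    rate_per_second = rate_per_hour / 3600
    return round(seconds_used * rate_per_second, 6)

# An instance at a hypothetical $0.09/hour, run for 30 minutes,
# costs half the hourly rate -- and nothing once it is stopped.
charge = compute_charge(seconds_used=1800, rate_per_hour=0.09)
```

Contrast this with on-premises hardware, which incurs its full cost whether it runs at 100% utilization or sits idle.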
Measured service and reporting
The ability to measure cloud service usage is an important characteristic to ensure optimum usage and resource spending. This characteristic is key for both cloud providers and end users as they can measure and report on what services have been used and their purpose.
NIST describes this characteristic as measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability appropriate to the type of service, and resource usage can be monitored, controlled, and reported, providing transparency for both the provider and the consumer.
The cloud provider’s billing component depends mainly on the capability to measure customers’ usage and calculate billing invoices accordingly. Cloud providers can also understand overall consumption and potentially improve their infrastructure’s and services’ processing speeds and bandwidth.
Businesses get the visibility and transparency they need to track usage and costs across large enterprises, which is limited in traditional IT environments. This is especially helpful for usage accounting, reporting, chargebacks, and monitoring for their key IT stakeholders. In addition to the billing aspect, rapid elasticity and resource pooling feed into this characteristic, where end users can leverage monitoring and trigger automation to scale their resources.
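Metering and chargeback can be sketched as aggregating raw usage records into a per-team cost report. The record shape, team names, and rates below are all hypothetical:

```python
from collections import defaultdict

# Raw metering records emitted by the platform: (team, service, units).
usage_records = [
    ("web", "compute-hours", 120.0),
    ("web", "storage-gb", 500.0),
    ("data", "compute-hours", 300.0),
]

# Illustrative unit prices for each metered service.
rates = {"compute-hours": 0.05, "storage-gb": 0.02}

def chargeback_report(records, rates):
    """Roll raw usage up into per-team costs for reporting."""
    report = defaultdict(float)
    for team, service, units in records:
        report[team] += units * rates[service]
    return dict(report)

report = chargeback_report(usage_records, rates)
```

This is the mechanism that gives IT stakeholders the usage accounting and chargeback visibility described above.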
In this section, we learned about the essential characteristics of cloud computing: on-demand self-service, elasticity, resource pooling, the PAYG model, measured service, CapEx/OpEx, and reporting capabilities. In the next section, we will look at what makes businesses inclined to move to the cloud.
Understanding the motivators for cloud adoption
The cloud has numerous offerings that help organizations run their workloads on the cloud. By embracing cloud adoption, companies can accelerate their business transformations and expansions. Identifying the motivations for operating on the cloud helps companies evaluate the necessity of migrating. Let’s look at some motivation-driven outcomes that enterprises can expect upon performing a cloud migration.
Reliability and resiliency
The cloud’s infrastructure runs on virtual servers that are built to handle substantial changes in computing power and data volume. This helps cloud consumers build applications that run without interruption. The cloud offers durable, redundant, pre-configured, and distributed resources that can be accessed from a variety of devices, such as laptops, smartphones, PCs, and more. The sophistication of this infrastructure allows you to build heterogeneous, multi-layer architectures that, when built right, can withstand failures caused by unanticipated configuration changes or natural disasters.
Having high levels of real-time monitoring and reporting capabilities in cloud environments to guarantee service-level agreements (SLAs) is nearly impossible for traditional data centers to build without substantial costs. This characteristic makes it easy for businesses to build robust and resilient applications with resource guarantees.
Service-level agreement (SLA)
An SLA is a measurement parameter (often expressed as a percentage) that defines a cloud service’s expected performance and often serves as an agreement between the cloud service provider and the cloud consumer.
Note that cloud resiliency still requires businesses to build their critical systems with the right design, architecture, monitoring, orchestration, reporting, and governance to continue to run the businesses in the event of a disruption. However, with the cloud’s underlying infrastructure, you can assess, evaluate, plan, implement, and manage your critical workloads and drive resiliency for your businesses as per your recovery time objective (RTO) and recovery point objective (RPO).
What are RTO and RPO?
RTO is a business continuity metric that measures how long a given application can stop working before the business suffers unacceptable damage, including the time spent restoring the application and its data.
RPO is a business continuity metric that measures how much data loss a business can tolerate, expressed as the time between the last recovery point and a disruption.
Increased security
Cloud offerings, when used the right way with the proper security controls, can bring increased security to cloud consumers. Cloud service providers architect their infrastructure according to security standards and best practices to provide secure computing environments. They build their data center and network architectures with security-specific tools and controls that are designed for high security and that tightly restrict access to your data.
Carbon footprint reduction
Cloud computing continues to play a key role in reducing global energy consumption rates. Cloud computing is becoming an increasingly popular option for replacing on-premises server rooms and closets, which often lack the operational practices to consume energy efficiently, causing environmental impacts. The cloud enables organizations to share resources globally, resulting in higher efficiency and resource utilization compared to small private organizations that depend on standalone data centers.
As environment and climate awareness grows around the world, cloud service providers (CSPs) are continuously embracing and building their core physical infrastructure assets, which feed off of renewable energy. As the consumption of renewable energy increases, the overall carbon intensity will steadily decrease, resulting in energy transitions that help with the global climate and clean energy challenges.
At the macro level, cloud data centers invest in newer, more efficient equipment to achieve extremely high virtualization ratios, which are less likely to occur for typical enterprise data centers. The equipment’s power consumption and cooling characteristics are an ever-evolving exercise that also helps reduce carbon emissions.
Improved optimization and efficiency
Cost savings is one of the key motivators for companies thinking of moving to the cloud. Setup and maintenance costs are usually reduced significantly by adopting cloud applications and infrastructure. Surveys on cost savings and driving factors indicate that companies could save up to 50% on IT costs, cutting down on in-house equipment and the ongoing costs of maintaining IT departments with growing capacity needs.
Let’s discuss a few factors that can drive cost savings:
- Underlying hardware costs: With cloud computing, you don’t need to invest in in-house equipment. This is a major cost cut for companies, which no longer have to make upfront investments to acquire hardware and build on-premises server rooms or data centers. You can also maximize real estate and office space, which further cuts costs.
- IT operation costs: You don’t have to employ in-house staff to repair or replace equipment, as this responsibility shifts from you to the cloud vendor when you migrate. This is a major shift from capital expenditure to operational expenditure. You can free up your staff and diversify your workforce, who can work from anywhere with an internet connection.
- Hardware maintenance: Labor and maintenance costs are significant when building and maintaining an in-house data center. Ongoing upgrades and repairs are no longer your responsibility, given that your data is stored offsite. This task falls to the vendors, reducing installation times from weeks or months to hours.
Moving to the cloud alone doesn’t maximize cost savings. You have to establish a cadence of monitoring cloud spending with the available tools, shutting down idle resources and rightsizing others, to realize the full savings.
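The monitoring cadence described above can be sketched as a simple review policy: flag instances whose average utilization falls below a threshold as candidates to shut down or rightsize. The threshold, instance names, and utilization figures are illustrative only:

```python
def cost_review(instances, idle_threshold: float = 10.0):
    """Split a fleet into instances to keep and shutdown/rightsize
    candidates, based on average CPU utilization (percent)."""
    keep, reclaim = [], []
    for name, avg_cpu in instances:
        # Anything nearly idle is wasted PAYG spend.
        (reclaim if avg_cpu < idle_threshold else keep).append(name)
    return keep, reclaim

fleet = [("api-1", 62.0), ("batch-1", 3.5), ("dev-sandbox", 0.4)]
keep, reclaim = cost_review(fleet)
```

Real cost tools layer scheduling, budgets, and recommendations on top, but a periodic sweep like this captures the core habit.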
Faster innovation and business agility
- Faster time to market: Cloud-native platforms offer end-to-end automation that enables you to release code to production many times per day. As a result, businesses can bring new business use cases to market about 40% faster.
- Accelerates the innovation of business offerings: Many popular cloud service providers have hundreds of native services in domains such as networking, databases, compute, machine learning (ML), security, storage, artificial intelligence (AI), business analytics, and many more. These can serve almost any industry, especially automotive, advertising and marketing, consumer packaged goods, education, energy, financial services, game tech, government, healthcare, and life sciences. Cloud offers a wide range of options for you to build, deploy, and host any application and this empowers companies to innovate rapidly.
In this section, we looked at why many businesses are moving to the cloud. We learned about the various factors that help them reduce IT operation costs and increase their business agility. Next, we’ll learn about some of the leading cloud service providers and how their infrastructure is configured.
Cloud service providers at a glance – AWS, GCP, Azure, and more
When it comes to the on-demand availability and accessibility of cloud computing, CSPs offer these resources in many forms and sizes to businesses and individuals. Cloud consumers can rent access to any form of computing resource, from applications to storage, through these CSPs.
What is a CSP?
A CSP is a third party that offers on-demand cloud computing in the form of computing resources to other businesses or individuals without having them manage anything directly.
Some of the prominent cloud service providers across the worldwide cloud market are AWS, Microsoft Azure, Google Cloud, IBM Cloud, Alibaba Cloud, Salesforce, SAP, Rackspace Cloud, and VMware.
Let’s take a look at a few of these cloud service providers and see what their offerings look like.
Amazon Web Services (AWS)
Launched in 2006, AWS is a cloud service provider that aims to offer a highly reliable and scalable platform. Over the years, AWS has strived to provide services that span geographical regions across the world. With over 170 fully featured services, AWS is the world’s most comprehensive and broadly adopted cloud platform.
Its service offerings feature across technical categories such as compute, databases, infrastructure management, data management, migration, networking, application development, security, AI, ML, and more.
As of 2022, AWS cloud spans 26 geographic regions and 84 availability zones around the world:
Regions and availability zones in AWS
Figure 1.4 – Magic Quadrant for Cloud Infrastructure and Platform Services
Each CSP has terminology to indicate the cloud regions for the consumer’s needs based on technical and regulatory considerations.
Microsoft Azure
Azure’s global infrastructure is made up of two key components – physical infrastructure and connective network components. The physical component comprises 200+ physical data centers, arranged into regions and linked by one of the largest interconnected networks on the planet (source: https://docs.microsoft.com/en-us/azure/availability-zones/az-overview).
Regions and availability zones in Microsoft
Unique physical locations within a region are called availability zones. Each zone is made up of one or more data centers.
Google Cloud Platform
Launched in 2008, Google Cloud Platform (GCP) is a suite of over 100 products and services offered by Google. Its core service offerings include compute, networking, storage and databases, AI, big data, identity and security, and more.
As of 2022, Google Cloud spans over 28 cloud regions, 85 zones, and 146 network edge locations across 200+ countries and territories.
Regions and zones in Google
A region is a specific geographical location that houses physical assets such as virtual machines, hard disk drives, and more.
Each region is a collection of zones that are isolated from each other within the region.
Alibaba Cloud
Founded in 2009, Alibaba Cloud offers a wide range of high-performance cloud products, including large-scale computing, networking, databases, storage, security, Internet of Things (IoT), media services, and more.
As of 2022, Alibaba Cloud operates around the world with over 78 availability zones in 24 regions.
Regions and zones in Alibaba
In this section, we looked at some of the popular companies that are managing cloud computing through their cloud technology offerings. Next, we will provide an overview of the cloud service models and discuss the level of management each model provides.
Exploring the service models – SaaS, PaaS, and IaaS
As you navigate your path to the cloud, there are key decisions that you must make that revolve around how much you want to manage yourself and how much you want your service provider to manage. These cloud service models can be put into three categories that match your current needs so that you’re prepared for the future:
- Infrastructure as a Service (IaaS): IaaS is a service model that offers consumers on-demand access to virtualized compute, storage, and networking.
- Platform as a Service (PaaS): PaaS is a service model that offers consumers on-demand access to a ready-to-use cloud-native platform for developing, running, hosting, managing, and maintaining applications.
- Software as a Service (SaaS): SaaS is a service model that offers consumers on-demand access to ready-to-use software for cloud-hosted applications.
The following diagram shows what you manage for each type of model:
Figure 1.5 – Cloud models
Let’s discuss each of these in more detail.
Infrastructure as a Service (IaaS)
Infrastructure services enable companies to acquire resources on demand and as needed. This gives users cloud-based alternatives to buying the required hardware, which is often expensive and labor-intensive. The main offerings include computing resources such as storage, servers, and networking. The key characteristics of IaaS include the following:
- Highly flexible and highly scalable
- On-demand offerings that can be accessed via the internet
- Highly redundant, as data lives in the cloud
- Zero management needed for virtualization tasks
- Ease of use
Figure 1.6 – The IaaS model
Common use cases for IaaS include the following:
- High-performance computing: Performing groundbreaking complex calculations for batch processing workloads, media transcoding, scientific modeling, and gaming requires a high-performance computing architecture with clustered compute servers and data storage. IaaS can be leveraged to take advantage of its rapid scalability and support for networked compute resources.
- Disaster recovery and backup solutions: Building a disaster recovery plan on-premises involves a complex infrastructure that requires fixed capital expenses. With IaaS, this can be achieved in a few steps; all you need to do is set up the required infrastructure services for disaster recovery and backup solutions.
- Real-time data analytics: The ever-increasing requirement of applications to analyze data in real time requires decisions to be made in seconds. Collecting real-time data and processing it can be a time-consuming and expensive development endeavor.
IaaS can be used to manage, store, and analyze big data and handle large workloads while easily incorporating business intelligence tools. Business insights can be extracted from this raw data, and trends can be predicted effectively.
You should consider the following factors if you wish to choose an IaaS provider:
- Security: Protecting sensitive data, standardizing identity management procedures, and evaluating compliance standards are some of the security procedures that can dramatically impact your security posture when you’re using an IaaS model. It’s important to make sure that the IaaS provider is protected against security risks.
- Pricing model: In addition to the initial expenses, make sure that you understand your IaaS provider’s pricing structure and the monitoring tools and mechanisms they provide for tracking your spending. Sometimes, the initial pricing may convince you to migrate from your on-premises infrastructure, but laying out a long-term plan with expected savings will enable you to plan resource provisioning effectively.
- SLA and support process: Knowing your vendor’s SLAs to ensure any infrastructure issues are resolved promptly is crucial for your businesses to run without interruptions. Understanding the level of support they provide once you become a paying customer is crucial.
- Integration capabilities: When you’re migrating to an IaaS model, it is important to understand how your current workflows can be incorporated into the cloud without major customizations. Without proper integration, your products may suffer from additional development and administrative efforts, which will often translate into higher costs and application support.
- Latency requirements: Analyzing which IaaS provider offers the lowest latency to your customers is also important. In addition, if your data must physically reside in a specific country, make sure that the IaaS provider has facilities in that country. The following questions can help you address this:
- Where are this IaaS provider’s closest data centers?
- How many data centers are in the region I’m interested in?
- Is there a region/data center/facility in the country that my data needs to live in?
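The long-term cost planning mentioned under the pricing model factor can be sketched with a simple break-even calculation. All the figures below are hypothetical, and real pricing models are far more nuanced (reserved capacity, tiered storage, egress fees, and so on):

```python
def cumulative_onprem_cost(months, capex, monthly_opex):
    """Fixed upfront hardware spend plus steady operating costs."""
    return capex + monthly_opex * months

def cumulative_iaas_cost(months, monthly_usage_cost):
    """Pure pay-as-you-go: no upfront capital expense."""
    return monthly_usage_cost * months

def breakeven_month(capex, monthly_opex, monthly_usage_cost, horizon=120):
    """First month (if any) at which IaaS spend overtakes on-premises spend."""
    for m in range(1, horizon + 1):
        if cumulative_iaas_cost(m, monthly_usage_cost) > \
                cumulative_onprem_cost(m, capex, monthly_opex):
            return m
    return None

# Hypothetical figures: $60,000 of hardware vs. $2,500/month of IaaS usage
print(breakeven_month(capex=60_000, monthly_opex=500, monthly_usage_cost=2_500))  # 31
```

A model like this, fed with your provider’s actual rates, helps turn “the cloud looks cheaper” into a dated projection you can plan provisioning against.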
In summary, IaaS provides general-purpose compute resources to support customer-facing websites and web applications, as well as workloads that are heavy on data, analytics, and warehousing. IaaS supports a diverse set of workloads, and in later chapters, we will look into the emerging compute models that are positioned for modern application architectures such as microservices.
Platform as a Service (PaaS)
Platform services enable developers to build applications by providing hardware and software tools that can be accessed over the internet. Businesses have the freedom to incorporate special software components while they are designing and creating applications. The cloud’s inherent characteristics enable these components to be highly scalable and available. PaaS offerings include application life cycle management tools and integrated development environments (IDEs), so you can select the ones that best fit your needs.
An important characteristic of PaaS is that it lets you manage how different tenants are isolated. So, if the load on one tenant becomes high, the demand is distributed to the right instances of the applications. This function enables high scaling and availability. Developers can build applications anywhere in the world and don’t have to worry about operating systems, storage, the underlying infrastructure, or software updates:
Figure 1.7 – The PaaS model

The key characteristics of the PaaS model include the following:
- Built on virtualization technology
- Scalable and highly available
- Quicker churn in coding and testing
- Pluggable customizations
- Available to multiple developers at the same time
- Easy integration of web services and databases
- Lower capital commitment
- Easy application management
With PaaS, you get support for application development and operating software, automatic deployment to IaaS infrastructure, automated handling of runbook scenarios, and many other improvements, including end-to-end application monitoring. Let’s look at why companies usually implement PaaS:
Many forward-thinking companies, from large businesses to small start-ups and everything in between, want to create an open source-like environment where everybody inside the company has access to the code of all other projects and can reuse it. They hope that using common code and services will increase their productivity and innovation.
With PaaS, developer efficiency can be tremendously enhanced by leveraging common tools so that they can realize the benefits of the open source technology approach.
A key success criterion is moving as many teams and projects to the new PaaS as quickly as possible. This ensures that there’s sufficient mass within the company to create the desired innovation and shared development benefits such as the following:
- Cut down on costs: Many companies invest large amounts of capital to run existing legacy applications where they would like to reduce the ongoing spending. To run mainframe-like applications, companies need to invest in special hardware or build the virtualization themselves. Legacy applications cannot be changed because of resource constraints such as the original owner being gone or out of the business. With PaaS offerings, you can reduce specialized management, increase the ability to share resources, and cut the costs of operating these applications by 50-80%.
- Increase reusability: Many companies are looking for ways to build several APIs or applications as quickly as possible. PaaS provides the required middleware so that you have a reusability paradigm where developers can design and build common features that can be reused.
- Faster time to market: Due to its cost-effectiveness and access to state-of-the-art resources, companies of any size can build robust systems at a faster pace and accelerate their launch times.
The idea behind reusability is to use something that already exists rather than investing money and resources into creating something new. This is especially critical for new companies that are starting from scratch and it is even more important for their businesses to do things right. Reusability is an important aspect, where the developers can reuse existing APIs, existing services, or components that will help you to start with low costs and develop faster. There are a few factors to consider when you’re choosing the right PaaS provider for you:
- Developer support: Learning whether your PaaS vendor provides platforms that support all major programming languages, easy-to-use templates to build and deploy applications, and support for relational and non-relational databases is recommended. Researching how well-equipped your vendor is when it comes to streamlining developer processes and deployment procedures is also helpful before locking in on a provider.
- Compliance and Regulation: Vendors that adhere to the industry standards and regulatory requirements will be helpful for you to be in alignment with the best practices in the industry.
- Reliability and Performance: Your potential PaaS provider should implement disaster recovery and fault-tolerance techniques to meet the recovery time objective (RTO) and recovery point objective (RPO) of your business systems. An ideal provider will have strategies and processes to handle both planned and unplanned events.
- Data Security: Data is the heart of any application, and providers need to guarantee SLAs and follow support guidelines that ensure data security, confidentiality, and privacy.
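The reusability idea described earlier can be illustrated with a small sketch: a common component is written once and reused by two hypothetical applications instead of being re-implemented by each team:

```python
# A shared, reusable component published once on the platform...
def normalize_email(raw: str) -> str:
    """Common validation/normalization logic shared across teams."""
    email = raw.strip().lower()
    if "@" not in email:
        raise ValueError(f"invalid email: {raw!r}")
    return email

# ...and reused by two hypothetical applications instead of being rewritten.
def signup_service(raw_email: str) -> dict:
    return {"action": "signup", "email": normalize_email(raw_email)}

def newsletter_service(raw_email: str) -> dict:
    return {"action": "subscribe", "email": normalize_email(raw_email)}

print(signup_service("  Alice@Example.COM "))
# {'action': 'signup', 'email': 'alice@example.com'}
```

On a real PaaS, the shared piece would typically be a published library, an internal API, or a managed service rather than a single function, but the economics are the same: the logic is built, tested, and fixed once.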
In summary, the PaaS market is competitive and offers unique solutions for customers of any size or vertical to build customized applications faster.
Software as a service (SaaS)
SaaS is the most common category of cloud computing; it enables users to leverage software through the web or APIs. SaaS relieves users from downloading or installing applications on the local devices they use to collaborate on their projects. Storing and analyzing the data is also done via a remote cloud network. SaaS solutions are hosted centrally in the cloud and can be accessed from anywhere, at any time.
The vendor is fully responsible for the consumer’s software experience. Users are not responsible for hardware management, nor for the functionality of these vendor-hosted applications. Many popular solutions, such as Salesforce (customer relationship management software), Canva (graphics), and Slack (collaboration and messaging), are examples of SaaS. Let’s look at some of the benefits of the SaaS model:
- Reduced risk: Companies don’t have to build the software that meets their needs; they can try out the SaaS products for a zero or low fee. This characteristic enables them to explore and evaluate the best product that can be used with their existing system with no financial risk.
- Increased productivity: SaaS products, as opposed to the traditional model, can enable users to provision their resources and start using them in a few hours. This cuts down on the installation, configuration, and deployment time so that the users can focus more on building their applications and innovate faster.
- Increased data protection: Data is routinely stored in the cloud, and all software maintenance in the form of updates, version upgrades, and support is taken care of by the service provider. As a result, data is protected from loss due to hardware failures.
- Customers don’t manage, install, or upgrade the software: The provider takes care of making upgrades available to their customers.
- Easy to access: Applications are accessed via internet-connected devices. Analytic tools for data reporting and intelligence tools, which are often expensive and difficult to configure, are easily available.
- Flexible pay model: Customers pay to gain access to the software and applications. Start-ups and small businesses can easily leverage SaaS applications, given that they are easy to set up and no capital or expertise is required to build these tools and applications. There are a variety of business scenarios where you can take advantage of the SaaS model and its self-provisioned nature:
- Applications with web and mobile access: Companies that are building applications that require web and mobile access can take advantage of the SaaS technology.
- Short-term projects: Companies often use SaaS technology when they’re building applications that are short-term projects and that are not required all year long.
- Startups and small businesses: You will find that SaaS comes in handy when there is not much time, capital, or expertise available to build your applications and host them on-premises.
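Since SaaS products are typically consumed through web APIs, the following sketch shows how a client might prepare an authenticated REST call. The endpoint and token are placeholders, not a real service, and real SaaS SDKs usually wrap these details for you:

```python
import json
import urllib.request

def build_saas_request(base_url: str, resource: str, token: str, payload: dict):
    """Prepare an authenticated JSON request for a (hypothetical) SaaS REST API."""
    req = urllib.request.Request(
        url=f"{base_url}/{resource}",
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
    )
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Content-Type", "application/json")
    return req

# Placeholder endpoint and token for illustration only
req = build_saas_request("https://api.example-saas.com/v1", "messages",
                         "TOKEN", {"text": "hello"})
print(req.full_url, req.get_method())
# https://api.example-saas.com/v1/messages POST
```

Note that the client needs no knowledge of where or how the software runs; the vendor operates everything behind the API.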
While choosing the right SaaS vendor, make sure that you consider the following factors:
- Limited customizations: SaaS applications are available out of the box with limited customizations from the vendor. If you have a strong requirement that is dependent on the SaaS application, you may want to consider building some customizations to work around this limitation.
- Limited compliance: Vendors own the SaaS applications and continually release new features and fixes. However, you lose a degree of control when it comes to meeting ever-increasing legal, regulatory, and compliance requirements and keeping your organization compliant. Make sure that you ask your SaaS provider about uptime, resiliency, and any critical compliance requirements ahead of signing up.
- Vendor viability: Acquisitions are commonplace for the SaaS market and often result in terminations of services with very short notice. Ensure that you agree on an SLA with an uptime guarantee and also have a thorough understanding of the duration of your contract.
In this section, we looked at various cloud computing models and their use cases, benefits, and limitations. Next, we will discuss the cloud deployment models in detail.
Exploring the deployment models – public, private, hybrid, multi, and community
As the cloud increasingly becomes the default option for many companies, you must choose the cloud model that is most suitable for your needs. Choosing a cloud environment type and a deployment model that aligns with your business goals is a process that you should dive into before you start your cloud journey.
Cloud deployment model
The cloud deployment model is defined by a combination of deployment types that control parameters such as the accessibility, location, and proprietorship of the infrastructure, network, and storage size.
When it comes to cloud deployment models, there are five main types:
- The public cloud
- The private cloud
- The hybrid cloud
- The multi-cloud
- The community cloud

We will discuss each of them in detail in the following subsections.
The public cloud
The public cloud deployment model is accessible by anyone and is the most commonly used model. The main feature of this deployment type is that you don’t own or manage any hardware. The service providers manage the server infrastructure for you, they administer the resources and maintain the hardware, and you are charged on a pay-per-use basis in most cases. Data is created and stored on the servers, and these servers are shared between all the consumers:
Figure 1.8 – The public cloud

Let’s look at some of the advantages of the public cloud:
- Easy to set up: Most CSPs have intuitive and easy-to-use portals to set up resources.
- No infrastructure maintenance: The CSP is responsible for maintaining the underlying infrastructure.
- Elasticity: It's easy to acquire or release resources to meet your business requirements.
- Highly available: The extensive ecosystem of your provider’s resources provides the required controls to run your workloads with improved uptime.
- Cost-effective: There’s a PAYG model for the services that you use and no upfront investments to purchase hardware or software.
However, there are some factors to consider with the public cloud:
- Security and risk mitigation: While the CSPs implement many mechanisms to make the cloud highly secure, your applications and data in the public cloud are only as secure as the controls you put in place. Many CSPs offer native encryption, automation, access control, orchestration, and endpoint security mechanisms to manage risk effectively.
- Prone to large-scale infrastructure events: Cloud service providers strive for high availability, but public clouds have suffered outages in the past that caused huge losses. You must do your research before deciding on the cloud computing provider for your applications that need to have the highest uptime. Irrespective of the cloud provider you choose, it is important to have an enterprise-wide incident management and remediation platform strategy to handle events effectively.
- Standard, “one-size-fits-all” features: Many cloud service providers offer a standard set of features that cater to most companies. However, you will need to consider additional customizations or workarounds if you have applications that require complex features the CSPs do not provide.

Some primary examples of public cloud providers include Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, and Google Cloud.
The private cloud
The private cloud, as its name suggests, is a dedicated cloud model where a specific business or company owns the private cloud. While the architecture of the public and private cloud is similar, the difference is in the way you own and manage the hardware. Most commonly, the hardware will be dedicated to you and you don’t share it with any other users outside your company.
The service provider will provide you with an abstraction layer for all the hardware. Here, you will be able to add new hardware to your cloud but will not be responsible for configuring it, given the semi-automatic nature of the provisioning process. You may choose this model when you have stringent security and compliance restrictions regarding the nature of the applications that you may want to run and are ready to pay high costs for the dedicated setup:
Figure 1.9 – The private cloud

The private cloud model has the following advantages:
- Increased security: Cloud access through private and secure network links, along with the native antivirus, firewall protection, and encryption mechanisms, makes the private cloud environment more secure.
- Increased regulatory compliance: Due to its security and control benefits, the private cloud can help address regulatory compliance hosting requirements.
- More flexible infrastructure model: Many organizations that are moving their workloads from legacy on-premises to the cloud find it difficult to meet the customization requirements that support their applications. The infrastructure of the private cloud can be configured to provide services and support for such stringent requirements.
However, the private cloud also has some disadvantages:
- Increased costs: The private cloud model can be more expensive than the public cloud because of the infrastructure expenses you incur.
- Maintenance and deployment: Continuous deployment and maintenance require additional setup and staff, which can be time-consuming.
- Limited remote access: The private cloud has limited remote access, so mobile users may not always be able to connect to the cloud whenever they want.

Some examples of CSPs that provide the private cloud include Amazon, IBM, Cisco, Dell, Red Hat, Rackspace, Microsoft Azure, Red Hat OpenStack, and VMware.
The hybrid cloud
The hybrid model is a combination of on-premises, private cloud, and/or public cloud services that lets you get value from all the features of all the models. This model allows you to mix and match the other models’ capabilities to best suit your business requirements.
The hybrid cloud deployment model facilitates data and application portability to safeguard and control your assets strategically. Being able to balance multiple deployment models not only safeguards the controls but helps maximize the benefits of cost and resource utilization. Many organizations are evaluating this as a transitional model that eases you into the public cloud over a longer period:
Figure 1.10 – The hybrid cloud

Let’s look at some of the advantages of the hybrid model:
- Improved speed: The mobility between cloud models gives you greater speed and agility for innovation and speed to market. You don’t have to be limited to your private on-premises environment and can expand your workload quickly to test, prototype, and launch new solutions.
- Business continuity: The hybrid model helps reduce potential downtime and impacts in the event of a failure or a disaster. You get improved business continuity and can continue with business operations when you opt for the hybrid model as a backup option during interruptions.
- Improved security and privacy: Due to security restrictions or data protection requirements, some companies cannot operate solely in the public cloud. This model provides an improved security platform for mission-critical applications, keeping sensitive data on-premises while you run the remaining applications in the public cloud.
- Improved risk management: You get more control over your data and improved security, which means you can reduce data exposure. You get to standardize cloud storage and implement stronger security controls to manage risk effectively.
However, the hybrid model also has some drawbacks:
- Managing multiple vendors and platforms: You will have to keep track of and manage multiple vendors and platforms to maintain effective computing environments. Having runbooks, workflows, and processes, with a good team understanding and effective coordination of vendors, is a must to make sure your environments run without interruptions.
- Hardware costs: The cost that’s associated with hardware procurement, setup, maintenance, and installation of the hybrid cloud infrastructure is high. Organizations will have to prepare for this upfront cost, as well as train their IT staff to cope with the cloud and on-premises expenses.
- Security: On-premises environments and the cloud require different approaches to securing your applications. Using a blend of public, private, and/or on-premises environments makes it harder to maintain a consistent security posture and stay free of intrusion risks.
- Lack of visibility: The hybrid model increases the number of environments that the operations teams need to keep track of and maintain a clear view of. Management becomes difficult if you don’t have a good understanding of the current infrastructure and operations, which can lead to potential issues being missed.
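The hybrid placement decision, keeping sensitive data on the private side while everything else runs in the public cloud, can be sketched as a simple policy function. The classification labels and rules are hypothetical; real policies involve compliance catalogs and many more dimensions:

```python
def placement(workload: dict) -> str:
    """Decide where a workload runs under a simple (hypothetical) hybrid policy:
    sensitive or regulated data stays on the private side; everything else
    can burst to the public cloud."""
    if workload.get("data_classification") in {"confidential", "regulated"}:
        return "private"
    return "public"

workloads = [
    {"name": "patient-records", "data_classification": "regulated"},
    {"name": "marketing-site", "data_classification": "public"},
]
print({w["name"]: placement(w) for w in workloads})
# {'patient-records': 'private', 'marketing-site': 'public'}
```

Encoding the policy as code, rather than tribal knowledge, also helps with the visibility problem: every placement decision becomes auditable.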
The multi-cloud

Cloud providers recommend many design patterns to achieve high availability for applications running in the cloud. When you use more than one cloud provider at a time to achieve high availability, you are using the multi-cloud deployment model. Companies may also use the multi-cloud option when they need one specific service from CSP X and another specific service from CSP Y.
The multi-cloud approach involves adopting a mixture of services from multiple cloud providers, sharing workloads between them, and picking services that meet specific business needs to achieve greater flexibility and reliability:
Figure 1.11 – Multi-cloud
The following are the advantages that businesses can reap while using the multi-cloud approach:
- Multiple best-in-class cloud providers: Each cloud provider has its strengths and weaknesses when it comes to providing features that you need to use for your applications. The foremost benefit of the multi-cloud strategy is the ability to take advantage of the unique best-in-class services that each cloud provider offers. You get to pick and enable your developers to focus on innovation and unblock any limitations that a specific cloud provider may have.
- Avoid vendor lock-ins: Many businesses worry about getting locked into a specific cloud provider or infrastructure and pricing model when using a single cloud provider strategy. You have greater flexibility in choosing the multi-cloud to leverage the best of the services that the cloud providers offer. You get to pick the vendor that has a specialized and evolved set of services.
- Risk mitigation and enhanced resiliency: Continuous availability is a key aspect for any business that runs mission-critical workloads. You get the option to run your applications and store data on multiple clouds to fall back on and restore in the event of a service outage.
- Flexibility and scalability: Multiple cloud vendors invest in a higher amount of space, security, and protection to offer a perfect place for your businesses to process and store information. With the right expertise at hand and having a good multi-cloud operations runbook, you can achieve greater scalability, which allows your applications to scale the storage or compute up or down based on the ongoing demand.
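The resiliency benefit can be illustrated with a minimal failover sketch: try providers in priority order and fall back when one is unavailable. The provider handlers below are stand-ins, not real cloud SDK calls:

```python
def call_with_failover(providers, request):
    """Try each provider in priority order; fall back on failure.
    `providers` maps a name to a callable that may raise during an outage."""
    errors = {}
    for name, handler in providers.items():
        try:
            return name, handler(request)
        except Exception as exc:  # in practice, catch provider-specific errors
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {errors}")

# Hypothetical handlers standing in for two cloud providers' APIs
def provider_a(req):
    raise TimeoutError("provider A outage")

def provider_b(req):
    return f"served {req!r} from B"

used, result = call_with_failover({"A": provider_a, "B": provider_b}, "GET /status")
print(used, result)  # B served 'GET /status' from B
```

Real multi-cloud failover sits behind DNS, load balancers, or service meshes rather than an application-level loop, but the priority-ordered fallback logic is the same.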
Although there are many advantages of using multiple cloud vendors, building and managing a multi-cloud architecture can have its downsides:
- Building the expertise: The need for cloud computing expertise is growing at a rapid pace. Many companies have trouble recruiting cloud professionals who have the knowledge and skill set for even a single cloud provider, and it is an even bigger challenge to find network specialists, security experts, architects, and engineers with expertise in multiple clouds. Within your organization, you will need to plan out how you will recruit the right workforce and develop their skill set on multiple cloud platforms to build, secure, manage, and operate your applications across multiple clouds.
- Cost tracking and optimization: Each cloud provider has a specialized set of tools and reporting platforms to help you manage the financial costs of your resources running on their cloud. Consolidating these costs and having a good handle on their pricing model to navigate through the math and pricing structures is recommended when you’re operating on multiple clouds.
- Increased complexity on operations: Many companies find moving to the cloud a long and daunting task. In addition to that, managing workloads on multiple clouds may add to the complexity if it’s not planned well. With your applications and their resources spread across multiple clouds, operational management such as patching, monitoring, logging, and backing up your resources are all details that you have to consider when planning.
- Security risks: It is important to understand the blast radius of security attacks when it comes to applications that are deployed on multiple clouds. Considering how well you configure, manage, alert, log, and respond to such security breaches must be accounted for. Many companies use third-party tools to manage their approaches on encryption keys, identity and access controls, and resource policies.
- Compliance: Establishing a shared responsibility model with multiple cloud providers can be a daunting task. Managing vulnerabilities and solving compliance challenges consistently across providers adds further complexity.
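The cost-tracking challenge mentioned above can be sketched as a simple aggregation over per-provider billing exports. The export format below is made up for illustration; each provider has its own billing data schema and dedicated cost tools:

```python
from collections import defaultdict

def consolidate_costs(billing_exports):
    """Merge per-provider billing exports (hypothetical format) into a single
    view: total spend per provider and per service category."""
    by_provider = defaultdict(float)
    by_category = defaultdict(float)
    for provider, line_items in billing_exports.items():
        for category, amount in line_items:
            by_provider[provider] += amount
            by_category[category] += amount
    return dict(by_provider), dict(by_category)

exports = {
    "cloud_x": [("compute", 1200.0), ("storage", 300.0)],
    "cloud_y": [("compute", 800.0), ("networking", 150.0)],
}
providers, categories = consolidate_costs(exports)
print(providers)   # {'cloud_x': 1500.0, 'cloud_y': 950.0}
print(categories)  # {'compute': 2000.0, 'storage': 300.0, 'networking': 150.0}
```

In practice, the hard part is normalizing each provider's categories and billing granularity into one taxonomy before a merge like this becomes meaningful.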
The community cloud
The community cloud, although less popular than the previously discussed models, is a hybrid form of the private cloud that has a similar architecture and the ability to use the same security and privacy controls. Organizations get to run their workloads on a shared platform where multiple consumers can work on projects and applications that belong to specific industry segments. Industries such as healthcare, financial services, government, research, education, and even large-scale manufacturing are ideal candidates for community cloud environments:
Figure 1.12 – Community cloud
Let’s look at some of the advantages of adopting a community cloud strategy:
- Convenience and control: The community cloud offers the same flexibility as a public cloud environment and has the same security levels and privacy as a private cloud. This makes it accessible for a specific set of organizations and gives you much more confidence in the platform, as you can govern your applications with industry-tailored flexibility.
- Security and privacy: The community deployments are similar to that of the private cloud, where you can control security at more granular levels. This ensures that secure transactions align with regulatory protocols.
- Availability and reliability: Community clouds provide the same level of services to ensure the availability of your data and applications at all times. Replicating your data and applications in multiple locations enables you to implement redundant infrastructure for your critical applications where availability and reliability are topmost priorities.
However, there are a few concerns regarding the community cloud approach that you will want to identify and evaluate before adopting this model:
- Limited storage and bandwidth: Data storage and bandwidth are shared among other organizations, which limits the community members to a finite amount of data storage and bandwidth.
- Not a “one-size-fits-all” model: The community cloud approach is a new model that has recently started evolving as more and more businesses are finding a fit for their use cases. Small, medium, and large businesses must still evaluate this on a case-by-case basis, given that many public cloud providers are offering services that cater to the requirements of every business.
Comparison between the different cloud deployment models
| Criteria | Public | Private | Hybrid | Multi | Community |
| --- | --- | --- | --- | --- | --- |
| Scalability and flexibility | High | Limited by owned capacity | High | Highest | Moderate (shared capacity) |
| Upfront cost | Low (pay-as-you-go) | High | Moderate to high | Low | Moderate (shared) |
| Security and control | Shared responsibility | Highest | High for sensitive workloads | Varies per provider | High, industry-tailored |
| Setup and maintenance | Handled by the CSP | Handled by you | Split across environments | Complex, multiple vendors | Shared among the community |

Table 1.1 – Deployment model comparison matrix
The preceding table can be used as a cheat sheet as you evaluate various deployment models and determine which model will be best suited for your business’s requirements.
In this chapter, we introduced the cloud and some of its concepts, such as on-demand self-service, resource pooling, multi-tenancy, elasticity, and scalability. We learned about the history of the cloud and discussed the key motivators for businesses to move to the cloud. We categorized the cloud service models and deployment models before looking at some of the commonly used cloud vendors and learning about their infrastructure in detail.
After that, we discussed when we should use a specific cloud model and the factors to consider while choosing a specific model. We familiarized ourselves with concepts such as public, private, hybrid, multi, and community cloud models while looking at each model’s benefits and additional factors to consider.
These concepts should have given you an in-depth understanding for the next chapter, where we will focus on cloud migration fundamentals and the different phases of cloud migration.
For more information on cloud fundamentals and various cloud providers, please read the following articles:
- Amazon Web Services – https://aws.amazon.com/about-aws/
- Global Infrastructure of AWS – https://aws.amazon.com/about-aws/global-infrastructure/
- Microsoft Azure – https://azure.microsoft.com/en-us/
- Global Infrastructure of Azure – https://azure.microsoft.com/en-us/global-infrastructure/
- Availability Zones with Microsoft – https://docs.microsoft.com/en-us/azure/availability-zones/az-overview
- Google Cloud – https://cloud.google.com/
- Google Cloud Overview – https://cloud.google.com/docs/overview