
Cloud Computing Demystified for Aspiring Professionals

By David Santana
About this book
If you want to upskill yourself in cloud computing to thrive in the IT industry, then you’ve come to the right place. Cloud Computing Demystified for Aspiring Professionals helps you master the cloud computing essentials and the key technologies offered by cloud service providers that you need to succeed in a cloud-centric job role. The book begins with an overview of the transformation from traditional infrastructure to modern-day cloud computing, along with the various types and models of cloud computing. You’ll learn how to implement secure virtual networks, virtual machines, and data warehouse resources, including the data lake services used in big data analytics, as well as when to use SQL and NoSQL databases and how to build microservices using multi-cloud Kubernetes services across AWS, Microsoft Azure, and Google Cloud. You'll also get step-by-step demonstrations of infrastructure, platform, and software cloud services, along with optimization recommendations from certified industry experts, delivered through hands-on tutorials, self-assessment questions, and real-world case studies. By the end of this book, you'll be ready to implement standardized cloud computing concepts, services, and best practices in your workplace.
Publication date:
March 2023
Publisher
Packt
Pages
474
ISBN
9781803243313

 

Introduction to Cloud Computing

As organizations today continue to move their customer-facing services and internal line-of-business (LOB) applications to the cloud, it becomes imperative that IT professionals, developers, and enthusiasts understand the essentials and tenets on which cloud computing was formed. Practitioners should understand and explore the core advantages ingrained in cloud computing that empower users of any skill level to deploy, configure, and manage cloud-hosted applications, modern services, and core infrastructure resources confidently and optimally.

This chapter will lead you on a historical journey from traditional infrastructures to the rise of cloud computing as we know it, and then describe cloud computing and its various advantages over traditional technology infrastructures in detail.

In this chapter, we’re going to cover the following topics:

  • Genesis
  • Monolithic on-premises technology
  • The advent of cloud computing
  • Cloud computing explored
  • Advantages of cloud computing

The fact that you’re reading this implies you know that cloud computing is not some fad on its way out, to be discarded to the annals of time. It is also an excellent modern technology to master, however improbable that may seem given its ambiguity and cosmic scale. I may be embellishing, but it is not a far cry from the truth to say that there are no cloud gurus in the literal sense. It is an exciting, cutting-edge, service-oriented technology that will expand your existing capabilities, and mastering cloud computing by attaining industry-standard certifications leads to sustainable careers with the promise of future growth in some of the most prestigious organizations in the private and public sectors. I speak from experience, and when I tell you it’s feasible, it is, because like you, I too was seeking but not finding, and my hope is that you find your niche in modern technology by embracing the cloud.

 

Genesis

In this section, you’ll learn about the history of the key technologies used in cloud computing and how they derive from the traditional mainframes used in data centers. Lastly, you will come to understand a technology architecture pattern known as distributed systems.

I will describe Advanced Research Projects Agency Network (ARPANET), Multics, and mainframes, and introduce virtualization.

“Ever since the dawn of civilization, people have not been content to see events as unconnected and inexplicable. They have craved an understanding of the underlying order in the world. Today we still yearn to know why we are here and where we came from. Humanity’s deepest desire for knowledge is justification enough for our continuing quest. And our goal is nothing less than a complete description of the universe we live in.” (Stephen Hawking)

Tomorrow’s technological wonders arose from the turbulent ’60s, arguably an era of counter-cultural beginnings. Through this chaos, humanity rose to accomplish impossible feats, such as the US and Soviet Union’s Space Race, but there were other developments that may not have been as significant to the public, such as the discovery of a rapidly spinning neutron star, the automotive industry’s contribution to acceleration in the literal sense with vehicles such as the Ford Mustang, and the emergence of new vehicle size classes, including compact, mid-sized, and full-sized.

The very first computers were networked in 1969, which was only possible because of the Advanced Research Projects Agency Network (ARPANET), begun in 1966, a project that would eventually adopt the TCP/IP protocol suite. Out of the project rose many capabilities, such as remote login, file transfer, and email. The project continued to evolve over the years; internetworking research in the 1970s produced later versions of the Transmission Control Program, which evolved into TCP/IP, and the Department of Defense adopted TCP/IP across ARPANET in 1983.

Scientific communities in the early 1980s invested in supercomputing mainframes and supported network access and interconnectivity. In later years, interconnectivity flourished to develop what is known today as the internet.

Practical concepts for remote job entry became reality due to market demand in the 1960s for time-sharing solutions, led by vendors such as IBM and others. GE was another major computer company—along with Honeywell, RCA, and IBM—that had a line of general-purpose computers originally designed for batch processing. Later, this work extended into full time-sharing solutions developed by GE, Bell Laboratories, and MIT, whose finished product was the Multics operating system, which ran on a mainframe computer. The facilities housing these mainframes became known as data centers, where users submitted jobs for operators to manage, a practice that became prevalent in the years that followed.

The mainframes were mammoth physical infrastructures installed in what was later coined a server room. It became practical for multiple users to access the same data and utilize the CPU’s power from a terminal (computer). This gave enterprise organizations a better return on their investment.

Following the early achievements of mainframes, corporations such as IBM developed virtual machines (VMs). VMs supported multiple virtual systems on a single physical node—in layman’s terms, multiple computer environments coexisting in the same physical environment. VM operating systems such as CP-40 and CP-67 paved the way for future virtualization technologies. Consequently, mainframes could run multiple applications and processes in parallel, making the hardware even more efficient and cost-effective.

IBM wasn’t the only corporation that developed and leveraged virtualization technologies. In 1986, Compaq introduced the Deskpro 386, one of the first computers built around Intel’s 80386 microprocessor. The 80386’s virtual 8086 mode provided a form of platform virtualization used by Windows/386, which supported Microsoft Windows running on top of the MS-DOS operating system; the virtual 8086 mode allowed the processor to present multiple virtual 8086 machines, improving multitasking performance.

The virtualization functionality of the years to come can be traced back to these earliest implementations. Virtual infrastructures support guest operating systems, providing VM memory, CPU cores, disk drives, input and output devices, and shared networking resources.

Telecommunication pioneers in the 1990s such as AT&T and Verizon, who previously marketed point-to-point data circuits, began offering virtual networking resources with a similar quality of service, but at a lower cost. Various telecommunication providers began utilizing cloud symbols to denote demarcation points between the provider and the users’ network infrastructure responsibilities.

As distributed computing became more mainstream between 2000 and 2013, organizations examined ways to make scaled computing accessible to more users through time-sharing, underpinned by virtual internetworking. Corporations such as Amazon, Microsoft, and Google offered on-demand self-service, broad network access, resource pooling, and rapid elasticity, whereby compute, storage, apps, and networking resources could be provisioned and released rapidly using virtualization technology.

The advantages of virtualization go far beyond what I have written here, but most notably they include reduced electronic equipment costs, resource pooling (sharing), multi-user VM administration, and faster site-to-site internetworking implementation—moreover, decreased exorbitant operational maintenance costs. Virtualization is one of the most pivotal protagonists that catalyzed organizations such as Microsoft, Amazon, and Google to introduce cloud resources in a fully managed service-oriented architecture (SOA) that presently spans the globe.

These virtualization ecosystems, now known as cloud computing services, deliver turnkey innovations such as Amazon’s e-commerce software as a service (SaaS), known as Amazon Prime, specializing in distributing goods and services. The business benefits are staggering: agility, flexible costs, and rapid elasticity to support highly available access. General statistics show that millions of consumers rely on Amazon services to facilitate daily shopping needs, especially during the holidays. Amazon has even surpassed retail giants such as Walmart as the world’s largest online retailer. These facts support the benefits of services delivered using virtualized infrastructure resources. As the age-old adage proclaims, less is more!

 

Monolithic on-premises technology

In this section, you will learn about the core traditional data center resources. Moreover, I will describe data centers by type, maintenance, compliance, implementation, business continuity (BC), cost, energy, environmental controls, and administration.

To understand the advent of the cloud, we need to address the elephant in the room—while this is figurative in nature, this closely resembles the enormity of the current subject matter: the traditional on-premises data center. I’ll elaborate on the term on-premises in the current context in a later section of this chapter. For now, let’s focus on the traditional data center architecture.

In an unembellished but detailed description, data centers are physical facilities that host the networking infrastructure services that manage data resources. A data center may house thousands of servers, the successors of the mainframe, and data centers comprise various resources, such as computers and networking devices, including the utilities and environmental control systems supporting the physical infrastructure.

All data centers, regardless of type, include networking infrastructure resources such as media (cable), repeaters, hubs, bridges, switches, and routers to connect computer clients to servers. Networking devices support internetworking connectivity with internal and external networks. Even the traditional data center supported remote connectivity and could implement networking topologies such as site-to-site connectivity, using an array of networking technologies that allowed customers and remote workers to connect securely to the enterprise organization from outside the company’s private local area network.

Storage system resources were prevalent and consisted of infrastructure resources such as storage area network (SAN), network-attached storage (NAS), and direct-attached storage (DAS) devices. Regardless of the data structure, volume, velocity, and access pattern, data was stored in one of these primitives. Later, I’ll elaborate on the variances to help you not only differentiate them but also better understand the advantages and disadvantages, which may ultimately drive organizations of all sizes to adopt modern cloud computing data services.

It’s important to note that the legacy data center infrastructure hosted services on physical servers, which served as the compute infrastructure, typically mounted on physical racks and occasionally installed in cabinets. Here are some important facts: most data centers are located in office buildings or similar physical edifices, and they have raised floors or overhead architecture that carries additional equipment, such as electrical wiring, cabling, and environmental controls, required to sustain the data center services.

Overall maintenance is another important factor to consider for traditional data centers. This includes administering and maintaining industry-standard regulatory and non-regulatory business best practices, which is a perennial expense. Planning, preparing, and deploying new applications, as well as existing LOB applications or services, using monolithic infrastructures typically incurs substantial costs that directly impact capital and operational expenditures. Innovation, experimentation, and deployment iteration, while plausible, are not cost-effective in monolithic environments, which delays—if not prevents—new services from reaching general availability. Decommissioning these hardware infrastructure resources is a process in itself, and nigh impossible for some organizations that have neither the internal talent nor the budget to complete the project successfully. This more often than not leads companies to try other solutions based on different data center implementations.

The content herein only scratches the surface of traditional on-premises infrastructures’ compliance considerations, such as business requirements and system maintenance concerns. But make no mistake—whether your company is small or large, regulatory compliance policies are very important. I highly recommend reviewing governance and compliance documentation for any technology. Similarly, later sections will elaborate on various compliance controls that organizations such as Amazon, Microsoft, and Google must adhere to. Organizations that implement and manage data centers, traditional or cloud, must adhere to compliance controls set forth by various governmental or non-governmental agencies. My apologies—I have digressed a little from my previous paragraph’s subject. So, let us continue our journey regarding different data center implementations.

Did you know that on-premises data centers come in various implementations? Enterprise data centers are the most common. They are owned and operated by the company for its internal users and clientele. Managed data centers are operated by third parties on behalf of the organization; companies typically lease the equipment instead of owning it. Some organizations rent space within a data center that is owned and operated by third-party service providers (SPs); these off-premises implementations are known as colocation data center models. Each implementation includes redundant data center operational infrastructure resources, such as physical or virtual servers, storage systems, uninterruptible power systems, on-site direct current systems, networking equipment, cooling systems, data center infrastructure management resources, and—commonly—a secondary data center for redundancy.

High availability (HA) and disaster recovery (DR) are other important factors to weigh up. Data center infrastructures are categorized into tiers, which is an efficient way to describe the HA (or lack of HA) of the infrastructure components being utilized at each data center. Believe it or not, some organizations do not require the HA that a tier 4 data center provides. Organizations run a risk if they do not plan carefully. For example, organizations that invest in only a tier 1 infrastructure might leave the business vulnerable, whereas organizations that decide on a tier 4 infrastructure might over-invest, depending on their budget constraints.

Let’s have a look at the various HA data center tiers; a short calculation after the list shows how these uptime percentages translate into annual downtime:

  • Tier 1 data centers have a single power and cooling system with little, if any, redundancy. They have an expected uptime of 99.671% (28.8 hours of downtime annually).
  • Tier 2 data centers have a single power and cooling system with some redundancy. They have an expected uptime of 99.741% (22 hours of downtime annually).
  • Tier 3 data centers have multiple power and cooling systems with redundancy in place to update and maintain them without taking them offline. They have an expected uptime of 99.982% (1.6 hours of downtime annually).
  • Tier 4 data centers are built fault-tolerant and have redundant components. They have an expected uptime of 99.995% (26.3 minutes of downtime annually).
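
The downtime figures in parentheses follow directly from the uptime percentages. Here is a minimal Python sketch that performs the conversion, using the tier values from the list above; the helper works for any availability target:

    # Convert an availability percentage into the corresponding annual downtime.
    # Tier values mirror the list above; the helper itself is generic.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    def annual_downtime_hours(availability_percent: float) -> float:
        """Expected hours of downtime per year for a given availability."""
        return HOURS_PER_YEAR * (1 - availability_percent / 100)

    tiers = {"Tier 1": 99.671, "Tier 2": 99.741, "Tier 3": 99.982, "Tier 4": 99.995}
    for tier, availability in tiers.items():
        hours = annual_downtime_hours(availability)
        print(f"{tier}: {availability}% uptime -> about {hours:.1f} hours "
              f"({hours * 60:.0f} minutes) of downtime per year")

Running this reproduces the figures above to within rounding; the published tier numbers are rounded industry conventions rather than exact conversions.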

The total cost of ownership (TCO) may be too high for some start-ups. Many enterprise organizations look to offload these costs to third-party vendors but eventually learn that the upfront capital expenses are too much to bear, so they continue investing in their own on-premises data center. The public cloud provides advantages regarding capital expenditure and HA. We will discuss these topics in more detail in the Advantages of cloud computing section.

What about utility costs? Data centers use various IT devices to provide services, and the electricity used by these IT devices consequently converts to heat, which must be removed from the data center by heating, ventilation, and air conditioning (HVAC) systems, which also use electricity.

Did you know that utilities and environmental control systems are other important items to consider when reviewing on-premises data center costs? On average, on-premises data center infrastructure systems, typically containing tens of thousands of network devices, require an abundant amount of energy. Several case studies illustrate that traditional data centers use enough electricity to power an estimated 80,000 homes in the US.

These traditional on-premises data centers’ HVAC infrastructure systems are also sometimes inefficient: they must deliver both central and distributed cooling, which is costly given some buildings’ older architectural designs. To be more cost- and energy-efficient, newer modular data center models are required for optimal cooling paths, but that is not always feasible in older structures. Achieving optimal performance from your computing infrastructure requires a modern modular design that can support today’s ongoing business demands.

Managing a traditional data center requires employing large teams and supervisors of varying skill sets. Operations team members are responsible for the maintenance and upkeep of the infrastructure within a data center. Governing data center standards for networking, compute, and storage throughout an organization’s application life cycle may not be efficient due to the monolithic architectural design of most traditional data centers. If a company required scaling, it would inevitably have to invest in expanding its data center resources by procuring more hardware. Upgrading the on-premises hardware technology also requires multi-vendor support and sometimes even granting those third parties access to the data center, which poses numerous risks. These concerns would be far fewer if the overall quantity of physical servers in a traditional data center were proportional to the services rendered. That idea became a reality once virtualization became prevalent.

Let’s summarize—traditional data centers are classified by type, such as enterprise data centers, which are implemented and managed by the company. Data center locations are physical buildings and can include offices and closets. Data centers incur a myriad of costs, some functional and others non-functional. Data center governance includes conforming to policies, laws, regulations, and standards. A data center’s architectural design may impact energy use and efficiency. More importantly, data center designs, by type, have an impact on our environment. A data center’s power and cooling system design requires consistent monitoring and optimization, which consequently decreases emissions, energy consumption, and TCO. Finally, the quintessential traditional data center has a 1:1 ratio between physical servers and the services published by an organization. This method of implementation is to be expected with a monolithic architecture. Consequently, it incurs substantial capital and operational expenditures that have a direct negative impact on an organization’s return on investment (ROI).

 

The advent of cloud computing

In this section, I will introduce, describe, and define virtualization types and vendors, and I will describe how virtualization is different from physical servers. Then, I will explore the distributed computing API architecture. I will also describe how demand has driven technology. Finally, I will define cloud computing models.

This section’s objectives are the following:

  • From physical to virtual
  • Virtualization contributions by vendor
  • Distributed computing APIs
  • Exponential growth

From physical to virtual

Cloud computing technology emerged from a multitude of innovations and computing requirements. This emergence included computer science technology advancements that leveraged the underpinnings of mainframe computing, which changed the way we do business. Let us not forget the fickle customer service-level expectations related to IT BC.

The mainframe system’s features and architecture topology are among the important legacy technologies that, through evolution and several joint ventures between various stakeholders, contributed to the advent of cloud computing.

As described in the Genesis section, CP-40 provided a VM environment. Mainframes such as the IBM System/360 hosted CP-40, which supported multiple VM operating system instances—arguably the very first hardware virtualization prototype.

Let us define virtualization first before we explain how Amazon, Microsoft, and Google use this underpinning to drive their ubiquitous services.

In the Genesis section, we saw how achievements in virtualization technology played an important role in the emergence of cloud computing. Understanding the intricacies of VMs—often referred to as server virtualization—is critical in the grand scheme of things.

Virtualization abstracts physical infrastructure resources, supporting one or more guest operating systems—each resembling the same or a different computer operating system—running as VMs on one physical host. This approach was pioneered in the 1960s by IBM, which developed several products, such as CP-40 and CP-67, arguably the very first virtualization technologies. While virtualization is one of the key technologies behind the advent of cloud computing, this book will not delve into virtualization implementations such as hardware-assisted virtualization, paravirtualization, and operating system-level virtualization.

Over the years, many technology-driven companies have developed different virtualization offerings of varying types.

Virtualization contributions by vendor

VMware is a technology company known for virtualization. VMware launched VMware Workstation in the ’90s, heralding virtualization software that allowed users to run one or more instances of x86 or x86-64 operating systems on a single personal device.

Xen is another well-known virtualization technology: an open source hypervisor that supports multiple computer operating systems running concurrently on the same hardware.

Citrix is a virtualization technology company that offers several virtualization products, such as XenApp (application virtualization) and XenDesktop (desktop virtualization). There is even a product for Apple devices that hosts Microsoft Windows desktops virtually. Citrix also offers XenServer, which delivers server virtualization. Additionally, Citrix offers the NetScaler product suite: in particular, software-defined wide area networking (SD-WAN), NetScaler SDX, and VPX networking appliances that support virtual networking.

Microsoft, known for its personal and business computer software, has contributed to virtualization as well. Microsoft began by offering application virtualization products and services: its App-V product delivered application virtualization, and soon thereafter Microsoft developed Hyper-V, which supports server virtualization.

There are many more organizations that, through acquisition or development, have contributed to modern advancements in various virtualization nuances that are the foundation of cloud computing wonders today. But I would be remiss if I didn’t elaborate on the ubiquitous cloud’s distributed nature—or, more accurately denoted, distributed computing architecture.

Distributed computing APIs

Distributed computing, also known as distributed systems, rose out of the ’60s, and one of its earliest successful implementations was the ARPANET email infrastructure. Distributed computing architectures are categorized as loosely coupled or tightly coupled. Client-server architectures are the best known and were prevalent during the traditional mainframe era. N-tier or three-tier architectures exhibit many of the characteristics of today’s modern cloud computing service architectures: in particular, sending message requests to middle-tier services that queue requests for other consuming services. For example, in a three-tier web, application, and database server architecture, the application server (or an application queue-like service) acts as the middle tier, queueing input messages for other distributed programs to consume (input) and, if required, send (output). Another distributed computing architecture is peer-to-peer (P2P), where all clients are peers that can provide either client or server functionality. Each peer or service communicates asynchronously, contains local memory, and can act autonomously. Distributed system architectures deliver cost efficiency and increased reliability, partly because distributed systems can utilize low-end hardware. Cloud computing SPs offer distributed services that are loosely coupled, delivering cost-efficient infrastructure resources as a service. The top three cloud computing providers are decreasing, if not eliminating, single points of failure (SPOFs), consequently providing highly available resources in an SOA. These characteristics are derived from distributed computing.
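
To make the middle-tier queueing idea concrete, here is a minimal sketch in Python using only the standard library. All names are illustrative; a real deployment would use a managed message broker between separate tiers rather than an in-process queue:

    # A "web tier" produces request messages; an "application tier" worker
    # consumes them asynchronously, standing in for the middle tier described above.
    import queue
    import threading

    request_queue = queue.Queue()

    def web_tier(order_ids):
        """Front tier: accept requests and enqueue them for the middle tier."""
        for order_id in order_ids:
            request_queue.put(order_id)   # input message for the middle tier
        request_queue.put(None)           # sentinel: no more work

    def application_tier():
        """Middle tier: consume queued requests and hand results to the data tier."""
        while True:
            order_id = request_queue.get()
            if order_id is None:
                break
            print(f"processing order {order_id} and writing it to the database tier")

    worker = threading.Thread(target=application_tier)
    worker.start()
    web_tier(["A-100", "A-101", "A-102"])
    worker.join()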

Exponential growth

The rise of cloud computing is also arguably due to the exponential growth of the IT industry. This has a direct correlation with HA, scalability, and BC in the event of planned or unplanned failures. This growth also resulted in mass increases in energy consumption.

Traditional IT computing infrastructures must procure their own hardware as a capital expense. Additionally, they incur operating expenses, which include maintaining the computer operating systems and the operational costs of the people running them. Here is something to ponder: variable operational costs and fixed capital investments are to be expected. Fixed (capital) costs are paid upfront, and their per-user share can be lowered by increasing the number of users. However, operational costs may increase quickly with a larger number of users, so the total cost still rises rapidly as usage grows. Modern IT computing infrastructures such as the cloud offer a pay-per-use model, which gives cloud computing engineers and architects greater control over operational expenditure than is feasible in a traditional data center.
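
To sketch that cost behavior with some numbers, consider the following Python comparison. Every figure here is invented purely for illustration and does not come from this chapter or any provider’s price list:

    # Hypothetical comparison: upfront capital plus per-user operations
    # (traditional data center) versus a pure pay-per-use model (cloud).
    def on_premises_total(users, capital=500_000, ops_per_user=120):
        """Fixed upfront capital expense plus operational cost that grows with users."""
        return capital + ops_per_user * users

    def pay_per_use_total(users, cost_per_user=150):
        """No upfront expense; pay only for what is consumed."""
        return cost_per_user * users

    for users in (100, 1_000, 10_000):
        print(f"{users:>6} users: on-premises ${on_premises_total(users):>9,} "
              f"vs pay-per-use ${pay_per_use_total(users):>9,}")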

Meeting the demands of HA and the capability to scale becomes more and more important due to the growth of the IT industry. Enterprise data centers, which are operated and managed by the corporation’s IT department, are known to procure expensive brand-name hardware and networking devices due to their traditional implementation and familiarity. However, cloud architectures are built with commodity hardware and network devices. Amazon, Microsoft, and Google platforms choose low-cost disks and Ethernet to build their modular data centers. Cloud designs emphasize the performance/price ratio rather than the performance alone.

As the number of global internet users continues to rise, so too has the demand for data center services, giving rise to concerns regarding growing data center energy utilization. The quantity of data traversing the internet has increased exponentially, while global data center storage capacity has increased by several factors.

These growth trends are expected to continue as the world consumes more and more data. In fact, energy consumption is one of the main contributors to on-premises capital and operational expenses.

Inevitably, this leads to rising concern about electricity utilization and, consequently, about environmental issues such as carbon dioxide (CO2) emissions. Knowing the electricity use of data centers provides a useful benchmark for testing theories about the CO2 implications of data center services.

The energy consumed by IT devices has both environmental and economic impacts. Industrialized countries such as the US consume more energy than non-industrialized ones. The IT industry is essential to the global economy and plays a role in every sector and industry. The frequency of IT usage will no doubt continue to increase demand, which makes it important that we consider designing eco-friendly infrastructure architectures.

On-premises data centers, also referred to as enterprise data centers, require IT to handle and manage everything, including purchasing and installing the hardware, virtualization, operating system, and applications, and setting up the network, network firewall devices, and secure data storage. Furthermore, IT is responsible for maintaining the infrastructure hardware throughout an LOB app’s life cycle. This imposes both significant upfront costs for the hardware and ongoing data center operating costs for patching and maintenance. Don’t forget—you should also factor in paying for resources regardless of utilization.

Cloud computing provides an alternative to the on-premises data center. Amazon, Microsoft, and Google, as cloud providers, are responsible for hardware procurement and overall maintenance costs and provide a variety of services you can use. You lease whatever hardware capacity and services you need for your LOB application, only when required, thus converting what had been a fixed capital expense into an operational expense. This allows the cloud computing engineer to lease hardware capacity and deliver modern software services that would be too expensive to purchase traditionally.

 

Cloud computing explored

In this section, you will learn about the cloud computing concepts derived from the National Institute of Standards and Technology (NIST). Then I will describe the cloud computing models used by cloud computing providers today. Additionally, I will describe cloud computing deployment models.

Cloud computing plays an increasingly important role in IT. Therefore, as an IT professional, you must be cognizant of the fundamental cloud principles and methods. There are three main cloud computing deployment models: public, private, and hybrid. Each provides a range of services but implements the resources differently. Equally important to consider are the cloud computing service models: infrastructure as a service (IaaS), platform as a service (PaaS), and SaaS. They are the core tenets from which cloud computing is defined.

By now, you should understand the historic journey that led us to cloud computing. However, if you are still unsure, I highly recommend you revisit the section titled Genesis and then correlate the researched data with the section titled The advent of cloud computing. Nevertheless, the cloud is the culmination of evolutionary technological advancements in human history.

To understand cloud services, we first refer to standards.

There are undoubtedly many organizations that develop standards to ensure we measure, innovate, and lead according to industry best practices, all in pursuit of an overarching goal: improving our quality of life.

These standards bodies may be regulatory or non-regulatory. For simplicity’s sake, regulatory organizations such as the International Energy Agency (IEA) are appointed by international legislation to devise energy requirement standards, whereas entities such as NIST are non-regulatory because they define supplemental standards that are not official rules enforced by regulation delegated through legislation. However, in cloud computing, NIST is the gold standard.

NIST proclaims the following regarding cloud computing services:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

The cloud computing models, which we will define in grandiose detail in later sections, are derived from NIST standards, which you can review at your leisure at http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf.

Cloud computing provides a modern alternative to the traditional on-premises data center. Public cloud providers such as Amazon, Microsoft, and Google are responsible for hardware procurement and continual maintenance, and the public cloud provides on-demand resources. You rent hardware to support your software whenever you need it, so organizations can convert what had been an upfront capital expenditure for hardware into an operating expense. This allows you to rent resources that would traditionally be too costly for some companies, and you pay only while the resources are being utilized.

Cloud computing typically provides an online portal experience, making it user-friendly for administrators who are responsible for managing compute, storage, networking, and other resources. For example, administrators can quickly define a VM by compute size, which includes VM capacity settings such as virtual CPU core quantity, amount of RAM, disk size, and disk performance, along with an operating system image such as Linux, preconfigured software, and the virtual network configuration. They can then deploy the VM using that configuration anywhere in the world and, within several minutes, securely access the deployed compute instance, where the IT pro or developer can perform role-based tasks. This illustrates the rapid deployment capability of cloud computing defined by NIST.
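
As a provider-agnostic illustration of the kind of definition an administrator supplies, here is a minimal Python sketch. VmSpec and deploy_vm are hypothetical stand-ins for this example, not part of any real cloud SDK:

    # Hypothetical, provider-agnostic VM definition of the kind described above.
    from dataclasses import dataclass

    @dataclass
    class VmSpec:
        name: str
        vcpu_cores: int        # virtual CPU core quantity
        memory_gib: int        # amount of RAM
        disk_gib: int          # disk size
        disk_tier: str         # disk performance tier
        image: str             # operating system image, e.g. a Linux distribution
        virtual_network: str   # virtual network/subnet to attach to
        region: str            # where in the world to deploy

    def deploy_vm(spec: VmSpec) -> str:
        """Pretend to deploy the VM and return an identifier for the instance."""
        print(f"Deploying {spec.name} ({spec.vcpu_cores} vCPU, {spec.memory_gib} GiB RAM) "
              f"from image '{spec.image}' into {spec.region}...")
        return f"{spec.region}/{spec.name}"

    vm_id = deploy_vm(VmSpec(name="web-01", vcpu_cores=2, memory_gib=8, disk_gib=64,
                             disk_tier="ssd", image="ubuntu-22.04",
                             virtual_network="app-vnet/web-subnet", region="us-east"))
    print("Provisioned:", vm_id)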

Cloud computing supports various deployment options as well, such as public, private, and hybrid cloud. These options are known as cloud computing deployment models, not to be confused with IaaS, PaaS, and SaaS cloud computing models.

You will have a general understanding of the public cloud deployment model once you have finished reading this book, so let me take a moment to elaborate on private and hybrid. In a private cloud, your organization creates a cloud environment in your on-premises data center and provides engineers in your organization with access to private resources. This deployment model offers services similar to the public cloud, but exclusively for your users, and your organization remains responsible for procuring the hardware infrastructure and for the ongoing maintenance of the software services and hardware. In a hybrid cloud deployment model, enterprise organizations integrate the public and private cloud, permitting you to host workloads in whichever cloud computing deployment model meets your current business requirements. For example, your organization can host highly available website services in the public cloud and then connect them to a non-relational database managed in your private cloud.

Planning, preparing, and implementing a cloud service model is as imperative as deciding whether to remain utilizing traditional systems built using monolithic architecture topology or choose an all-in cloud approach. From a consumer’s point of view, the myriad resources that cloud computing providers such as Amazon, Microsoft, and Google provide are daunting to the untrained eye. Thankfully, Amazon, Microsoft, and Google organize their distributed services into three major categories, referred to as cloud computing models.

One of the first cloud computing models is known as IaaS. In this model, the customer pays the cloud SP (CSP) to host VMs in the public cloud. Customers are responsible for managing the VM guest operating system, including hosted services or applications. This cloud computing model offers the customer complete control and flexibility.

The second cloud computing model is known as PaaS. In this cloud computing model, customers are responsible for the deployment, configuration, and management of applications in an agile manner using the cloud platform. The CSP manages the application runtime environment and is responsible for managing and maintaining the platform’s underlying VM guest operating system.

Another widely utilized cloud computing model is known as SaaS. In this model, clients utilize turnkey online software services, such as storage or email software, managed by the cloud computing provider. Customers access cross-platform installable and online apps. These products are typically pay-as-you-go.

Cloud computing providers Amazon, Microsoft, and Google offer all three cloud computing models: IaaS, PaaS, and SaaS. These services are made available as consumption-based offerings. The cloud computing service models form three pillars on top of which cloud computing resources are administered.
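
One way to summarize who manages what in each model is as a simple data structure. The following Python sketch is a deliberate simplification; the exact boundaries vary by provider and by service:

    # Illustrative summary of the management split described above
    # (simplified; real responsibility models differ per provider and service).
    RESPONSIBILITY = {
        "IaaS": {"customer": ["guest OS", "runtime", "applications", "data"],
                 "provider": ["virtualization", "servers", "storage", "networking"]},
        "PaaS": {"customer": ["applications", "data"],
                 "provider": ["guest OS", "runtime", "virtualization",
                              "servers", "storage", "networking"]},
        "SaaS": {"customer": ["data", "user access"],
                 "provider": ["applications", "runtime", "guest OS",
                              "virtualization", "servers", "storage", "networking"]},
    }

    for model, split in RESPONSIBILITY.items():
        print(f"{model}: customer manages {', '.join(split['customer'])}; "
              f"provider manages {', '.join(split['provider'])}")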

All three service models allow the cloud computing engineer to access the services over the internet. The service models are supported by the global infrastructure of the CSP. Every service includes a service-level agreement (SLA) between the provider and the user. The SLA addresses the service’s availability, performance, and general security controls.

To help you better understand these service models, I’ll describe in detail IaaS and PaaS enterprise implementation by sharing real-world examples in later chapters from Amazon, Microsoft, and Google.

All three major cloud providers’ solutions are built on virtualization technology, which abstracts physical hardware as a layer of virtualized resources for networking, storage, and processing. Amazon, Microsoft, and Google add further abstraction layers to define specific services that you can provision and manage. Regardless of the unique technology that each of these organizations uses to implement its cloud computing solutions, the characteristics commonly observed remain on-demand self-service, broad network access, shared resource pools, and rapid elasticity, together with metering capabilities, which allow enterprise organizations to track resource utilization at cloud scale.

Cloud computing resources are built in data centers that are commonly owned and operated by the cloud provider. The cloud core platform includes SANs, database systems, firewalls, and security devices. APIs enable programmatic access to the underlying resources. Monitoring and metering are used to track the utilization and performance of dynamically provisioned resources.

The cloud platforms handle resource management and maintenance automatically. Internal services detect the status of each node and server joining or leaving and redistribute work accordingly. Cloud computing providers such as Amazon, Microsoft, and Google have built many economically efficient, eco-friendly data centers all over the world. Each data center theoretically houses tens of thousands of servers.

Here is a layered view of the cloud computing architecture: infrastructure, platform, and application. These layers are implemented with virtualization and provisioned in adherence to each cloud provider’s well-architected framework, which will be explored in a later section for each cloud provider. The infrastructure layer is implemented first to support IaaS resources. This infrastructure layer serves as the foundation on which to build PaaS resources. In turn, the platform layer is the foundation on which the application layer for SaaS is implemented.

This begs the question: What are the benefits of said services?

 

Advantages of cloud computing

In this section, you will learn to describe the advantages of cloud computing architecture. I will describe the benefits of trading capital expense for variable expenses, cloud economics, capacity planning, optimized agility, improved focus, and leveraging global resources in comparison to the traditional architecture, and will review and define HA and BC.

Cloud computing offers many advantages in comparison to traditional on-premises data centers. Let us review some of the key advantages.

Trade capital expense for variable expense

Organizations generally consider moving their workloads to the cloud because of the expense advantages. Instead of having to invest in data centers and servers before knowing how they are going to be used, organizations pay only when they consume cloud computing resources, and only for how much they consume. This expense advantage allows any industry to get up and running rapidly while paying only for what is being utilized.

Benefit from massive economies of scale

Using cloud computing, organizations can achieve a lower variable cost than they can get on their own. Because usage from tens of thousands of customers is collected and combined in the cloud, cloud computing providers such as Amazon, Microsoft, and Google can achieve higher economies of scale, which translates into lower subscription prices.

And cloud computing providers such as Amazon, Microsoft, and Google invest in low-end commodity devices optimized for large-scale clouds instead of purchasing high-end devices. The volume of subscription purchases coupled with lower-cost commodity hardware grants cloud computing providers the ability to lower prices for new customers.

Stop guessing about capacity

As aforementioned, enterprise organizations only pay when utilizing cloud computing resources. Organizations access as much or as little as needed, and scale up and down, in and out as required on-demand.

Capacity planning is not only arduous but also tedious and error-prone, particularly if you do not know what the customer’s response will be. Customers’ demands fluctuate dynamically, and the capability to scale becomes critical. Cloud computing engineers can demand more capacity during real-time shifts and spikes in customer demand, reducing costs by using the commodity compute, storage, and networking resources pooled by the cloud computing provider, which can be provisioned at a moment’s notice. If your LOB application needs more compute resources to meet increasing customer demand, hosting your workload in the cloud can help keep your customers satisfied. Does a decline in business mean that you don’t need all that capacity your cloud computing service is providing for your LOB applications? Cloud computing engineers can scale down compute capacity to control costs, offering a huge advantage over static on-premises data center solutions.
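
The following Python sketch illustrates the kind of threshold-based scaling decision this enables. The thresholds and limits are invented for illustration; in practice, the major providers expose this as managed autoscaling policies rather than code you write yourself:

    # Minimal, illustrative threshold-based scaling decision of the kind cloud
    # autoscaling services make on your behalf. All numbers are invented.
    def desired_instance_count(current: int, avg_cpu_percent: float,
                               scale_out_at: float = 70.0,
                               scale_in_at: float = 30.0,
                               minimum: int = 2, maximum: int = 20) -> int:
        """Add capacity under load, release it when demand falls off."""
        if avg_cpu_percent > scale_out_at:
            return min(current + 1, maximum)
        if avg_cpu_percent < scale_in_at:
            return max(current - 1, minimum)
        return current

    print(desired_instance_count(current=4, avg_cpu_percent=85.0))  # spike -> 5
    print(desired_instance_count(current=4, avg_cpu_percent=20.0))  # lull  -> 3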

Increase speed and agility

On-premises data centers can generally take several weeks to months to provision a server. With cloud computing ecosystems, organizations can provision tens of thousands of resources in minutes, and the ability to rapidly scale your workloads both horizontally and vertically allows you to address SLAs that are in constant flux. Developing new applications in the cloud can significantly decrease time to market (TTM), which is an improvement over traditional monolithic development for several reasons. You do not have to deploy, configure, and maintain the underlying hardware for the compute, storage, and networking on which your applications will run. Instead, you use the infrastructure resources made accessible to you by your cloud computing provider.

Another reason why applications developed in the cloud are faster to deploy has to do with how modern applications are built. In an enterprise setting, developers create and test their applications in a test environment that simulates the final production environment. For example, an application might be developed and tested on a single VM instance, also known as the dev environment, for eventual deployment onto two VM instances clustered across different Availability Zones (AZs) for HA and fault tolerance, which is common for production environments. Inconsistencies between your development and production environments can impact the development sprint cycle for business applications, because problems might be missed in testing and only become apparent when the applications are deployed to production, which consequently necessitates further testing and development until the applications behave as intended.

With cloud computing, however, organizations can perform development and testing in the same kind of environment that their applications will be deployed to. This allows you to create resources quickly and experiment iteratively. For start-ups, cloud computing allows them to start at a very low cost and scale rapidly as they gain customers, without a large upfront capital investment just to create a new VM. This empowers any enterprise with the flexibility to rapidly set up development and test configurations. These can be programmed dynamically, giving you the ability to instantiate a development or test environment, do the testing, and tear it back down. This methodology keeps costs very low, and maintenance is almost nonexistent.
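
A minimal sketch of that spin-up, test, tear-down pattern follows. create_environment and delete_environment are hypothetical stand-ins for whatever your provider’s SDK or infrastructure-as-code tooling offers:

    # Illustrative "spin up, test, tear down" pattern for an ephemeral test
    # environment; the provisioning functions are hypothetical placeholders.
    from contextlib import contextmanager

    def create_environment(name: str) -> str:
        print(f"provisioning environment '{name}' to mirror production...")
        return name

    def delete_environment(env_id: str) -> None:
        print(f"tearing down environment '{env_id}' so it stops incurring cost")

    @contextmanager
    def ephemeral_environment(name: str):
        env_id = create_environment(name)
        try:
            yield env_id
        finally:
            delete_environment(env_id)   # always torn down, even if tests fail

    with ephemeral_environment("feature-123-test") as env:
        print(f"running the test suite against {env}...")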

Focus on what matters

Cloud computing lets organizations focus on their customers, rather than on expanding their data centers’ resources, which includes investing in infrastructure, racking, stacking, and powering servers.

Cloud computing providers have already done the heavy lifting for you. For most enterprises, their scarcest resource is their software engineers, now commonly referred to as developers. Development teams have various priorities and tasks that need to be completed successfully. It is an advantage to focus those resources on projects that move the organization forward, rather than on planning, procuring, preparing, and implementing an underlying infrastructure.

This makes economic sense for organizations when it comes to hardware acquisition costs, because the cloud computing provider supplies the core hardware resources. Traditionally, enterprises have often purchased and deployed large-scale SANs from third-party vendors to meet business requirements. By utilizing storage resources from a cloud computing provider instead, enterprise organizations can significantly decrease overall storage procurement and long-term maintenance costs.

Cloud computing providers manage the data center, which means you do not have to manage your own IT infrastructure. Cloud computing enables you to access computing services, regardless of your location and the equipment that you use to access those services.

Go global in minutes

Cloud computing providers such as Amazon, Microsoft, and Google are constantly expanding their global presence to help all customers of varying sizes achieve lower latency and greater throughput and to ensure that an enterprise’s most important asset—that is, data—resides only in the region they specify. As organizations and customers continue to grow their businesses, cloud computing providers such as Amazon, Microsoft, and Google will continue to provide the infrastructure that meets any organization’s global business requirements.

Only the largest global enterprises can deploy data centers around the world. Using Amazon, Microsoft, or Google gives enterprises of any size the capability to host an application or workload in any region to reduce latency to end users, while avoiding the capital expenses, long-term commitments, and scaling challenges associated with maintaining and operating a global infrastructure.

In a later section, I will divulge each cloud provider’s mammoth global infrastructure regions and zones in detail. But before I do, here’s a brief insight into HA and DR, which are addressed by utilizing cloud computing global infrastructure.

An overview of HA

IT systems are considered critical business tools in most organizations. Outages of even a few hours reflect poorly upon the IT department and can result in lost sales or loss of business reputation. HA ensures that IT systems can survive the failure of a single server or even multiple servers.

Availability refers to the level of service that applications, services, or systems provide, and is expressed as the percentage of time that a service or system is available. Highly available architectures have minimal downtime, whether planned or unplanned, and are available more than 99% of the time, depending on the needs and the budget of the organization.

Here are some common target availability considerations:

  • Cloud data center infrastructure
  • Server hardware
  • Storage
  • Network infrastructure
  • Internet
  • Application services

Note

This is not an exhaustive list.

Cloud computing providers support the capability of any organization to design a highly available architecture. Cloud computing data centers are organized into AZs. Each AZ comprises one or more data centers, with some AZs having three to six data centers.

Each AZ is designed as an independent failure zone. This means that AZs are physically separated within a region and are located in low-risk flood zones. In addition to having separate uninterruptible power supplies and on-site backup generators, they are each connected to different electrical grids from independent utilities to further reduce SPOFs. AZs are all redundantly connected to multiple transit providers.

Enterprise organizations are responsible for selecting AZs where their systems reside. Some services can span multiple AZs. Every organization should design its systems to survive temporary or prolonged failure of an AZ if some disaster occurs. Utilizing distributed computing methods, organizations can distribute applications across multiple AZs, allowing them to remain resilient in most failure scenarios, including natural disasters or typical system failures.
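
A back-of-the-envelope calculation shows why spreading an application across AZs helps. The sketch below assumes, purely for illustration, independent failures and an identical 99.9% availability per AZ; real-world availability depends on many more factors:

    # Effect of spreading an application across multiple AZs, assuming
    # independent failures and identical per-AZ availability (illustrative only).
    def multi_az_availability(per_az_availability: float, az_count: int) -> float:
        """Probability that at least one AZ is up: 1 - P(all AZs down)."""
        return 1 - (1 - per_az_availability) ** az_count

    MINUTES_PER_YEAR = 365 * 24 * 60
    for az_count in (1, 2, 3):
        a = multi_az_availability(0.999, az_count)
        downtime_minutes = (1 - a) * MINUTES_PER_YEAR
        print(f"{az_count} AZ(s): availability {a:.9f}, "
              f"~{downtime_minutes:.3f} minutes of downtime per year")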

An overview of DR

DR planning is an essential requirement for fulfilling SLAs. These agreements define when a service needs to be available, and how quickly it must be recovered if it fails. To ensure that organizations meet SLA requirements, site resiliency becomes a business requirement. Site resiliency is the ability of one or more systems and services to survive a site failure and to continue functioning using an alternate data center.

One of the advantages of cloud computing is the capability to implement multiple regions, each with multiple AZs, for DR and HA. The alternate data center is a site that can be in another region, within a separate AZ dedicated only to DR. For example, the alternate data center could be another location in the same geographical region, such as Johnson & Johnson, whose primary data center is located in Virginia and whose secondary data center is in Ohio; the secondary site is in active use but retains sufficient capacity to take over the primary facility’s services in the event of an unplanned failure.

In summary, AZs are equivalent to a cluster of data centers. Amazon, Microsoft, and Google isolate their data centers using AZs so that they are not easily affected by natural disasters at the same time. AZs are a distinct group of data centers, whereas a region is made of multiple AZs to support the capability of spreading compute resources across multiple power providers.

Tip

Cloud computing providers recommend provisioning your resources across multiple AZs. If you implement multiple VM instances, you can spread them across more than one AZ and get added redundancy. If a single AZ has a problem, all assets in your second AZ will be unaffected. All recommendations are derived from online artifacts and can be corroborated by reviewing each cloud computing provider’s well-architected framework document. The well-architected frameworks define real-world best practices that support cloud adoption and ongoing governance and management for any workload.

 

Summary

In this chapter, you learned about the genesis of key technologies used in cloud computing and reviewed core traditional data center service concerns. You also learned how data center technologies transitioned into virtual offerings, reviewed key patterns to distribute workloads efficiently, and learned how the demand for data requires a cloud-scaled infrastructure. We also explored cloud computing underpinnings and key advantages over traditional on-premises data centers.

In the next chapter, you will learn the underlying technology that comprises cloud computing services.

 


 

Questions

  1. What does ARPANET stand for?
    1. Defense Advanced Research Projects Agency
    2. Department of Defense
    3. Bell Laboratories
    4. Advanced Research Projects Agency Network
  2. You are a cloud engineer tasked with deploying a Linux VM instance for Pharmakinematics, an enterprise organization. The IT department needs you to make use of the appropriate cloud computing service model. Which model do you make use of?
    1. FaaS
    2. PaaS
    3. Public
    4. IaaS
  3. Which design strategy should you consider when planning cloud computing architectural recommendations for distributed computing services across AZs that provide a highly available architecture?
    1. Design for agility
    2. Design for scalability
    3. Design for failure
    4. Design microservices
  4. Which data center tier supports an annual downtime of 26.3 minutes?
    1. Tier 4
    2. Tier 3
    3. Tier 2
    4. Tier 1
  5. Which of the following refers to company-owned and operated enterprise data centers?
    1. Off-premises
    2. Cloud
    3. Co-located
    4. On-premises
  6. What is virtualization?
    1. Docker resources
    2. Apple iOS
    3. The abstraction of physical resources
    4. Linux kernel
  7. What is the Microsoft application virtualization service?
    1. Citrix
    2. Container
    3. App-V
    4. Hyper-V
  8. What are some attributes that distributed computing systems display? (Choose all that apply.)
    1. SOA
    2. Monolithic
    3. DevOps
    4. Queue
  9. NIST declares the following regarding cloud computing (Choose all that apply):
    1. On-demand self-service
    2. Resource pooling
    3. Rapid elasticity
    4. Poly Cloud
  10. In terms of HA, what does an AZ provide?
    1. Fault tolerance
    2. Regional redundancy
    3. Energy reduction
    4. Identity and Access Management (IAM)
About the Author
  • David Santana

    David Santana is a multi-cloud architect and certified trainer with over 20 years of experience in IT training, consulting, and leadership. He has provided dedicated services to Microsoft, Amazon, and Google business partners such as Deloitte, Accenture, Humana, and ABB, and to public sector agencies, including CJIS, the DoD, and Veterans services. He has created self-paced multi-cloud courses such as Azure resources for AWS architects. He is also the lead program developer for the Los Angeles Veterans Technology Training Academy, an initiative that trains and mentors veterans to empower them.
