1 Understanding AWS Cloud Principles and Key Characteristics
The last decade has revolutionized the information technology industry: cloud computing was introduced, and now it is everywhere. Nowadays, the cloud is the new normal, and everyone in the industry is adopting it or seriously thinking about it. It all started with Amazon launching a cloud service called Amazon Web Services (AWS) in 2006 with a couple of services. Netflix migrated to AWS in 2008 and became a market disrupter. There was no looking back after that, and many industry revolutions have been led by cloud-born startups such as Airbnb in hospitality, Robinhood in finance, Lyft in transportation, and many more. The cloud rapidly gained market share, and now big names such as Capital One, JP Morgan Chase, Nasdaq, the NFL, and General Electric are all accelerating their digital journeys through cloud adoption.

Even though the term cloud is pervasive today, not everyone understands what the cloud is. The cloud can be different things for different people, and it is continuously evolving.

In this chapter, we will put our best foot forward and attempt to define the cloud, and then we will define the AWS cloud more specifically. We will also cover the vast and ever-growing influence and adoption of the cloud in general and AWS in particular. After that, we'll introduce some elementary cloud and AWS terms to start getting our feet wet with the lingo. We will then try to understand why cloud computing is so popular, assuming you buy the premise that the cloud is taking the world by storm. We will then learn how we can take a slice of the cloud pie and build our credibility by becoming certified. Finally, toward the end of the chapter, we will look at some tips and tricks you can use to simplify your journey to obtain AWS certifications, along with some frequently asked questions about them.

In this chapter, we will cover the following topics:
- What is cloud computing?
- What is AWS cloud computing?
- The market share, influence, and adoption of AWS
- Basic cloud and AWS terminology
- Why is cloud computing so popular?
- The six pillars of a well-architected framework
- Building credibility by becoming certified
- Learning tips and tricks to obtain AWS certifications
- Some frequently asked questions about AWS certifications
Let's get started, shall we?
What is cloud computing?
The best way to understand the cloud is through the electricity supply analogy. To get electricity in your house, you just flip a switch: bulbs light your home and appliances get power. You only pay for the electricity you use, and when you switch the appliances off, you pay nothing. Now, imagine that to power a couple of appliances you had to set up an entire powerhouse. That would be very costly, as it would involve building the whole infrastructure and maintaining the turbine and generator. Utility companies make your job easier by supplying the quantity of electricity you need. They maintain the entire infrastructure required to generate electricity, and they can keep costs down by distributing electricity to millions of houses, benefiting from mass utilization.

Now let's come to cloud computing. When consuming cloud resources, you pay for IT infrastructure such as compute and storage in a pay-as-you-go model. Public clouds like AWS do the heavy lifting of maintaining the IT infrastructure and provide you access to it over the internet, billed on a pay-as-you-go basis. They are revolutionizing the IT infrastructure industry: traditionally, you had to maintain your servers all by yourself on-premises to run your business, but now you can offload that to the public cloud and focus on your core business. For example, Capital One's core business is banking, not running large data centers. Before going deeper into cloud computing, let's analyze some of the key characteristics of the public cloud.
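The pay-as-you-go model lends itself to a quick back-of-the-envelope comparison. The sketch below, in plain Python with a hypothetical hourly price, contrasts paying only for the hours a server actually runs against paying for it around the clock:

```python
def monthly_cost(hourly_rate: float, hours_running: float) -> float:
    """Cost of a resource billed only for the hours it actually runs."""
    return hourly_rate * hours_running

# Hypothetical on-demand price for a small server.
rate = 0.10  # USD per hour

# Pay-as-you-go: the server runs 8 hours a day, 22 working days a month.
pay_as_you_go = monthly_cost(rate, 8 * 22)   # 176 hours -> $17.60

# Always-on: the same server left running 24/7 for a 30-day month.
always_on = monthly_cost(rate, 24 * 30)      # 720 hours -> $72.00

print(f"pay-as-you-go: ${pay_as_you_go:.2f}, always-on: ${always_on:.2f}")
```

The absolute numbers are invented, but the shape of the comparison is the point: the bill tracks usage, not ownership.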
One important characteristic of public cloud providers such as AWS is the ability to quickly and frictionlessly provision resources. These resources could be a single instance of a database or a thousand copies of the same server used to handle your web traffic, and they can be provisioned within minutes.

Contrast that with how the same operation might play out in a traditional on-premises environment. Let's use an example. You need to set up a cluster of computers to host your latest service. Your next actions probably look something like this:
- You visit the data center and realize that the current capacity is insufficient to host this new service.
- You map out a new infrastructure architecture.
- You size the machines based on the expected load, padding the specifications with extra storage and memory to ensure that the service isn't overwhelmed.
- You submit the architecture for approval to the appropriate parties.
- You wait. Most likely for months.
It is not uncommon, once you finally get the approvals, to realize that the market opportunity for the service is now gone, or that it has grown so much that the capacity you initially planned will no longer suffice. It is difficult to overemphasize how important the ability to deliver a solution quickly becomes when you use cloud technologies to enable these solutions.

Imagine what would happen if, after getting everything set up in the data center and after months of approvals, you told the business sponsor that you had made a mistake: you ordered a 64 GB RAM server instead of a 128 GB one, so you won't have enough capacity to handle the expected load, and getting the right server will take a few more months. Meanwhile, the market is moving fast, and your user workload increases 5x by the time you get the server. That is good news for the business, but because you cannot scale your servers quickly enough, the user experience will be compromised, and users will switch to other options.

This is not a problem in a cloud environment, because instead of needing months to provision your servers, they can be provisioned in minutes. Correcting the size of a server may be as simple as shutting it down for a few minutes, changing a drop-down box value, and restarting it. You can even go serverless and let the cloud handle the scaling for you while you focus on your business problems.

Hopefully, this example drives our point home about the power of the cloud. The cloud exponentially improves time to market, and being able to deliver quickly may not just mean getting there first; it may be the difference between getting there and not getting there at all. Another powerful characteristic of a cloud computing environment is the ability to quickly shut down resources and, significantly, not be charged for a resource while it is down. In our continuing on-premises example, suppose we shut down one of our servers.
Do you think we could call the company that sold us the server and politely ask them to stop charging us because we shut the server down? That would be a very quick conversation, and probably not a delightful one, depending on how persistent we were. They would probably say, "You bought the server; you can do whatever you want with it, including using it as a paperweight." Once the server is purchased, it is a sunk cost for the duration of its useful life.

In contrast, whenever we shut down a server in a cloud environment, the cloud provider can quickly detect that and put the server back into the pool of available servers, so that other cloud customers can use the newly freed capacity.
Virtualization is the practice of running multiple virtual instances on top of a physical computer system, using an abstraction layer that sits on top of the actual hardware. More commonly, virtualization refers to running multiple operating systems on a single computer at the same time. Applications running on virtual machines are oblivious to the fact that they are not running on a dedicated machine and that they share resources with other applications on the same physical machine.

A hypervisor is a computing layer that enables multiple operating systems to execute on the same physical compute resource. The operating systems running on top of a hypervisor are Virtual Machines (VMs): components that can emulate a complete computing environment using only software, as if it were running on bare metal. Hypervisors, also known as Virtual Machine Monitors (VMMs), manage these VMs as they run side by side. A hypervisor creates a logical separation between VMs and provides each of them with a slice of the available compute, memory, and storage resources. This ensures that VMs do not clash with or interfere with each other: if one VM crashes, it will not bring the others down with it, and if there is an intrusion in one VM, it is fully isolated from the rest.
Definition of the cloud
Let's now attempt to define cloud computing. The cloud computing model offers computing services such as compute, storage, databases, networking, software, machine learning, and analytics over the internet and on demand. You generally pay only for the time and services you use. Most cloud providers offer massive scalability for many of their services and make it easy to scale services up and down.

As much as we have tried to nail it down, this is still a pretty broad definition. For example, our definition specifies that the cloud can offer software, and that's a pretty general term. Does the term software in our definition include the following?
- Video Conferencing
- Virtual desktops
- Email services
- Contact Center
- Document Management
These are just a few examples of what may or may not be included as available services in a cloud environment. When it comes to AWS and the other major cloud providers, the answer is yes. When AWS started, it only offered a few core services, such as compute (Amazon EC2) and basic storage (Amazon S3). As of 2022, AWS has continually expanded its services to support virtually any cloud workload. It now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning, artificial intelligence (AI), the Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development and deployment. As a fun fact, as of 2021, Amazon Elastic Compute Cloud (EC2) alone offers over 475 types of compute instances. For the individual examples given here, AWS offers the following:
- Video conferencing – Amazon Chime
- Virtual desktops – Amazon WorkSpaces
- Email services – Amazon WorkMail
- Contact Center – Amazon Connect
- Document Management – Amazon WorkDocs
As we will see throughout the book, this is just a small sample of the many services AWS offers. Additionally, since launch, AWS services and features have grown exponentially every year, as shown in the following figure:
There is no doubt that the number of offerings will continue to grow at a similar rate for the foreseeable future. AWS is the cloud market leader in large part because of this breadth of functionality, and it is innovating quickly, especially in newer areas such as machine learning and artificial intelligence, the Internet of Things, serverless computing, blockchain, and even quantum computing. You have probably heard cloud terms used in different contexts, including the terms public cloud and private cloud. Let's learn more about them.
Private versus public clouds
A private cloud is a service dedicated to a single customer; it is like an on-premises data center that is accessible to one large enterprise. In practice, a private cloud has become a fancy name for a data center managed by a trusted third party, and many of the elasticity benefits wither away. The concept gained momentum over security concerns: initially, enterprises were skeptical about the security of the public cloud, which is multi-tenant. But having your own dedicated infrastructure diminishes the value of the cloud, because you pay for resources even when they are idle.

Let's use an analogy to understand the private cloud further. The gig economy has great momentum. Everywhere you look, people are finding employment as contract workers: driving for Uber, renting out rooms on Airbnb, and doing contract work on Upwork. One of the reasons contract work is getting more popular is that it enables consumers to obtain services they might otherwise not be able to afford. Can you imagine how expensive it would be to have a private chauffeur? But with Uber or Lyft, you almost have a private chauffeur who can be at your beck and call within a few minutes of you summoning them.

A similar economy of scale happens with a public cloud. You can have access to infrastructure and services that would cost millions of dollars if you bought them on your own. Instead, you can access the same resources for a small fraction of the cost.

Even though AWS, Azure, GCP, and the other popular cloud providers are considered mostly public clouds, there are some actions you can take to make them more private. For example, AWS offers Amazon EC2 dedicated instances, which are EC2 instances that ensure you will be the only user of a given physical server. Further, AWS offers AWS Outposts, where you can order a server rack and host workloads on your own premises using the AWS control plane. Dedicated instance and Outposts costs are significantly higher than those of on-demand EC2 instances, which may run on physical servers shared with other AWS users; as mentioned earlier in the chapter, you will never notice the difference because of virtualization and hypervisor technology. One common use case for choosing dedicated instances is government regulations and compliance policies that require certain sensitive data not to reside on the same physical server as that of other cloud users.

Indeed, private clouds are expensive to run and maintain. For that reason, many of the resources and services offered by the major cloud providers reside in public clouds. But using a private cloud does not mean it cannot be set up insecurely, and conversely, if you are running your workloads and applications on a public cloud, you can follow security best practices and sleep well at night knowing that you are using state-of-the-art technologies to secure your sensitive data.

Additionally, most of the major cloud providers' clients use public cloud configurations, but there are a few exceptions even here. For example, the United States government intelligence agencies are a big AWS customer. As you can imagine, they have deep pockets and are not afraid to spend. In many cases with these government agencies, AWS will set up the AWS infrastructure and services on the agency's premises. You can find out more about this here: https://aws.amazon.com/federal/us-intelligence-community/

Now that we have gained a better understanding of cloud computing in general, let's get more granular and learn about how AWS does cloud computing.
What is AWS cloud computing?
AWS is the undisputed market leader in cloud computing today, and even though there are a few worthy competitors, it doesn't seem like anyone will push it off the podium for a while. Why is this, and how can we be sure AWS will remain a top player for years? Because this pattern has occurred repeatedly in the history of the technology industry. Geoffrey A. Moore, Paul Johnson, and Tom Kippola explained this pattern best a long time ago in their book The Gorilla Game: Picking Winners in High Technology. Some important concepts covered in their book are listed here:
- There are two kinds of technology markets: Gorilla Games and Royalty Markets. In a Gorilla Game, the players are dubbed gorillas and chimps. In a Royalty Market, the participants are kings, princes, and serfs.
- Gorilla Games exist because the market leaders possess proprietary technology that makes it difficult for competitors to compete. This proprietary technology creates a moat that can be difficult to overcome.
- In Royalty Markets, the technology has been commoditized and standardized. In a Royalty Market, it's challenging to become the leader, and it's easy to fall from the number one position.
- The more proprietary features a gorilla creates in its product, the bigger the moat they establish. The more difficult and expensive it becomes to switch to a competitor, the stronger the gorilla becomes.
- This creates a virtuous cycle for the market leader or gorilla. The market leader's product or service becomes highly desirable, meaning they can charge more and sell more. They can then reinvest that profit to improve the product or service.
- Conversely, a vicious cycle is created for the second-tier competitors, or chimps. Their product or service is not as desirable, so they cannot charge as much for it, and because they don't have as many sales, their research and development budget will never be as large as the market leader's.
- The focus of this book is on technology, but if you are interested in investing in technology companies, the best time to invest in a gorilla is when the market is about to enter a period of hypergrowth. At this point, the gorilla might not be fully determined, and it's best to invest in the gorilla candidates and sell stock as it becomes obvious that they won't be a gorilla and reinvest the proceeds of that sale into the emerging gorilla.
- Once a gorilla is established, the way a gorilla is vanquished is by a complete change in the game, where new disruptive technology creates a brand new game.
To understand this better, let's look at an example of a Royalty Market and an example of a Gorilla Game.

Personal computers and laptops – Back in the early 1980s, when PCs burst onto the scene, many players emerged that sold personal computers, such as these:
- Hewlett Packard
I don't know about you, but whenever I buy a computer, I go to the store, see which computer is the cheapest and has the features I want, and pull the trigger regardless of the brand. This is a perfect example of a Royalty Market: it is difficult to differentiate yourself and stand out, and there is little to no brand loyalty among consumers.

Personal computer operating systems – Whenever I buy a new computer, I make sure that it comes with Microsoft Windows, the undisputed market leader in the space. Yes, the Macintosh operating system has been around for a long time, Linux has been around for a while making some noise, and the Google Chrome operating system is making some inroads, especially in the educational market. But ever since it was launched in November 1985, Microsoft Windows has kept the lion's share of the market (or should we say the gorilla's share?).

Of course, this is a subjective opinion, but I believe we are witnessing the biggest Gorilla Game in the history of computing with the advent of cloud computing. This is the mother of all competitive wars. Cloud vendors are not only competing to provide basic services, such as compute and storage, but are continuing to build more services on top of these core services to lock in their customers further and further. Vendor lock-in is not necessarily a bad thing. Lock-in, after all, is a type of golden handcuff: customers stay because they like the services they are being offered. But customers also realize that as they use more and more services, it becomes more and more expensive to transfer their applications and workloads to an alternate cloud provider.

Not all cloud services are highly intertwined with their cloud ecosystems. Take these scenarios, for example:
- Your firm may be using AWS services for many purposes, but they may be using WebEx, Microsoft Teams, Zoom, or Slack for their video conference needs instead of Amazon Chime. These services have little dependency on other underlying core infrastructure cloud services.
- You may be using Amazon SageMaker for artificial intelligence and machine learning projects, but you may be using the TensorFlow package in SageMaker as your development kernel, even though Google maintains TensorFlow.
- If you are using Amazon RDS and choose MySQL as your database engine, you should not have too much trouble porting your data and schemas over to another cloud provider that supports MySQL if you decide to switch over.
It will be a lot more difficult to switch to some other services. Here are some examples:
- Amazon DynamoDB is a proprietary NoSQL database offered only by AWS. If you want to switch to another NoSQL database, porting it may not be a simple exercise.
- Suppose you are using CloudFormation to define and create your infrastructure. In that case, it will be difficult, if not impossible, to use your CloudFormation templates to create infrastructure in other cloud providers' environments. If the portability of your infrastructure scripts is important to you and you are planning to switch cloud providers, Terraform by HashiCorp may be a better alternative, since Terraform is cloud-agnostic.
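To make the contrast concrete, here is a minimal, illustrative CloudFormation template fragment that provisions a single EC2 instance (the logical resource name and the AMI ID are placeholders). A template like this only has meaning inside AWS, whereas the equivalent Terraform configuration could sit alongside resources from other providers:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal illustrative template - a single EC2 instance
Resources:
  WebServer:                      # placeholder logical resource name
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-12345678       # placeholder AMI ID
```

Everything here, from the template format version to the `AWS::EC2::Instance` resource type, is AWS-specific, which is exactly why such templates do not port to other clouds.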
- Suppose you have a graph database requirement and use Amazon Neptune (the native Amazon graph database offering). You may have difficulty porting out of Amazon Neptune since the development language and format can be quite dissimilar if you decide to use another graph database solution like Neo4j or TigerGraph.
As far as we have come in the last 15 years with cloud technologies, I believe these are still the early innings. Vendors realize that locking customers in now, while those customers are still deciding on a vendor, will be a lot easier than trying to do so after they have picked a competitor.
However, a cloud-agnostic strategy has its pros and cons. You may want to distribute your workloads between cloud providers to obtain competitive pricing and keep your options open, as in the old days. But each cloud has different networking requirements, and connecting distributed workloads across clouds so they can communicate with each other is a complex task. Also, each major cloud provider (AWS, Azure, and GCP) has its own breadth of services, and building a workforce skilled in all three is another cost. Finally, clouds like AWS provide economies of scale, meaning the more you use, the lower your unit price, a benefit you dilute if you choose multi-cloud. Again, this doesn't mean you cannot choose a multi-cloud strategy, but you have to think about logical workload isolation. It would not be wise to run the application layer in one cloud and the database layer in another, but you can apply logical isolation, such as running the analytics workload and the application workload in separate clouds.

A good example of one of those make-or-break decisions is the awarding of the Joint Enterprise Defense Infrastructure (JEDI) cloud computing contract by the Pentagon. JEDI is a $10 billion, 10-year contract. As big as that dollar figure is, even more important is that it would be nearly impossible for the Pentagon to switch to another vendor once the 10-year contract is up.

Let's delve a little deeper into how influential AWS currently is and how influential it has the potential to become.
Basic cloud and AWS terminology
There is a constant effort by technology companies to offer common standards for certain technologies while providing exclusive and proprietary technology that no one else offers. An example of this can be seen in the database market. The Structured Query Language (SQL) and the ANSI SQL standard have been around for a long time; the American National Standards Institute (ANSI) adopted SQL as the SQL-86 standard in 1986. Since then, database vendors have continuously supported this standard while offering various extensions to make their products stand out and lock customers in to their technology.

Cloud providers offer the same core functionality for a wide variety of customer needs, but they all feel compelled to name these services differently, no doubt in part to try to separate themselves from the rest of the pack. As an example, every major cloud provider offers compute services; in other words, it is simple to spin up a server with any provider, but they all refer to this compute service differently:
- AWS offers Amazon Elastic Compute Cloud (EC2) instances.
- Azure uses Azure Virtual Machines.
- GCP uses Google Compute Engine.
The following tables give a non-comprehensive list of the different core services offered by AWS, Azure, and GCP and the names used by each of them:
These are some of the other services, including serverless technologies services and database services:
These are additional services:
If you are confused by all the terms in the preceding tables, don't fret. We will learn about many of these services, and when to use them, throughout the book. In the next section, we will learn why cloud services are becoming so popular and why AWS adoption in particular is so prevalent.
Why is cloud computing so popular?
Depending on who you ask, some estimates peg the global cloud computing market at around USD 445 billion in 2021, growing to about USD 950 billion by 2026. This implies a Compound Annual Growth Rate (CAGR) of around 17% for the period.

There are multiple reasons why the cloud market is growing so fast. Some of them are listed here:
- Elasticity
- Faster hardware cycles
- Reduced need for system administration staff
- Faster time to market
- Access to emerging technologies
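The CAGR quoted above is easy to sanity-check. A minimal computation in Python, using the market estimates mentioned earlier (USD 445 billion in 2021 growing to USD 950 billion in 2026):

```python
# Implied compound annual growth rate (CAGR) from the market estimates above:
# USD 445 billion in 2021 growing to USD 950 billion in 2026 (5 years).
start, end, years = 445, 950, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 16-17% per year
```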
Let's look at the most important one first.
Elasticity may be one of the most important reasons for the cloud's popularity. Let's first understand what it is.

Do you remember the feeling of going to a toy store as a kid? There is no feeling like it in the world. Puzzles, action figures, games, and toy cars are all at your fingertips, ready for you to play with. There was only one problem: you could not take the toys out of the store. Your mom or dad always told you that you could only buy one toy. You always had to decide which one you wanted, and invariably, after one or two weeks of playing with that toy, you got bored with it; the toy ended up in a corner collecting dust, and you were left longing for the toy you didn't choose.

What if I told you about a special, almost magical, toy store where you could rent toys for as long or as little as you wanted, and the second you got tired of a toy, you could return it, exchange it for another toy, and stop any rental charges for the first toy? Would you be interested?

The difference between the first, traditional store and the second, magical store is what differentiates on-premises environments from cloud environments. The first toy store is like setting up infrastructure on your own premises: once you purchase a piece of hardware, you are committed to it and will have to use it until you decommission it or sell it at a fraction of what you paid. The second toy store is analogous to a cloud environment.
If you make a mistake and provision a resource that's too small or too big for your needs, you can transfer your data to a new instance, shut down the old instance, and, importantly, stop paying for it.

More formally defined, elasticity is the ability of a computing environment to adapt to changes in workload by automatically provisioning or shutting down computing resources to match the capacity needed by the current workload. In AWS and the other main cloud providers, resources can be shut down without being terminated completely, and billing stops while they are shut down.

This distinction cannot be emphasized enough. Computing costs in a cloud environment on a per-unit basis may even be higher than on-premises prices, but the ability to shut resources down and stop getting charged for them makes cloud architectures cheaper in the long run, often quite significantly. The only time absolute on-premises costs may be lower than cloud costs is when workloads are extremely predictable and consistent. Let's look at exactly what this means by reviewing a few examples.
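Elasticity is easy to picture as a simple control loop: measure the load, compute how many instances are needed to serve it, and provision or retire instances to match. The sketch below (using hypothetical figures, not a real AWS API) captures the core calculation behind an auto-scaling policy:

```python
import math

def desired_instances(current_load: float, capacity_per_instance: float,
                      min_instances: int = 1, max_instances: int = 20) -> int:
    """Number of instances needed to serve the current load,
    clamped to a configured minimum and maximum fleet size."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))

# Assume each instance handles ~100 requests/second (hypothetical figure).
print(desired_instances(950, 100))   # peak traffic -> 10 instances
print(desired_instances(40, 100))    # quiet period -> 1 instance (the minimum)
```

A real auto-scaling service layers cooldowns, health checks, and smoothing on top of this, but the scale-to-match-demand idea is the same.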
A common use case for cloud services is running an online storefront. Website traffic in this scenario is highly variable depending on the day of the week, whether it's a holiday, the time of day, and other factors. Almost every retail store in the USA experiences more than 10x its usual workload during Thanksgiving week, and the same goes for Boxing Day in the UK, Diwali in India, and Singles' Day in China; almost every country has some shopping festival. This kind of scenario is ideally suited to a cloud deployment. We can set up auto-scaling that automatically scales compute resources up and down as needed, and we can set up policies that allow database storage to grow as needed.
Apache Spark and Hadoop workloads
The popularity of Apache Spark and Hadoop continues to increase. Many Spark clusters don't need to run continuously: they perform heavy batch computation for a period and can then sit idle until the next batch of input data comes in. A specific example would be a cluster that runs every night for 3 or 4 hours, and only during the working week. In this instance, you want compute decoupled from data storage, so that the compute resources can be shut down on a schedule rather than managed by demand thresholds, or shut down automatically by triggers once the batch jobs complete. AWS provides that flexibility: you can store your data in Amazon Simple Storage Service (S3), spin up an Amazon EMR (Elastic MapReduce) cluster to run the Spark jobs, and shut the cluster down after storing the results back in the decoupled Amazon S3. You will learn more about these services in upcoming chapters.
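In boto3, the EMR `run_job_flow` call accepts an `Instances` configuration whose `KeepJobFlowAliveWhenNoSteps` flag controls exactly this behavior: set to `False`, the cluster terminates itself once its submitted steps finish. The sketch below only builds the request parameters (the cluster name, release label, instance counts, and S3 path are all hypothetical) without calling AWS:

```python
# Request parameters for a transient (auto-terminating) EMR cluster.
# All concrete values below (name, release label, sizes, paths) are hypothetical.
transient_cluster = {
    "Name": "nightly-spark-batch",
    "ReleaseLabel": "emr-6.9.0",
    "Instances": {
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        # The key setting: terminate the cluster when all steps complete,
        # so billing stops as soon as the nightly batch job is done.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    "LogUri": "s3://my-bucket/emr-logs/",  # hypothetical bucket
}

# With boto3, this dict would be passed (along with IAM roles and job steps) as:
#   boto3.client("emr").run_job_flow(**transient_cluster)
```

A production request would also specify the job steps and the IAM service and instance roles; this fragment focuses on the auto-termination setting discussed above.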
In an on-premises setting, you provide each member of your development team with a high-configuration desktop or laptop and pay for it 24 hours a day, including weekends, even though they use roughly a quarter of that capacity given an eight-hour workday. The cloud instead provides virtual desktops, accessible from a low-configuration laptop, that you can schedule to stop during off-hours and weekends, saving almost 70% of the cost.

Another common use case in technology is file and object storage. Some storage services grow organically and consistently, and their traffic patterns can be similarly consistent. This may be one example where an on-premises architecture makes sense economically, because the usage pattern is consistent and predictable.

Elasticity is by no means the only reason the cloud is growing in leaps and bounds. The ability to easily enable world-class security for even the simplest applications is another reason the cloud is becoming pervasive. Let's understand this at a deeper level.
The perception of on-premises environments being more secure than cloud environments was a common reason companies big and small would not migrate to the cloud. More and more enterprises now realize that it is tough and expensive to replicate the security features provided by cloud providers such as AWS. Let's look at a few of the measures that AWS takes to ensure the security of its systems.
You probably have a better chance of getting into the Pentagon without a badge than getting into an Amazon data center. AWS data centers are continuously upgraded with the latest surveillance technology, and Amazon has had decades to perfect their design, construction, and operation.

AWS has been providing cloud services for over 15 years, and it has an army of technologists, solution architects, and some of the brightest minds in the business. It leverages this experience and expertise to create state-of-the-art data centers. These centers are housed in nondescript facilities; you could drive by one and never know what it is, and even if you find out where one is, it will be extremely difficult to get in. Perimeter access is heavily guarded. Visitor access is strictly limited, and visitors must always be accompanied by an Amazon employee. Every corner of the facility is monitored by video surveillance, motion detectors, intrusion detection systems, and other electronic equipment, and Amazon employees with access to the building must authenticate themselves four times to step onto the data center floor.

Only Amazon employees and contractors who have a legitimate right to be in a data center can enter; all other employees are restricted. Whenever an employee no longer has a business need to enter a data center, their access is immediately revoked, even if they merely move to another Amazon department and stay with the company. Lastly, audits are routinely performed as part of the normal business process.
AWS makes it extremely simple to encrypt data at rest and data in transit, and it offers a variety of encryption options. For example, for encryption at rest, data can be encrypted on the server side or on the client side. Additionally, the encryption keys can be managed by AWS, or you can manage them yourself using tamper-resistant appliances such as a Hardware Security Module (HSM). AWS provides a dedicated CloudHSM service to secure your encryption keys if you need one.
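For example, with boto3 (the AWS SDK for Python), requesting server-side encryption with a KMS key on an S3 upload comes down to a couple of request parameters. The sketch below only assembles those parameters rather than making a network call; the bucket name, object key, and key ARN are hypothetical:

```python
def sse_kms_put_params(bucket, key, body, kms_key_id=None):
    """Build boto3 s3.put_object kwargs for encryption at rest.

    With kms_key_id, S3 encrypts using that customer-managed KMS key;
    without it, 'aws:kms' falls back to the AWS-managed key for S3.
    """
    params = {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",
    }
    if kms_key_id:
        params["SSEKMSKeyId"] = kms_key_id
    return params

# Hypothetical bucket, key, and KMS key ARN, for illustration only.
params = sse_kms_put_params(
    "example-bucket", "report.csv", b"data",
    kms_key_id="arn:aws:kms:us-east-1:111122223333:key/example",
)
print(params["ServerSideEncryption"])  # aws:kms
# In a real session you would call: boto3.client("s3").put_object(**params)
```

Client-side encryption, by contrast, means the data is already ciphertext before it ever leaves your machine, so AWS never sees the plaintext.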
Compliance standards supported by AWS
AWS has robust controls to allow users to maintain security and data protection. Just as AWS shares security responsibilities with its customers, it shares compliance responsibilities too. AWS provides many attributes and features that enable compliance with standards established in different countries and organizations, which simplifies compliance audits. AWS enables the implementation of security best practices and many security standards, such as these:
- SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70)
- SOC 2
- SOC 3
- FISMA, DIACAP, and FedRAMP
- PCI DSS Level 1
- DOD CSM Levels 1-5
- ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018
- MTCS Level 3
- FIPS 140-2
- HITRUST
In addition, AWS enables the implementation of solutions that can meet many industry-specific standards, such as these:
- Criminal Justice Information Services (CJIS)
- Family Educational Rights and Privacy Act (FERPA)
- Cloud Security Alliance (CSA)
- Motion Picture Association of America (MPAA)
- Health Insurance Portability and Accountability Act (HIPAA)
Another important factor that explains the meteoric rise of the cloud is how you can stand up highly available applications without paying for the additional infrastructure needed to provide them. Architectures can be crafted to start additional resources when other resources fail. This ensures that we only bring up additional resources when necessary, keeping costs down. Let's analyze this important property of the cloud in more depth.
When we deploy infrastructure in an on-premises environment, we have two choices: purchase just enough hardware to service the current workload, or ensure that there is enough excess capacity to account for any failures. Providing this extra capacity and eliminating single points of failure is not as simple as it may seem. There are many places where single points of failure may exist and need to be eliminated:
- Compute instances can go down, so we need a few on standby.
- Databases can get corrupted.
- Network connections can be broken.
- Data centers can flood or be hit by earthquakes.
Using the cloud simplifies the "single point of failure" problem. We have already determined that provisioning software in an on-premises data center can be long and arduous. Spinning up new resources can take just a few minutes in a cloud environment, so we can configure minimal environments, knowing that additional resources are a click away.

AWS data centers are built in different regions across the world. All data centers are "always-on" and deliver services to customers; AWS does not have "cold" data centers. Their systems are extremely sophisticated and automatically route traffic to other resources if a failure occurs. Core services are always installed in an N+1 configuration, so in the case of a complete data center failure, there should be enough capacity to handle traffic using the remaining data centers without disruption.

AWS enables customers to deploy instances and persist data in more than one geographic region and across multiple data centers within a region. Data centers are deployed in fully independent zones and constructed with enough separation between them that the likelihood of a natural disaster affecting two of them simultaneously is very low. Additionally, data centers are not built in flood zones.

Data centers have discrete Uninterruptible Power Supplies (UPSes) and onsite backup generators to increase resilience. They are also connected to multiple electric grids from multiple independent utility providers, and connected redundantly to multiple tier-1 transit providers. Doing all this minimizes single points of failure.
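The N+1 sizing idea can be expressed with a little arithmetic. This sketch (with hypothetical traffic numbers) shows the capacity each zone needs so that the surviving zones can absorb the full load when any single zone fails:

```python
def per_zone_capacity(total_load, zones):
    """Capacity each zone needs so that the remaining zones can carry
    the full load if any one zone fails (the "+1" in N+1)."""
    assert zones >= 2, "N+1 sizing needs at least two zones"
    return total_load / (zones - 1)

# Hypothetical example: serving 90,000 requests/sec across 3 zones means
# each zone must be sized for 45,000 req/sec, so the two survivors can
# carry the whole load if one zone goes down.
print(per_zone_capacity(90_000, 3))  # 45000.0
```

Note how the overhead shrinks as the zone count grows: with two zones each must carry 100% of the load, but with four zones each only needs one-third of it.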
Faster hardware cycles
When hardware is provisioned on-premises, it starts becoming obsolete from the instant it is purchased. Hardware prices have been on an exponential downtrend since the first computer was invented, so the server you bought a few months ago may now be cheaper, or a newer, faster version may be out at the same price. However, waiting until hardware improves or becomes cheaper is not an option; a purchase decision needs to be made at some point.

Using a cloud provider instead eliminates these problems. For example, whenever AWS offers new and more powerful processor types, using them is as simple as stopping an instance, changing the instance type, and starting the instance again. In many cases, AWS keeps the price the same, or even lowers it, when better and faster processors become available, especially with its own proprietary technology such as the Graviton chip.
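With boto3 (the AWS SDK for Python), the stop, modify, and start sequence described above maps to three EC2 API calls. The sketch below only builds the call plan rather than executing it (executing would require credentials and a running instance); the instance ID and target type are hypothetical:

```python
def instance_type_change_plan(instance_id, new_type):
    """Return the ordered EC2 API calls needed to move an instance
    to a newer instance type (e.g., a Graviton-based generation)."""
    return [
        ("stop_instances", {"InstanceIds": [instance_id]}),
        ("modify_instance_attribute",
         {"InstanceId": instance_id, "InstanceType": {"Value": new_type}}),
        ("start_instances", {"InstanceIds": [instance_id]}),
    ]

# Hypothetical instance ID and type, for illustration only.
plan = instance_type_change_plan("i-0123456789abcdef0", "m7g.large")
for api, kwargs in plan:
    print(api)
# With credentials configured, you could execute the plan with:
#   ec2 = boto3.client("ec2")
#   for api, kwargs in plan: getattr(ec2, api)(**kwargs)
```

The key point is that no hardware is purchased or retired; the same three calls work whether you are moving to a slightly larger size or a whole new processor family.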
System administration staff
An on-premises implementation may require a full-time system administration staff and a process to ensure that the team remains fully staffed. Using cloud services offloads many of these tasks to the cloud provider, allowing you to focus on core application maintenance and functionality rather than infrastructure upgrades, patches, and maintenance.

By offloading these tasks to the cloud provider, costs can come down, because administrative duties are shared with other cloud customers instead of requiring a dedicated staff.
The six pillars of a well-architected framework
That all leads us nicely into this section. The cloud in general, and AWS in particular, are so popular because they simplify the development of well-architected frameworks. If there is one must-read AWS document, it is the one titled AWS Well-Architected Framework, which spells out the six pillars of a well-architected framework. The full document can be found here: https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html. AWS also provides the Well-Architected Review (WAR) Tool, which offers prescriptive guidance on each pillar to validate your workload against architecture best practices and generate a comprehensive report. Please find a glimpse of the tool below:
To kick off a well-architected review for your workload, you first need to provide workload information such as the name, environment type (production or pre-production), AWS hosting Regions, industry, and reviewer name. After submitting this information, you will see in the WAR tool screenshot above that there is a set of questions for each well-architected pillar, with the option to select what is most relevant to your workload. For each question, AWS provides prescriptive guidance and various resources in the right-hand navigation to help you apply architecture best practices.

As AWS has provided detailed guidance on each WAR pillar in its documentation, let's look at the main points of the six pillars of a well-architected framework.
The first pillar – Security
Security should always be a top priority in both on-premises and cloud architectures. All security aspects should be considered, including data encryption and protection, access management, infrastructure security, network security, monitoring, and breach detection and inspection.
To enable system security and to guard against nefarious actors and vulnerabilities, AWS recommends these architectural principles:
- Implement a strong identity foundation
- Enable traceability
- Apply security at all levels
- Automate security best practices
- Protect data in transit and at rest
- Keep people away from data
- Prepare for security events
You can find the security pillar checklist from the Well-Architected Tool below; it has ten questions, each with one or more options relevant to your workload.

The next pillar, reliability, is almost as important as security, as you want your workload to perform its business function consistently and reliably.
The second pillar – Reliability
Another characteristic of a well-architected framework is minimizing or eliminating single points of failure. Ideally, every component should have a backup, and the backup should be able to come online as quickly as possible and in an automated manner, without human intervention. Self-healing is another important concept for attaining reliability.

An example of this is how Amazon S3 handles data replication. At any given time, there are at least six copies of any object stored in Amazon S3. If one of the resources storing one of these copies fails, AWS automatically recovers from the failure, marks that resource as unavailable, and creates another copy of the object using a healthy resource to keep the number of copies at six.

The well-architected framework paper recommends these design principles to enhance reliability:
- Automatically recover from failure
- Test recovery procedures
- Scale horizontally to increase aggregate workload availability
- Stop guessing capacity
- Manage change through automation
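To see why replication of the kind S3 uses improves reliability, here is a toy calculation. It is purely illustrative (it assumes independent failures and is not AWS's published durability model), but it shows how the chance of losing every copy of an object shrinks geometrically with the replica count:

```python
def loss_probability(p_single, copies):
    """Probability of losing ALL copies of an object in some period,
    assuming each replica fails independently with probability p_single.
    Illustrative only; real durability models are far more involved."""
    return p_single ** copies

# With a (deliberately pessimistic, hypothetical) 1% chance of losing a
# single replica, six independent replicas take the loss probability
# from one in a hundred down to roughly one in a trillion.
print(loss_probability(0.01, 1))   # 0.01
print(loss_probability(0.01, 6))   # ~1e-12
```

Self-healing strengthens this further: because failed replicas are replaced automatically, the system rarely operates with fewer than the full set of copies.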
You can find the reliability pillar checklist from Well-Architected Tool below:
To retain users, you need your application to be highly performant and to respond within seconds or milliseconds, depending on the nature of your workload. This makes performance a key pillar when building your application. Let's look at performance efficiency in more detail.
The third pillar – Performance efficiency
In some respects, over-provisioning resources is just as bad as not having enough capacity to handle your workloads. Launching an instance that is constantly idle or almost idle is a sign of bad design; resources should be right-sized and utilized efficiently. AWS provides various features and services to assist in creating architectures with high efficiency. However, we are still responsible for ensuring that the architectures we design are suitable and correctly sized for our applications.

When it comes to performance efficiency, the recommended design best practices are as follows:
- Democratize advanced technologies
- Go global in minutes
- Use serverless architectures
- Experiment more often
- Consider mechanical sympathy
You can find the performance efficiency pillar checklist from the Well-Architected Tool below, with eight questions covering multiple aspects to make sure your architecture is optimized for performance:
Cost optimization is one of the primary motivators for businesses to move to the cloud. However, the cloud can become expensive if you don't apply best practices and instead run your cloud workload the same way you ran your on-premises workload. With proper cost optimization techniques, the cloud can save you a significant amount of money. Let's look into the next pillar, cost optimization.
The fourth pillar – Cost optimization
This pillar is related to the third pillar: an efficient architecture can accurately handle varying application loads and adjust as traffic changes. Additionally, your architecture should identify when resources are not being used and allow you to stop them or, even better, have these unused compute resources stopped for you. AWS also provides monitoring tools that can automatically shut down resources when they are not being utilized. We strongly encourage you to adopt a mechanism to stop these resources once they are identified as idle; this is especially useful in development and test environments.

To enhance cost optimization, these principles are suggested:
- Implement cloud financial management
- Adopt a consumption model
- Measure overall efficiency
- Stop spending money on undifferentiated heavy lifting
- Analyze and attribute expenditure
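As a concrete illustration of spotting idle resources, here is a minimal sketch of the kind of check you might run over recent CPU-utilization samples (for example, pulled from CloudWatch) before deciding to stop an instance. The threshold and the sample data are hypothetical:

```python
def is_idle(cpu_samples, threshold_pct=5.0):
    """Treat an instance as idle if every recent CPU-utilization sample
    (in percent) sits below the threshold. An empty sample list is
    treated as "not enough data", not as idle."""
    return bool(cpu_samples) and all(s < threshold_pct for s in cpu_samples)

# Hypothetical hourly CPU readings (percent) for two instances.
print(is_idle([1.2, 0.8, 2.5]))    # True  -> candidate for stopping
print(is_idle([1.2, 40.0, 2.5]))   # False -> leave it running
```

In practice, you would run a check like this on a schedule and feed the idle candidates into an automated stop action, rather than relying on someone remembering to shut things down.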
Whenever possible, use AWS-managed services instead of services you need to manage yourself. Managed cloud-native services should lower your administration expenses. You can find the cost optimization pillar checklist from the Well-Architected Tool below, with ten questions covering multiple aspects to make sure your architecture is optimized for cost:
Significant work starts after deploying your production workload, which makes operational excellence a critical factor. You need to make sure your application maintains the expected performance in production and improves efficiency by applying maximum automation. Let's look at the operational excellence pillar in more detail.
The fifth pillar – Operational excellence
The operational excellence of a workload should be measured across these dimensions:
The ideal way to optimize these metrics is to standardize and automate the management of these workloads. To achieve operational excellence, AWS recommends these principles:
- Perform operations as code
- Make frequent, small, reversible changes
- Refine operations procedures frequently
- Anticipate failure
- Learn from all operational failures
You can find the operational excellence pillar checklist from Well-Architected Tool below with 11 questions covering multiple aspects to make sure your architecture is optimized to run in production:
Sustainability is now the talk of the town, as organizations worldwide recognize their social responsibilities and pledge to make their businesses more sustainable. As a leader, AWS was the first cloud provider to launch sustainability as an architecture practice, at re:Invent 2021. Let's look at the sustainability pillar of the Well-Architected Framework in more detail.
The sixth pillar – Sustainability
As more and more organizations adopt the cloud, cloud providers can lead the charge to make the world more sustainable by improving the environment, the economy, society, and human life. The United Nations World Commission on Environment and Development defines sustainable development as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs." Your organization can have direct or indirect negative impacts on the earth's environment through carbon emissions or by damaging natural resources such as clean water or farmland. To reduce environmental impact, it's important to talk about sustainability and put it into practice wherever possible. AWS supports this by adding a sixth pillar to its Well-Architected Framework, with the following design principles:
- Understand your impact
- Establish sustainability goals
- Maximize utilization
- Anticipate and adopt new, more efficient hardware and software offerings
- Use managed services
- Reduce the downstream impact of your cloud workloads
You can find the sustainability pillar checklist from the Well-Architected Tool below, with six well-thought-out questions covering multiple aspects to make sure your architecture is sustainable:
While the Well-Architected Framework provides generic guidance that applies across workloads, specialized workloads need more specific architectural practices. That's why AWS published Well-Architected Lenses, which address workload- and domain-specific needs. Let's take an overview of AWS's well-architected lenses.
AWS Well-Architected Lenses
As of April 2022, AWS has launched 13 well-architected lenses addressing architecting needs specific to technology workloads and industry domains. The following are the most important lenses available for AWS's Well-Architected Framework:
- Serverless Applications Lens – Building a serverless workload saves cost and offloads infrastructure maintenance to the cloud. The serverless lens provides details on best practices for architecting serverless application workloads in the AWS cloud. More information on its design principles is available on the AWS website: https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens.
- Internet of Things (IoT) Lens – To design an IoT workload, you must know how to manage and secure millions of devices that need to connect over the internet. The IoT lens provides details on designing IoT workloads. More details on its design principles are available on the AWS website: https://docs.aws.amazon.com/wellarchitected/latest/iot-lens.
- Data Analytics Lens – Data is the new gold. Every organization is trying to put its data to the best use to gain insights about its customers and improve its business. The data analytics lens provides best practices for building data pipelines. More details on its design principles are available on the AWS website: https://docs.aws.amazon.com/wellarchitected/latest/analytics-lens.
- Machine Learning (ML) Lens – ML applies to almost any workload, especially gaining future insights from historical data. With the ever-increasing adoption of ML workloads, it is essential to be able to productionize ML models and use them at scale. The ML lens provides best practices for training, tuning, and deploying your ML models. More details on its design principles are available on the AWS website: https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens.
- Hybrid Networking Lens – Networking is the backbone of any application workload, whether on-premises or in the cloud. As enterprises adopt the cloud, the need for hybrid setups that connect on-premises and cloud workloads grows every day. The AWS hybrid networking lens walks you through best practices for designing networks for the hybrid cloud. More details on its design principles are available on the AWS website: https://docs.aws.amazon.com/wellarchitected/latest/hybrid-networking-lens.
Above, we have covered some of the most important lenses, but I encourage you to explore the other industry-focused well-architected lenses, such as Games, Streaming Media, and Financial Services, as well as the workload-specific lenses, including SAP, SaaS, HPC (High-Performance Computing), and FTR (Foundational Technical Review), to validate your cloud platforms. You can apply various lenses when defining your workload in AWS's Well-Architected Tool, as shown below:
After applying the lens to your workload, you will get a best practice checklist specific to the domain; for example, the below screenshot shows a Well-Architected checklist for Serverless Lens:
AWS users need to constantly evaluate their systems to ensure that they follow the recommended principles of the AWS Well-Architected Framework and the AWS Well-Architected Lenses, and that they comply with architecture best practices. Now that you are getting more curious about AWS, let's learn how to build your knowledge of the AWS cloud and establish yourself as a subject matter expert.
Building credibility and getting certified
It is hard to argue that the cloud is not an important technology shift, and we have established that AWS is the clear market and thought leader in the cloud space. Comparing the cloud to an earthquake, we could say that it started as a slight rumbling that kept getting louder; we are now at a point where the walls are shaking, and it's only getting stronger.

In the market share, influence, and adoption of AWS section, we introduced the concept of FOMO. We mentioned that enterprises are now eager to adopt cloud technologies because they do not want to fall behind their competition and become obsolete. Hopefully, by now, you are excited to learn more about AWS and other cloud providers, or at the very least, you're getting a little nervous and catching a little FOMO yourself.

We will devote the rest of this chapter to showing you the path of least resistance to becoming an AWS guru, someone who can bill themselves as an AWS expert. As with other technologies, it is hard to become an expert without hands-on experience, and it's hard to get hands-on experience if you can't demonstrate that you're an expert. The best method, in my opinion, for cracking this chicken-and-egg problem is to get certified.

Fortunately, AWS offers a wide array of certifications that demonstrate your deep AWS knowledge and expertise to potential clients and employers. As AWS creates more and more services, it continues to offer new certificates aligned with them. The following are the AWS certifications listed on the AWS website as of April 2022.
Note: The above list does not include the AWS SAP – Specialty certification, which is currently in beta and will be available for everyone starting April 26, 2022. AWS continuously updates its existing certification exams to accommodate new services and feature launches. Let's review the available certifications and how they fit into your career aspirations to enhance your existing cloud skills.
Building an AWS cloud career for non-tech roles
Cloud jobs are often seen as very tech-savvy roles. However, that is not always the case: several cloud roles don't require deep technical knowledge, and a basic understanding will get your foot in the door to start a cloud career. For example, anyone with a sales and marketing background can thrive in a cloud marketing, cloud business development, or cloud sales role without deep technical knowledge. Similarly, program managers are needed in every industry, and basic cloud knowledge will help you get started in such a role. It's still recommended to build foundational cloud knowledge to better prepare yourself, which you can gain through the AWS Certified Cloud Practitioner certification. Let's look at it in more detail.
AWS Certified Cloud Practitioner – Foundational
This is the most basic certification offered by AWS. It is meant to demonstrate a broad-stroke understanding of the core services and foundational knowledge of AWS. It is also a good certification for non-technical people who need to communicate using AWS lingo but are not necessarily going to configure or develop in AWS. It is ideal for demonstrating a basic understanding of AWS technologies for salespeople, business analysts, marketing associates, executives, and project managers.
AWS Solutions Architect Path
A solutions architect is one of the most sought-after roles in the cloud industry. Solutions architects often carry the responsibility of designing workloads in the cloud and applying architecture best practices using the AWS Well-Architected Tool. The following AWS certifications can help you kick-start your career as an AWS cloud solutions architect.
AWS Certified Solutions Architect – Associate
IMPORTANT NOTE - Starting August 30, 2022, a new version of the AWS Certified Solutions Architect - Associate exam will be available.
This is the most popular certification offered by AWS. Many technically minded developers, architects, and administrators skip the Cloud Practitioner certification and start with this one instead. If you are looking to demonstrate technical expertise in AWS, obtaining this certification is a good start and the bare minimum for demonstrating AWS proficiency. However, to demonstrate proficiency in architecting IT workloads in the AWS cloud, you should pursue the Solutions Architect – Professional certification, described next.
AWS Certified Solutions Architect – Professional
This certification is one of the toughest to get, and at least five to six times harder than the Associate-level certification. Earning it demonstrates to employers that you have a deep and thorough understanding of AWS services, best practices, and optimal architectures based on the particular business requirements of a given project. Obtaining this certification shows potential employers that you are an expert in designing and creating distributed systems and applications on the AWS platform. Having at least one of the Associate-level certifications used to be a prerequisite for sitting the Professional-level certifications, but AWS has eliminated that requirement.

You can refer to Solutions Architect's Handbook, Second Edition, available on Amazon (https://www.amazon.com/gp/product/1801816611), for more details on the AWS solutions architect role and to gain in-depth knowledge of building use-case-focused architectures on the AWS platform. DevOps is one of the key components of operationalizing any workload. Let's learn more about the DevOps path in AWS.
AWS Cloud DevOps Engineer Path
DevOps is a critical engineering function that makes development teams more agile by automating the deployment pipeline. Automation is key to adopting the cloud and using its full potential, and DevOps engineers play an essential role in it. The following AWS certifications can help you navigate the DevOps path in the AWS cloud.
AWS Certified SysOps Administrator – Associate
This certification will demonstrate to potential employers and clients that you have experience deploying, configuring, scaling up, managing, and migrating applications using AWS services. You should expect the difficulty level of this certification to be a little bit higher than the other Associate-level certifications, but also expect quite a bit of overlap in the type of questions that will be asked with this certification and the other Associate-level certifications.
AWS Certified DevOps Engineer – Professional
This advanced AWS certification validates knowledge on how to provision, manage, scale, and secure AWS resources and services. This certification will demonstrate to potential employers that you can run their DevOps operations and proficiently develop solutions and applications in AWS. This certification is more challenging than any associate certification but easier than AWS Solutions Architect Professional certification.
AWS Cloud Developer Path
Developers are central to any IT application. They are the builders who bring ideas to life, which makes them a vital role in the cloud. Software developers tend to focus on programming languages and algorithms, but to build software in the cloud, they also need to be aware of the various development tools that cloud providers offer. The following certification helps you gain the cloud knowledge required for building software on AWS.
AWS Certified Developer – Associate
Obtaining this certification will demonstrate your ability to design, develop, and deploy applications in AWS. Even though this is a developer certification, do not expect to write code in any of the exam questions; however, knowing at least one programming language supported by AWS will help you achieve this certification. Expect to see many of the same concepts and similar questions to what you would see in the Solutions Architect certification. AWS doesn't have a professional certification for developers, but pursuing the AWS Certified DevOps Engineer – Professional certification is recommended for learning how to scale and operationalize your applications in the cloud. While we have talked about the generalist career paths in the cloud, several specialty paths are also available, with AWS certifications to validate your knowledge. Let's look at the AWS certifications available if you have expertise in a specific area.
AWS Specialty Solutions Architect Path
While generalist solutions architects design the overall workload, some areas require more in-depth knowledge. In those cases, specialist solutions architects come to the rescue; they provide expertise in applying best practices for a specific domain such as security, networking, analytics, or machine learning. You have seen in the Well-Architected Tool sections that AWS has domain-specific lenses to optimize specialty workloads and engage specialist solutions architects. The following AWS certifications validate your specialty knowledge in the AWS cloud.
AWS Certified Advanced Networking – Specialty
IMPORTANT NOTE – A new version of the Advanced Networking – Specialty certification will be available starting July 2022.
This AWS specialty certification demonstrates that you possess the skills to design and deploy AWS services as part of a comprehensive network architecture, and that you know how to scale using best practices. It is one of the hardest certifications to obtain, on a par with AWS Certified Solutions Architect – Professional.
AWS Certified Security – Specialty
Possessing the AWS Certified Security – Specialty certification demonstrates to potential employers that you are well versed in the ins and outs of AWS security. It shows that you know security best practices for encryption at rest, encryption in transit, user authentication and authorization, and penetration testing, and that you can generally deploy AWS services and applications in a secure manner that aligns with your business requirements.
AWS Certified Machine Learning – Specialty
This is an excellent certification to have in your pocket if you are a data scientist or a data analyst. It shows potential employers that you are familiar with many of the core machine learning concepts and the AWS services that can be used to deliver machine learning and artificial intelligence projects.
AWS Certified Database – Specialty
Having this certification under your belt demonstrates to potential employers your mastery of the persistence services in AWS and your deep knowledge of the best practices needed to manage them. Some of the services tested are these:
- Amazon RDS
- Amazon Aurora
- Amazon Neptune
- Amazon DynamoDB
- Amazon QLDB
- Amazon DocumentDB
AWS Certified Data Analytics – Specialty
Completing this certification demonstrates to employers that you have a good understanding of the concepts needed to perform data analysis on petabyte-scale datasets. This certification shows your ability to design, implement, and deploy analytics solutions that deliver insights by enabling data visualization and implementing the appropriate security measures.
AWS Certified SAP – Specialty
SAP – Specialty is a brand-new certification exam that anyone can attempt starting April 2022. The AWS SAP – Specialty certification lets SAP professionals demonstrate their knowledge of the AWS cloud. It shows your ability to implement, migrate, and support SAP workloads in AWS using the AWS Well-Architected Framework.

While AWS continues adding new certifications to validate your cloud skills, it also retires old certifications that become less relevant over time. For example, AWS had a Big Data – Specialty certification, which tested your knowledge across database, ML, and analytics services. As the number of database and AI/ML services grew, AWS launched separate certifications, AWS Certified Database – Specialty and AWS Certified Machine Learning – Specialty, and in April 2020 it replaced the Big Data – Specialty certification with the Data Analytics – Specialty certification, which focuses only on data analytics services. Similarly, AWS retired the AWS Certified Alexa Skill Builder – Specialty exam on March 23, 2021.

Let's learn some tips and tricks for achieving AWS certifications.
Learning tips and tricks to obtain AWS certifications
Now that we have learned about the various certifications offered by AWS, let's learn about some strategies we can use to get these certifications with the least amount of work possible, and what we can expect as we prepare for them.
Focus on one cloud provider
Some enterprises are trying to adopt a cloud-agnostic or multi-cloud strategy. The idea behind this strategy is to avoid depending on a single cloud provider. In theory, this seems like a good idea, and some companies, such as Databricks, Snowflake, and Cloudera, offer their wares on all the most popular cloud providers.

However, this agnosticism comes with some difficult choices. One way to implement the strategy is to use the least common denominator, for example, only compute instances, so that workloads can be deployed on various cloud platforms. This approach means that you cannot use the more advanced services offered by cloud providers; using AWS Lambda in a cloud-agnostic fashion, for example, is quite tricky. Another way to implement a multi-cloud strategy is to use the more advanced services, but then your staff must know how to use those services on every cloud provider you adopt. You will be a jack of all trades and a master of none, to use the common refrain.

Similarly, it is difficult to be a cloud expert across vendors. It is recommended to pick one cloud provider and become an expert in that stack. AWS, Azure, and GCP, to name the most popular options, each offer an immense number of services that continuously change, get enhanced, and grow. Keeping up with one of these providers is not an easy task; keeping up with all three, in my opinion, is close to impossible. Pick one and dominate it.
Getting Started in AWS
AWS launched the Skill Builder portal (https://explore.skillbuilder.aws/), an enhanced version of AWS's training portal. AWS Skill Builder has thousands of self-paced digital training courses and learning paths.
You can pick any learning path you need and explore the related digital courses. If you want classroom training, that is available in the AWS training portal; however, it may come with a price. AWS also provides free Cloud Practitioner training in its Skills Centers, where you can register and get instructor-led training for free. AWS opened its first free training center in Seattle and plans to expand to more locations in the coming months. If there is a Skills Center in your area, you can benefit by registering directly on the AWS website: https://aws.amazon.com/training/skills-centers/.
Focus on the Associate-level certifications
As we mentioned before, there's quite a bit of overlap between the Associate-level certifications. In addition, the jump in difficulty between the Associate-level certifications and the Professional-level ones is quite steep.

It is highly recommended to sit for at least two, if not all three, of the Associate-level certifications before attempting the Professional-level certifications. Not only will this method prepare you for the Professional certifications, but having multiple Associate certifications will make you stand out against others who only have one Associate-level certification.
Get experience wherever you can
AWS recommends having one year of experience before taking the Associate-level certifications and two years of experience before sitting for the Professional-level certifications. This may seem like a catch-22 situation: how can you get experience if you are not certified? However, it's a recommendation, not a mandatory requirement. This means that you can gain experience through training while you study for the exam. You can build your own projects using an AWS Free Tier account, which offers a pretty decent amount of services in the first year, and gain good hands-on experience. Now, let's address some of the questions that frequently arise while preparing to take these certifications.
Some frequently asked questions about the AWS certifications
While preparing for certifications, you may have several questions, such as where to start and how to finish. The following section lists frequently asked questions that often come to mind.
What is the best way to get certified?
Before we get to the best way to get certified, let's look at the worst way. Amazon offers extremely comprehensive documentation, which you can find here: https://docs.aws.amazon.com/.

The AWS docs are a great place to help you troubleshoot issues you may encounter when working directly with AWS services, or perhaps to correctly size the services you will be using. However, they are not a good place to study for the exams. They will get overwhelming quickly, and much of the material you will learn about will not be covered in the exams.

The better way to get certified is to use the training materials that AWS specifically provides for certification, starting with the roadmaps of what will be covered in each certification. These roadmaps are a good first step to understanding the scope of each exam. You can learn about all these roadmaps, or learning paths, as AWS likes to call them, here: https://aws.amazon.com/training/learning-paths/. You will find free online courses and paid intensive training sessions in these learning paths. While the paid classes may be helpful, they are not mandatory for passing the exam.

Before you look at the learning paths, the first place to find out the scope of each certification is the study guide available for each one. In these study guides, you will learn at a high level what will and won't be covered in each exam. For example, the study guide for the AWS Cloud Practitioner certification can be found here: https://d1.awsstatic.com/training-and-certification/docs-cloud-practitioner/AWS-Certified-Cloud-Practitioner_Exam-Guide.pdf.

Now, while the training provided by AWS may be sufficient to pass the exams, and I know plenty of folks who have passed the certifications using only those resources, there are plenty of third-party companies that specialize in training people with a special focus on the certifications. The choices are almost endless. Let's look at a few more resources here.
In addition to the courses provided by AWS, other training organizations and independent content creators provide excellent courses to help you achieve AWS certification.
A Cloud Guru
A Cloud Guru has been around since 2015, which is a long time in cloud years. It has courses for most of the AWS certifications, as well as a few other courses unrelated to certifications that are also quite good. Linux Academy used to be another good resource to prepare for the certification exams, but it was acquired by A Cloud Guru, which means you can now access the best of both in one place.

They constantly update and refresh their content, which is good because AWS constantly changes its certifications to align with new services and features. Sam and Ryan Kroonenburg, two Australian brothers, started the company in Melbourne, Australia. Initially, Ryan was the instructor for all the courses, but they now have many other experts on staff to help with the course load.

They used to charge by the course, but a few years back, they changed their model to a monthly subscription, and signing up gives you access to the whole site. The training can be accessed here: https://acloud.guru/.
A Cloud Guru is the site I used the most to prepare for my certifications. The following is my recommended approach to tackling the training:
- Unless you have previous experience with the covered topics, watch all the training videos at least once. If it's a topic you feel comfortable with, you can play the videos at a higher speed and get through them faster.
- For video lessons that you find difficult, watch them again. You don't have to watch all the videos again – only the ones that you found difficult.
- Make sure to take any end-of-section quizzes.
- Once you finish watching the videos, the next step is to attempt some practice exams. One of my favorite features of A Cloud Guru is the exam simulator. Keep taking practice exams until you feel confident and consistently answer a high percentage of the questions correctly (anywhere between 80 and 85%, depending on the certification).
The questions provided in the exam simulator will not be the same as the ones on the real exam, but they will be of a similar difficulty level, and they will all be in the same domains and often about similar concepts and topics.

By using the exam simulator, you will achieve a couple of things. First, you will be able to gauge your progress and determine whether you are ready for the exam. I suggest you keep taking the exam simulator tests until you consistently score at least 85%. Most real certifications require you to answer 75% of the questions correctly, so consistently scoring a little higher should ensure that you pass the exam. Some of the exams, such as the Security Specialty exam, require a higher percentage of correct answers, so you should adjust accordingly.

Using the exam simulator will also enable you to figure out which domains you are weak in. After taking a whole exam in the simulator, you will get a list detailing exactly which questions you got right and which you got wrong, all classified by domain. So, if you get a low score in a certain domain, you know that's the domain you need to focus on when you go back and review the videos again. Lastly, you will be able to learn new concepts simply by taking the tests in the exam simulator.

Let's now learn about another popular site that I highly recommend for your quest toward certification. You can also explore other training providers, such as Cloud Academy and Coursera. However, you don't need to sign up for multiple course providers.
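The "consistently score at least 85%" rule of thumb can be expressed in a few lines of code. Below is a minimal sketch in Python, assuming a simple list of percentage scores; the function name, the 85% default, and the three-exam window are my own illustrative choices, not part of any exam simulator:

```python
# Illustrative helper: decide whether recent practice-exam scores are
# "consistently" at or above a target threshold. The 85% target and
# three-exam window are assumptions for the sketch, not official figures.

def ready_for_exam(scores, target=85, last_n=3):
    """Return True if the last `last_n` scores are all at or above `target`."""
    recent = scores[-last_n:]
    # Require a full window of recent attempts, every one at or above target
    return len(recent) == last_n and all(s >= target for s in recent)

# A score history trending upward: the last three attempts are 86, 88, 91
print(ready_for_exam([70, 78, 86, 88, 91]))  # prints True
```

The point of requiring several consecutive passing scores, rather than a single lucky one, is that the simulator draws questions randomly, so one good result may not reflect real readiness.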
Several independent content creators on Udemy, such as Stephane Maarek and Jon Bonso, have excellent content and are passionate about AWS, with growing followings. For example, as of April 2022, Stephane Maarek's Solutions Architect Associate course had over half a million students, more than 120,000 ratings, and a satisfaction rating of 4.7 out of a possible 5 stars. Their pricing model is also similar to that of Whizlabs: the practice exams are sold separately from the online courses.
As always, YouTube is an excellent source of free learning. AWS has its own YouTube channel, with nearly 600K subscribers and 14,000 videos. These videos cover AWS services from the 100 to 400 levels and are presented by AWS product managers and solutions architects. AWS uploads all re:Invent and Summit videos to the channel, which are the best resources for diving deep into any service. You can also find several playlists that people have created to prepare for certifications.
If you are a book reader, there are multiple AWS certification-related books available on Amazon that you can use to prepare for the exams. If you are preparing for the AWS Solutions Architect Professional exam and want to solidify your concepts, refer to Solution Architect's Handbook (https://www.amazon.com/gp/product/1801816611). It explains multiple architectural patterns using the AWS platform and goes deep into each of the Well-Architected pillars to apply architectural best practices.
Practice Exam websites
No matter how much you read or how many courses you watch, there will always be knowledge gaps, and practice exams are the best way to identify and focus on weak areas. Whizlabs (https://www.whizlabs.com/) is suitable for the Associate-level certifications and tests your knowledge in multiple areas to find weak points. Whizlabs also provides answers with detailed explanations and associated resources, which can help you fill knowledge gaps by exploring related content for the questions you got wrong. Whizlabs charges separately for its online courses and its practice tests. One disadvantage of Whizlabs is that, unlike the exam simulator from A Cloud Guru, which has a bank of questions that are randomly combined, the Whizlabs exam questions are fixed and will not be shuffled to create a different exam. They also have a free version of their practice exams for most certifications, with 20 free questions.

Like Whizlabs, you can use BrainCert (https://www.braincert.com) for the AWS Professional and Specialty-level certifications. They have a solid set of questions close to the real exam's difficulty level, with detailed explanations for each answer. While Whizlabs practice exams have lifetime validity, BrainCert provides only one year of validity.

The same strategy as mentioned before can be used with Whizlabs or BrainCert. You don't need to sign up for multiple vendors for the more straightforward exams, but you can combine a couple for the harder exams.
How long will it take to get certified?
A question that frequently comes up is how many months you should study before sitting for the exam. It is better to think of it in terms of hours rather than months. As you can imagine, you will be able to take the exam a lot sooner if you study 2 hours every day instead of only 1 hour a week. If you decide to take some AWS-sponsored intensive full-day or multi-day training, that may go a long way toward shortening the cycle.

One way to optimize your time is to listen to the videos in the car or on the train going into the city instead of watching them. Even though watching them is much more beneficial, you can still absorb key concepts while listening, and that time would have been dead time anyway.

You don't want to space your study sessions too far apart. If you do, you may find yourself in a situation where you start forgetting what you have learned. The number of hours it will take will also depend on your previous experience. If you work with AWS in your day job, that will shorten the number of hours needed to complete your studies.

The following subsections will give you an idea of the amount of time you should spend preparing for each exam.
Cloud Practitioner certification
Be prepared to spend anywhere from 15 to 25 hours to complete this certification.
Associate-level certifications

If you don't have previous AWS experience, plan to spend between 70 and 100 hours preparing. Also, keep in mind that there is considerable overlap between the certifications; once you pass one of the Associate certifications, it will not take another 70 to 100 hours to obtain the second and third. As mentioned in this chapter, it is highly recommended to take the two other Associate-level certifications soon after passing the first one. Expect to spend another 20 to 40 hours studying for the two remaining certifications if you don't wait too long to take them after passing the first one.
Professional-level certifications

There is quite a leap between the Associate-level certifications and the Professional-level certifications. While the domain coverage will be similar, you will need to know the AWS services covered in much more depth, and the questions will certainly be harder. Assuming you have taken at least one of the Associate-level certifications, expect to spend another 70 to 100 hours watching videos, reading, and taking practice tests to pass this exam.

AWS removed the requirement to pass the Associate-level certifications before being able to sit for the Professional-level certifications. However, it is still probably a good idea to take at least some Associate exams before taking the Professional-level exams. As with the Associate-level exams, once you pass one of the Professional-level exams, it should take much less study time to pass the other Professional exam, as long as you don't wait too long to take it and forget everything.
Specialty certifications

I am lumping all the Specialty certifications under one subheading, but there is significant variability in difficulty between them. If you have a background in networking, for example, you will be more comfortable with the Advanced Networking certification than with the Machine Learning certification.

When it comes to these certifications, you may be better off focusing on your area of expertise, unless you are collecting all the certifications. For example, if you are a data scientist, the Machine Learning Specialty and Data Analytics Specialty certifications may be your best bet. Depending on your experience, expect to spend about these amounts of time:
- Security Specialty – 40 to 60 hours
- SAP Specialty – 40 to 60 hours
- Machine Learning Specialty – 50 to 70 hours
- Data Analytics Specialty – 40 to 60 hours
- Database Specialty – 30 to 50 hours
- Advanced Networking Specialty – 50 to 70 hours
What are some last-minute tips for the day of the exam?
A decent half-marathon time is about 90 minutes, which is how long you get to take the Associate-level exams, and a good marathon time is about 3 hours, which is how long you get to take the Professional-level exams. Keeping your focus for that amount of time is not easy. For that reason, you should be well rested when you take the exam. It is highly recommended to take the exam on a day when you don't have too many other responsibilities; I would not take it after working a full day, as you will be too burnt out. Make sure you have a light meal before the exam – enough so that you are not hungry during the test and feel energetic, but not so much that you feel sleepy from digesting all that food.

Just as you wouldn't want to get out of the gate too fast or too slow in a race, pace yourself during the exam. You also don't want to be beholden to the clock, checking it constantly. The clock will always appear in the top-right part of the exam screen, but you want to avoid looking at it most of the time. I recommend writing down, on the three sheets you will receive, where you should be after every 20 questions, and checking the clock against these numbers only after you have answered each batch of 20 questions. This way, you will be able to adjust if you are going too fast or too slow, but you will not spend excessive time watching the clock.

Let's now summarize what we have learned in this chapter.
Summary

This chapter pieced together many of the technologies, best practices, and AWS services covered in the book, weaving them into a generic architecture that you should be able to leverage for your own projects.

As fully featured as AWS has become, it will certainly continue to provide more and more services to help enterprises, large and small, simplify their information technology infrastructure. You can rest assured that Amazon and its AWS division are hard at work creating new services and improving existing ones by making them better, faster, easier, more flexible, and more powerful, and by adding more features. As of 2022, AWS offers 200+ services and 3,000+ features. That's a big jump from the two services it offered in 2006. AWS's progress over the last 16 years has been monumental.

We also covered some of the reasons that the cloud in general, and AWS in particular, are so popular. As we learned, one of the main reasons for the cloud's popularity is the concept of elasticity, which we explored in detail. After reviewing the cloud's popularity, we have hopefully convinced you to hop aboard the cloud train. Assuming you want to get on the ride, we covered the easiest way to break into the business: one of the easiest ways to build credibility is to get certified. We learned that AWS offers 12 certifications, that the most basic one is the AWS Cloud Practitioner, and that the most advanced are the Professional-level certifications. In addition, as of 2022, there are six Specialty certifications for various domains. We also covered some of the best and worst ways to obtain these certifications.

Finally, we hope you have become curious enough about potentially getting at least some of the AWS certifications. I hope you are as excited as I am about the possibilities that AWS can bring. The next chapter will cover how the AWS infrastructure is organized and how you can use AWS and cloud technologies to lead a digital transformation.