
Modern Data Architecture on AWS: A Practical Guide for Building Next-Gen Data Platforms on AWS

By Behram Irani

Book | Aug 2023 | 420 pages | 1st Edition

eBook : $24.99 (RRP $35.99)
Print : $44.99
Subscription : $15.99 monthly

What do you get with a Packt Subscription?

Free for the first 7 days. $15.99 p/m after that. Cancel any time!

  • Unlimited ad-free access to the largest independent learning library in tech. Access this title and thousands more!
  • 50+ new titles added per month, including many first-to-market concepts and exclusive early access to books as they are being written.
  • Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
  • Thousands of reference materials covering every tech concept you need to stay up to date.

Subscribe now
View plans & pricing

Product Details


Publication date : Aug 31, 2023
Length : 420 pages
Edition : 1st
Language : English
ISBN-13 : 9781801813396

Estimated delivery fee (to Indonesia):

Standard (10-13 business days) : $12.95
Premium (5-8 business days, includes tracking) : $45.95

Modern Data Architecture on AWS

Prologue

The Data and Analytics Journey So Far

“We are surrounded by data but starved for insights”

– Jay Baer

We have been surrounded by digital data for almost a century now, and every decade has had its unique challenges regarding how to get the best value out of that data. But these challenges were narrow in scope and manageable since the data itself was manageable. Even though data was growing rapidly in the 20th century, its volume, velocity, and variety were still limited in nature. Then we hit the 21st century, and the world of data drastically changed. Data started to grow exponentially for multiple reasons:

  • The adoption of the internet picked up speed and data grew into big data
  • Smartphone devices became a common household entity and these devices all generated tons of data
  • Social media took off and added to the deluge of information
  • Robotics, smart edge devices, industrial devices, drones, gaming, VR, and other artificial intelligence-driven gadgets took the growth of data to a whole new level

However, across all this, the common theme that exists even today is that data gets produced, processed, stored, and consumed.

Now, even though the history of data and analytics goes back many decades, I don’t want to dig everything up. Since this book revolves around cloud computing technologies, it is important to understand how we got here, what systems were in place in the on-premises data center world, and why those same systems and the architectural patterns surrounding them struggle to cater to the business and technology needs of today.

In this prologue, we will cover the following main topics:

  • Introduction to the data and analytics journey
  • Traditional data platforms
  • Challenges with on-premises data systems
  • What this book is all about

If you are already well versed with the traditional data platforms and their challenges, you can skip this introduction and directly jump to Chapter 1.

Introduction to the data and analytics journey

The online transaction processing (OLTP) and online analytical processing (OLAP) systems worked great by themselves for a very long time, when data producers were limited, the volume of data was under control, and data was mostly structured in tabular format. The last 20 years, however, have seen a seismic shift in the way new businesses and technologies have emerged.

As the volume, velocity, and variety of data picked up steam, data grew into big data, and the data processing techniques needed a major overhaul. This gave rise to the Apache Hadoop framework, which changed the way big data was processed and stored. With more data, businesses wanted to get more descriptive and diagnostic analytics out of their data. At the same time, another technology was gaining rapid traction, one that gave organizations hope that they could look ahead and predict what might happen so that they could take immediate action to steer their businesses in the right direction. This was made possible by the rise of artificial intelligence and machine learning, and soon, large organizations started investing in predictive analytics projects.

And just as we were thinking that we had big data under control with these new frameworks, the data floodgates opened up. The last 10 to 15 years have been revolutionary with the onset of smart devices, including smartphones. Connectivity among all these devices and systems made data grow exponentially; this was termed the Internet of Things (IoT). To add to the complexity, these devices started to share data in near real time, which meant that data had to be streamed immediately for consumption. The following figure highlights many of the sources from which data gets generated. A lot of insights can be derived from all this data so that organizations can make faster and better decisions:

Figure 00.1 – Big data sources

This also meant that organizations started to carefully organize their technical workforce into data personas. The people processing big data came to be known as data engineers, the people using data for future predictions were the data scientists, and the people analyzing the data with various tools were the data analysts. Each persona had a well-defined task, and there was a strong desire to create or purchase the best technological tools out there to make their day-to-day lives easier.

From a data and analytics point of view, systems started to grow bigger with extra hardware. Organizations expanded their on-premises data centers with the latest and greatest servers to process all this data as fast as possible and create value for their businesses. Many of the architecture patterns for data and analytics remained the same, however, which meant that many of the old use cases were still getting solved. But with new demands from these businesses, pain points started popping up more frequently.

Traditional data platforms

Before we get into architecting data platforms in a modern way, it is important to understand the traditional data platforms and know their strengths and limitations. Once we understand the challenges traditional data platforms face in solving new business use cases, we can design a modern data platform in a holistic manner.

Three-tier architecture

Throughout the 1980s and 1990s, the three-tier architecture became a popular way of producing, processing, and storing data. Almost every organization used this pattern as it met the business needs with ease. The three tiers of this architecture were the presentation tier, the application tier, and the data tier:

  • The presentation tier was the front-facing module and was created either as a thick client – that is, software was installed on the client’s local machine – or as a thin client – that is, a browser-based application.
  • The application tier would receive the data from the presentation tier and process this data with business logic hosted on the application server.
  • The data tier was the final resting place for the business data. The data tier was typically a relational database where data was stored in rows and columns of tables.
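
To make the separation concrete, here is a minimal, illustrative Python sketch of the three tiers. SQLite stands in for the relational data tier, and the orders table and place_order function are hypothetical examples, not from the book:

```python
import sqlite3

# Data tier: the relational store (an in-memory SQLite database here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)")

# Application tier: business logic that validates and persists a request.
def place_order(item: str, qty: int) -> int:
    if qty <= 0:
        raise ValueError("quantity must be positive")
    cur = conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))
    conn.commit()
    return cur.lastrowid

# Presentation tier: a thick or thin client would collect this input and
# display the result; a plain function call stands in for it here.
print("order id:", place_order("widget", 3))
```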

Figure 00.2 represents a typical three-tier architecture:

Figure 00.2 – A traditional three-tier architecture pattern

This three-tier architecture worked well to meet the transactional nature of businesses. To a certain extent, this system was able to help with creating a basic reporting mechanism to help organizations understand what was happening with their business. But the kind of technology used in this architecture fell short of going a step further – to identify and understand why certain things were happening with their business. So, a new architecture pattern was required that could decouple this transactional system from the analytics type of operations. This paved the way for the creation of an enterprise data warehouse (EDW).

Enterprise data warehouse (EDW)

The need for a data warehouse came from the realistic expectations of organizations to derive business intelligence out of the data they were collecting so that they could get better insights from this data and make the necessary adjustments to their business practices. For example, if a retailer is seeing a steady decline in sales from a particular region, they would want to understand what is contributing to this decline.

Now, let’s capture the data flow. All the transactional data is captured by the presentation tier, processed by the application tier, and stored in the data tier of the three-tier architecture. The database behind the data tier is always online and optimized for processing a large number of transactions, which come in the form of INSERT, UPDATE, and DELETE statements. This database also emphasizes fast query processing while maintaining atomicity, consistency, isolation, and durability (ACID) compliance. For this reason, this type of data store is called OLTP.
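
As a rough illustration of what ACID compliance buys an OLTP system, here is a minimal Python sketch, with SQLite standing in for the production database and a hypothetical accounts table: either both UPDATE statements in the transfer commit, or neither does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL NOT NULL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 50.0)])

try:
    # The connection object doubles as a transaction context manager:
    # it commits on success and rolls back if an exception is raised.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    print("transfer rolled back")

print(conn.execute("SELECT id, balance FROM accounts").fetchall())
# [(1, 70.0), (2, 80.0)]
```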

To further analyze this data, a path needs to be created that will bring the relevant data over from the OLTP system into the data warehouse. This is where the extract, transform, and load (ETL) layer comes into the picture. And once the data has been brought over to the data warehouse, organizations can create the business intelligence (BI) they need via the reporting and dashboarding capabilities provided by the visualization tier. We will cover the ETL and BI layers in detail in later chapters, but the focus right now is walking through the process and the history behind them.
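
A minimal sketch of that ETL path, assuming a hypothetical orders table on the OLTP side and a daily_sales table in the warehouse (two in-memory SQLite databases stand in for the two systems):

```python
import sqlite3

oltp = sqlite3.connect(":memory:")  # stands in for the transactional database
olap = sqlite3.connect(":memory:")  # stands in for the data warehouse

oltp.execute("CREATE TABLE orders (order_date TEXT, amount REAL)")
oltp.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("2023-08-01", 10.0), ("2023-08-01", 5.0), ("2023-08-02", 7.5)])
olap.execute("CREATE TABLE daily_sales (order_date TEXT, total REAL)")

# Extract: pull the raw transactional rows out of the OLTP system.
rows = oltp.execute("SELECT order_date, amount FROM orders").fetchall()

# Transform: aggregate to the grain the warehouse stores (one row per day).
totals = {}
for order_date, amount in rows:
    totals[order_date] = totals.get(order_date, 0.0) + amount

# Load: write the shaped rows into the warehouse table.
olap.executemany("INSERT INTO daily_sales VALUES (?, ?)", sorted(totals.items()))
olap.commit()
print(olap.execute("SELECT * FROM daily_sales").fetchall())
```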

The data warehouse system is distinctly different from the transactional database system. Firstly, the data warehouse does not constantly get bombarded by transactional data from customer-facing applications. Secondly, the types of operations that happen in the data warehouse system are specific to mining insights from all the data, including historical data. Therefore, this system is constantly performing operations such as data aggregation, roll-ups (data consolidation), drill-downs, and slicing and dicing of the data. For this reason, the data warehouse is called OLAP.
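
To give a flavor of these operations, here is a minimal sketch against a hypothetical sales table. SQLite has no GROUP BY ROLLUP, so each grain is expressed as its own GROUP BY:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, city TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("West", "Seattle", 120.0), ("West", "Portland", 80.0),
    ("East", "Boston", 60.0), ("East", "Boston", 40.0),
])

# Roll-up: consolidate city-level rows into regional totals.
print(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall())

# Drill-down: break one region back out to the city grain.
print(conn.execute(
    "SELECT city, SUM(amount) FROM sales WHERE region = 'West' GROUP BY city"
).fetchall())

# Slice: fix one dimension value and inspect everything else.
print(conn.execute("SELECT * FROM sales WHERE city = 'Boston'").fetchall())
```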

The following figure shows the OLTP and OLAP systems working together:

Figure 00.3 – The OLTP and OLAP systems working together

The preceding diagram shows all the pieces together. This architectural pattern is still relevant and works great in many cases. However, in the era of cloud computing, business use cases are also rapidly evolving. In the following sections, we will take a look at variations of this design pattern, as well as their advantages and shortcomings.

Bottom-up data warehouse approach

Ralph Kimball, one of the original architects of data warehousing, proposed the idea of designing the data warehouse with a bottom-up approach. This involves creating many smaller, purpose-built data marts inside a data warehouse. A data mart is a subset of the larger data warehouse that focuses on the use cases of a specific line of business (LOB) or a specific team. All of these data marts can be combined to form an enterprise-wide data warehouse.

The design of data marts is also kept simple by modeling the data, to a large extent, as a star schema. A star schema keeps the data in sets of denormalized tables. Some of these are known as fact tables, and they store all the transactional and event data. Since fact tables store all the fast-moving granular data, they accumulate a large number of records over a short period. Then there are the dimension tables, which typically store descriptive data such as details about people and organizations, product information, geographical information, and so forth. Since such information isn't rapidly produced or changed over a short period, dimension tables are relatively small in terms of the number of records stored compared to fact tables.

The following figure shows a bottom-up EDW design approach where individual data marts contribute toward a bigger data warehouse:

Figure 00.4 – Bottom-up EDW design
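
In code, a star schema of this kind might be sketched as follows; the table and column names are hypothetical, and SQLite again stands in for the warehouse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Dimension tables: small, slowly changing descriptive attributes.
conn.execute("""CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY, name TEXT, category TEXT)""")
conn.execute("""CREATE TABLE dim_store (
    store_key INTEGER PRIMARY KEY, city TEXT, region TEXT)""")

# Fact table: the fast-growing transactional grain, keyed to the dimensions.
conn.execute("""CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    store_key INTEGER REFERENCES dim_store(store_key),
    sale_date TEXT, amount REAL)""")

conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO dim_store VALUES (1, 'Seattle', 'West')")
conn.execute("INSERT INTO fact_sales VALUES (1, 1, '2023-08-01', 19.99)")

# A typical star join: facts filtered and described via their dimensions.
print(conn.execute("""
    SELECT p.category, s.region, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    JOIN dim_store s ON f.store_key = s.store_key
    GROUP BY p.category, s.region""").fetchall())
```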

Benefits of the bottom-up approach

Let’s look at a few benefits of the bottom-up approach:

  • The EDW gets systematically built over a certain period with business-specific groupings of data marts.
  • The data model’s design is typically created via star schemas, which makes the model denormalized in nature. Some data becomes redundant in this approach but overall, it helps in making the data marts perform better.
  • An EDW is easier to create since the time taken to set up individual business-specific data marts is shorter compared to setting up an enterprise-wide warehouse.
  • An EDW composed of data marts is also better suited for setting up data lakes. We will cover everything about data lakes in subsequent chapters.

Shortcomings of the bottom-up approach

Now, let’s look at the shortcomings of the bottom-up approach:

  • It is challenging to achieve a fully harmonized integration layer because the EDW is purpose-built for each use case in the form of data marts. Data redundancy also makes it difficult to create a single source of truth.
  • Denormalized schemas create data redundancy, which makes the tables grow very large. This slows down the performance of ETL job pipelines.
  • Since the data marts are tightly coupled to the specific business use cases, managing structural changes and their dependencies on the data warehouse becomes a cumbersome process.

Top-down data warehouse approach

Bill Inmon, widely recognized as the father of data warehouses, proposed the idea of designing the data warehouse with a top-down approach. In this approach, a single source of truth for the data in the form of an EDW is constructed first using a normalized data model to reduce data redundancy. Data from different sources is mapped to a single data model, which means that all the source elements are transformed and formatted to fit in this enterprise-wide structure that’s created in the data warehouse. The following figure shows a top-down EDW design approach where the warehouse is built first before smaller data marts are created for consumers:

Figure 00.5 – Top-down EDW design
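
For contrast with the star schema shown earlier, here is a minimal sketch of a normalized model, where each attribute lives in exactly one place and reporting queries join back through the references (all names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# In a normalized model, 'category' is stored once and referenced,
# rather than repeated on every product or sales row.
conn.execute("CREATE TABLE category (category_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE product (
    product_id INTEGER PRIMARY KEY, name TEXT,
    category_id INTEGER REFERENCES category(category_id))""")
conn.execute("""CREATE TABLE sale (
    sale_id INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES product(product_id),
    sale_date TEXT, amount REAL)""")

conn.execute("INSERT INTO category VALUES (1, 'Hardware')")
conn.execute("INSERT INTO product VALUES (1, 'Widget', 1)")
conn.execute("INSERT INTO sale VALUES (1, 1, '2023-08-01', 19.99)")

# Redundancy is low, but query-time join depth grows with the model.
print(conn.execute("""
    SELECT c.name, SUM(s.amount)
    FROM sale s
    JOIN product p ON s.product_id = p.product_id
    JOIN category c ON p.category_id = c.category_id
    GROUP BY c.name""").fetchall())
```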

Benefits of the top-down approach

Let’s look at a few benefits of the top-down approach:

  • The data model is highly normalized, which reduces data redundancy
  • Since it’s not tied to a specific LOB or use case, the data warehouse can evolve independently at an enterprise level
  • It provides flexibility for any business requirement changes or data structure updates
  • ETL pipelines are simpler to create and maintain

Shortcomings of the top-down approach

Now, let’s look at the shortcomings of the top-down approach:

  • A normalized data model increases the complexity of schema design
  • A large number of joins on the normalized tables can make the system compute-intensive and expensive over time
  • Additional logic is required to create a business-specific data consumption layer, which means additional ETL processes are needed to create data marts from the unified EDW (see the sketch below)
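
That consumption layer can be as thin as a derived table per LOB. A minimal sketch, reusing the hypothetical normalized tables from the previous example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE category (category_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE product (product_id INTEGER PRIMARY KEY, name TEXT, category_id INTEGER)")
conn.execute("CREATE TABLE sale (product_id INTEGER, sale_date TEXT, amount REAL)")
conn.execute("INSERT INTO category VALUES (1, 'Hardware')")
conn.execute("INSERT INTO product VALUES (1, 'Widget', 1)")
conn.execute("INSERT INTO sale VALUES (1, '2023-08-01', 19.99)")

# The additional ETL step: flatten the normalized EDW tables into a
# denormalized, LOB-specific mart that analysts can query without joins.
conn.execute("""
    CREATE TABLE mart_sales AS
    SELECT s.sale_date, p.name AS product, c.name AS category, s.amount
    FROM sale s
    JOIN product p ON s.product_id = p.product_id
    JOIN category c ON p.category_id = c.category_id""")

print(conn.execute("SELECT * FROM mart_sales").fetchall())
```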

Key benefits

  • Learn to build modern data platforms on AWS using data lakes and purpose-built data services
  • Uncover methods of applying security and governance across your data platform built on AWS
  • Find out how to operationalize and optimize your data platform on AWS
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

Many IT leaders and professionals are adept at extracting data from a particular type of database and deriving value from it. However, designing and implementing an enterprise-wide holistic data platform with purpose-built data services, all seamlessly working in tandem with the least amount of manual intervention, still poses a challenge. This book will help you explore end-to-end solutions to common data, analytics, and AI/ML use cases by leveraging AWS services. The chapters systematically take you through all the building blocks of a modern data platform, including data lakes, data warehouses, data ingestion patterns, data consumption patterns, data governance, and AI/ML patterns. Using real-world use cases, each chapter highlights the features and functionalities of numerous AWS services to enable you to create a scalable, flexible, performant, and cost-effective modern data platform. By the end of this book, you’ll be equipped with all the necessary architectural patterns and be able to apply this knowledge to efficiently build a modern data platform for your organization using AWS services.

What you will learn

  • Familiarize yourself with the building blocks of modern data architecture on AWS
  • Discover how to create an end-to-end data platform on AWS
  • Design data architectures for your own use cases using AWS services
  • Ingest data from disparate sources into target data stores on AWS
  • Build data pipelines, data sharing mechanisms, and data consumption patterns using AWS services
  • Find out how to implement data governance using AWS services


Table of Contents

Preface
Part 1: Foundational Data Lake
Prologue: The Data and Analytics Journey So Far
Chapter 1: Modern Data Architecture on AWS
Chapter 2: Scalable Data Lakes
Part 2: Purpose-Built Services And Unified Data Access
Chapter 3: Batch Data Ingestion
Chapter 4: Streaming Data Ingestion
Chapter 5: Data Processing
Chapter 6: Interactive Analytics
Chapter 7: Data Warehousing
Chapter 8: Data Sharing
Chapter 9: Data Federation
Chapter 10: Predictive Analytics
Chapter 11: Generative AI
Chapter 12: Operational Analytics
Chapter 13: Business Intelligence
Part 3: Govern, Scale, Optimize And Operationalize
Chapter 14: Data Governance
Chapter 15: Data Mesh
Chapter 16: Performant and Cost-Effective Data Platform
Chapter 17: Automate, Operationalize, and Monetize
Index
Other Books You May Enjoy


FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use toward owning content.

How can I cancel my subscription?

To cancel your subscription, go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the ‘cancel subscription’ button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle (a month starting from the day of subscription payment). You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage (subscription.packtpub.com) by clicking on the ‘My Library’ dropdown and selecting ‘credits’.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can make it at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the delivery date becomes more accurate.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need a paid or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; there are always new versions, new frameworks, and new techniques. Early Access gives you a head start on our content as it's being created. You'll receive each chapter as it's written and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need as soon as it's available. As we go through the process of developing a course, 99% of it can be ready, but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, so you can start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.