Data warehouses and data marts

In an organization, day-to-day operations are commonly performed in, and recorded by, several transactional systems. However, when higher-level business decisions are to be made using data gathered from these systems, it is easier to collate the necessary information from these sources into a centralized repository of datasets from which actionable intelligence can be derived.

A data warehouse is a centralized repository of data that’s been gathered from various sources within an organization. The collated data within this repository is analyzed and can be used to make business decisions. A data mart, on the other hand, is a subset of a data warehouse aligned toward a specific business unit within an organization.

The concept of data warehouses was introduced in the late 1980s. Data warehouses are subject-oriented, integrated, time-variant, and non-volatile. Subject-oriented means they are organized around specific subjects, such as sales, marketing, and HR, rather than around ongoing operations. They are also designed to integrate data from several different source systems, such as Enterprise Resource Planning (ERP), Human Resource Management Systems (HRMSs), Customer Relationship Management (CRM), Financial Management Systems (FMSs), and any other operational systems within an organization. The data within a data warehouse is usually structured, but it can be unstructured as well. Data warehouses also allow users to analyze data at different grains of time, such as year, month, and day. Finally, the data in a data warehouse is non-volatile and maintains history: a change in a source system results in a new entry in the data warehouse, so the new state of the data is used while the old state is preserved.
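
As a minimal sketch of this history-preserving behavior, the following Python snippet (using the built-in sqlite3 module; the table and column names are purely illustrative) shows how a change in a source system is appended to a dimension table rather than overwriting it, in the style of a Type 2 slowly changing dimension:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,  -- business key from the source system
        city        TEXT,
        valid_from  TEXT,     -- date this version became current
        valid_to    TEXT      -- NULL while this version is current
    )
""")

# Initial load: customer 42 lives in Osaka.
con.execute("INSERT INTO dim_customer VALUES (42, 'Osaka', '2022-01-01', NULL)")

# The customer moves. Instead of overwriting the row (as the source
# transactional system would), the warehouse closes the old version
# and appends a new one, preserving history.
con.execute(
    "UPDATE dim_customer SET valid_to = '2022-06-30' "
    "WHERE customer_id = 42 AND valid_to IS NULL"
)
con.execute("INSERT INTO dim_customer VALUES (42, 'Tokyo', '2022-07-01', NULL)")

# Both the old and the new state remain queryable (time-variance).
for row in con.execute("SELECT * FROM dim_customer ORDER BY valid_from"):
    print(row)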

In Inmon’s top-down data warehousing approach, data architects and modelers start by looking at the holistic data landscape of an organization and identifying its main subject areas and entities. Inmon’s data warehouse is normalized and avoids redundancy, which simplifies the data ingestion process but is not optimized for queries. Hence, data marts are built on top of the data warehouse, and users run their queries against these data marts, as sketched below.
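
The following sketch, again using sqlite3 with hypothetical table names, illustrates the top-down layout: normalized warehouse tables with no redundancy, and a sales data mart derived from them as a denormalized view that users query directly:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Normalized enterprise warehouse: each fact is stored exactly once.
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
    CREATE TABLE orders   (order_id INTEGER PRIMARY KEY, customer_id INTEGER, order_date TEXT);

    INSERT INTO customer VALUES (1, 'Alice', 'EMEA'), (2, 'Bob', 'APAC');
    INSERT INTO orders   VALUES (100, 1, '2022-08-01'), (101, 2, '2022-08-02');

    -- Sales data mart: a denormalized, query-friendly projection built on top.
    CREATE VIEW mart_sales AS
    SELECT o.order_id, o.order_date, c.name AS customer_name, c.region
    FROM orders o JOIN customer c USING (customer_id);
""")

# Analysts query the mart, not the normalized base tables.
print(con.execute("SELECT * FROM mart_sales").fetchall())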

In 1996, Ralph Kimball introduced the star schema methodology to the data management world. It follows a bottom-up approach, creating data marts based on business requirements instead of starting with an enterprise data warehouse. While data marts can be based on either a star or a snowflake schema, the star schema is generally preferred because it requires fewer joins, which results in faster queries.
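
Here is a minimal star schema sketch in the same style (sqlite3, with illustrative table names): one fact table in the middle, with each dimension a single join away:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Dimensions: descriptive context, one join away from the fact table.
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT, category TEXT);
    CREATE TABLE dim_store   (store_key   INTEGER PRIMARY KEY, store_name TEXT, country TEXT);

    -- Fact table: one row per measured event, holding keys and measures.
    CREATE TABLE fact_sales (
        product_key  INTEGER,
        store_key    INTEGER,
        quantity     INTEGER,
        sales_amount REAL
    );

    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
    INSERT INTO dim_store   VALUES (10, 'Downtown', 'US'), (11, 'Shibuya', 'JP');
    INSERT INTO fact_sales  VALUES (1, 10, 3, 29.97), (2, 11, 1, 19.99), (1, 11, 2, 19.98);
""")

# A typical analytical query needs only one shallow join per dimension,
# which is why the star schema tends to outperform a snowflake schema.
print(con.execute("""
    SELECT s.country, ROUND(SUM(f.sales_amount), 2) AS total_sales
    FROM fact_sales f
    JOIN dim_store s USING (store_key)
    GROUP BY s.country
""").fetchall())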

In a data mart, data is stored at multiple levels, and the table at the correct level is picked for processing a given query. The atomic level at which the facts in a table are defined is known as the grain, or granularity, of the table.

For example, let’s consider a retail sales dataset for a retail store chain operating in different countries. A customer could buy several products in a single sale, and the same customer could buy multiple units of the same product within that sale. We can also have a table containing region information that can be linked to the sales and product tables.

So, when selecting a grain, it is beneficial to populate the fact table at the most atomic grain available, because this lets us query the data at whatever level of granularity we want. If we define the grain at the sales-transaction level, we can query individual sales transactions and get information such as the amount per sale, the payment method, and so on; however, we won’t be able to get the product information within a particular sale. If we instead define the grain at the level of a product within a sales transaction, we can query product-related information along with the sales information.
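
The following sketch (sqlite3, illustrative names) keeps the fact table at the most atomic grain, one row per product per sales transaction, and shows that both product-level and transaction-level questions can still be answered by rolling up:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Grain: one row per product within a sales transaction (the atomic level).
    CREATE TABLE fact_sales_line (
        sale_id    INTEGER,
        product_id INTEGER,
        quantity   INTEGER,
        amount     REAL
    );
    INSERT INTO fact_sales_line VALUES
        (1, 101, 2, 20.0),   -- sale 1 contains two different products...
        (1, 102, 1, 15.0),
        (2, 101, 5, 50.0);   -- ...sale 2 contains one
""")

# Product-level question: only answerable because the grain is atomic.
print(con.execute(
    "SELECT product_id, SUM(quantity) FROM fact_sales_line GROUP BY product_id"
).fetchall())

# Transaction-level question: recovered from the atomic grain by rolling up.
print(con.execute(
    "SELECT sale_id, SUM(amount) AS sale_total FROM fact_sales_line GROUP BY sale_id"
).fetchall())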

Tables pre-computed at these different levels help us avoid heavy computations at query time. For example, if a user is querying for sales_amount at the region level, it might be far easier to select the data from a table that contains only the sales_amount and region columns.
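
As a sketch of this idea (sqlite3, hypothetical table names), a summary table holding sales_amount already rolled up by region turns the user's query into a plain scan of a small table:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE fact_sales_line (sale_id INTEGER, region TEXT, amount REAL);
    INSERT INTO fact_sales_line VALUES
        (1, 'EMEA', 20.0), (1, 'EMEA', 15.0), (2, 'APAC', 50.0);

    -- Pre-computed once at load time (e.g., by a nightly ETL job),
    -- not recomputed for every query.
    CREATE TABLE agg_sales_by_region AS
    SELECT region, SUM(amount) AS sales_amount
    FROM fact_sales_line
    GROUP BY region;
""")

# The region-level query is now a plain scan of a small, narrow table.
print(con.execute("SELECT region, sales_amount FROM agg_sales_by_region").fetchall())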

As we can see, data marts are helpful for working with datasets related to a specific context or a business line. However, a centralized data warehouse is beneficial when our analysis needs data to be aggregated from a variety of sources across the organization to extract actionable intelligence from the dataset.

A fresh approach to data warehousing came with the introduction of the data vault. This is a hybrid approach that combines the best of the normalized model and the denormalized star schema. Data vault modeling can be quite helpful when working with multi-source systems or with data sources whose relationships change constantly, as it makes ingesting data from multiple sources easier. Also, because of the way the data is modeled, data vaults make it easier to audit and track data.
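
A minimal data vault sketch (sqlite3, illustrative names) shows the three standard building blocks: hubs for business keys, links for relationships between hubs, and satellites for descriptive attributes that change over time. The load timestamp and record source carried on every row are what make auditing and lineage tracking straightforward:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Hubs: one row per business key, plus audit metadata.
    CREATE TABLE hub_customer (customer_key TEXT PRIMARY KEY, load_ts TEXT, record_source TEXT);
    CREATE TABLE hub_product  (product_key  TEXT PRIMARY KEY, load_ts TEXT, record_source TEXT);

    -- Link: a relationship between hubs, kept separate so it can change freely.
    CREATE TABLE link_purchase (customer_key TEXT, product_key TEXT, load_ts TEXT, record_source TEXT);

    -- Satellite: descriptive attributes, appended (never updated) for full history.
    CREATE TABLE sat_customer (customer_key TEXT, city TEXT, load_ts TEXT, record_source TEXT);

    INSERT INTO hub_customer  VALUES ('C42', '2022-08-01', 'crm');
    INSERT INTO hub_product   VALUES ('P7',  '2022-08-01', 'erp');
    INSERT INTO link_purchase VALUES ('C42', 'P7', '2022-08-01', 'pos');
    INSERT INTO sat_customer  VALUES ('C42', 'Osaka', '2022-08-01', 'crm');
    INSERT INTO sat_customer  VALUES ('C42', 'Tokyo', '2022-09-01', 'crm');  -- a change = a new row
""")

# Every row carries a load timestamp and record source, so data lineage
# is auditable by construction.
print(con.execute("SELECT * FROM sat_customer ORDER BY load_ts").fetchall())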

Data must be transformed before it can be loaded into a data warehouse. This creates entry barriers and delays in delivering value to customers. Organizations generally have multiple sources of data, and all of them must be imported into the data warehouse before anyone can make business decisions with them, or even tell whether they add value. If a user later discovers that combining the data from certain sources does not deliver the value that was initially expected, the time and resources spent ingesting it are wasted. Also, it is not always possible to forecast analytical requirements in a world where businesses have to constantly evolve to stay relevant: what happens if a business user needs historical data that isn’t available in the data warehouse? Around 2015, data lakes emerged to solve these problems.
