Chapter 5: Data Consolidation in Delta Lake

"Faith makes you stable and steady. It brings out the totality in you.

Consolidation of your energy is faith. Dissemination of energy is doubt."

– Sri Sri Ravi Shankar, on Transcendental Meditation

In the previous chapters, we discussed the qualities of Delta and why it has become a first choice for big data processing. In this chapter, we will focus on how to consolidate disparate datasets into one or more data lakes backed by Delta so that you can build all kinds of use cases on a single source of truth without having to move data or stitch together multiple systems. We have already looked at the special features Delta offers, including ACID transaction support, schema evolution, time travel, and fine-grained data operations, as well as big data design blueprints (such as the medallion architecture) in the context of data workflows. In this chapter, we will use those...

Technical requirements

To follow along with this chapter, make sure you have the code and instructions detailed at this GitHub location: https://github.com/PacktPublishing/Simplifying-Data-Engineering-and-Analytics-with-Delta/tree/main/Chapter05

The examples in this book cover some Databricks-specific features to provide a complete view of Delta's capabilities. Newer features continue to be ported from Databricks to open source Delta.

Let's get started!

Why consolidate disparate data types?

Every finger on your hand serves a purpose, and although you could use any of them interchangeably to press a key or push a door, it cannot be denied that you need all of them for complex tasks such as typing, playing the piano, or grasping an object tightly. That is because the whole is usually greater than the sum of its parts. This is true in the data world as well. You may have specialized Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) operational data stores such as Salesforce, Systems, Applications, and Products (SAP), Jira, or others. However, there is greater value in looking at a cross-section of your customers or accounts by bringing all the relevant and crucial pieces together, extracting the value without drowning in data. Most transactional data is structured (or at best semi-structured), but much of the secondary data used to augment the core data is unstructured in nature, such as images...

Delta unifies all types of data

In this section, we will give you some examples of how to ingest (read) different data types into a Spark DataFrame and save them all in the Delta format in the data lake using a common API, df.write.format("delta"). This curated data becomes the single source of truth for all BI and AI use cases, as shown in the following diagram:

Figure 5.3 – Consolidating all your data in a central Delta Lake
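
The following is a minimal PySpark sketch of this pattern, assuming a Databricks-style environment where spark is predefined and Delta Lake is available; all file paths are placeholders:

    # Ingest structured and semi-structured sources with Spark's native
    # readers, then land them all in Delta with the same write API.
    csv_df = spark.read.option("header", "true").csv("/raw/orders.csv")  # structured
    json_df = spark.read.json("/raw/clickstream.json")                   # semi-structured
    parquet_df = spark.read.parquet("/raw/legacy_exports/")              # structured

    # One common API regardless of the source format
    for df, name in [(csv_df, "orders"), (json_df, "clickstream"), (parquet_df, "legacy")]:
        df.write.format("delta").mode("overwrite").save(f"/delta/bronze/{name}")

Whatever the source looked like, every resulting table now shares Delta's transaction log, schema enforcement, and time travel.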

The three main data types are structured, semi-structured, and unstructured, and Spark's native APIs (along with partner-aided connectors) can ingest data from a wide variety of data sources to create a curated view that all consumers can access, depending on their privilege levels. In some cases, different Lines of Business (LOBs) may want to have their own mini data lake, and that is perfectly fine, as it follows a hub-and-spoke model of a central data repository for common access and specialized ones with tighter guardrails...
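
Unstructured data fits the same pattern. As an illustration, here is a hedged sketch using Spark's binaryFile reader (available since Spark 3.0) to pull images into a DataFrame; the input path is a placeholder:

    # Ingest unstructured data (e.g., images) as binary records.
    images_df = (spark.read.format("binaryFile")
                 .option("pathGlobFilter", "*.png")  # only pick up PNG files
                 .load("/raw/product_images/"))

    # Columns: path, modificationTime, length, content (the raw bytes)
    images_df.write.format("delta").mode("append").save("/delta/bronze/product_images")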

Avoiding patches of data darkness

There are different lenses through which to measure data quality. In simple terms, you want clean, complete, accurate, consistent, timely, and unbiased data. You want your stakeholders to trust the data so that they can build more sophisticated data products. Multiple personas using different views of the data should not get contradictory data points, and at no point should false facts be made visible, because compliance and audit will uncover them sooner or later.

There are some common problems that every organization dealing with big data grapples with, and they lead to compromises in data quality: failed production jobs, lack of schema enforcement, lack of data consistency, lost data, and compliance requirements such as the GDPR. Let's examine these problems in the context of a simple airline use case that shows flight delays, and see how Delta's features help address data quality.
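
To make the schema enforcement point concrete before diving into the use case, here is a hedged sketch on a hypothetical flights table (the table path and columns are illustrative):

    # Create a small Delta table of flight delays.
    flights = spark.createDataFrame(
        [("DL123", "BOS", "SFO", 15)],
        ["flight_no", "origin", "dest", "delay_minutes"])
    flights.write.format("delta").mode("overwrite").save("/delta/flights")

    # An append with an unexpected extra column fails by default:
    # Delta enforces the table's schema instead of silently corrupting it.
    with_weather = spark.createDataFrame(
        [("DL124", "BOS", "JFK", 5, "snow")],
        ["flight_no", "origin", "dest", "delay_minutes", "weather"])
    try:
        with_weather.write.format("delta").mode("append").save("/delta/flights")
    except Exception as e:
        print("Rejected by schema enforcement:", e)

    # Opting in to schema evolution admits the new column deliberately.
    (with_weather.write.format("delta")
     .option("mergeSchema", "true")
     .mode("append").save("/delta/flights"))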

Addressing problems in flight status using Delta

This use...

Curating data in stages for analytics

The raw data has to be wrangled and transformed to be consumable and ready for analytics. Each data persona may look at different data aspects or features, and there is no reason for all of them to run the same cleansing functions repeatedly; if they did, they would each end up with multiple copies of the data and unnecessary processing cycles, which is both time-consuming and expensive. This is where a good data catalog and design blueprints help to maintain discipline, offer data discovery opportunities for reusable components, and prevent redundant work. We have already looked at the medallion architecture, and the bronze, silver, and gold zones are where data is forged and made usable.
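
Here is a hedged sketch of these stages, continuing the flights example (the paths and the departure_ts column are illustrative):

    from pyspark.sql import functions as F

    # Bronze: raw ingested data, kept as-is.
    bronze = spark.read.format("delta").load("/delta/bronze/flights")

    # Silver: cleansed once, so downstream personas do not repeat the work.
    silver = (bronze
              .dropDuplicates(["flight_no", "departure_ts"])
              .filter(F.col("delay_minutes").isNotNull()))
    silver.write.format("delta").mode("overwrite").save("/delta/silver/flights")

    # Gold: aggregated, analytics-ready view.
    gold = silver.groupBy("origin").agg(F.avg("delay_minutes").alias("avg_delay"))
    gold.write.format("delta").mode("overwrite").save("/delta/gold/delays_by_origin")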

RDD, DataFrames, and datasets

This is a good time to refresh the concepts of RDDs, DataFrames, and datasets. RDD stands for Resilient Distributed Dataset and is the original low-level construct of Spark. RDDs have to be optimized by hand at each stage and carry no schema that Spark can use for optimization. DataFrames...
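
The difference is easy to see in a small, self-contained example: the same average-delay computation written against an RDD and against a DataFrame. The DataFrame version has a schema and is planned by Spark's Catalyst optimizer, while the RDD version's lambdas are opaque to Spark:

    # RDD: low-level, schema-less; Spark cannot look inside the lambdas.
    rdd = spark.sparkContext.parallelize([("BOS", 15), ("SFO", 40), ("BOS", 5)])
    rdd_avg = (rdd.mapValues(lambda d: (d, 1))
                  .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                  .mapValues(lambda s: s[0] / s[1]))

    # DataFrame: declarative, schema-aware, optimized by Catalyst.
    from pyspark.sql import functions as F
    df = spark.createDataFrame([("BOS", 15), ("SFO", 40), ("BOS", 5)],
                               ["origin", "delay_minutes"])
    df_avg = df.groupBy("origin").agg(F.avg("delay_minutes").alias("avg_delay"))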

Ease of extending to existing and new use cases

Business demands are constantly changing and evolving, and a well-designed data lake provides scalability not just for the storage and compute needs of existing use cases but also for supporting new processing modes to meet emerging, unforeseen scenarios. In other words, it is agnostic of the specifics of the industry vertical or use case. Exploratory data analysis, machine learning, and business analytics can all work from the same view of the data. New tools and frameworks can be layered in, and as long as the data does not have to move around, it provides a solid base for data personas to focus on their specific use case goals instead of solving common infrastructure-level issues over and over again, increasing their productivity and the speed with which use cases get to production. Delta is an open format and, once curated, the data is available to a host of other tools using connectors. The following section talks about some popular connectors...
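
As one illustration of that openness, here is a hedged sketch using the open source delta-rs Python bindings (the deltalake package) to read a curated Delta table without a Spark cluster; the table path is a placeholder:

    from deltalake import DeltaTable

    dt = DeltaTable("/delta/gold/delays_by_origin")
    pandas_df = dt.to_pandas()  # hand the curated data to pandas-based tooling
    print(dt.version())         # current transaction-log version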

Data governance

Data democratization and self-service capabilities are some of the advantages of data lakes. A data governance layer is imperative to put the right guardrails in place while allowing stakeholders to get the most business value from the generated and curated data and insights. A good data catalog is essential for producing actionable insights in any data-driven organization. Cloud vendors have their own offerings, such as AWS Glue, Azure Purview, and Azure Data Catalog. Apache Atlas is probably the most popular open source offering, and there are vendors who specialize in this area such as Alation and Collibra.

The three primary goals of governance are the following:

  • Keeping data secure, so that only the right privileges and roles dictate access to data (see the sketch after this list)
  • Ensuring the quality of the stored data is high, so that it is meaningful to its consumers, who then develop trust in the data and, hence, in the insights generated on top of it
  • Discovering data so that...
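
On platforms that support SQL table access control (for example, Databricks), the first goal can be expressed as grants. Here is a hedged sketch; the table and group names are hypothetical, and the exact syntax varies by platform:

    # Grant read access to one group and revoke everything from another.
    spark.sql("GRANT SELECT ON TABLE flights_gold TO `analysts`")
    spark.sql("REVOKE ALL PRIVILEGES ON TABLE flights_gold FROM `interns`")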

Summary

It is interesting to note how the term data lake came about. It is not called a pond because a pond is perceived to be small. It is not called a sea or an ocean because saltwater looks murky and the waves are rough and uncontrolled. It is not called a stream because "streaming" is already heavily used in the context of real-time processing. It is not a river because the water drains away, whereas the vision of a data lake is that of a pristine reservoir that provides food and shelter to a lot of flora and fauna, and one that could turn into a swamp if you're not careful with governance and management. In this chapter, we went over the need for data consolidation and how Delta helps with data reliability, quality, and governance, giving us curated, analytics-ready data and preventing silos and swamps. Data, once curated, remains in an open format and is used in multiple use cases by different data personas, enabling them to be more agile in onboarding new use cases and...
