You're reading from Distributed Data Systems with Azure Databricks

Product type: Book
Published in: May 2021
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781838647216
Edition: 1st
Author: Alan Bernardo Palacio
Alan Bernardo Palacio is a data scientist and engineer with extensive experience across several engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in a number of industries. He has worked for companies such as Ernst & Young and Globant, and now holds a data engineer position at Ebiquity Media, where he helps the company build a scalable data pipeline. Alan graduated with a mechanical engineering degree from the National University of Tucuman in 2015, founded startups, and later earned a master's degree from the Faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now lives and works in the Netherlands.

Chapter 4: Delta Lake with Azure Databricks

In this chapter, we will learn how to make use of Azure Databricks by showing how easy it is to work with Delta tables, as well as how to process data and integrate different solutions in Azure Databricks. We will begin by introducing Delta Lake and the ways in which we can ingest data into it: partner integrations, the COPY INTO command, and the Azure Databricks Auto Loader. Then, we'll show you how to process the data once it has been loaded, as well as how we can use advanced features to process and optimize ETLs that rely on streams of data.

We will learn how to store and process data efficiently using Delta by covering the following topics:

  • Introducing Delta Lake
  • Ingesting data using Delta Lake
  • Batching table reads and writes in queries
  • Querying past states of a table
  • Streaming table reads and writes

Technical requirements

To work with Delta Engine in this chapter, you will need an Azure Databricks subscription, as well as a running cluster with a notebook attached.

Introducing Delta Lake

Using a data lake has become the de facto solution for many data engineering tasks. This storage layer is composed of files, often arranged historically, rather than the tables of a data warehouse. Its great advantage is that it decouples storage from compute, which makes data lakes much cheaper than a database. However, the data stored in a data lake has no primary or foreign keys, which makes it hard to extract the information held in it. Therefore, data lakes are treated as append-only stores: when we try to query or delete records, we need to go through all the files in the data lake, which can be a very resource-intensive and slow task.

This makes data lakes hard to update, and they can cause problems when used in cases where data needs to be queried frequently. This includes customer or transactional data, financial applications that require robust data handling, or when we...

Ingesting data using Delta Lake

Data can be ingested into Delta Lake in several ways. Azure Databricks offers several integrations with partners, which provide data sources that are loaded as Delta tables. We can also copy a file directly into a table with the COPY INTO command, use Auto Loader, or create a new streaming table. Let's take a deeper look at this.
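Before we look at each option in detail, here is a minimal sketch of the two most direct routes, COPY INTO and Auto Loader. The table name, storage paths, schema, and file formats below are hypothetical examples:

# COPY INTO: idempotently load files from a folder into an existing Delta table.
spark.sql("""
  COPY INTO sales_bronze
  FROM '/mnt/raw/sales/'
  FILEFORMAT = CSV
  FORMAT_OPTIONS ('header' = 'true', 'inferSchema' = 'true')
""")

# Auto Loader: incrementally pick up new files as they arrive in cloud storage
# and stream them into a Delta table.
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
])

events = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .schema(event_schema)
    .load("/mnt/raw/events/"))

(events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events/")
    .start("/mnt/delta/events/"))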

Partner integrations

Azure Databricks allows us to connect to different partners that provide data sources. These are easy to implement and provide scalable ingestion.

We can view the options that we have for ingesting data from Partner Integrations when creating a new table in the UI, as shown in the following screenshot:

Figure 4.1 – Ingesting data from Partner Integrations

Some of these integrations, such as Qlik, allow you to get data from multiple data sources such as Oracle, Microsoft SQL Server, and SAP and load it into Delta Lake. Of course, these integrations require you to have a subscription to...

Batching table reads and writes

When performing DML operations such as merge and update on several large tables stored in databases with high concurrency, the transaction log can become blocked and lead to real outages in the data warehouse. All SQL statements are atomic, which means that modifications that take a long time keep the data locked for as long as the process runs, which can be a problem for real-time databases. To reduce the computational burden of these operations, we can optimize some of them so that they run on smaller, easier-to-handle batches that only lock resources for brief periods.

Let's see how we can implement batch reads and writes in Delta Lake, thanks to the options provided by the Apache Spark API.
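As a minimal sketch of what such batch operations look like with the Spark DataFrame API (the storage path and column name are hypothetical examples):

# Batch write: create or overwrite a Delta table at a storage path.
df = spark.range(0, 1000).withColumnRenamed("id", "event_id")
df.write.format("delta").mode("overwrite").save("/mnt/delta/events")

# Batch read: load the same table back into a DataFrame for querying.
events = spark.read.format("delta").load("/mnt/delta/events")
print(events.count())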

Creating a table

We can create Delta Lake tables either by using the Apache Spark DataFrameWriter or by using DDL commands such as CREATE TABLE. Let's take a look:

  • Delta Lake tables are created in the metastore...
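As a hedged illustration of the two approaches mentioned above, assuming a hypothetical DataFrame df and hypothetical table names, columns, and location:

# 1. DataFrameWriter: register a Delta table in the metastore from a DataFrame.
df.write.format("delta").saveAsTable("events")

# 2. DDL: create a Delta table with a CREATE TABLE statement.
spark.sql("""
  CREATE TABLE IF NOT EXISTS events_raw (
    event_id BIGINT,
    event_time TIMESTAMP
  ) USING DELTA
  LOCATION '/mnt/delta/events_raw'
""")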

Streaming table reads and writes

Although we have already mentioned that Structured Streaming is a core part of the Apache Spark API, in this section we will dive into how we can use it as a reliable stream processing engine, in which computation is performed in the same way as batch computation on static data. Together with Auto Loader, it automatically handles data being streamed into Delta tables without the common inconveniences, such as merging the small files produced by low-latency ingestion, running concurrent batch jobs when working with several streams, and, as we discussed earlier, keeping track of which files are available to be streamed into tables.

Let's learn how to stream data into Delta tables in Azure Databricks.

Streaming from Delta tables

You can use a Delta table as a stream source for streaming queries. The query will then process any existing data in the table and any incoming data from the stream. Let's take a look:

  • We can...
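As a minimal sketch of reading a Delta table as a streaming source and writing the results to a downstream Delta table (the paths are hypothetical examples):

# Use an existing Delta table as the source of a streaming query.
stream_df = (spark.readStream
    .format("delta")
    .load("/mnt/delta/events"))

# The query first processes the rows already in the source table and then any
# rows appended afterwards, writing them to a downstream Delta table.
(stream_df.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events_silver")
    .start("/mnt/delta/events_silver"))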

Summary

Implementing a data lake is a paradigm shift within an organization. Delta Lake provides a solution when we are dealing with streams of data from different sources, when the schema of the data might change over time, and when we need a system that is reliable against data mishandling and easy to audit.

Delta Lake fills the gap between the functionality of a data warehouse and the benefits of a data lake, while also overcoming most of the challenges of the latter.

Schema validation ensures that our ETL pipelines remain reliable against changes in the tables: it raises an exception whenever a mismatch arises, before the data can become contaminated. If the change was intentional, we can use schema evolution instead.
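As a hedged sketch of what this looks like in practice, assuming a hypothetical DataFrame new_df and table path:

# Appending a DataFrame whose schema does not match the target Delta table
# fails schema validation with an exception before any data is written.
# If the change is intentional, enabling mergeSchema evolves the table schema
# to include the new columns.
(new_df.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .save("/mnt/delta/events"))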

Time travel allows us to access historic versions of the data, thanks to Delta Lake's ordered transaction log, which keeps track of every operation performed on Delta tables. This is useful when we need to define pipelines that need to query different...
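As a minimal sketch of time travel, assuming a hypothetical table path, version number, and timestamp:

# Read the table as it existed at a specific version of its transaction log.
past_df = (spark.read
    .format("delta")
    .option("versionAsOf", 3)
    .load("/mnt/delta/events"))

# Alternatively, read the table as of a point in time.
past_by_time = (spark.read
    .format("delta")
    .option("timestampAsOf", "2021-05-01")
    .load("/mnt/delta/events"))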
