Chapter 7: Implementing the Serving Layer

I hope you enjoyed learning about the implementation details of the logical data structures in the previous chapter. In this chapter, we will learn about implementing the serving layer, which involves implementing star schemas, techniques for reading and writing different data formats, sharing data between services such as SQL and Spark, and more. Once you complete this chapter, you should be able to understand the differences between a Synapse dedicated SQL pool and traditional SQL systems for implementing the star schema, the various ways of accessing Parquet data using technologies such as Spark and SQL, and the details involved in storing metadata across services. All this knowledge should help you build a practical and maintainable serving layer in a data lake and, of course, clear the certification too.

We will cover the following topics in this chapter:

  • Delivering data in a relational star schema
  • Implementing a dimensional...

Technical requirements

For this chapter, you will need the following:

  • An Azure account (free or paid)
  • An active Synapse workspace

Let's get started.

Delivering data in a relational star schema

We learned about star schemas in Chapter 4, Designing the Serving Layer. We will take the same example here and show how to implement a star schema in Synapse SQL and deliver data from it.

Star schemas have two types of tables: fact tables and dimension tables. Fact tables are usually much larger in volume than dimension tables and hence benefit from a hash distribution with clustered columnstore indexing. Dimension tables, on the other hand, are smaller and can benefit from being created as replicated tables.
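To make this concrete, here is a minimal T-SQL sketch of the two distribution choices in a dedicated SQL pool, using the FactTrips and DimDriver tables from the IAC example that follows (the column names are hypothetical, for illustration only):

    -- Large fact table: hash-distributed with a clustered columnstore index
    CREATE TABLE dbo.FactTrips
    (
        [tripId]   INT NOT NULL,
        [driverId] INT NOT NULL,
        [fare]     DECIMAL(10, 2)
    )
    WITH
    (
        DISTRIBUTION = HASH([tripId]),   -- spreads rows evenly across the distributions
        CLUSTERED COLUMNSTORE INDEX      -- best compression and scan performance for large tables
    );

    -- Small dimension table: replicated to every compute node to avoid data movement in joins
    CREATE TABLE dbo.DimDriver
    (
        [driverId] INT NOT NULL,
        [name]     VARCHAR(100)
    )
    WITH
    (
        DISTRIBUTION = REPLICATE,
        CLUSTERED INDEX ([driverId])
    );

Choosing the hash column well matters: a column with many distinct values that is frequently used in joins or aggregations helps avoid data skew across the distributions.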

Important Note

At the time of writing this book, Synapse dedicated SQL pools did not support foreign key constraints. Hence, the responsibility of maintaining data integrity falls on the applications.
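Since the engine will not enforce referential integrity for you, an application can periodically run a check such as the following (a sketch using the hypothetical columns from the earlier example) to detect fact rows that reference missing dimension rows:

    -- Find trips whose driverId has no matching row in DimDriver
    SELECT f.[tripId], f.[driverId]
    FROM dbo.FactTrips AS f
    LEFT JOIN dbo.DimDriver AS d
        ON f.[driverId] = d.[driverId]
    WHERE d.[driverId] IS NULL;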

Let's consider the same Imaginary Airport Cabs (IAC) cab rides example for our star schema from Chapter 4, Designing the Serving Layer. We had the following tables in that design:

  • FactTrips
  • DimDriver...

Implementing a dimensional hierarchy

As we have already explored dimensional hierarchies in Chapter 4, Designing the Serving Layer, under the Designing a dimensional hierarchy section, we will not repeat the details here. Please have a look at Chapter 4 to refresh your understanding of dimensional hierarchies.

Let's next look at the techniques available for reading and writing data in Parquet files.

Delivering data in Parquet files

In this section, we will learn to deliver data that is present in Parquet files via both Synapse SQL and Spark. We will use the concept of external tables that we learned in the previous chapter to accomplish this.

Let's consider an example where we have the trips data stored as Parquet files. We will use Synapse SQL serverless pools and Spark to access the data and run some sample queries on it.

Synapse SQL serverless

Let's create an external table from within the Synapse SQL query editor that points to the Parquet files on the Azure...
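Although the full walkthrough continues beyond this excerpt, the overall shape of the serverless SQL setup looks roughly like this (a sketch with a hypothetical storage account, path, and columns):

    -- Hypothetical storage account and container holding the trips Parquet files
    CREATE EXTERNAL DATA SOURCE TripsDataSource
    WITH (LOCATION = 'https://iacstorage.dfs.core.windows.net/trips');

    CREATE EXTERNAL FILE FORMAT ParquetFileFormat
    WITH (FORMAT_TYPE = PARQUET);

    CREATE EXTERNAL TABLE dbo.TripsParquet
    (
        [tripId]   INT,
        [driverId] INT,
        [fare]     FLOAT
    )
    WITH
    (
        LOCATION    = '/*.parquet',          -- file or folder path under the data source
        DATA_SOURCE = TripsDataSource,
        FILE_FORMAT = ParquetFileFormat
    );

    -- The Parquet data can now be queried like a regular table
    SELECT TOP 10 * FROM dbo.TripsParquet;

Note that the external table stores nothing in the SQL pool itself; serverless queries read the Parquet files in place, and you pay only for the data scanned.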

Maintaining metadata

As we have seen in Chapter 4, Designing the Serving Layer, metastores are like data catalogs that contain information about all the tables you have, the table schemas, the relationships among them, where they are stored, and so on. In that chapter, we learned at a high level how to access the metadata in Synapse and Databricks. Now, let's learn the details of implementing them.

Metadata using Synapse SQL and Spark pools

Synapse supports a shared metadata model. The databases and tables that use Parquet or CSV storage formats are automatically shared between the compute pools, such as SQL and Spark.

Important Note

At the time of writing this book, data created from Spark can be read and queried by SQL pools but cannot be modified from them.

Let's look at an example of creating a database and a table using Spark and accessing it via SQL:

  1. In the Synapse Spark notebook, create a sample table, as shown in the following screenshot...
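While the screenshot itself is not included in this excerpt, a minimal sketch of what such a notebook cell might contain (using Spark SQL and hypothetical names) is:

    %%sql
    -- Run in a Synapse Spark notebook cell
    CREATE DATABASE IF NOT EXISTS iacdb;

    CREATE TABLE IF NOT EXISTS iacdb.trips
        (tripId INT, driverId INT, fare FLOAT)
        USING PARQUET;

    INSERT INTO iacdb.trips VALUES (1, 200, 45.0);

Because the table uses the Parquet format, the shared metadata model makes it visible to the serverless SQL pool automatically, where it appears under the dbo schema of a database with the same name and can be queried (read-only):

    -- From the serverless SQL pool
    SELECT * FROM iacdb.dbo.trips;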

Summary

This was another small but interesting chapter. We started with implementing the star schema, then learned about delivering data in Parquet format, and, finally, looked at how we can share data between SQL and Spark, and between Spark and Hive, in Azure. With all this knowledge, you should now be able to design and implement a basic serving layer for a data lake architecture using Azure services. To learn more, please follow the links that I have provided at the end of important topics.

With this, we have reached the end of the first major section of the DP-203 syllabus, Designing and Implementing Data Storage. This covers about 40–45% of the certification examination. We are getting closer to the halfway mark. Good going!

In the next section, we will learn about designing and developing data processing systems and, more specifically, about ingesting and transforming data in data lakes.
