
You're reading from Azure Data Factory Cookbook - Second Edition

Product type: Book
Published in: Feb 2024
Publisher: Packt
ISBN-13: 9781803246598
Edition: 2nd
Authors (4):
Dmitry Foshin

Dmitry Foshin is a business intelligence team leader whose main goal is to deliver business insights to the management team through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building DWHs and reporting) using Azure technologies, Data Lake, Data Factory, Databricks, MS Office 365, Power BI, and Tableau. He has also successfully launched numerous data analytics projects, both on-premises and in the cloud, that help achieve corporate goals in international FMCG companies, banking, and manufacturing industries.

Tonya Chernyshova

Tonya Chernyshova is an experienced Data Engineer with over 10 years in the field, including time at Amazon. Specializing in Data Modeling, Automation, Cloud Computing (AWS and Azure), and Data Visualization, she has a strong track record of delivering scalable, maintainable data products. Her expertise drives data-driven insights and business growth, showcasing her proficiency in leveraging cloud technologies to enhance data capabilities.

Dmitry Anoshin

Dmitry Anoshin is a data-centric technologist and a recognized expert in building and implementing big data and analytics solutions. He has a successful track record of implementing business and digital intelligence projects in numerous industries, including retail, finance, marketing, and e-commerce. Dmitry possesses in-depth knowledge of digital/business intelligence, ETL, data warehousing, and big data technologies. He has extensive experience in the data integration process and is proficient in using various data warehousing methodologies. Dmitry has consistently exceeded project expectations in the financial, machine tool, and retail industries, and has completed a number of multinational full BI/DI solution life cycle implementation projects. With expertise in data modeling, Dmitry also has a background and business experience in multiple relational databases, OLAP systems, and NoSQL databases. He is an active speaker at data conferences and helps people adopt cloud analytics.

Xenia Ireton

Xenia Ireton is a Senior Software Engineer at Microsoft. She has extensive knowledge of building distributed services, data pipelines, and data warehouses.


Introduction

A data lake is a central storage system that stores data in its raw format. It is used to collect huge amounts of data that have yet to be analyzed by analysts and data scientists, or that must be retained for regulatory purposes. As the amount and variety of data that a company operates with increases, it becomes increasingly difficult to preprocess and store it all in a traditional data warehouse. By design, data lakes are built to handle unstructured and semi-structured data with no pre-defined schema.

On-premises data lakes are difficult to scale and require thorough requirements gathering and cost estimation. Cloud data lakes are often considered an easier-to-use and easier-to-scale alternative. In this chapter, we will go through a set of recipes that will help you launch a data lake, load data from external storage, and build ETL/ELT pipelines around it.

Azure Data Lake Gen2 can store both structured and unstructured data. In this chapter, we will load and manage our datasets in Azure Data Lake...

Technical requirements

You need to have access to Microsoft Azure with a Synapse workspace resource created. An Azure free account is sufficient for all the recipes in this chapter. To create an account, use the following link: https://azure.microsoft.com/free/. To create a Synapse workspace resource, please refer to Chapter 3.

Setting up Azure Data Lake Storage Gen2

Azure Data Lake Storage Gen2 is a versatile solution that can be used as a single storage platform. It is Hadoop-compatible, so you can use it with HDInsight and Databricks, which we will cover in the next chapter. Setting up properly configured storage is a critical task for developers and data engineers. In this section, we will set up and configure a scalable Azure data lake to be used with Azure Data Factory and Azure Synapse Analytics.

Getting ready

To get started with your recipe, log in to your Microsoft Azure account.

How to do it...

Azure Data Lake Gen2 uses a hierarchical namespace. Unless you already have a storage account with the hierarchical namespace enabled, you will have to create a new one. Now that we have set up the resource group, let's create a storage account (a short programmatic check follows the steps below):

  1. Search for Storage accounts in the Azure search bar and click on it.
  2. To add a new storage account, click + Add.
  3. Select Azure Subscription and Resource Group.
  4. Add a Storage...
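Once the storage account from the preceding steps exists, you can verify that the hierarchical namespace works by creating a filesystem (container) and a nested directory programmatically. Below is a minimal sketch using the azure-storage-file-datalake and azure-identity packages; the account name, container name, and folder path are hypothetical placeholders:

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholder names: replace with your own storage account and container.
ACCOUNT_NAME = "adfcookbooklake"
FILE_SYSTEM = "datalake"

# DefaultAzureCredential picks up az login, environment variables,
# or a managed identity, whichever is available.
service = DataLakeServiceClient(
    account_url=f"https://{ACCOUNT_NAME}.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Create a filesystem (container) and a nested directory inside it.
# Nested paths like this are exactly what the hierarchical namespace enables.
fs = service.create_file_system(file_system=FILE_SYSTEM)
fs.create_directory("raw/movielens")

# Upload a tiny file to confirm read/write access end to end.
fs.get_file_client("raw/movielens/hello.txt").upload_data(
    b"data lake is ready", overwrite=True
)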

Create Synapse Analytics Spark Pool

A Spark pool is a fundamental component that provides the computing resources for running large-scale Apache Spark jobs within Synapse Analytics. In this recipe, we'll guide you through the process of creating a Spark pool in Azure Synapse. You'll learn how to configure pool settings, customize resource allocation, manage credentials, and monitor job progress. By the end, you'll have the knowledge to provision and configure a Spark pool in Azure Synapse, enabling you to harness the power of Apache Spark for high-performance data processing and analytics.

Getting ready

To get started with your recipe, log in to your Microsoft Azure account. You need to have a Synapse workspace created.

How to do it...

To create a Synapse Analytics Spark pool, follow these steps:

  1. Log in to the Azure portal and navigate to your Synapse Analytics workspace.
  2. Click on the "Manage" button in the left-hand menu and select "Apache Spark pools"...
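After the pool is created, a quick way to confirm that it is usable is to attach a Synapse notebook to it and run a trivial job. A minimal sketch follows; the spark session object is injected automatically by the Synapse notebook runtime, so no setup code is needed:

# Run inside a Synapse notebook attached to the new Spark pool.
print(spark.version)  # the Spark runtime version configured for the pool

# A small distributed job to confirm that executors spin up.
df = spark.range(0, 1_000_000)               # one column, `id`
print(df.selectExpr("sum(id)").first()[0])   # 499999500000

If the pool has auto-pause enabled, the first cell may take a few minutes while the pool starts up.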

Integrate Azure Data Lake and run Spark Pool jobs

In this recipe, we'll explore how to integrate Azure Data Lake with a Spark pool in Azure Synapse Analytics. By combining these services, we can unlock powerful data processing and analysis workflows. We'll cover the steps to establish the connection, run Spark jobs, and leverage the capabilities of both services. Get ready to harness the potential of Azure Data Lake and Spark pools for efficient and scalable data processing.

Getting ready

Let's load and preprocess the MovieLens dataset (F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS) 5, 4: 19:1–19:19. https://doi.org/10.1145/2827872). It contains ratings and free-text tagging activity from a movie recommendation service. The MovieLens dataset exists in a few sizes, all of which have the same structure. The...
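As a preview of what the integration looks like in code, here is a minimal sketch that reads the MovieLens ratings file from ADLS Gen2 inside a Synapse notebook. The abfss URL components (container, account, and path) are hypothetical placeholders matching the lake layout assumed earlier:

# Placeholder lake layout: adjust container, account, and path to yours.
ratings_path = (
    "abfss://datalake@adfcookbooklake.dfs.core.windows.net"
    "/raw/movielens/ratings.csv"
)

# ratings.csv columns: userId, movieId, rating, timestamp
ratings = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(ratings_path)
)

ratings.printSchema()
print(ratings.count())

Note that the workspace's managed identity (or your user account) typically needs the Storage Blob Data Contributor role on the storage account for this read to succeed.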

Build and orchestrate a data pipeline for Data Lake and Spark

In this recipe, we will explore the process of constructing and orchestrating data pipelines using Data Lake and Spark in Azure Synapse Analytics. By leveraging the capabilities of these services, you can efficiently manage and process large amounts of data for analysis and insights.

  1. We will improve the code in the notebook from the previous recipe to create a data transformation job and orchestrate it from Synapse Integrate. You'll learn how to extract, transform, and load data into a data lake using Spark, building efficient and scalable pipelines; a short sketch of the transformation step follows this list.
  2. Additionally, we'll cover the crucial aspect of orchestration, where you'll discover how to schedule, monitor, and manage the execution of your data pipelines using Azure Synapse's orchestration capabilities.
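To give a flavor of the transformation step before wiring it into a pipeline, here is a minimal sketch that aggregates the MovieLens ratings loaded in the previous recipe and writes the result back to the lake as Parquet. The paths and column names continue the hypothetical layout used above:

from pyspark.sql import functions as F

# Aggregate: average rating and rating count per movie.
movie_stats = (
    ratings.groupBy("movieId")
    .agg(
        F.avg("rating").alias("avg_rating"),
        F.count("*").alias("num_ratings"),
    )
)

# Write the curated output back to the lake. A pipeline built in
# Synapse Integrate can then schedule this notebook and monitor its runs.
output_path = (
    "abfss://datalake@adfcookbooklake.dfs.core.windows.net"
    "/curated/movielens/movie_stats"
)
movie_stats.write.mode("overwrite").parquet(output_path)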

Getting ready

To get started with your recipe, log in to your Synapse Analytics workspace. You’ll need to have the notebook created in the previous...

Ingesting data into Delta Lake using Mapping Data Flows

In the realm of data management, Atomicity, Consistency, Isolation, and Durability (ACID) is a foundational set of principles ensuring the reliability and integrity of database transactions. Let's break down the significance of each component (a code sketch follows the list):

  • Atomicity: Guarantees that a transaction is treated as a single, indivisible unit. It either executes in its entirety or not at all. This ensures that even if a system failure occurs mid-transaction, the database remains in a consistent state.
  • Consistency: Enforces that a transaction brings the database from one valid state to another. Inconsistent states are avoided, providing a reliable and predictable environment for data operations.
  • Isolation: Ensures that transactions operate independently of each other, preventing interference. Isolation safeguards against concurrent transactions affecting each other’s outcomes, maintaining data integrity.
  • Durability...
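Mapping Data Flows provide these guarantees through a visual interface. As a code-level illustration of the same ACID behavior, here is a hedged sketch of an atomic overwrite and an upsert using the Delta format and the Delta Python API in a Synapse Spark notebook; the path and the movie_stats DataFrame continue the hypothetical layout from the previous sketches:

from delta.tables import DeltaTable

delta_path = (
    "abfss://datalake@adfcookbooklake.dfs.core.windows.net"
    "/delta/movielens/movie_stats"
)

# Initial load: the write commits atomically via the Delta transaction
# log, so readers never observe a half-written table.
movie_stats.write.format("delta").mode("overwrite").save(delta_path)

# Upsert (MERGE): matched rows are updated and new rows are inserted in
# a single isolated, durable transaction.
target = DeltaTable.forPath(spark, delta_path)
(
    target.alias("t")
    .merge(movie_stats.alias("u"), "t.movieId = u.movieId")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)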

External integrations with other compute engines (Snowflake)

Azure Data Factory (ADF), a powerful cloud-based data integration service from Microsoft, has emerged as the go-to solution for enterprises seeking efficient and scalable data movement across various platforms. With its extensive capabilities, ADF not only enables seamless data integration within the Azure ecosystem but also offers external integrations with leading compute engines such as Snowflake.

Azure Data Factory’s integration with Snowflake enables enterprises to seamlessly leverage cloud data warehousing capabilities. Snowflake’s architecture, built for the cloud, complements Azure’s cloud-native approach, offering a scalable and elastic solution for storing and processing vast amounts of data. The integration supports the creation of cost-effective and scalable data solutions. Azure Data Factory’s ability to dynamically manage resources and Snowflake’s virtual warehouse architecture...
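One programmatic way to register such a connection is to create a Snowflake linked service with the azure-mgmt-datafactory SDK. The sketch below is illustrative only; all resource names and the connection string are hypothetical placeholders, and in production the credentials should come from Azure Key Vault rather than an inline string:

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    LinkedServiceResource,
    SnowflakeLinkedService,
)

# Placeholder identifiers: substitute your own.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "adf-cookbook-rg"
FACTORY_NAME = "adf-cookbook-v2"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Snowflake connection string; keep real secrets in Key Vault.
snowflake_ls = LinkedServiceResource(
    properties=SnowflakeLinkedService(
        connection_string=(
            "jdbc:snowflake://<account>.snowflakecomputing.com/"
            "?user=<user>&db=<database>&warehouse=<warehouse>"
        )
    )
)

client.linked_services.create_or_update(
    RESOURCE_GROUP, FACTORY_NAME, "SnowflakeLinkedService", snowflake_ls
)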
