Azure Data Factory Cookbook - Second Edition

Product type: Book
Published in: Feb 2024
Publisher: Packt
ISBN-13: 9781803246598
Pages: 532
Edition: 2nd
Authors: Dmitry Foshin, Tonya Chernyshova, Dmitry Anoshin, Xenia Ireton

Table of Contents (15 chapters)

Preface
Getting Started with ADF
Orchestration and Control Flow
Setting Up Synapse Analytics
Working with Data Lake and Spark Pools
Working with Big Data and Databricks
Data Migration – Azure Data Factory and Other Cloud Services
Extending Azure Data Factory with Logic Apps and Azure Functions
Microsoft Fabric and Power BI, Azure ML, and Cognitive Services
Managing Deployment Processes with Azure DevOps
Monitoring and Troubleshooting Data Pipelines
Working with Azure Data Explorer
The Best Practices of Working with ADF
Other Books You May Enjoy
Index

Introduction

Azure Data Factory (ADF) is known for its efficient use of big data tools, which makes it possible to build fast, scalable ETL/ELT pipelines and easily manage petabytes of data in storage. Setting up a production-ready cluster for data engineering jobs is often a daunting task, and estimating loads and planning autoscaling capacity can be tricky on top of that. Azure, with HDInsight clusters and Databricks, takes most of this work off your hands: any Azure practitioner can now set up an Apache Hive, Apache Spark, or Apache Kafka cluster in minutes.

Technical requirements

You need access to Microsoft Azure. You can run HDInsight clusters with Azure credits, but running Databricks requires a pay-as-you-go account. You can also find the code for this chapter at https://github.com/PacktPublishing/Azure-Data-Factory-Cookbook/.

Setting up an HDInsight cluster

HDInsight is a comprehensive solution built on a diverse set of open source platforms, including Apache Hadoop, Apache Spark, Apache Kafka, Apache HBase, Apache Hive, and Apache Storm. Solutions based on HDInsight can be integrated with ADF, Azure Data Lake, Cosmos DB, and other Azure services. In this section, we will set up the HDInsight service, build a basic pipeline, and deploy it to ADF.

Getting ready

Before getting started with the recipe, log in to your Microsoft Azure account. We assume you have a pre-configured resource group and a storage account with Azure Data Lake Gen2.

How to do it…

We will go through the process of creating an HDInsight cluster using the Azure portal and its web interface. Follow these instructions:

  1. Create a user-assigned managed identity. We will need it in the next step to set up HDInsight cluster access to Data Lake Gen2. Find Managed Identities in Azure and click + Add. (A scripted alternative is sketched after these steps.)
  2. Fill in the appropriate details, such as Resource...
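
The steps above use the Azure portal. If you prefer to script the managed identity creation, the following is a rough sketch using the Azure SDK for Python (the azure-identity and azure-mgmt-msi packages); the subscription ID, resource group, region, and identity name are placeholders, not values from this recipe.

    # Hypothetical sketch: create a user-assigned managed identity with the Azure SDK for Python.
    # Requires: pip install azure-identity azure-mgmt-msi
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.msi import ManagedServiceIdentityClient
    from azure.mgmt.msi.models import Identity

    subscription_id = "<your-subscription-id>"   # placeholder
    resource_group = "adf-cookbook-rg"           # placeholder resource group
    identity_name = "hdinsight-msi"              # placeholder identity name

    client = ManagedServiceIdentityClient(DefaultAzureCredential(), subscription_id)

    # Create (or update) the user-assigned managed identity in the chosen region.
    identity = client.user_assigned_identities.create_or_update(
        resource_group_name=resource_group,
        resource_name=identity_name,
        parameters=Identity(location="eastus"),
    )
    print(identity.principal_id)  # the principal you later grant Data Lake access to

You would then grant this identity an appropriate role on the Data Lake Gen2 storage account (for example, Storage Blob Data Owner), which is what the portal wizard guides you through next.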

Processing data from Azure Data Lake with HDInsight and Hive

HDInsight clusters are versatile open source tools that can handle ETL/ELT, data analytics, and data science tasks at scale. Unfortunately, Azure HDInsight incurs charges even when the cluster is idle or lightly loaded. However, ADF can create and manage short-lived HDInsight clusters that are provisioned on demand and deleted once the work is done. Let's build one.

Getting ready

Ensure that you have a pre-configured resource group and storage account with Azure Data Lake Gen2. Now, log in to your Microsoft Azure account.

How to do it…

To process data from Azure Data Lake with HDInsight and Hive, follow these steps:

  1. Go to the Azure portal and find Azure Active Directory.
  2. Click App registrations, as shown in the following screenshot:

    Figure 5.10 – App registrations
  3. Then, click + New registration and fill in the name of your app, as shown in the following screenshot:

    Figure 5.11 – Registering an app
  4. Leave the default answer to Who can use this...
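
The app registration created in these steps yields a service principal (an application ID plus a secret) that ADF uses to create HDInsight clusters on your behalf. As a hedged illustration of where those values end up, the sketch below defines an on-demand HDInsight linked service with the azure-mgmt-datafactory Python package; all names, IDs, versions, and the time-to-live are placeholders, and the exact model fields may differ between SDK versions.

    # Hypothetical sketch: register an on-demand HDInsight linked service in ADF.
    # Requires: pip install azure-identity azure-mgmt-datafactory
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        HDInsightOnDemandLinkedService,
        LinkedServiceReference,
        LinkedServiceResource,
        SecureString,
    )

    subscription_id = "<your-subscription-id>"   # placeholder
    resource_group = "adf-cookbook-rg"           # placeholder
    factory_name = "adf-cookbook-factory"        # placeholder

    adf = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

    on_demand_hdi = HDInsightOnDemandLinkedService(
        cluster_size=4,                            # worker node count (placeholder)
        time_to_live="00:15:00",                   # idle time before ADF deletes the cluster
        version="4.0",                             # HDInsight version (placeholder)
        cluster_type="hadoop",                     # Hive jobs run on the Hadoop cluster type
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference",
            reference_name="AzureStorageLinkedService",  # existing storage linked service (placeholder)
        ),
        host_subscription_id=subscription_id,
        cluster_resource_group=resource_group,
        tenant="<your-tenant-id>",                     # from the app registration overview
        service_principal_id="<application-client-id>",
        service_principal_key=SecureString(value="<client-secret>"),
    )

    adf.linked_services.create_or_update(
        resource_group, factory_name, "HDInsightOnDemandLinkedService",
        LinkedServiceResource(properties=on_demand_hdi),
    )

A Hive activity in a pipeline can then reference this linked service, and ADF will provision the cluster, run the script, and tear the cluster down after the time-to-live expires.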

Building a data model in Delta Lake and data pipeline jobs with Databricks

Apache Spark is a well-known big data framework that is often used for ETL/ELT jobs and machine learning tasks. ADF allows us to utilize its capabilities in two different ways:

  1. Running Spark in an HDInsight cluster
  2. Running Databricks notebooks, JAR files, and Python files

Running Spark in an HDInsight cluster is very similar to the previous recipe, so we will concentrate on the Databricks service. Databricks also allows running interactive notebooks, which significantly simplifies the development of ETL/ELT pipelines and machine learning tasks. In this recipe, we will connect Azure Data Lake Storage to Databricks, ingest the MovieLens dataset, transform the data, and store the resulting dataset as a Delta table in Azure Data Lake Storage.
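
To give a flavour of the notebook code involved, here is a hedged PySpark sketch that reads the MovieLens ratings file from ADLS Gen2 and writes an aggregated result as a Delta table; the storage account, container, secret scope, and paths are placeholder names, not the ones used later in the recipe.

    # Hypothetical Databricks notebook cell: MovieLens ratings -> Delta table in ADLS Gen2.
    # Assumes a Databricks cluster where `spark` and `dbutils` are predefined.
    from pyspark.sql import functions as F

    # Placeholder storage account/container; the key comes from a (hypothetical) secret scope.
    spark.conf.set(
        "fs.azure.account.key.mydatalake.dfs.core.windows.net",
        dbutils.secrets.get(scope="adf-cookbook", key="datalake-key"),
    )

    ratings = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("abfss://movielens@mydatalake.dfs.core.windows.net/raw/ratings.csv")
    )

    # A simple transformation: average rating and rating count per movie.
    avg_ratings = (
        ratings.groupBy("movieId")
        .agg(F.avg("rating").alias("avg_rating"), F.count("*").alias("num_ratings"))
    )

    # Persist the result as a Delta table in the lake.
    (
        avg_ratings.write.format("delta")
        .mode("overwrite")
        .save("abfss://movielens@mydatalake.dfs.core.windows.net/delta/avg_ratings")
    )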

Getting ready

First, log in to your Microsoft Azure account. We assume you have a pre-configured resource group and storage account with Azure Data Lake Gen2 and the Azure Databricks...

Ingest data into Delta Lake using Mapping Data Flows

Delta Lake is a cutting-edge open source storage layer that ensures the atomicity, consistency, isolation, and durability of data within a lake. Essentially, Delta Lake conforms to ACID standards, making it an ideal solution for data management. In addition to offering scalable metadata handling and support for ACID transactions, Delta Lake integrates seamlessly with existing Data Lakes and Apache Spark APIs. If you're interested in exploring Delta Lake, there are several options available to you. Databricks provides notebooks, along with compatible Apache Spark APIs, to create and manage Delta Lakes. On the other hand, Azure Data Factory's Mapping Data Flows allow for ACID-compliant CRUD operations through simplified ETL pipelines using scaled-out Apache Spark clusters. This recipe will walk you through how to get started with Delta Lake using Azure Data Factory's new Delta Lake connector, demonstrating how to create...
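
To make "ACID-compliant CRUD" concrete, here is a hedged sketch of the Spark-API route mentioned above: a Delta Lake MERGE that upserts changed rows in a single atomic transaction. The table path, schema, and data are hypothetical; Mapping Data Flows' Delta connector exposes comparable insert/update/delete/upsert behaviour through its visual sink settings.

    # Hypothetical sketch: an ACID upsert (MERGE) into a Delta table with the Spark API.
    # Assumes a Databricks (or Delta-enabled Spark) session where `spark` is predefined.
    from delta.tables import DeltaTable

    delta_path = "abfss://data@mydatalake.dfs.core.windows.net/delta/customers"  # placeholder

    # A small batch of changed rows to apply (hypothetical data).
    updates_df = spark.createDataFrame(
        [(1, "Alice", "alice@example.com"), (42, "Bob", "bob@example.com")],
        ["customer_id", "name", "email"],
    )

    target = DeltaTable.forPath(spark, delta_path)

    # Update matching rows, insert new ones; the whole operation commits atomically.
    (
        target.alias("t")
        .merge(updates_df.alias("u"), "t.customer_id = u.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )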

External integrations with other compute engines (Snowflake)

Azure Data Factory (ADF), a powerful cloud-based data integration service by Microsoft, has emerged as a go-to solution for enterprises seeking efficient and scalable data movement across various platforms. With its extensive capabilities, ADF not only enables seamless data integration within the Azure ecosystem but also offers external integrations with leading compute engines such as Snowflake. This recipe will explore the integration between Azure Data Factory and Snowflake, an advanced cloud-based data warehousing platform. We will delve into the benefits and possibilities of combining these two technologies, showcasing how they harmonize to streamline data workflows, optimize processing, and facilitate insightful analytics. Through this recipe, you will learn how to create and configure a Snowflake account, load data into it, and gain insight into the steps involved in integrating Azure Data Factory with Snowflake, empowering you to effortlessly...
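
ADF's Snowflake connector typically stages data in Azure Blob storage and loads it with Snowflake's COPY INTO command. As a hedged illustration of that loading step outside ADF, the sketch below issues the same kind of COPY INTO statement with the snowflake-connector-python package; the account, credentials, table, stage URL, and SAS token are all placeholders.

    # Hypothetical sketch: bulk-load staged files from Azure Blob storage into Snowflake.
    # Requires: pip install snowflake-connector-python
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="<account_identifier>",   # placeholder Snowflake account
        user="<user>",
        password="<password>",
        warehouse="COMPUTE_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )

    copy_sql = """
        COPY INTO movies_ratings
        FROM 'azure://mystorageaccount.blob.core.windows.net/export/movielens/'
        CREDENTIALS = (AZURE_SAS_TOKEN = '<sas-token>')
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """

    try:
        # Runs the bulk load; Snowflake reports per-file load status in the result set.
        for row in conn.cursor().execute(copy_sql):
            print(row)
    finally:
        conn.close()

In the recipe itself, ADF's Copy activity orchestrates a comparable staged load for you through its graphical configuration.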
