You're reading from Azure Data Factory Cookbook - Second Edition

Product type: Book
Published in: Feb 2024
Publisher: Packt
ISBN-13: 9781803246598
Edition: 2nd Edition

Authors (4):

Dmitry Foshin

Dmitry Foshin is a business intelligence team leader, whose main goals are delivering business insights to the management team through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building DWH and reporting) using Azure technologies, Data Lake, Data Factory, Databricks, MS Office 365, Power BI, and Tableau. He has also successfully launched numerous data analytics projects – both on-premises and cloud – that help achieve corporate goals in international FMCG companies, banking, and manufacturing industries.

Tonya Chernyshova

Tonya Chernyshova is an experienced Data Engineer with over 10 years in the field, including time at Amazon. Specializing in Data Modeling, Automation, Cloud Computing (AWS and Azure), and Data Visualization, she has a strong track record of delivering scalable, maintainable data products. Her expertise drives data-driven insights and business growth, showcasing her proficiency in leveraging cloud technologies to enhance data capabilities.

Dmitry Anoshin

Dmitry Anoshin is a data-centric technologist and a recognized expert in building and implementing big data and analytics solutions. He has a successful track record of implementing business and digital intelligence projects in numerous industries, including retail, finance, marketing, and e-commerce. Dmitry possesses in-depth knowledge of digital/business intelligence, ETL, data warehousing, and big data technologies. He has extensive experience in the data integration process and is proficient in using various data warehousing methodologies. Dmitry has constantly exceeded project expectations when working in the financial, machine tool, and retail industries. He has completed a number of multinational full BI/DI solution life cycle implementation projects. With expertise in data modeling, Dmitry also has a background and business experience in multiple relational databases, OLAP systems, and NoSQL databases. He is also an active speaker at data conferences and helps people adopt cloud analytics.

Xenia Ireton

Xenia Ireton is a Senior Software Engineer at Microsoft. She has extensive knowledge in building distributed services, data pipelines, and data warehouses.

Chaining and branching activities within a pipeline

In this recipe, we shall build a pipeline that extracts data from CSV files in Azure Blob Storage, loads this data into an Azure SQL table, and records a log message with the status of the job. The status message will depend on whether the extract and load succeeded or failed.

Getting ready

We shall be using all the Azure services that are mentioned in the Technical requirements section at the beginning of the chapter. We shall be using the PipelineLog table and the InsertLogRecord stored procedure. If you have not created the table and the stored procedure in your Azure SQL database yet, please do so.
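
The exact definitions ship with the chapter's Technical requirements; purely as a hypothetical sketch (the column names and types below are assumptions, not the book's script), the table and stored procedure could look like this:

    -- Hypothetical sketch only; use the script from the Technical requirements section.
    CREATE TABLE dbo.PipelineLog (
        LogId      INT IDENTITY(1,1) PRIMARY KEY,
        PipelineID VARCHAR(100),
        RunID      VARCHAR(100),
        Status     VARCHAR(50),
        UpdatedAt  DATETIME DEFAULT GETDATE()
    );
    GO

    CREATE PROCEDURE dbo.InsertLogRecord
        @PipelineID VARCHAR(100),
        @RunID      VARCHAR(100),
        @Status     VARCHAR(50)
    AS
    BEGIN
        INSERT INTO dbo.PipelineLog (PipelineID, RunID, Status)
        VALUES (@PipelineID, @RunID, @Status);
    END;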

How to do it…

  1. In this recipe, we shall reuse portions of the pipeline from the Using parameters and built-in functions recipe. If you completed that recipe, just create a clone of that pipeline and name it pl_orchestration_recipe_4. If you did not, go through steps 1-10 of that recipe to create a parameterized pipeline.
  2. Observe...

Using the Lookup, Web, and Execute Pipeline activities

In this recipe, we shall implement error handling logic for our pipeline – similar to the previous recipe, but with a more sophisticated design: we shall isolate the error handling flow in its own pipeline. Our main parent pipeline will then call the child pipeline. This recipe also introduces three very useful activities: Lookup, Web, and Execute Pipeline. It will illustrate how to retrieve information from an Azure SQL table and how to invoke other Azure services from the pipeline.

Getting ready

We shall be using all the Azure services that are mentioned in the Technical requirements section at the beginning of the chapter. In addition, this recipe requires a table to store the email addresses of the status email recipients. Please refer to the Technical requirements section for the table creation scripts and instructions.

We shall be building a pipeline that sends an email in the case of failure. There...
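
The creation script for the recipients table is part of the chapter's Technical requirements; as a minimal hypothetical sketch (the table and column names below are assumptions, not the book's script), it might be as simple as:

    -- Hypothetical sketch only; use the script from the Technical requirements section.
    CREATE TABLE dbo.EmailRecipients (
        RecipientId  INT IDENTITY(1,1) PRIMARY KEY,
        EmailAddress NVARCHAR(256) NOT NULL
    );

    INSERT INTO dbo.EmailRecipients (EmailAddress)
    VALUES ('data.team@example.com');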

Creating event-based triggers

Often, it is convenient to run a data movement pipeline in response to an event. One of the most common scenarios is triggering a pipeline run in response to the addition or deletion of blobs in a monitored storage account. Azure Data Factory supports this functionality.

In this recipe, we shall create an event-based trigger that will invoke a pipeline whenever new backup files are added to a monitored folder. The pipeline will move backup files to another folder.

Getting ready

  1. To illustrate the trigger in action, we shall use the pipeline in the Using parameters and built-in functions recipe. If you did not follow the recipe, do so now.
  2. We shall be creating a pipeline that is similar to the pipeline in the Using the ForEach and Filter activities recipe. If you did not follow that recipe, do so now.
  3. In the storage account (see the Technical requirements section), create another container called backups.
  4. Following steps 1 to 3 from the Using the Copy activity...

Loading data to Azure Synapse Analytics using Azure Data Factory

In this recipe, we will look further at how to load data into Azure Synapse Analytics using Azure Data Factory.

Getting ready

Before we start, please ensure that you have created a linked service to a Blob storage container and know how to create a Copy data activity in Azure Data Factory. Please refer to Chapter 2, Orchestration and Control Flow, for guidelines on how to do that.

How to do it…

To load data into Azure Synapse Analytics using Azure Data Factory, use the following steps:

  1. Before we create a Copy data activity in Azure Data Factory, we need to create a new table in Azure Synapse Analytics. To do that, open Synapse Studio, go to the Data tab on the left side of your screen, click New SQL script on the schema in which you want to create the new table, and then click New table:

    Figure 3.9: Creating a new table in Azure Synapse Analytics

  2. Run the following...
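
The exact statement is given in the book; broadly speaking, a table in a dedicated SQL pool is created with a distribution option and a clustered columnstore index, along these lines (the Customer columns below are hypothetical, not the book's schema):

    -- Hypothetical column list; adjust it to match the dataset you are loading.
    CREATE TABLE dbo.Customer
    (
        CustomerId INT           NOT NULL,
        FirstName  NVARCHAR(100),
        LastName   NVARCHAR(100),
        Email      NVARCHAR(256)
    )
    WITH
    (
        DISTRIBUTION = ROUND_ROBIN,
        CLUSTERED COLUMNSTORE INDEX
    );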

Loading data to Azure Synapse Analytics using Azure Data Studio

Azure Data Studio is a cross-platform database tool for data professionals who use on-premises and cloud data platforms on Windows, macOS, and Linux. Azure Data Studio offers a modern editor experience with IntelliSense, code snippets, source control integration, and an integrated terminal. It’s engineered with the data platform user in mind, with built-in charting of query result sets and customizable dashboards.

In this recipe, we are going to configure Azure Data Studio and load data into Azure Synapse Analytics from an external resource.

Getting ready

You need to have Azure Data Studio installed on your computer.

You need to upload the dataset from this book’s GitHub repository to the container. Then, you need to generate shared access signatures so that Azure Synapse Analytics can access the blobs.

You can download the dataset from the book’s GitHub repository, or you can use...
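
Once the file is in the container and a shared access signature has been generated, a dedicated SQL pool can ingest it with a COPY INTO statement broadly like the following (the storage account, container, file name, SAS token, and target table below are placeholders, not the book's exact values):

    -- Illustrative only; substitute your own storage account, container, file, and SAS token.
    COPY INTO dbo.Customer
    FROM 'https://<storage_account>.blob.core.windows.net/<container>/customers.csv'
    WITH (
        FILE_TYPE = 'CSV',
        CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas_token>'),
        FIRSTROW = 2,          -- skip the header row
        FIELDTERMINATOR = ','
    );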

Loading data to Azure Synapse Analytics using bulk load

Azure Synapse workspaces allow users to load data into a SQL pool with just a few clicks. In this recipe, you will learn how to do this.

Getting ready

You need to have created an Azure Synapse workspace and a SQL pool, and Azure Data Lake Storage Gen2 should be linked to that workspace. The Customers dataset (or any other dataset) should be uploaded to your storage.

How to do it…

  1. Open your Azure Synapse workspace in Synapse Studio.
  2. Click on the Data tab on the left side of your screen:

    Figure 3.22: Creating a new SQL script table in the Synapse Analytics workspace

  3. Expand your SQL pool and click on Actions to the right of Tables. Select New SQL script | New table:

    Figure 3.23: Creating a new SQL script table in the Synapse Analytics workspace

  4. An automatically generated SQL query for a new table will be shown on the...
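
The bulk load experience in Synapse Studio ultimately generates a COPY INTO statement for the file you select; its broad shape is shown below (the paths and options are placeholders and will differ from what the wizard produces for your file):

    -- Broad shape of a wizard-generated bulk load; the actual script depends on
    -- the file you select and the options you choose.
    COPY INTO dbo.Customer
    FROM 'https://<storage_account>.dfs.core.windows.net/<container>/Customers.csv'
    WITH (
        FILE_TYPE = 'CSV',
        FIRSTROW = 2,
        FIELDTERMINATOR = ','
    );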

Pausing/resuming an Azure Synapse SQL pool from Azure Data Factory

In this recipe, you will create a new Azure Data Factory pipeline that allows you to automatically pause and resume your Synapse dedicated SQL pool.

Getting ready

You need access to an Azure Synapse workspace with a dedicated SQL pool for this recipe. Make sure your dedicated SQL pool is paused before starting this recipe, as you are going to resume it automatically using an Azure Data Factory pipeline.

How to do it…

We shall start by designing an Azure Data Factory pipeline that resumes a Synapse dedicated SQL pool, and then create a pipeline to pause it:

  1. Go to your Azure Data Factory Studio, open the Author section, and create a new pipeline. In the Activities section, choose Web. Rename the activity and the pipeline:

    Figure 3.30: Azure Data Factory pipeline – Web activity

    Go to the Settings tab, then copy and paste the following text into the URL textbox:

    ...

Working with Azure Purview using Azure Synapse

Azure Purview, Microsoft’s cloud-based data governance service, allows organizations to discover, understand, and manage data assets across various environments and clouds. Purview is integrated with Synapse Analytics, and by connecting a Purview account, users can unlock enhanced data governance capabilities: gain insights into stored data assets and metadata extracted from Synapse Analytics objects, understand data lineage, track usage, and enforce robust data governance policies and regulatory compliance – all from within the Synapse Analytics workspace.

In this recipe, we will connect a Microsoft Purview account to our Synapse Analytics workspace, run a scan on our Synapse SQL pool, and view the scan’s results right in the Synapse workspace.

Getting ready

You need access to an Azure Synapse workspace (see the Creating an Azure Synapse workspace recipe) and access to a Microsoft Purview account. If...

Copying data in Azure Synapse Integrate

In this recipe, you will create a Copy Data pipeline using the Azure Synapse Integrate tool and export data from a dedicated SQL pool into a data lake.

Getting ready

You need to have access to an Azure Synapse workspace that has a dedicated SQL pool. The SQL pool should have the table Customer (refer to the Loading data to Azure Synapse Analytics using bulk load recipe for instructions on how to create this table and load data).

How to do it…

To export data using Azure Synapse Integrate, follow these steps:

  1. Open your Azure Synapse workspace and go to the Integrate tab.

    Figure 3.47: Synapse Analytics: Integrate tool

  2. Select Add new resource, choose Pipeline, and then add a Copy data activity to your pipeline and rename it:

    Figure 3.48: Creating a new pipeline with the Integrate tool of the Synapse Analytics workspace

  3. In the Source section, create a new source dataset...

Using a Synapse serverless SQL pool

In this recipe, you will learn how to leverage a serverless SQL pool in an Azure Synapse workspace to analyze data in your data lake.

Getting ready

You need to have access to an Azure Synapse workspace. You should also have a file in Parquet format stored in your Azure Synapse storage account. If you do not have one, refer to the Copying data in Azure Synapse Integrate recipe to create it.

How to do it…

Open your Azure Synapse workspace, go to the Data tab, then Linked, and open the folder that contains the Parquet format file:

  1. Right-click on the file (we selected Customers.Parquet) and choose New SQL script | Select TOP 100 rows:

    Figure 3.53: Creating a new SQL script for a file in a storage account

    A new script is created for connecting to the file using a serverless SQL pool. Run it to see the first 100 rows of your Customer table.

    Figure 3.54: Connecting to the file from the Synapse workspace...
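
The script that Synapse Studio generates for a Parquet file is essentially an OPENROWSET query against the serverless SQL pool; it looks broadly like this (the storage account, container, and path below are placeholders, not your exact values):

    -- Representative shape of the generated script; replace the URL with the path to your file.
    SELECT TOP 100 *
    FROM OPENROWSET(
            BULK 'https://<storage_account>.dfs.core.windows.net/<container>/Customers.parquet',
            FORMAT = 'PARQUET'
        ) AS [result];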
