
You're reading from Azure Data Factory Cookbook - Second Edition

Product type: Book
Published in: Feb 2024
Publisher: Packt
ISBN-13: 9781803246598
Edition: 2nd Edition
Authors (4):

Dmitry Foshin

Dmitry Foshin is a business intelligence team leader, whose main goals are delivering business insights to the management team through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building DWH and reporting) using Azure technologies, Data Lake, Data Factory, Databricks, MS Office 365, Power BI, and Tableau. He has also successfully launched numerous data analytics projects – both on-premises and cloud – that help achieve corporate goals in international FMCG companies, banking, and manufacturing industries.

Tonya Chernyshova

Tonya Chernyshova is an experienced Data Engineer with over 10 years in the field, including time at Amazon. Specializing in Data Modeling, Automation, Cloud Computing (AWS and Azure), and Data Visualization, she has a strong track record of delivering scalable, maintainable data products. Her expertise drives data-driven insights and business growth, showcasing her proficiency in leveraging cloud technologies to enhance data capabilities.

Dmitry Anoshin

Dmitry Anoshin is a data-centric technologist and a recognized expert in building and implementing big data and analytics solutions. He has a successful track record when it comes to implementing business and digital intelligence projects in numerous industries, including retail, finance, marketing, and e-commerce. Dmitry possesses in-depth knowledge of digital/business intelligence, ETL, data warehousing, and big data technologies. He has extensive experience in the data integration process and is proficient in using various data warehousing methodologies. Dmitry has constantly exceeded project expectations when he has worked in the financial, machine tool, and retail industries. He has completed a number of multinational full BI/DI solution life cycle implementation projects. With expertise in data modeling, Dmitry also has a background and business experience in multiple relational databases, OLAP systems, and NoSQL databases. He is also an active speaker at data conferences and helps people adopt cloud analytics.

Xenia Ireton

Xenia Ireton is a Senior Software Engineer at Microsoft. She has extensive knowledge in building distributed services, data pipelines, and data warehouses.


Pausing/resuming an Azure Synapse SQL pool from Azure Data Factory

In this recipe, you will create a new Azure Data Factory pipeline that allows you to automatically pause and resume your Synapse dedicated SQL pool.

Getting ready

You need access to an Azure Synapse workspace with a dedicated SQL pool for this recipe. Make sure your dedicated SQL pool is paused before starting this recipe, as you are going to resume it automatically using an Azure Data Factory pipeline.

How to do it…

We shall start by designing an Azure Data Factory pipeline to resume a Synapse dedicated SQL pool, and then create a pipeline to pause it:

  1. Go to your Azure Data Factory Studio, open the Author section, and create a new pipeline. In the Activities section, choose the Web activity. Rename the activity and the pipeline:
Figure 3.26 – Azure Data Factory pipeline – web activity

  2. Go to the Settings tab, then copy and paste the following text into the URL textbox: https://management.azure.com/subscriptions...
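
The truncated URL above follows the Azure management REST pattern for the Synapse dedicated SQL pool resume operation. As a hedged illustration (the subscription, resource group, workspace, and pool names are placeholders, and the api-version shown is an assumption), the same call can be sketched in Python; in the Web activity itself, set the method to POST, the body to {}, and authenticate with the Data Factory managed identity against the https://management.azure.com/ resource:

    import requests
    from azure.identity import DefaultAzureCredential

    # Placeholder identifiers -- substitute your own values.
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    WORKSPACE = "<synapse-workspace>"
    SQL_POOL = "<dedicated-sql-pool>"

    # Acquire a token for the Azure management plane.
    token = DefaultAzureCredential().get_token(
        "https://management.azure.com/.default"
    ).token

    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Synapse"
        f"/workspaces/{WORKSPACE}/sqlPools/{SQL_POOL}/resume"
        "?api-version=2021-06-01"  # assumed API version
    )

    # POST resumes the pool; swap /resume for /pause to suspend it.
    response = requests.post(
        url, headers={"Authorization": f"Bearer {token}"}, json={}
    )
    print(response.status_code)  # 202 while the operation runs asynchronously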

Working with Azure Purview using Azure Synapse

In this recipe, we will connect a Microsoft Purview account to your Synapse Analytics workspace, run a scan on your Synapse SQL pool, and view the scan's results directly in the Synapse workspace.

Getting ready

You need access to an Azure Synapse workspace (see the Create Azure Synapse Workspace recipe) and access to a Microsoft Purview account. If you do not have a Microsoft Purview account, create one in the Azure portal: open https://portal.azure.com/#create/Microsoft.AzurePurviewGalleryPackage in your browser.

Figure 3.33 – Create a Microsoft Purview account

Fill in the Resource group field and choose a name for your Purview account. Choose an appropriate region. The rest of the settings can be left at their defaults. Press the Review + Create button, review your selections, and finalize by pressing the Create button. Wait until the deployment completes – you now have a Purview account.

How to do it…

First, add your Purview account to your instance...

Copying data in Azure Synapse Integrate

In this recipe, you will create a Copy Data pipeline using the Azure Synapse Integrate tool and export data from a dedicated SQL pool into a data lake.

Getting ready

You need to have access to an Azure Synapse workspace with a dedicated SQL pool. The SQL pool should have the Customer table (refer to the Loading data to Azure Synapse Analytics using bulk load recipe for instructions on how to create this table and load data).

How to do it…

To export data using the Azure Synapse Integrate tool, follow these steps:

  1. Open your Azure Synapse workspace and go to the Integrate tab.
  2. Select Add a new resource, choose Pipeline, then add a Copy data activity to your pipeline, and rename it:

    Figure 3.43 – Creating a new pipeline with the Integrate tool of the Synapse Analytics workspace
  3. In the Source section, create a new source dataset with Azure Synapse Dedicated SQL Pool as a data store. Configure the dataset properties by giving it an appropriate name, and...
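
For readers who prefer to verify the export outside the UI, here is a minimal, illustrative Python sketch of the same movement (the driver, server names, and credentials are placeholders, not the recipe's exact configuration): it reads the Customer table from the dedicated SQL pool and writes it to a Parquet file.

    import pandas as pd
    import pyodbc

    # Placeholder connection to the workspace's dedicated SQL pool endpoint.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=<workspace>.sql.azuresynapse.net;"
        "Database=<dedicated-sql-pool>;"
        "Uid=<user>;Pwd=<password>;Encrypt=yes;"
    )

    # Pull the table into a DataFrame and write Parquet (requires pyarrow);
    # the Copy data activity performs the equivalent movement natively.
    df = pd.read_sql("SELECT * FROM dbo.Customer", conn)
    df.to_parquet("Customers.parquet", index=False)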

Using Synapse Serverless SQL Pool

In this recipe, you will learn how to leverage the serverless SQL pool in an Azure Synapse workspace to analyze data in your data lake.

Getting ready

You need to have access to an Azure Synapse workspace. You should also have a file in Parquet format stored in your Azure Synapse storage account. If you do not have one, refer to the Copying data in Azure Synapse Integrate recipe to create it.

How to do it…

  1. Open your Azure Synapse workspace, go to the Data tab, select Linked, and open the folder that contains the Parquet file.
  2. Right-click on the file (we selected Customers.Parquet) and choose New SQL script | Select TOP 100 rows:
Figure 3.47 – Creating a new SQL script for a file in a storage account

A new script is created for connecting to the file using the serverless SQL pool. Run it to see the first 100 rows of your Customer table.

Figure 3.48 – Connecting to the file from the Synapse workspace using serverless SQL pool
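
The generated script reads the Parquet file in place with OPENROWSET; no data is loaded into a dedicated pool. As an illustration only (the storage account, container, driver, and credentials below are assumptions), the same query can also be run from Python against the workspace's serverless endpoint:

    import pyodbc

    # The serverless (on-demand) endpoint of the workspace -- placeholders.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=<workspace>-ondemand.sql.azuresynapse.net;"
        "Database=master;Uid=<user>;Pwd=<password>;Encrypt=yes;"
    )

    # OPENROWSET queries the file where it sits in the data lake.
    query = """
    SELECT TOP 100 *
    FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/Customers.parquet',
        FORMAT = 'PARQUET'
    ) AS rows;
    """
    for row in conn.execute(query).fetchall():
        print(row)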

NOTE

The query executes...

Integrating Azure Data Lake and running Spark pool jobs

In this recipe, we’ll explore how to integrate Azure Data Lake with a Spark pool in Azure Synapse Analytics. By combining these services, we can unlock powerful data processing and analysis workflows. We’ll cover the steps to establish the connection, run Spark jobs, and leverage the capabilities of both services. Get ready to harness the potential of Azure Data Lake and Spark pools for efficient and scalable data processing.

Getting ready

Let’s load and preprocess the MovieLens dataset (F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS) 5, 4: 19:1–19:19. https://doi.org/10.1145/2827872). It contains ratings and free-text tagging activity from a movie recommendation service.

The MovieLens dataset exists in a few sizes, which have the same structure. The smallest one has 100,000 ratings, 600...
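
As a preview of the loading step, a minimal PySpark sketch (the abfss path and upload location are assumptions about where you placed the dataset) reads the ratings file from the workspace's ADLS Gen2 account into a DataFrame:

    # In a Synapse notebook, the `spark` session is already available.
    ratings = spark.read.csv(
        "abfss://<container>@<storage-account>.dfs.core.windows.net/movielens/ratings.csv",
        header=True,
        inferSchema=True,
    )

    # ratings.csv carries userId, movieId, rating, and timestamp columns.
    ratings.printSchema()
    print(ratings.count())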

Building and orchestrating a data pipeline for Data Lake and Spark

In this recipe, we will explore the process of constructing and orchestrating data pipelines using Data Lake and Spark in Azure Synapse Analytics. By leveraging the capabilities of these services, you can efficiently manage and process large amounts of data for analysis and insights.

We will improve the code in the notebook from the previous recipe to create a data transformation job and orchestrate it from Synapse Integrate. You’ll learn how to extract, transform, and load data into a data lake using Spark, building efficient and scalable pipelines.

Additionally, we’ll cover the crucial aspect of orchestration, where you’ll discover how to schedule, monitor, and manage the execution of your data pipelines using Azure Synapse’s orchestration capabilities.
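
To give a flavor of the transformation step before diving in (the aggregation and output path below are illustrative, not the recipe's exact code), a notebook cell like the following builds on the ratings DataFrame from the previous recipe, computes per-movie average ratings, and writes the result back to the data lake, ready to be triggered from a Notebook activity in a Synapse pipeline:

    from pyspark.sql import functions as F

    # Aggregate the average rating and rating count per movie.
    avg_ratings = ratings.groupBy("movieId").agg(
        F.avg("rating").alias("avg_rating"),
        F.count("rating").alias("num_ratings"),
    )

    # Write Parquet output; a pipeline Notebook activity can run this on a
    # schedule, and downstream activities can consume the result.
    avg_ratings.write.mode("overwrite").parquet(
        "abfss://<container>@<storage-account>.dfs.core.windows.net/movielens/avg_ratings"
    )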

Getting ready

To get started with the recipe, log in to your Synapse Analytics workspace. You’ll need to have the...
