
You're reading from Azure Data Factory Cookbook - Second Edition
Product type: Book
Published in: Feb 2024
Publisher: Packt
ISBN-13: 9781803246598
Edition: 2nd Edition
Authors (4):
Dmitry Foshin

Dmitry Foshin is a business intelligence team leader whose main goals are delivering business insights to the management team through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building DWHs and reporting) using Azure technologies, Data Lake, Data Factory, Databricks, MS Office 365, Power BI, and Tableau. He has also successfully launched numerous data analytics projects – both on-premises and cloud – that help achieve corporate goals in international FMCG companies, banking, and manufacturing industries.

Tonya Chernyshova

Tonya Chernyshova is an experienced data engineer with over 10 years in the field, including time at Amazon. Specializing in data modeling, automation, cloud computing (AWS and Azure), and data visualization, she has a strong track record of delivering scalable, maintainable data products. Her expertise drives data-driven insights and business growth, showcasing her proficiency in leveraging cloud technologies to enhance data capabilities.

Dmitry Anoshin

Dmitry Anoshin is a data-centric technologist and a recognized expert in building and implementing big data and analytics solutions. He has a successful track record of implementing business and digital intelligence projects in numerous industries, including retail, finance, marketing, and e-commerce. Dmitry possesses in-depth knowledge of digital/business intelligence, ETL, data warehousing, and big data technologies. He has extensive experience in the data integration process and is proficient in using various data warehousing methodologies. Dmitry has consistently exceeded project expectations while working in the financial, machine tool, and retail industries. He has completed a number of multinational full BI/DI solution life cycle implementation projects. With expertise in data modeling, Dmitry also has a background and business experience in multiple relational databases, OLAP systems, and NoSQL databases. He is also an active speaker at data conferences and helps people adopt cloud analytics.

Xenia Ireton

Xenia Ireton is a Senior Software Engineer at Microsoft. She has extensive knowledge in building distributed services, data pipelines, and data warehouses.


Monitoring and Troubleshooting Data Pipelines

Azure Data Factory is an orchestration and integration tool that helps engineers transfer data between multiple data stores, both within and outside of the Microsoft Azure ecosystem. However, data integration is rarely straightforward, and errors can and do occur. In this chapter, we will introduce tools to help you manage and monitor your Azure Data Factory pipelines. You will learn where and how to find more information about what went wrong when a pipeline fails, how to debug a failed run, how to set up alerts that notify you when there is a problem, and how to identify problems with your integration runtimes.

The following is a list of the recipes in this chapter:

  • Monitoring pipeline runs and integration runtimes
  • Investigating failures – running pipelines in debug mode
  • Rerunning activities
  • Configuring alerts for your Azure Data Factory runs

Technical requirements

We will be examining Azure Data Factory tools and working with existing pipelines. Specific instructions on how to create or import the pipelines to work with are provided in the Getting ready section of each recipe.

You will need access to an Azure Data Factory instance and a GitHub account to access the files and templates we provide for the recipes. If you do not have a GitHub account, you can sign up for a free one at https://github.com/.

Monitoring pipeline runs and integration runtimes

Data integration can be tricky, and it is helpful to be able to visualize progress or evaluate the inputs and outputs of the pipelines in your data factory. This recipe will introduce you to the tools that help you gain insight into the health and progress of your pipelines.
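
While this recipe focuses on the Monitor tab in the UI, the same run information is also available programmatically. The following is a minimal sketch (not part of the recipe) that lists the last 24 hours of pipeline runs using the azure-mgmt-datafactory Python SDK; the subscription ID, resource group, and factory name in angle brackets are placeholders you would replace with your own values:

```python
# A hedged sketch: querying recent pipeline runs with the Python SDK,
# as a programmatic complement to the Monitor tab.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

credential = DefaultAzureCredential()
client = DataFactoryManagementClient(credential, "<subscription-id>")

# Look at every pipeline run from the last 24 hours.
now = datetime.now(timezone.utc)
filters = RunFilterParameters(
    last_updated_after=now - timedelta(days=1),
    last_updated_before=now,
)

runs = client.pipeline_runs.query_by_factory(
    resource_group_name="<resource-group>",
    factory_name="<data-factory-name>",
    filter_parameters=filters,
)
for run in runs.value:
    # Status is one of Queued, InProgress, Succeeded, Failed, or Cancelled.
    print(run.pipeline_name, run.run_id, run.status, run.message)
```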

Getting ready

In this recipe, we will give you an overview of the features of the Monitor tab in the Azure Data Factory Author interface. If you have access to a data factory with a few pipelines already configured, you will be able to follow along. Otherwise, create a new data factory instance and design and run two or three pipelines. If you followed the first two or three recipes from Chapter 2, Orchestration and Control Flow, you will have sufficient material to understand the capabilities of the Monitor interface.

How to do it…

Without further ado, let’s explore the Monitor tab interfaces and customize them for our needs:

    ...

Investigating failures – running pipelines in debug mode

When your Azure Data Factory pipeline does not work as expected, it is useful to have tools to examine what went wrong. Debug mode allows us to run a pipeline and receive immediate feedback about its execution.

In this section, we’ll explore how to investigate a pipeline failure using debug mode capabilities. We will cover how to identify errors, understand error messages, and troubleshoot activities to resolve a failed pipeline.
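
The recipe itself walks through the UI, but as a hedged sketch of the same investigation in code, the snippet below pulls activity-level error details for a failed run with the Python SDK. All angle-bracketed names and the run ID are placeholders:

```python
# A hedged sketch: retrieving the activity-level errors of a failed
# pipeline run programmatically.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

now = datetime.now(timezone.utc)
activity_runs = client.activity_runs.query_by_pipeline_run(
    resource_group_name="<resource-group>",
    factory_name="<data-factory-name>",
    run_id="<failed-pipeline-run-id>",
    filter_parameters=RunFilterParameters(
        last_updated_after=now - timedelta(days=1),
        last_updated_before=now,
    ),
)
for activity in activity_runs.value:
    if activity.status == "Failed":
        # `error` carries the same error code and message the Monitor UI shows.
        print(activity.activity_name, activity.error)
```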

Getting ready

In order to prepare your environment for this recipe, follow these steps:

  1. Set up an Azure SQL server and create Airline, Country, and PipelineLog tables and an InsertLogRecord stored procedure. Use the CreateAirlineTable.sql, CreateCountryTable.sql, and CreateActivityLogsTable.sql scripts to create these objects. These were also required for Chapter 2, Orchestration and Control Flow. If you followed the recipes in that chapter, you should have the...

Rerunning activities

When our data transfers fail for one reason or another, we frequently need to rerun affected pipelines. This ensures that appropriate data movement is performed, albeit delayed. If our design is complex, or if the pipeline is moving large volumes of data, it is useful to be able to repeat the run from the point of failure, to minimize the time lost in the failed pipeline.

In this section, we will look at two features of Azure Data Factory that help us to troubleshoot our pipelines and rerun them with maximum efficiency. The first feature is breakpoints, which allow us to execute a pipeline up to an activity of our choice. The second feature is rerunning from the point of failure, which helps to minimize the time lost due to a failed execution.
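
Rerunning from the point of failure is driven from the Monitor tab in this recipe, but the capability is also exposed through the SDK's create_run call. The following is a hedged sketch, with angle-bracketed names as placeholders and the reference run ID being the ID of the failed run:

```python
# A hedged sketch: rerunning a pipeline from the point of failure
# via the Python SDK rather than the Monitor tab.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

rerun = client.pipelines.create_run(
    resource_group_name="<resource-group>",
    factory_name="<data-factory-name>",
    pipeline_name="<pipeline-name>",
    reference_pipeline_run_id="<failed-pipeline-run-id>",
    is_recovery=True,          # run in recovery mode against the reference run
    start_from_failure=True,   # resume from the first failed activity
)
print(rerun.run_id)  # the new run's ID, trackable in the Monitor tab
```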

Getting ready

Preparing your environment for this recipe is identical to the preparation required for the previous recipe in this chapter, Investigating failures – running pipelines in debug mode. We will...

Configuring alerts for your Azure Data Factory runs

When a failure in data processing happens, we have to react as fast as possible to avoid impacting downstream processes. Azure Data Factory gives us tools to automate monitoring by setting up alerts that inform engineers when there is a problem. We already introduced a custom email alert in the Branching and chaining recipe in Chapter 2, Orchestration and Control Flow; it sent a notification if that particular pipeline failed. In this recipe, we shall create an alert that will notify on-call engineers via email or a phone call whenever any of the pipelines in our data factory has a problem.
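
The recipe configures this alert through the Azure portal UI. As a hedged sketch of the same idea in code (not the recipe's steps), the following uses the azure-mgmt-monitor Python SDK to create a factory-wide metric alert on Azure Data Factory's built-in PipelineFailedRuns metric; the resource IDs and the action group (which holds the email/SMS/voice targets for on-call engineers) are placeholders:

```python
# A hedged sketch: a factory-wide "failed pipeline runs" metric alert
# created programmatically instead of through the portal.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertAction,
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

monitor = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

factory_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DataFactory/factories/<data-factory-name>"
)

alert = MetricAlertResource(
    location="global",
    description="Notify on-call engineers when any pipeline run fails",
    severity=1,
    enabled=True,
    scopes=[factory_id],
    evaluation_frequency="PT1M",   # how often the rule is evaluated
    window_size="PT5M",            # the look-back window for the metric
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="failed-runs",
                metric_name="PipelineFailedRuns",
                operator="GreaterThan",
                threshold=0,
                time_aggregation="Total",
            )
        ]
    ),
    actions=[MetricAlertAction(action_group_id="<action-group-resource-id>")],
)

monitor.metric_alerts.create_or_update(
    resource_group_name="<resource-group>",
    rule_name="adf-failed-pipeline-runs",
    parameters=alert,
)
```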

Getting ready

In order to follow this recipe and configure the alerts, we first need to register the microsoft.insights resource provider (a scripted alternative is sketched after these steps):

  1. In the Azure portal, go to your subscription, and from the menu on the left, select Resource providers.
  2. Use the Filter by name… text field to search for the microsoft.insights...
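
If you prefer to script the registration instead of clicking through the portal, the following is a minimal sketch using the azure-mgmt-resource Python SDK, with a placeholder subscription ID:

```python
# A hedged sketch: registering the microsoft.insights resource provider
# programmatically, equivalent to steps 1-2 above.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

resources = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
provider = resources.providers.register("microsoft.insights")
# Registration is asynchronous; this reports "Registered" once it completes.
print(provider.registration_state)
```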