You're reading from Azure Data Factory Cookbook - Second Edition

Product type: Book
Published in: Feb 2024
Publisher: Packt
ISBN-13: 9781803246598
Edition: 2nd Edition

Authors (4):

Dmitry Foshin

Dmitry Foshin is a business intelligence team leader, whose main goals are delivering business insights to the management team through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building DWH and reporting) using Azure technologies, Data Lake, Data Factory, Data Bricks, MS Office 365, PowerBI, and Tableau. He has also successfully launched numerous data analytics projects – both on-premises and cloud – that help achieve corporate goals in international FMCG companies, banking, and manufacturing industries.

Tonya Chernyshova

Tonya Chernyshova is an experienced Data Engineer with over 10 years in the field, including time at Amazon. Specializing in Data Modeling, Automation, Cloud Computing (AWS and Azure), and Data Visualization, she has a strong track record of delivering scalable, maintainable data products. Her expertise drives data-driven insights and business growth, showcasing her proficiency in leveraging cloud technologies to enhance data capabilities.

Dmitry Anoshin

Dmitry Anoshin is a data-centric technologist and a recognized expert in building and implementing big data and analytics solutions. He has a successful track record when it comes to implementing business and digital intelligence projects in numerous industries, including retail, finance, marketing, and e-commerce. Dmitry possesses in-depth knowledge of digital/business intelligence, ETL, data warehousing, and big data technologies. He has extensive experience in the data integration process and is proficient in using various data warehousing methodologies. Dmitry has constantly exceeded project expectations when he has worked in the financial, machine tool, and retail industries. He has completed a number of multinational full BI/DI solution life cycle implementation projects. With expertise in data modeling, Dmitry also has a background and business experience in multiple relation databases, OLAP systems, and NoSQL databases. He is also an active speaker at data conferences and helps people to adopt cloud analytics.

Xenia Ireton

Xenia Ireton is a Senior Software Engineer at Microsoft. She has extensive knowledge in building distributed services, data pipelines and data warehouses.

Using the ForEach and Filter activities

In this recipe, we introduce you to the Filter and ForEach activities. We shall enhance the pipeline from the previous recipe so that it not only examines the data in the Azure Storage container, but also filters it by file type and records the last-modified date of every .csv file in the folder.

Getting ready

The preparation steps are the same as for the previous recipe. We shall be reusing the pipeline from the Using Metadata and Stored Procedure activities recipe, so if you have not completed that recipe yet, do so now.

How to do it…

  1. Clone the pipeline from the previous recipe and rename it pl_orchestration_recipe_3.
  2. Delete the Stored Procedure activity.
  3. Select the Metadata activity and configure it in the following way:
    1. In the Dataset tab, verify that CsvDataFolder is selected as the dataset.
    2. Verify that the Item Name and Last Modified fields are added as arguments. Add one more field, Child Items.
  4. Now, select the Filter activity from the Activities pane on the left (find it in the Iteration and Conditionals section) and drop it into the pipeline canvas to the right of the Metadata activity.
  5. Connect the Metadata activity to the Filter activity.
  6. Configure the Filter activity as follows:
    1. In the General tab, change the name to FilterOnCsv.
    2. In the Settings tab, fill in the values as follows:
      • Items: @activity('CsvDataFolder Metadata').output.childItems
      • Condition: @endswith(item().name, '.csv')
  7. Run this pipeline in Debug mode:

    Figure 2.19: Pipeline status overview in Debug mode

    After the pipeline is finished running, hover over the row representing the Get Metadata activity run in the Output pane and examine the activity’s output. You should see that the Get Metadata activity fetched the metadata for all the files in the folder, as follows:

    Figure 2.20: Get Metadata activity output

    Do the same for the FilterOnCsv activity and verify that the outputs were filtered to only the .csv files.

  8. From the Activities pane, add an instance of the ForEach activity (find it in the Iteration and Conditionals section) on the canvas, connect it to the FilterOnCsv activity, and configure it in the following way:
    1. In the Settings tab, enter the following value in the Items textbox: @activity('FilterOnCsv').output.Value.
    2. Within the ForEach activity square, click on the pencil image (meaning Edit). This will open another canvas. We shall configure the actions for the ForEach activity within this canvas.
  9. Add an instance of the Get Metadata activity onto the ForEach activity canvas and configure it as follows:
    1. In the General tab, change the name to ForEach Metadata.
    2. In the Dataset tab, specify CsvData (the parameterized dataset we created in the Using parameters and built-in functions recipe) as the dataset for this activity. If you do not have this dataset, please refer to the Using parameters and built-in functions recipe to see how to create a parameterized dataset.
    3. For the filename parameter, enter @item().name.
    4. In the same Dataset tab, in the Field list section, add two arguments: Item name and Last modified, as shown in the following screenshot:

      Figure 2.21: Adding arguments in the Field list section

  10. Add an instance of the Stored Procedure activity onto the ForEach activity canvas. Connect ForEach Metadata to the Stored Procedure activity and configure it as follows:
    1. In the Settings tab at the bottom, select AzureSQLDatabase as the linked service and [dbo].[InsertFileMetadata] as the stored procedure name.
    2. Click on Import under Stored procedure parameters and enter the following values:
      • FileName: @{item().name}
      • ModifiedAt: @convertFromUtc(activity('ForEach Metadata').output.lastModified,'Pacific Standard Time')
      • UpdatedAt: @convertFromUtc(utcnow(), 'Pacific Standard Time') (you can use your own time zone here, as well):

      Figure 2.22: Stored Procedure activity configuration

    Run your whole pipeline in Debug mode. When it is finished, you should see two additional rows in your FileMetadata table (in Azure SQL Database) showing the last-modified date for airlines.csv and countries.csv.

  11. Publish your pipeline to save the changes.

How it works…

In this recipe, we used the Metadata activity again, this time taking advantage of the Child Items field to retrieve information about every file inside the folder, not just the folder itself.

Next, we needed to select only the CSV files from the folder for further processing, so we added a Filter activity. By setting its Items property to @activity('CsvDataFolder Metadata').output.childItems, we specified that the Filter activity's input is the metadata of all the files inside the folder. We then configured the Filter activity's condition to keep only files whose name ends with .csv (the built-in endswith function gave us a convenient way to do this).
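
The effect of this Filter configuration can be sketched in plain Python. The childItems payload below is a hypothetical example of what Get Metadata might return; in the real pipeline, this comes from Azure Storage:

```python
# Hypothetical sample of a Get Metadata childItems payload (illustrative only)
child_items = [
    {"name": "airlines.csv", "type": "File"},
    {"name": "countries.csv", "type": "File"},
    {"name": "notes.txt", "type": "File"},
]

# Python equivalent of the condition @endswith(item().name, '.csv'):
# keep only the items whose name ends with .csv
csv_files = [item for item in child_items if item["name"].endswith(".csv")]

print([item["name"] for item in csv_files])  # → ['airlines.csv', 'countries.csv']
```

The Filter activity wraps this filtered list in its output's Value array, which is exactly what the ForEach activity consumes next.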

Finally, in order to process each file separately, we used the ForEach activity, which we added in step 8. ForEach is what is called a compound activity, because it contains a group of activities that are performed on each of the items in a loop. We configured the ForEach activity to take as input the filtered file list (the output of the Filter activity), and in steps 9 and 10, we designed the sequence of actions to be performed on each of the files. We used a second instance of the Metadata activity for this sub-pipeline and configured it to retrieve information about a particular file. To accomplish this, we configured it with the parameterized CsvData dataset and specified the filename. To refer to the current file, we used the built-in item() function (which returns a reference to the current item in the ForEach loop) and read the name property of that object.
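
Conceptually, the ForEach loop behaves like the following Python sketch. Here, get_file_metadata is a hypothetical stand-in for the inner Get Metadata activity, not a real ADF API; in the actual pipeline, ADF resolves @item().name into the dataset's filename parameter and fetches the metadata from Azure Storage:

```python
# Hypothetical stand-in for the inner "ForEach Metadata" activity:
# given a filename, return that file's metadata (illustrative values)
def get_file_metadata(filename):
    return {"itemName": filename, "lastModified": "2024-02-01T08:00:00Z"}

# The ForEach input: the Filter activity's output Value array
filtered_items = [{"name": "airlines.csv"}, {"name": "countries.csv"}]

# Python equivalent of the loop body, where item["name"] plays
# the role of @item().name in the ADF expression language
results = [get_file_metadata(item["name"]) for item in filtered_items]
```

Each iteration's result then feeds the Stored Procedure activity, which records the metadata in the FileMetadata table.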

The configuration of the Stored Procedure activity is similar to the previous recipe. In order to provide the filename for the stored procedure parameters, we again referred to the current item via @item().name. We could also have used @activity('ForEach Metadata').output.itemName, as we did in the previous recipe.
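
The @convertFromUtc conversion used for the ModifiedAt and UpdatedAt parameters can be mirrored in Python. Note that 'Pacific Standard Time' is a Windows time zone ID; the sketch below simplifies it to a fixed UTC-8 offset and ignores daylight saving:

```python
from datetime import datetime, timezone, timedelta

# A UTC timestamp, such as the lastModified value returned by Get Metadata
last_modified_utc = datetime(2024, 2, 1, 20, 30, tzinfo=timezone.utc)

# Fixed UTC-8 offset standing in for 'Pacific Standard Time'
# (a simplification: the real Windows zone also observes daylight saving)
pst = timezone(timedelta(hours=-8), name="PST")

# Python equivalent of @convertFromUtc(..., 'Pacific Standard Time')
modified_at = last_modified_utc.astimezone(pst)

print(modified_at.isoformat())  # → 2024-02-01T12:30:00-08:00
```

This is why the rows written to the FileMetadata table show local times rather than the raw UTC values from storage.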


Authors (4)

author image
Dmitry Foshin

Dmitry Foshin is a business intelligence team leader, whose main goals are delivering business insights to the management team through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building DWH and reporting) using Azure technologies, Data Lake, Data Factory, Data Bricks, MS Office 365, PowerBI, and Tableau. He has also successfully launched numerous data analytics projects – both on-premises and cloud – that help achieve corporate goals in international FMCG companies, banking, and manufacturing industries.
Read more about Dmitry Foshin

author image
Tonya Chernyshova

Tonya Chernyshova is an experienced Data Engineer with over 10 years in the field, including time at Amazon. Specializing in Data Modeling, Automation, Cloud Computing (AWS and Azure), and Data Visualization, she has a strong track record of delivering scalable, maintainable data products. Her expertise drives data-driven insights and business growth, showcasing her proficiency in leveraging cloud technologies to enhance data capabilities.
Read more about Tonya Chernyshova

author image
Dmitry Anoshin

Dmitry Anoshin is a data-centric technologist and a recognized expert in building and implementing big data and analytics solutions. He has a successful track record when it comes to implementing business and digital intelligence projects in numerous industries, including retail, finance, marketing, and e-commerce. Dmitry possesses in-depth knowledge of digital/business intelligence, ETL, data warehousing, and big data technologies. He has extensive experience in the data integration process and is proficient in using various data warehousing methodologies. Dmitry has constantly exceeded project expectations when he has worked in the financial, machine tool, and retail industries. He has completed a number of multinational full BI/DI solution life cycle implementation projects. With expertise in data modeling, Dmitry also has a background and business experience in multiple relation databases, OLAP systems, and NoSQL databases. He is also an active speaker at data conferences and helps people to adopt cloud analytics.
Read more about Dmitry Anoshin

author image
Xenia Ireton

Xenia Ireton is a Senior Software Engineer at Microsoft. She has extensive knowledge in building distributed services, data pipelines and data warehouses.
Read more about Xenia Ireton