You're reading from Data Engineering with Apache Spark, Delta Lake, and Lakehouse
Product type: Book
Published in: Oct 2021
Publisher: Packt
ISBN-13: 9781801077743
Edition: 1st Edition
Author: Manoj Kukreja

Manoj Kukreja is a Principal Architect at Northbay Solutions who specializes in creating complex Data Lakes and Data Analytics Pipelines for large-scale organizations such as banks, insurance companies, universities, and US/Canadian government agencies. Previously, he worked for Pythian, a large managed service provider, where he led the MySQL and MongoDB DBA group and supported large-scale data infrastructure for enterprises across the globe. With over 25 years of IT experience, he has delivered Data Lake solutions using all major cloud providers, including AWS, Azure, GCP, and Alibaba Cloud. On weekends, he trains groups of aspiring Data Engineers and Data Scientists on Hadoop, Spark, Kafka, and Data Analytics on AWS and Azure Cloud.

Scheduling the master pipeline

Scheduling a pipeline for automated execution at a given time requires you to create a trigger in Azure Data Factory. Let's find out how to do this (a scripted equivalent of these steps is sketched after the list):

  1. Using the panel on the left-hand side, click on Manage. Then, click on Triggers. Finally, click on New:
    • Name: electroniz_master_trigger
    • Start date: Choose a date and time
    • Time Zone: Choose your time zone
    • Recurrence: Every 1 hour
    • Activated: Yes

    Finally, click on OK:

    Figure 9.11 – Trigger for the Electroniz master pipeline

  2. Using the panel on the left-hand side, click on Author. Then, click on electroniz_master_pipeline under Pipelines:
    • Click on Add Trigger. Then, click on New/Edit.
    • Choose the following trigger: electroniz_master_trigger.
    • Click on OK.

    In the Edit trigger panel, add the Trigger Run parameters as follows:

    • Name: STORAGE_ACCOUNT Type: String Value: traininglakehouse
    • Name: BRONZE_LAYER_NAMESPACE Type: String Value: bronze
    • Name: SILVER_LAYER_NAMESPACE Type: String Value...
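
The same trigger and parameter mapping can also be defined in code rather than through the UI. The following is a minimal sketch using the azure-mgmt-datafactory Python SDK; it is not the book's code. The subscription, resource group, factory name, and start time are placeholders, only the parameter values shown above are reproduced, and method names such as begin_start can differ slightly between SDK versions.

```python
# Minimal sketch: create and start an hourly schedule trigger for
# electroniz_master_pipeline using the azure-mgmt-datafactory SDK.
# Subscription, resource group, factory name, and start time are placeholders.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

subscription_id = "<subscription-id>"    # placeholder
resource_group = "<resource-group>"      # placeholder
factory_name = "<data-factory-name>"     # placeholder

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Recurrence: every 1 hour, starting at a chosen date/time in a chosen time zone.
recurrence = ScheduleTriggerRecurrence(
    frequency="Hour",
    interval=1,
    start_time=datetime(2021, 10, 1, 0, 0, tzinfo=timezone.utc),
    time_zone="UTC",
)

# Attach the trigger to the master pipeline and pass the trigger run parameters.
pipeline_ref = TriggerPipelineReference(
    pipeline_reference=PipelineReference(reference_name="electroniz_master_pipeline"),
    parameters={
        "STORAGE_ACCOUNT": "traininglakehouse",
        "BRONZE_LAYER_NAMESPACE": "bronze",
        # SILVER_LAYER_NAMESPACE and any remaining parameters follow the same pattern.
    },
)

trigger = TriggerResource(
    properties=ScheduleTrigger(recurrence=recurrence, pipelines=[pipeline_ref])
)

# Create (or update) the trigger, then activate it so it starts firing hourly.
adf_client.triggers.create_or_update(
    resource_group, factory_name, "electroniz_master_trigger", trigger
)
adf_client.triggers.begin_start(
    resource_group, factory_name, "electroniz_master_trigger"
).result()
```

Deploying the trigger this way is equivalent to completing the New/Edit dialog and clicking OK in the portal; once the trigger is started, the master pipeline is invoked every hour with the listed parameters.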