You're reading from Data Engineering with Scala and Spark (1st Edition, Packt, January 2024, ISBN-13 9781804612583).

Authors (3):

Eric Tome

Eric Tome has over 25 years of experience working with data. He has contributed to and led teams that ingested, cleansed, standardized, and prepared data used by business intelligence, data science, and operations teams. He has a background in mathematics and currently works as a senior solutions architect at Databricks, helping customers solve their data and AI challenges.

Rupam Bhattacharjee

Rupam Bhattacharjee works as a lead data engineer at IBM. He has architected and developed data pipelines, processing massive structured and unstructured data using Spark and Scala for on-premises Hadoop and K8s clusters on the public cloud. He has a degree in electrical engineering.

David Radford

David Radford has worked in big data for over 10 years, with a focus on cloud technologies. He led consulting teams for several years, completing a migration from legacy systems to modern data stacks. He holds a master's degree in computer science and works as a senior solutions architect at Databricks.

Data Pipeline Orchestration

Once you have defined the business logic and transformations for your data, you need a reliable way to stitch them all together. If there is a failure, you should be notified and be able to easily identify which tasks failed before you analyze them. This is where data pipeline orchestration comes in. It refers to the coordination and management of data transformation tasks through well-defined dependencies between them. There are many business reasons for orchestration, but consider a simple example: you need a report delivered daily, so the data behind that report must be processed each day. That requires orchestration.

In this chapter, we are going to look at some of the most common tools and techniques used for data pipeline orchestration. Two of them, Airflow and Argo, are open source, whereas Databricks Workflows and Azure Data Factory are proprietary software. You can determine which one is best suited for your orchestration needs...

Technical requirements

You need Python (3.7 to 3.10) on your machine to run Airflow. If you do not have it installed, you can download Python from https://www.python.org/downloads/. You also need minikube, which we will use to run Argo. The installation steps are outlined at https://minikube.sigs.k8s.io/docs/start/.

Understanding the basics of orchestration

Once you have written and tested your transformations, you need a way to define dependencies among the various steps of your data engineering pipelines, define a strategy for dealing with failures, and so on. This is where orchestration comes in. It allows you to define the strategy of data pipeline execution: which conditions must be met before a job starts, which transformations run in parallel, and what happens when a job fails (should it be retried after a certain interval, ignored, or should the whole pipeline be aborted?). Getting this right is important to ensure optimal performance and cost savings.

In the following sections, we are going to look at some of the popular orchestration tools in the industry.

Understanding core features of Apache Airflow

Apache Airflow is an open source platform that provides a comprehensive solution for orchestrating complex data pipelines. Born out of the need to manage Airbnb’s data workflows, Airflow has gained widespread adoption due to its flexibility, scalability, and active community support and is now one of the most widely used orchestration platforms.

Airflow is built around concepts such as DAGs and operators, which are the fundamental building blocks you will work with when developing an orchestration solution:

  • Directed Acyclic Graphs (DAGs): At the heart of Airflow’s orchestration philosophy are DAGs. A DAG is a collection of tasks with defined dependencies, where the direction of dependencies forms a directed graph, and there are no cycles. Each node in the graph represents a task, while edges denote the order in which tasks should be executed (a minimal example follows this list).
  • Operators: Tasks within an Airflow DAG are implemented...
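
To make these concepts concrete, the following is a minimal sketch of a DAG with four tasks: two transformations that run in parallel after an extract step, followed by a report step. The DAG name, schedule, commands, and retry settings are illustrative placeholders rather than code from this book.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Default settings applied to every task: retry twice, five minutes apart
default_args = {
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

# Each operator below is a node (task) in the graph; the >> operator defines
# the edges, that is, the order in which tasks are executed
with DAG(
    dag_id="daily_report",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting data")
    transform_a = BashOperator(task_id="transform_a", bash_command="echo transform A")
    transform_b = BashOperator(task_id="transform_b", bash_command="echo transform B")
    report = BashOperator(task_id="build_report", bash_command="echo building report")

    # transform_a and transform_b run in parallel once extract succeeds;
    # the report task runs only after both complete
    extract >> [transform_a, transform_b] >> report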

Working with Argo Workflows

Argo Workflows is an open source orchestration tool for jobs running on Kubernetes. It is implemented as a Kubernetes custom resource definition (CRD), and each step in an Argo workflow runs as a container on the Kubernetes cluster. In this section, we are going to take a look at the capabilities offered by Argo Workflows, starting with installation on your local machine.

Installing Argo Workflows

For the purposes of this chapter, we will set up Argo locally. There are two steps you need to take before we can start looking at Argo:

  1. Install minikube by following the steps outlined at https://minikube.sigs.k8s.io/docs/start/.
  2. Install the kubectl CLI to interact with minikube by following the steps outlined in https://kubernetes.io/docs/tasks/tools/#kubectl.

Once you have minikube and kubectl installed, you can proceed with configuring two separate Argo components that we are going to use in this section:

  • Argo...

Using Databricks Workflows

Databricks Workflows is a fully managed cloud orchestration service available to all Databricks customers. It simplifies the creation of orchestration pipelines for the following task types:

  • Databricks notebooks
  • Python Script/Wheel
  • JAR
  • Spark Submit
  • Databricks SQL – dashboards, queries, alerts, or files
  • Delta Live Table pipelines
  • dbt

We will focus on using a Spark Submit task to run a Scala JAR. The first thing we have to do is create an assembly, or fat, JAR, which bundles all of our project's dependencies into a single JAR.

To do this, we will add the following code to our build.sbt file:

assemblyJarName in assembly := "de-with-scala-assembly-1.0.jar"
assemblyMergeStrategy in assembly := {
  // Discard duplicate META-INF entries; for anything else, keep the first copy found
  case PathList("META-INF", _*) => MergeStrategy.discard
  case _ => MergeStrategy.first
}

The first line specifies the name of the .jar file to be created. The next block will provide a...
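
Beyond the Databricks UI, a job with a Spark Submit task can also be created through the Databricks Jobs REST API (version 2.1). The following is a hedged sketch that assumes the assembly JAR has already been uploaded to DBFS; the workspace URL, access token, main class, JAR path, and cluster settings are placeholders, not values from this book.

import os

import requests

# Placeholder workspace URL and personal access token, read from environment variables
host = os.environ["DATABRICKS_HOST"]    # e.g. https://<your-workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]

job_spec = {
    "name": "de-with-scala-spark-submit",
    "tasks": [
        {
            "task_key": "run_assembly_jar",
            "spark_submit_task": {
                # Equivalent to: spark-submit --class <main class> <assembly JAR>
                "parameters": [
                    "--class",
                    "com.example.Main",  # hypothetical main class
                    "dbfs:/jars/de-with-scala-assembly-1.0.jar",  # assumed upload location
                ]
            },
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",  # example Databricks runtime
                "node_type_id": "i3.xlarge",          # example (cloud-specific) node type
                "num_workers": 1,
            },
        }
    ],
}

# Create the job; the response contains the new job's ID
response = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=job_spec,
)
response.raise_for_status()
print("Created job:", response.json()["job_id"])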

Leveraging Azure Data Factory

Azure Data Factory (ADF) evolved from the on-premises version called Data Management Gateway. Recognizing the shift to cloud computing, Microsoft revamped and launched ADF to serve the growing demand for cloud data solutions. ADF is built on the philosophy of “code-free” data integration. It emphasizes visual tools, allowing users to build, deploy, and manage data transformation processes without needing to write extensive code. In this section, we are going to look at the capabilities offered by ADF and see how it can simplify building pipeline orchestration.

Primary components of ADF

ADF provides an ensemble of components designed to handle the vast demands of data integration in a cloud-first world. A robust understanding of these components is crucial to leveraging ADF fully. Let’s discuss them in the following sections.

Datasets and dataflows

Datasets and dataflows are the fundamental building blocks of ADF that are used...

Summary

In this chapter, we looked at pipeline orchestration, which is a key component of data engineering. We covered various options – both open source and paid – so that you can evaluate the solution that works best for your data engineering needs. We looked at Airflow and Argo, open source tools that are quite popular among developers. We then looked at Databricks Workflows and ADF, which are managed solutions that provide a lot of functionality and seamless integration with other services running in the cloud.

In the next chapter, we are going to look at performance tuning, which is extremely important for ensuring your data engineering workloads run efficiently and are cost-effective.
