Orchestrating Your Data Workflows

We have covered a wealth of techniques and knowledge while building our data platforms. However, some components are still missing from a fully orchestrated setup. We’ve mentioned Databricks Workflows, but we haven’t dived deep into how it works; we also haven’t covered logging or secrets management. Workflows is an orchestration tool used to manage data pipelines in Databricks. Orchestration tools typically provide building blocks for common data tasks and keep a history of each pipeline’s runs. Having a central place to manage all your pipelines is a critical step toward reliable, scalable data pipelines. This chapter will discuss these topics in detail and add more stability to our data platform.

In this chapter, we’re going to cover the following main topics:

  • Logging and monitoring with Datadog
  • Secrets management
  • Databricks Workflows
  • Databricks REST APIs...

Technical requirements

The tooling that will be used in this chapter is tied to the tech stack that’s been chosen for this book. All vendors should offer a free trial account.

I will be using Databricks in this chapter.

Setting up your environment

Before we begin this chapter, let’s take some time to set up our working environment.

Databricks

As in the previous chapters, this chapter assumes you have a working version of Python 3.6 or above installed in your development environment. It also assumes you have set up an AWS account and that you have set up Databricks with that AWS account.

Databricks CLI

The first step is to install the databricks-cli tool using the pip Python package manager:

pip install databricks-cli

Let’s validate that everything has been installed correctly. If the following command produces the tool’s version, then everything is working correctly:

databricks -v

Now, let’s set up authentication. First, go into the Databricks UI and generate a personal access token. The following command will prompt you for the host of your Databricks instance and the token you just created:

databricks configure --token
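Once configured, the CLI stores these values in a configuration file in your home directory. As a rough illustration (the host and token below are placeholders, not real values), the resulting ~/.databrickscfg looks something like this:

[DEFAULT]
host = https://<your-workspace>.cloud.databricks.com
token = <your-personal-access-token>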

We can...

Orchestrating data workloads

Now that all the pre-setup work is done, let’s jump right into organizing and running our workloads in Databricks. We will cover a variety of topics, the first of which is handling new data that arrives incrementally as files.

Making life easier with Autoloader

Spark Streaming isn’t new, and many deployments already use it in their data platforms, but it has rough edges that Autoloader smooths out. Autoloader is an efficient way for Databricks to detect new files and process them. Autoloader works with the Spark Structured Streaming context, so there isn’t much difference in day-to-day usage once it’s set up.

Reading

To create a streaming DataFrame using Autoloader, you can simply use the cloudFiles source format, along with the needed options. In the following case, we are setting the schema, delimiter, and format for a CSV load:

spark.readStream.format("cloudFiles") \
    .option("cloudFiles...

Databricks Workflows

Now that we’ve gone through the YAML deployment of workflows in dbx, we will look at the web console. On the main page for workflows, we can create a new workflow by clicking the Create job button at the top left:

Figure 9.1: Create job

When you create a workflow, you will be presented with a diagram of the workflow and a menu for each step:

Figure 9.2: My_workflow

Be sure to match the package name and entry point with what is defined in setup.py if you’re using a package:

Figure 9.3: Workflow diagram
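To match the package name and entry point mentioned above, a minimal setup.py might look like the following sketch; every name here is a placeholder for illustration, so align it with whatever you actually deploy:

from setuptools import setup, find_packages

setup(
    name="my_workflow_package",  # must match the package name configured in the task
    version="0.1.0",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # entry point name = "module.path:function"
            "my_entrypoint=my_workflow_package.jobs.main:entrypoint",
        ]
    },
)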

When you run your workflow, you will see each instance run, its status, and its start time:

Figure 9.4: Workflow run

Here is an example of a two-step workflow that has failed:

Figure 9.5: Workflow flow

You can see your failed runs individually in the console:

Figure 9.6: Workflow run failed
...

Terraform

The Databricks Terraform provider supports workflows, and Terraform is a very viable way to deploy and change them.

Here is how you can define a workflow, which is also called a job in some interfaces. You set the workflow name and the resource name, and then define the tasks within the workflow:

resource "databricks_job" "my_pipeline_1" {
 name = "my_awsome_pipeline"
   task {
....
       existing_cluster_id = <cluster-id>
   }
      task {
....
       existing_cluster_id = <cluster-id>
   }
}

Failed runs

When your workflows fail, you have the option to repair your run. You don’t need to rerun the whole pipeline, and Workflows is smart enough to just run your failed steps. This brings up the important topic of creating idempotent steps in a workflow. In short, if you...
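One common way to keep a step idempotent in this stack is to make its write deterministic, so that a repaired run overwrites the same slice of data instead of appending duplicates. As a hedged sketch (df stands for the DataFrame the step produces, and the table path, partition column, and date predicate are assumptions), a Delta write with replaceWhere behaves this way:

# Rerunning this step for the same date replaces that date's data
# rather than appending duplicate rows, so repairing a failed run is safe
(
    df.write.format("delta")
    .mode("overwrite")
    .option("replaceWhere", "event_date = '2023-09-01'")  # assumed predicate
    .save("s3://my-bucket/silver/events/")                # assumed table path
)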

REST APIs

REST APIs are a way to access functionality and data over the network. Databricks, like many vendors, allows you to interact with and change the platform through its REST API.

The Databricks API

Here are some useful endpoints you can interact with:

  • Cluster API: Manages clusters, including restart, create, and delete
  • Jobs API: Manages jobs and workflows, including restart, create, and delete
  • Token API: Creates and manages tokens in the workspace

Python code

Here, we have a basic client setup for a REST endpoint. In this example, it’s google.com:

  1. First, we must import the necessary libraries. Here, we are using the requests library exclusively:
    import requests
    from requests.adapters import HTTPAdapter, Retry
  2. Next, we must set up a session and define our Retry pattern. We use Retry because network APIs can be finicky, so we want to give each request a generous window of time to succeed (a fuller sketch follows this listing):
    session...
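Since the listing is cut off here, the following is a rough, self-contained sketch of such a client. The retry count, backoff factor, and status codes are illustrative choices rather than the book's exact values:

import requests
from requests.adapters import HTTPAdapter, Retry

# Retry up to 5 times with exponential backoff on common transient failures
retries = Retry(
    total=5,
    backoff_factor=1,
    status_forcelist=[429, 500, 502, 503, 504],
)

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retries))

response = session.get("https://www.google.com")
print(response.status_code)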

Practical lab

Our cloud team will be triggering an AWS Lambda and passing it the path to the data delivered by our ingestion tool. They have asked for a Lambda that will pass that information to your workflow, which should be parameterized. This type of request is very common and allows Databricks to be driven by a variety of tooling, such as AWS Step Functions and Jenkins, among others.

Solution

In this solution, we will walk you through the Python code needed to complete the tasks. There are two ways to access Databricks via the REST API – using the requests package, as shown previously, and using the Python package provided by Databricks. In my solution, I am using the Databricks package to keep things simple. I have not come across a case where the package doesn’t meet my needs, but if it’s not good enough, you can always access the REST API directly.

Lambda code

Here, I am importing all my Python libraries. Take note of the databricks_cli...
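The listing is truncated here, but as a hedged sketch of the overall shape, a handler along the following lines would forward the delivered path to a parameterized workflow. The environment variable names, the event key, and the notebook_params key are assumptions, and the databricks_cli classes come from the legacy databricks-cli package, so check them against the version you install:

import os

from databricks_cli.sdk.api_client import ApiClient
from databricks_cli.jobs.api import JobsApi


def lambda_handler(event, context):
    # Host, token, and job ID are assumed to arrive via environment variables
    client = ApiClient(
        host=os.environ["DATABRICKS_HOST"],
        token=os.environ["DATABRICKS_TOKEN"],
    )
    jobs_api = JobsApi(client)

    # Assumed event shape: the ingestion tool passes the delivered data path
    data_path = event["data_path"]

    # Trigger the parameterized workflow, forwarding the path as a notebook parameter
    run = jobs_api.run_now(
        job_id=int(os.environ["DATABRICKS_JOB_ID"]),
        jar_params=None,
        notebook_params={"input_path": data_path},
        python_params=None,
        spark_submit_params=None,
    )
    return {"run_id": run.get("run_id")}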

Summary

We have covered a great deal of information in this chapter. To summarize, we have looked at loading incremental data efficiently, delved deeper into the Databricks Workflows API, dabbled with the REST API, and also worked with AWS Lambda.

In the next chapter, we’ll go full steam ahead with data governance!
