You're reading from Data Engineering with Python

Product type: Book
Published in: Oct 2020
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781839214189
Edition: 1st Edition
Author: Paul Crickard

Paul Crickard authored a book on the Leaflet JavaScript module. He has been programming for over 15 years and has focused on GIS and geospatial programming for 7 years. He spent 3 years working as a planner at an architecture firm, where he combined GIS with Building Information Modeling (BIM) and CAD. Currently, he is the CIO at the 2nd Judicial District Attorney's Office in New Mexico.

Chapter 8: Version Control with the NiFi Registry

In the previous chapters, you built several data pipelines, but we left out a very important component: version control. Any good software developer will almost always set up version control on a project before writing any code. Building data pipelines for production is no different; data engineers use many of the same tools and processes as software engineers. Version control allows you to make changes without the fear of breaking your data pipeline, because you can always roll back to a previous version. The NiFi Registry also allows you to connect new NiFi instances and give them full access to all your existing data pipelines. In this chapter, we're going to cover the following main topics:

  • Installing and configuring the NiFi Registry
  • Using the Registry in NiFi
  • Versioning your data pipelines
  • Using git-persistence with the NiFi Registry

Installing and configuring the NiFi Registry

When you hear about version control, you probably think of Git. Later in this chapter, we will use Git, but Apache NiFi has a sub-project that can handle all of our version control needs, the NiFi Registry:

Figure 8.1 – The NiFi Registry home page

Let's now install the Registry.

Installing the NiFi Registry

To install the NiFi Registry, go to the website at https://nifi.apache.org/registry and scroll to Releases. The following screenshot shows the available releases:

Figure 8.2 – The NiFi Registry

You will see a source release and two binaries for the current version, which, at the time of writing, is 0.6.0. On Windows, you can download the zip version, but since I am on Linux, I will download the nifi-registry-0.6.0-bin.tar.gz file.

Once the file is downloaded, move it to your home directory, extract the contents, then delete the archive...
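The download-and-extract step can also be scripted. The following is a minimal Python sketch; the mirror URL is an assumption based on the 0.6.0 release described above, so check the Releases page for the current link before using it:

```python
# Sketch: fetch a NiFi Registry binary release, unpack it, and delete the
# archive, mirroring the manual steps above. The download URL is an
# assumption -- verify it against https://nifi.apache.org/registry.
import tarfile
import urllib.request
from pathlib import Path


def download_and_extract(url: str, dest: str) -> Path:
    """Download a .tar.gz archive to dest, extract it, and remove the archive."""
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    archive = dest_dir / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, archive)   # fetch the tarball
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest_dir)               # unpack next to the archive
    archive.unlink()                           # delete the archive, keep the contents
    return dest_dir


if __name__ == "__main__":
    # Assumed URL for the 0.6.0 binary release; extract into the home directory
    url = ("https://downloads.apache.org/nifi/nifi-registry/"
           "nifi-registry-0.6.0/nifi-registry-0.6.0-bin.tar.gz")
    download_and_extract(url, str(Path.home()))
```

After extraction, the Registry is started with `./bin/nifi-registry.sh start` from the extracted directory, and its UI is served on port 18080 by default.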

Using the Registry in NiFi

The Registry is up and running, and now you need to tell NiFi about it so that you can start using it to version your data pipelines. The NiFi GUI will handle all of the configuration and versioning. In the next section, you will add the Registry to NiFi.

Adding the Registry to NiFi

To add the Registry to NiFi, click on the waffle menu in the top-right corner of the window, then select Controller Settings from the drop-down menu, as shown in the following screenshot:

Figure 8.6 – Controller Settings in NiFi

In the Controller Settings popup, there are several tabs; select the last one, Registry Clients. Click the plus sign at the top right of the window to add your Registry, as shown in the following screenshot:

Figure 8.7 – Adding the NiFi Registry to NiFi

After clicking the ADD button, you will have your Registry connected to NiFi. Close the window and you will...
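The same registration can be done without the GUI, since NiFi exposes a REST API. The sketch below assumes a default local install (NiFi on port 8080, the Registry on 18080) and the `/controller/registry-clients` endpoint of the NiFi 1.x API; adjust the hosts and ports for your environment:

```python
# Sketch: register the NiFi Registry with NiFi through the REST API
# instead of the GUI. Host names and ports are assumptions for a
# default local install.
import json
import urllib.request


def registry_client_payload(name: str, uri: str) -> dict:
    """Build the JSON entity NiFi expects for a new registry client."""
    return {
        "revision": {"version": 0},          # new component, so version 0
        "component": {"name": name, "uri": uri},
    }


def add_registry_client(nifi_api: str, name: str, uri: str) -> None:
    payload = registry_client_payload(name, uri)
    req = urllib.request.Request(
        f"{nifi_api}/controller/registry-clients",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)              # raises on a non-2xx response


if __name__ == "__main__":
    add_registry_client("http://localhost:8080/nifi-api",
                        "NiFi Registry", "http://localhost:18080")
```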

Versioning your data pipelines

You can use the NiFi Registry to version your data pipelines inside of a processor group. I have NiFi running and the canvas zoomed in to the SeeClickFix processor group from Chapter 6, Building a 311 Data Pipeline. To start versioning this data pipeline, right-click on the title bar of the processor group and select Version | Start version control, as shown in the following screenshot:

Figure 8.8 – Starting version control on a processor group

Your processor group is now being tracked by version control. You will see a green checkmark on the left of the processor group title box, as shown in the following screenshot:

Figure 8.9 – Processor group using version control

If you browse back to the NiFi Registry, you will see that Scf-DataEngineeringPython is being tracked. You will also see the details by expanding the bar. The details show your description and the version notes (First Commit...

Using git-persistence with the NiFi Registry

Just like software developers, you can also use Git to version control your data pipelines. The NiFi Registry allows you to use git-persistence with some configuration. To use Git with your data pipelines, you need to first create a repository.

Log in to GitHub and create a repository for your data pipelines. I have logged in to my account and have created the repository as shown in the following screenshot:

Figure 8.16 – Creating a GitHub repository

After creating a repository, you will need to create an access token for the registry to use to read and write to the repository. In the GitHub Settings, go to Developer settings, then Personal access tokens, then click the Generate a personal access token hyperlink shown in the following screenshot:

Figure 8.17 – The setting to create an access token

You can then add a note for the token so you can remember what service is using...
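With the token in hand, the Registry is pointed at the repository in its conf/providers.xml file by replacing the default file-system flow persistence provider with the Git provider. A sketch of the relevant fragment, assuming the repository has already been cloned into the directory named in Flow Storage Directory, with your GitHub username and the token standing in for the placeholders:

```xml
<flowPersistenceProvider>
    <class>org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider</class>
    <!-- Local clone of the GitHub repository created above -->
    <property name="Flow Storage Directory">./versioned-flows</property>
    <!-- Remote to push commits to; the access token serves as the password -->
    <property name="Remote To Push">origin</property>
    <property name="Remote Access User">YOUR_GITHUB_USERNAME</property>
    <property name="Remote Access Password">YOUR_ACCESS_TOKEN</property>
</flowPersistenceProvider>
```

The Registry must be restarted for the new provider to take effect.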

Summary

In this chapter, you have learned one of the most important features of production data pipelines: version control. A software developer would not write code without using version control, and neither should a data engineer. You have learned how to install and configure the NiFi Registry and how to start tracking versions of processor groups. Lastly, you are now able to persist the versions to GitHub. Any changes to your data pipelines will be saved, and if you need to roll back, you can. As your team grows, all the data engineers will be able to manage the data pipelines and be sure they have the latest versions, all while developing locally.

In the next chapter, you will learn about logging and monitoring your data pipelines. If something goes wrong, and it will, you will need to know about it. Good logging and monitoring of data pipelines will allow you to catch errors when they happen and debug them to restore your data flows.
