
You're reading from Data Engineering with Python

Product type: Book
Published in: Oct 2020
Publisher: Packt
ISBN-13: 9781839214189
Pages: 356
Edition: 1st Edition
Author: Paul Crickard

Table of Contents (21 chapters)

Preface
Section 1: Building Data Pipelines – Extract, Transform, and Load
Chapter 1: What is Data Engineering?
Chapter 2: Building Our Data Engineering Infrastructure
Chapter 3: Reading and Writing Files
Chapter 4: Working with Databases
Chapter 5: Cleaning, Transforming, and Enriching Data
Chapter 6: Building a 311 Data Pipeline
Section 2: Deploying Data Pipelines in Production
Chapter 7: Features of a Production Pipeline
Chapter 8: Version Control with the NiFi Registry
Chapter 9: Monitoring Data Pipelines
Chapter 10: Deploying Data Pipelines
Chapter 11: Building a Production Data Pipeline
Section 3: Beyond Batch – Building Real-Time Data Pipelines
Chapter 12: Building a Kafka Cluster
Chapter 13: Streaming Data with Apache Kafka
Chapter 14: Data Processing with Apache Spark
Chapter 15: Real-Time Edge Data with MiNiFi, Kafka, and Spark
Other Books You May Enjoy
Appendix

Chapter 8: Version Control with the NiFi Registry

In the previous chapters, you built several data pipelines, but we have left out a very important component: version control. Any good software developer sets up version control on a project before writing any code, and building data pipelines for production is no different. Data engineers use many of the same tools and processes as software engineers. Version control lets you make changes without the fear of breaking your data pipeline, because you can always roll back to a previous version. The NiFi Registry also allows you to connect new NiFi instances and have full access to all your existing data pipelines. In this chapter, we're going to cover the following main topics:

  • Installing and configuring the NiFi Registry
  • Using the Registry in NiFi
  • Versioning your data pipelines
  • Using git-persistence with the NiFi Registry

Installing and configuring the NiFi Registry

When you hear about version control, you are probably used to hearing about Git. Later in this chapter, we will use Git, but Apache NiFi has a sub-project that can handle all of our version control needs—the NiFi Registry:

Figure 8.1 – The NiFi Registry home page

Let's now install the Registry.

Installing the NiFi Registry

To install the NiFi Registry, go to the website at https://nifi.apache.org/registry and scroll to Releases. The following screenshot shows the available releases:

Figure 8.2 – The NiFi Registry

You will see a source release and two binaries for the current version, which, at the time of writing, is 0.6.0. On Windows, you can download the zip version, but since I am on Linux, I will download the nifi-registry-0.6.0-bin.tar.gz file.

Once the file is downloaded, move it to your home directory, extract the contents, then delete the archive...
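The steps above can be sketched as shell commands, assuming Linux and the 0.6.0 release shown in Figure 8.2. The mirror URL is an assumption, so prefer the download link from the Releases page:

```shell
# Download the binary release (URL is an assumption; use the link
# from the Releases page if the mirror has moved the file).
cd ~
wget https://archive.apache.org/dist/nifi/nifi-registry/nifi-registry-0.6.0/nifi-registry-0.6.0-bin.tar.gz
# Extract into the home directory, then delete the archive.
tar -xzf nifi-registry-0.6.0-bin.tar.gz
rm nifi-registry-0.6.0-bin.tar.gz
# Start the Registry; by default it listens on http://localhost:18080/nifi-registry
nifi-registry-0.6.0/bin/nifi-registry.sh start
```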

Using the Registry in NiFi

The Registry is up and running, and now you need to tell NiFi about it so that you can start using it to version your data pipelines. The NiFi GUI will handle all of the configuration and versioning. In the next section, you will add the Registry to NiFi.

Adding the Registry to NiFi

To add the Registry to NiFi, click on the waffle menu in the top-right corner of the window, then select Controller Settings from the drop-down menu, as shown in the following screenshot:

Figure 8.6 – Controller Settings in NiFi

In the Controller Settings popup, there are several tabs. Select the last tab, Registry Clients. Click the plus sign at the top right of the window to add your Registry, as shown in the following screenshot:

Figure 8.7 – Adding the NiFi Registry to NiFi

After clicking the ADD button, you will have your Registry connected to NiFi. Close the window and you will...
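If you prefer to script this step, NiFi also exposes it through its REST API. The sketch below builds the registry-client payload for NiFi 1.x; the host, port, and client name are assumptions for a local setup, and the actual POST is left as a comment so the snippet stands alone:

```python
import json

NIFI_API = "http://localhost:8080/nifi-api"   # assumed local NiFi instance
REGISTRY_URL = "http://localhost:18080"       # default NiFi Registry port

def registry_client_payload(name: str, uri: str) -> dict:
    """Build the request body for POST /controller/registry-clients.
    New NiFi components start at revision version 0."""
    return {
        "revision": {"version": 0},
        "component": {"name": name, "uri": uri},
    }

payload = registry_client_payload("NiFi Registry", REGISTRY_URL)
print(json.dumps(payload))
# With NiFi running, you could submit it like this:
# import requests
# requests.post(f"{NIFI_API}/controller/registry-clients", json=payload)
```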

Versioning your data pipelines

You can use the NiFi Registry to version your data pipelines inside a processor group. I have NiFi running with the canvas zoomed in to the SeeClickFix processor group from Chapter 6, Building a 311 Data Pipeline. To start versioning this data pipeline, right-click on the title bar of the processor group and select Version | Start version control, as shown in the following screenshot:

Figure 8.8 – Starting version control on a processor group

Your processor group is now being tracked by version control. You will see a green checkmark on the left of the processor group title box, as shown in the following screenshot:

Figure 8.9 – Processor group using version control

If you browse back to the NiFi Registry, you will see that Scf-DataEngineeringPython is being tracked. You will also see the details by expanding the bar. The details show your description and the version notes (First Commit...
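You can also confirm what the Registry is tracking without the UI, since the Registry publishes its own small REST API. The sketch below builds the endpoint URLs for listing buckets and the versioned flows inside one; the base URL is an assumption for a local installation:

```python
REGISTRY = "http://localhost:18080/nifi-registry-api"  # assumed local Registry

def buckets_url(base: str = REGISTRY) -> str:
    """Endpoint that lists every bucket in the Registry."""
    return f"{base}/buckets"

def flows_url(bucket_id: str, base: str = REGISTRY) -> str:
    """Endpoint that lists the versioned flows inside one bucket."""
    return f"{base}/buckets/{bucket_id}/flows"

print(buckets_url())
# With the Registry running, you could walk the buckets like this:
# import requests
# for bucket in requests.get(buckets_url()).json():
#     print(bucket["name"], flows_url(bucket["identifier"]))
```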

Using git-persistence with the NiFi Registry

Just like software developers, you can also use Git to version control your data pipelines. The NiFi Registry allows you to use git-persistence with some configuration. To use Git with your data pipelines, you need to first create a repository.

Log in to GitHub and create a repository for your data pipelines. I have logged in to my account and have created the repository as shown in the following screenshot:

Figure 8.16 – Creating a GitHub repository

After creating a repository, you will need to create an access token for the registry to use to read and write to the repository. In the GitHub Settings, go to Developer settings, then Personal access tokens, then click the Generate a personal access token hyperlink shown in the following screenshot:

Figure 8.17 – The setting to create an access token

You can then add a note for the token so you can remember what service is using...
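Once the token exists, git-persistence is switched on in the Registry's conf/providers.xml by swapping the default flow persistence provider for the Git one. A sketch of that fragment, where the class and property names follow the GitFlowPersistenceProvider shipped with the Registry, and the directory, remote name, and credentials are placeholders for your setup:

```xml
<flowPersistenceProvider>
    <class>org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider</class>
    <!-- A local clone of your GitHub repository -->
    <property name="Flow Storage Directory">./versioned_flows</property>
    <!-- Remote to push commits to; leave blank to keep commits local -->
    <property name="Remote To Push">origin</property>
    <!-- Your GitHub username and the personal access token created above -->
    <property name="Remote Access User">your-github-user</property>
    <property name="Remote Access Password">your-access-token</property>
</flowPersistenceProvider>
```

The Registry must be restarted after editing providers.xml for the new provider to take effect.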

Summary

In this chapter, you have learned one of the most important features of production data pipelines: version control. A software developer would not write code without using version control, and neither should a data engineer. You have learned how to install and configure the NiFi Registry and how to start tracking versions of your processor groups. Lastly, you are now able to persist versions to GitHub. Any changes to your data pipelines will be saved, and if you need to roll back, you can. As your team grows, all the data engineers will be able to manage the data pipelines and be sure they have the latest versions, all while developing locally.

In the next chapter, you will learn about logging and monitoring your data pipelines. If something goes wrong, and it will, you will need to know about it. Good logging and monitoring of data pipelines will allow you to catch errors when they happen and debug them to restore your data flows.
