Chapter 4: Working with Databases

In the previous chapter, you learned how to read and write text files. Reading log files or other text files from a data lake and moving them into a database or data warehouse is a common task for data engineers. In this chapter, you will use the skills you gained working with text files and learn how to move that data into a database. This chapter will also teach you how to extract data from relational and NoSQL databases. By the end of this chapter, you will have the skills needed to work with databases using Python, NiFi, and Airflow. Most of your data pipelines will likely end with a database, and many will start with one as well. With these skills, you will be able to build data pipelines that extract from and load into both relational and NoSQL databases.

In this chapter, we're going to cover the following main topics:

  • Inserting and extracting relational data in Python
  • Inserting and extracting NoSQL database data in Python
  • Building data pipelines in Apache Airflow
  • Handling databases with NiFi processors

Inserting and extracting relational data in Python

When you hear the word database, you probably picture a relational database – that is, a database made up of tables containing columns and rows, with relationships between the tables; for example, a purchase order system that holds inventory, purchases, and customer information. Relational databases have been around for over 40 years and come from the relational data model proposed by E. F. Codd in 1970. There are several vendors of relational databases – including IBM, Oracle, and Microsoft – but all of these databases use a similar dialect of SQL, which stands for Structured Query Language. In this book, you will work with a popular open source database – PostgreSQL. In the next section, you will learn how to create a database and tables.

Creating a PostgreSQL database and tables

In Chapter 2, Building Our Data Engineering Infrastructure, you created a database in PostgreSQL using pgAdmin...
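
Although the walkthrough is abbreviated here, the same steps can be done from Python. The following is a minimal sketch, not the book's exact code, assuming a local PostgreSQL server and the psycopg2-binary library (pip3 install psycopg2-binary); the database name, credentials, and users table are placeholders:

import psycopg2

# Connect to a local PostgreSQL server; the database, user, and
# password are placeholders, so substitute your own credentials.
conn = psycopg2.connect(host="localhost", database="dataengineering",
                        user="postgres", password="postgres")
cur = conn.cursor()

# Create a simple table to hold user records.
cur.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id SERIAL PRIMARY KEY,
        name TEXT,
        city TEXT
    );
""")

# Insert a row with a parameterized query to avoid SQL injection.
cur.execute("INSERT INTO users (name, city) VALUES (%s, %s)",
            ("Paul", "Albuquerque"))
conn.commit()

# Read the data back out.
cur.execute("SELECT id, name, city FROM users")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()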

Inserting and extracting NoSQL database data in Python

Relational databases may be what you think of when you hear the term database, but there are several other types of databases, such as columnar, key-value, and time-series. In this section, you will learn how to work with Elasticsearch, which is a NoSQL database. NoSQL is a generic term referring to databases that do not store data in rows and columns. NoSQL databases often store their data as JSON documents and use a query language other than SQL. The next section will teach you how to load data into Elasticsearch.
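
To make the contrast concrete, here is a hypothetical record expressed as the kind of JSON document a NoSQL store would hold, rather than as a row in a table:

# A single record as a JSON-style document. Unlike a relational row,
# it can nest related data (the address) inside the document itself.
user_document = {
    "name": "Paul",
    "city": "Albuquerque",
    "address": {
        "street": "123 Main St",
        "zip": "87102"
    }
}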

Installing Elasticsearch

To install the elasticsearch library, you can use pip3, as shown:

pip3 install elasticsearch

Using pip will install the newest version of the library, which is what you will need if you installed Elasticsearch according to the instructions in Chapter 2, Building Our Data Engineering Infrastructure. The library is also available for Elasticsearch versions 2, 5, 6, and 7. To verify the installation and...
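
As a minimal sketch of that verification, assuming Elasticsearch is running locally on the default port 9200 and you have the version 7 client; the users index and document fields are placeholders:

from elasticsearch import Elasticsearch

# Connect to a local Elasticsearch instance (default port 9200).
es = Elasticsearch("http://localhost:9200")

# Verify the connection by printing cluster information.
print(es.info())

# Index a document; Elasticsearch creates the users index if it
# does not already exist.
doc = {"name": "Paul", "city": "Albuquerque"}
result = es.index(index="users", body=doc)
print(result["result"])  # prints "created" on success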

Building data pipelines in Apache Airflow

In the previous chapter, you built your first Airflow data pipeline using a Bash and Python operator. This time, you will combine two Python operators to extract data from PostgreSQL, save it as a CSV file, then read it in and write it to an Elasticsearch index. The complete pipeline is shown in the following screenshot:

Figure 4.6 – Airflow DAG

The preceding Directed Acyclic Graph (DAG) looks very simple; it has only two tasks, and you could combine them into a single function. This is not a good idea. In Section 2, Deploying Pipelines into Production, you will learn about modifying your data pipelines for production. A key tenet of production pipelines is that each task should be atomic; that is, each task should be able to stand on its own. If you had a single function that read a database and inserted the results, when it failed you would have to track down whether the query or the insert was at fault. As...
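
As a rough sketch of this two-task structure (not the book's exact code), assuming Airflow 1.x-style imports along with the psycopg2 and elasticsearch libraries; the connection details, staging file path, table, and index names are placeholders:

from datetime import datetime
import csv

import psycopg2
from elasticsearch import Elasticsearch
from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def query_postgres():
    # Task 1: extract rows from PostgreSQL and stage them as a CSV file.
    conn = psycopg2.connect(host="localhost", database="dataengineering",
                            user="postgres", password="postgres")
    cur = conn.cursor()
    cur.execute("SELECT name, city FROM users")
    with open("/tmp/users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "city"])
        writer.writerows(cur.fetchall())
    cur.close()
    conn.close()


def insert_elasticsearch():
    # Task 2: read the staged CSV and index each row as a JSON document.
    es = Elasticsearch("http://localhost:9200")
    with open("/tmp/users.csv") as f:
        for row in csv.DictReader(f):
            es.index(index="users", body=row)


default_args = {"owner": "airflow", "start_date": datetime(2020, 10, 1)}

with DAG("postgres_to_elasticsearch", default_args=default_args,
         schedule_interval=None) as dag:
    extract = PythonOperator(task_id="query_postgres",
                             python_callable=query_postgres)
    load = PythonOperator(task_id="insert_elasticsearch",
                          python_callable=insert_elasticsearch)

    # Keep each task atomic: extract first, then load.
    extract >> load

Because each task stands on its own, a failure in the Elasticsearch load cannot be confused with a failure in the PostgreSQL query, and the load can be rerun without re-extracting.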

Handling databases with NiFi processors

In the previous sections, you learned how to read and write CSV and JSON files using Python. Reading files is such a common task that tools such as NiFi have prebuilt processors to handle it. In this section, you will build the same data pipeline as in the previous section. In NiFi, the data pipeline will look as shown in the following screenshot:

Figure 4.9 – A NiFi data pipeline to move data from PostgreSQL to Elasticsearch

The data pipeline contains one more task than the Airflow version, but otherwise, it should look straightforward. The following sections will walk you through building the data pipeline.

Extracting data from PostgreSQL

The most commonly used processor for handling relational databases in NiFi is the ExecuteSQLRecord processor. Drag the Processor icon to the canvas and search for the ExecuteSQLRecord processor. Once it has been added to the canvas, you need to configure it.

Configuring the...

Summary

In this chapter, you learned how to use Python to query and insert data into both relational and NoSQL databases. You also learned how to use both Airflow and NiFi to create data pipelines. Database skills are some of the most important for a data engineer. There will be very few data pipelines that do not touch on them in some way. The skills you learned in this chapter provide the foundation for the other skills you will need to learn – primarily SQL. Combining strong SQL skills with the data pipeline skills you learned in this chapter will allow you to accomplish most of the data engineering tasks you will encounter.

In the examples, the data pipelines were not idempotent. Every time they ran, they produced new results, including duplicate data you did not want. We will fix that in Section 2, Deploying Pipelines into Production. But before you get to that, you will need to learn how to handle common data issues, and how to enrich and transform your data.

The next chapter will...
