Data Engineering with Python

Product type: Book
Published: Oct 2020
Publisher: Packt
ISBN-13: 9781839214189
Pages: 356
Edition: 1st
Author: Paul Crickard

Table of Contents (21 chapters)

Preface
Section 1: Building Data Pipelines – Extract, Transform, and Load
Chapter 1: What is Data Engineering?
Chapter 2: Building Our Data Engineering Infrastructure
Chapter 3: Reading and Writing Files
Chapter 4: Working with Databases
Chapter 5: Cleaning, Transforming, and Enriching Data
Chapter 6: Building a 311 Data Pipeline
Section 2: Deploying Data Pipelines in Production
Chapter 7: Features of a Production Pipeline
Chapter 8: Version Control with the NiFi Registry
Chapter 9: Monitoring Data Pipelines
Chapter 10: Deploying Data Pipelines
Chapter 11: Building a Production Data Pipeline
Section 3: Beyond Batch – Building Real-Time Data Pipelines
Chapter 12: Building a Kafka Cluster
Chapter 13: Streaming Data with Apache Kafka
Chapter 14: Data Processing with Apache Spark
Chapter 15: Real-Time Edge Data with MiNiFi, Kafka, and Spark
Other Books You May Enjoy
Appendix

Chapter 12: Building a Kafka Cluster

In this chapter, you will move beyond batch processing – running queries on a complete set of data – and learn about the tools used in stream processing. In stream processing, the data may be infinite and incomplete at the time of a query. One of the leading tools in handling streaming data is Apache Kafka. Kafka is a tool that allows you to send data in real time to topics. These topics can be read by consumers who process the data. This chapter will teach you how to build a three-node Apache Kafka cluster. You will also learn how to create and send messages (produce) and read data from topics (consume).
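Before standing up a real cluster, it may help to see the produce/consume pattern in miniature. The following Python sketch is not the Kafka API – it simply models a topic as an append-only log, a producer that appends messages, and a consumer that tracks its own read offset, which is the core idea behind Kafka topics.

```python
# A toy model of the produce/consume pattern - not the Kafka API.
# A topic is an append-only log; each consumer tracks its own offset.

class Topic:
    def __init__(self, name):
        self.name = name
        self.log = []  # append-only message log

    def produce(self, message):
        self.log.append(message)


class Consumer:
    def __init__(self, topic):
        self.topic = topic
        self.offset = 0  # position of the next unread message

    def consume(self):
        messages = self.topic.log[self.offset:]
        self.offset = len(self.topic.log)
        return messages


topic = Topic("dataengineering")
topic.produce("first message")
topic.produce("second message")

consumer = Consumer(topic)
print(consumer.consume())  # ['first message', 'second message']

topic.produce("third message")
print(consumer.consume())  # ['third message']
```

Because the consumer keeps its own offset, producing and consuming are decoupled: new messages accumulate in the log whether or not anyone is reading, and a consumer picks up where it left off. Kafka adds partitioning, replication, and persistence on top of this basic shape.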

In this chapter, we're going to cover the following main topics:

  • Creating ZooKeeper and Kafka clusters
  • Testing the Kafka cluster

Creating ZooKeeper and Kafka clusters

Most tutorials on distributed applications only show how to run a single node, leaving you wondering how you would run it in production. In this section, you will build a three-node ZooKeeper and Kafka cluster. It will run on a single machine, but each instance will live in its own folder, and each folder simulates a server. The only modification needed when running on separate servers is to change localhost to each server's IP address.
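The folder-per-server layout can be sketched as a small script. The snippet below writes a minimal server.properties for three simulated brokers; the folder names (kafka_1 to kafka_3) follow this chapter's convention, and the keys (broker.id, listeners, log.dirs, zookeeper.connect) are standard Kafka settings, but the exact values are illustrative assumptions rather than the book's full configuration.

```python
# Sketch: generate per-broker config overrides for a three-node
# cluster simulated on one machine. Each folder stands in for a server.
import os

ZOOKEEPER_CONNECT = "localhost:2181,localhost:2182,localhost:2183"


def write_broker_config(base_dir, broker_id):
    """Write a minimal server.properties for one simulated broker."""
    folder = os.path.join(base_dir, f"kafka_{broker_id}")
    os.makedirs(os.path.join(folder, "logs"), exist_ok=True)
    config = "\n".join([
        # Every broker in a cluster needs a unique id.
        f"broker.id={broker_id}",
        # Offset the port so three brokers can share one machine.
        f"listeners=PLAINTEXT://localhost:{9092 + broker_id - 1}",
        # Each simulated server gets its own log directory.
        f"log.dirs={os.path.join(folder, 'logs')}",
        # All brokers point at the same ZooKeeper ensemble.
        f"zookeeper.connect={ZOOKEEPER_CONNECT}",
    ])
    path = os.path.join(folder, "server.properties")
    with open(path, "w") as f:
        f.write(config + "\n")
    return path


if __name__ == "__main__":
    for broker_id in (1, 2, 3):
        print(write_broker_config(".", broker_id))
```

On real servers, you would instead keep the default ports and set each broker's listener to its own IP address.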

The next chapter will go into detail on the topic of Apache Kafka, but for now it is enough to understand that Kafka is a tool for building real-time data streams. Kafka was developed at LinkedIn and is now an Apache project. You can find Kafka on the web at http://kafka.apache.org. The website is shown in the following screenshot:

Figure 12.1 – Apache Kafka website

Kafka requires another application, ZooKeeper, to manage information about the...

Testing the Kafka cluster

Kafka comes with scripts to allow you to perform some basic functions from the command line. To test the cluster, you can create a topic, create a producer, send some messages, and then create a consumer to read them. If the consumer can read them, your cluster is running.

To create a topic, run the following command from your kafka_1 directory:

bin/kafka-topics.sh --create --zookeeper localhost:2181,localhost:2182,localhost:2183 --replication-factor 2 --partitions 1 --topic dataengineering

The preceding command runs the kafka-topics script with the create flag. It then specifies the ZooKeeper cluster addresses, the replication factor, the number of partitions, and the topic name. If the topic was created, the terminal will have printed the following line:

created topic dataengineering

You can verify this by listing all the topics in the Kafka cluster using the same script, but with the --list flag:

bin/kafka-topics.sh --list --zookeeper localhost:2181,localhost:2182,localhost:2183
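If you would rather script this check than eyeball it, the --list flag prints one topic name per line, so the output is easy to parse. The helper below is a hypothetical convenience, not part of Kafka; check_cluster assumes the cluster from this chapter is running and that you invoke it from the kafka_1 directory.

```python
# Sketch: verify a topic appears in `kafka-topics.sh --list` output.
# topic_exists() and check_cluster() are hypothetical helpers,
# not part of the Kafka distribution.
import subprocess


def topic_exists(list_output, topic):
    """Return True if the topic name appears in --list output
    (one topic name per line)."""
    return topic in [line.strip() for line in list_output.splitlines()]


def check_cluster(topic="dataengineering"):
    # Requires the cluster from this chapter to be running,
    # and must be run from the kafka_1 directory.
    result = subprocess.run(
        ["bin/kafka-topics.sh", "--list",
         "--zookeeper",
         "localhost:2181,localhost:2182,localhost:2183"],
        capture_output=True, text=True, check=True)
    return topic_exists(result.stdout, topic)


print(topic_exists("dataengineering\n__consumer_offsets\n",
                   "dataengineering"))  # True
```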

The...

Summary

In this chapter, you learned how to create a Kafka cluster, which required the creation of a ZooKeeper cluster. While you ran all of the instances on a single machine, the steps you took will work on different servers too. Kafka allows the creation of real-time data streams and will require a different way of thinking than the batch processing you have been doing.

The next chapter will explain the concepts involved in streams in depth. You will also learn how to process streams in both NiFi and Python.
