You're reading from Building Data Streaming Applications with Apache Kafka

Product type: Book
Published in: Aug 2017
Publisher: Packt
ISBN-13: 9781787283985
Edition: 1st
Authors (2):

Chanchal Singh
Chanchal Singh has over half a decade of experience in product development and architecture design. He has worked closely with the leadership teams of various companies, including directors, CTOs, and founding members, to define their technical road maps. He is the founder of and a speaker at the meetup groups Big Data and AI Pune Meetup and Experience Speaks. He is the co-author of the book Building Data Streaming Applications with Apache Kafka. He has a Bachelor's degree in Information Technology from the University of Mumbai and a Master's degree in Computer Application from Amity University. He was also part of the Entrepreneur Cell at IIT Mumbai. His LinkedIn profile can be found under the username Chanchal Singh.

Manish Kumar
Manish Kumar works as Director of Technology and Architecture at VSquare. He has over 13 years' experience in providing technology solutions to complex business problems. He has worked extensively on web application development, IoT, big data, cloud technologies, and blockchain. Aside from this book, Manish has co-authored three books (Mastering Hadoop 3, Artificial Intelligence for Big Data, and Building Streaming Applications with Apache Kafka).


Kafka Cluster Deployment

In the previous chapters, we talked about the different use cases associated with Apache Kafka. We shed light on different technologies and frameworks associated with the Kafka messaging system. However, putting Kafka to production use requires additional tasks and knowledge. First, you must have a thorough understanding of how a Kafka cluster works. Then, you must determine the hardware required for the Kafka cluster by performing adequate capacity planning. You also need to understand Kafka deployment patterns and how to perform day-to-day Kafka administration activities. In this chapter, we will cover the following topics:

  • Kafka cluster internals
  • Capacity planning
  • Single-cluster deployment
  • Multi-cluster deployment
  • Decommissioning brokers
  • Data migration

In a nutshell, this chapter focuses on Kafka cluster deployment on enterprise-grade production...

Kafka cluster internals

This topic has been covered in bits and pieces in the introductory chapters of this book. In this section, however, we cover it with respect to the components and processes that play an important role in a Kafka cluster. We will not only talk about the different Kafka cluster components but also cover how these components communicate with each other via Kafka protocols.

Role of Zookeeper

Kafka clusters cannot run without Zookeeper servers, which are tightly coupled with Kafka cluster installations. Therefore, we will start this section by understanding the role of Zookeeper in Kafka clusters.

If we must define the role of Zookeeper in a few words, we can say that Zookeeper acts...

Capacity planning

Capacity planning is mostly required when you want to deploy Kafka in your production environment. It helps you achieve the desired performance from Kafka systems along with the required hardware. In this section, we will talk about some of the important aspects to consider while performing capacity planning of a Kafka cluster.

Note that there is no one definite way to perform Kafka capacity planning. There are multiple factors that come into the picture, and they vary depending upon your organizational use cases.

Our goal here is to give you a good starting point for Kafka cluster capacity planning with some pointers that you should always keep in mind. Let's consider these one by one.
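One such pointer is disk sizing, which can be roughly estimated from the expected write throughput, the retention period, and the replication factor. The sketch below illustrates the arithmetic only; the write rate, retention, broker count, and 30% headroom figure are hypothetical assumptions, not recommendations:

```python
# Rough disk-capacity estimate for a Kafka cluster.
# All input figures are hypothetical; substitute your own workload numbers.

def required_disk_gb(write_mb_per_sec, retention_days, replication_factor,
                     headroom=0.3):
    """Total cluster disk (GB) = throughput * retention * replication,
    plus fractional headroom for indexes, open segments, and segments
    awaiting deletion."""
    seconds = retention_days * 24 * 60 * 60
    raw_gb = write_mb_per_sec * seconds / 1024 * replication_factor
    return raw_gb * (1 + headroom)

# Example: 10 MB/s of writes, 7-day retention, replication factor 3.
total = required_disk_gb(10, 7, 3)
per_broker = total / 6  # spread over a hypothetical 6-broker cluster
print(round(total), round(per_broker))
```

Similar back-of-the-envelope arithmetic applies to network bandwidth and memory; the point is to start from your workload numbers rather than from hardware defaults.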

Capacity planning goals

...

Single cluster deployment

This section gives you an overview of what a Kafka cluster would look like in a single data center. In a single cluster deployment, all your clients connect to one data center, and reads/writes happen from the same cluster. You would have multiple brokers and Zookeeper servers deployed to serve the requests. All those brokers and Zookeeper servers would be in the same data center, within the same network subnet.

The following diagram represents what single cluster deployments would look like:

In the preceding diagram, Kafka is deployed in Data Center 1. Just like any other single Kafka cluster deployment, there are internal clients (Application 1 and Application 2), remote clients in different Data Centers (Application 3 and Application 4), and direct remote clients in the form of mobile applications.

As you can clearly see, this kind of setup has...

Multicluster deployment

Generally, multicluster deployment is used to mitigate some of the risks associated with single cluster deployment. We mentioned some of those risks in the preceding section. Multicluster deployment comes in two flavors: distributive models and aggregate models.

The distributive model diagram is shown in the following figure. In this model, based on the topics, messages are sent to different Kafka clusters deployed in different data centers. Here, we have chosen to deploy Kafka cluster on Data Center 1 and Data Center 3.

Applications deployed in Data Center 2 can choose to send data to either of the Kafka clusters deployed in Data Center 1 and Data Center 3. They will use the Kafka cluster deployed in a particular data center depending on the Kafka topic associated with the messages. This kind of message routing can also be done using some intermediate...
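The topic-based routing described above can be sketched as a simple lookup that an application in Data Center 2 might perform before producing a message. The topic names and bootstrap addresses below are hypothetical placeholders:

```python
# Hypothetical topic-to-cluster routing for a distributive multicluster setup.
# An application looks up the bootstrap servers for a message's topic, then
# hands the message to a producer configured for that cluster.

TOPIC_TO_CLUSTER = {
    "orders":  "kafka-dc1.example.com:9092",  # Kafka cluster in Data Center 1
    "metrics": "kafka-dc3.example.com:9092",  # Kafka cluster in Data Center 3
}

DEFAULT_CLUSTER = "kafka-dc1.example.com:9092"

def bootstrap_servers_for(topic):
    """Return the bootstrap address of the cluster that owns this topic."""
    return TOPIC_TO_CLUSTER.get(topic, DEFAULT_CLUSTER)

print(bootstrap_servers_for("metrics"))  # routed to Data Center 3
```

In practice this mapping would live in shared configuration rather than in code, so that all producing applications route a given topic consistently.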

Decommissioning brokers

Kafka is a distributed and replicated messaging system. Decommissioning brokers can sometimes become a tedious task. In light of that, we have introduced this section to keep you informed about some of the steps you should perform to decommission a broker.

You can automate this using any scripting language. In general, you should perform the following steps:

  1. Log in to the Zookeeper shell, and from there, collect the relevant broker information based on the broker IP or hostname.
  2. Next, based on the broker information collected from Zookeeper, you should gather information about which topics and partition data need to be reassigned to different brokers. You can use Kafka topic shell-based utilities to gather such information. Basically, topic partitions that require reassignment are identified with leader and replicas values that are equal to...
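Once the topics hosted on the outgoing broker are identified in step 2, the standard kafka-reassign-partitions.sh tool expects a topics-to-move JSON file as its input. A minimal sketch of generating that file follows; the topic names are hypothetical placeholders standing in for the list gathered above:

```python
# Build the topics-to-move JSON consumed by kafka-reassign-partitions.sh.
# The topic list would come from step 2 above; these names are placeholders.
import json

def topics_to_move_json(topics):
    """Return the JSON document expected by --topics-to-move-json-file."""
    return json.dumps({"version": 1,
                       "topics": [{"topic": t} for t in topics]})

doc = topics_to_move_json(["orders", "metrics"])
with open("topics-to-move.json", "w") as f:
    f.write(doc)
print(doc)
```

The generated file is then passed to kafka-reassign-partitions.sh with --topics-to-move-json-file and --generate, naming only the remaining brokers in --broker-list, and the proposed assignment is applied with --execute.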

Data migration

Data migration in a Kafka cluster can be viewed from different aspects. You may want to migrate data to newly-added disk drives in the same cluster and then decommission the old disks. You may want to move data to a secure cluster or to newly-added brokers and then decommission the old brokers. You may want to move data to an entirely new cluster, or to the cloud. Sometimes, you also end up migrating Zookeeper servers. In this section, we will discuss one of the scenarios mentioned earlier.

Let's consider the scenario where we want to add new hard drives/disks and decommission old ones on broker servers. Kafka data directories contain topic and partition data. You can always configure more than one data directory, and Kafka will balance the partition or topic data across these directory locations.
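The multiple-data-directory setup mentioned above is controlled by the log.dirs broker property in server.properties. A minimal fragment, with illustrative mount paths:

```properties
# server.properties: comma-separated data directories; Kafka spreads newly
# created partitions across these locations (paths are illustrative).
log.dirs=/mnt/disk1/kafka-logs,/mnt/disk2/kafka-logs
```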

One important thing to note here is that Kafka does...

Summary

In this chapter, we dove deep into the Kafka cluster. You learned how replication works in Kafka. This chapter walked you through how Zookeeper maintains its znodes and how Kafka uses Zookeeper servers to ensure high availability. We wanted you to understand how different processes work in Kafka and how they are coordinated with different Kafka components. Sections such as Metadata request processing, Producer request processing, and Consumer request processing were written with that goal in mind.

You also learned about the different types of Kafka deployment models, along with the different aspects of capacity planning for a Kafka cluster. Capacity planning is important from the perspective of deploying a Kafka cluster in the production environment. This chapter also touched on complex Kafka administration operations such as broker decommissioning and...

