Introducing Kafka

by Nishant Garg | October 2013 | Open Source

Welcome to the world of Apache Kafka.

In this article by Nishant Garg, the author of Apache Kafka, we will be introduced to Kafka and the problems it is designed to solve.


In today's world, real-time information is continuously generated by applications (business, social, or any other type), and this information needs easy ways to be reliably and quickly routed to multiple types of receivers. Most of the time, the applications producing the information and the applications consuming it are far apart and inaccessible to each other. This sometimes forces producers or consumers to be redeveloped just to provide an integration point between them. A mechanism is therefore required for seamlessly integrating information producers and consumers, so that no application has to be rewritten at either end.

In the present era of big data, the first challenge is to collect the data and the second challenge is to analyze it. As the amount of data is huge, the analysis typically covers the following kinds of data, and much more:

  • User behavior data
  • Application performance tracing
  • Activity data in the form of logs
  • Event messages

Message publishing is a mechanism for connecting various applications with the help of messages that are routed between them, for example, by a message broker such as Kafka. Kafka addresses the real-time problem in any software solution: dealing with large volumes of information as it arrives and routing it to multiple consumers quickly. It provides seamless integration between producers and consumers of information without blocking the producers, and without letting producers know who the final consumers are.
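As a minimal sketch of this decoupling (not taken from the book's text), the following Java producer publishes a message using the Kafka 0.8 producer API that was current when this article was written; the broker address and the topic name activity-log are assumptions made for the example:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SimpleLogProducer {
    public static void main(String[] args) {
        // Point the producer at one or more brokers; "localhost:9092" is
        // the default Kafka broker address and an assumption in this sketch.
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        // Publish a message without knowing (or blocking on) any consumer.
        producer.send(new KeyedMessage<String, String>(
                "activity-log", "page-visit: /home by user42"));

        producer.close();
    }
}

Note that the producer needs only a broker address and a topic name; it never learns which consumers, if any, eventually read the message.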

Apache Kafka is an open source, distributed publish-subscribe messaging system, mainly designed with the following characteristics:

  • Persistent messaging: To derive real value from big data, no kind of information loss can be afforded. Apache Kafka is designed with O(1) disk structures that provide constant-time performance even with very large volumes of stored messages, on the order of terabytes.
  • High throughput: Keeping big data in mind, Kafka is designed to work on commodity hardware and to support millions of messages per second.
  • Distributed: Apache Kafka explicitly supports partitioning of messages over Kafka servers and distributing consumption over a cluster of consumer machines, while maintaining per-partition ordering semantics (see the keyed-message sketch after this list).
  • Multiple client support: The Apache Kafka system supports easy integration of clients from different platforms such as Java, .NET, PHP, Ruby, and Python.
  • Real time: Messages produced by the producer threads should be immediately visible to consumer threads; this feature is critical to event-based systems such as Complex Event Processing (CEP) systems.
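To make the per-partition ordering guarantee concrete, here is a short, hedged sketch using the same 0.8-era producer API as above; the topic user-activity, the key user42, and the event strings are assumptions. Because the default partitioner hashes the message key, all sends that share a key land in the same partition and are therefore consumed in production order:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class OrderedUserEvents {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        // Messages sharing the key "user42" hash to the same partition,
        // so a consumer of that partition sees them in production order.
        producer.send(new KeyedMessage<String, String>(
                "user-activity", "user42", "login"));
        producer.send(new KeyedMessage<String, String>(
                "user-activity", "user42", "page-visit: /home"));
        producer.send(new KeyedMessage<String, String>(
                "user-activity", "user42", "logout"));

        producer.close();
    }
}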

Kafka provides a real-time publish-subscribe solution that overcomes the challenges of consuming real-time data in volumes that may grow by an order of magnitude to become larger than the real data. Kafka also supports parallel data loading into Hadoop systems.

The following diagram shows a typical big data aggregation-and-analysis scenario supported by the Apache Kafka messaging system:

On the production side, there are different kinds of producers, such as the following:

  • Frontend web applications generating application logs
  • Producer proxies generating web analytics logs
  • Producer adapters generating transformation logs
  • Producer services generating invocation trace logs

On the consumption side, there are different kinds of consumers, such as the following (a minimal consumer sketch follows the list):

  • Offline consumers that consume messages and store them in Hadoop or a traditional data warehouse for offline analysis
  • Near real-time consumers that consume messages and store them in a NoSQL datastore such as HBase or Cassandra for near real-time analytics
  • Real-time consumers that filter messages in in-memory databases and trigger alert events for related groups
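The following is a minimal sketch of such a consumer, using the high-level consumer API of Kafka 0.8 (the version this book targets); the ZooKeeper address, the group ID offline-analysis, and the topic activity-log are assumptions carried over from the producer sketch:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class SimpleLogConsumer {
    public static void main(String[] args) {
        // The high-level consumer coordinates through ZooKeeper; the
        // address and group ID below are assumptions for this sketch.
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "offline-analysis");

        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Request a single stream (one consuming thread) for the topic.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("activity-log", 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                consumer.createMessageStreams(topicCountMap);

        // Block on the stream and print each message as it arrives.
        ConsumerIterator<byte[], byte[]> it =
                streams.get("activity-log").get(0).iterator();
        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }
    }
}

Each consumer group tracks its own position in the stream, which is what lets offline, near real-time, and real-time consumers read the same messages independently.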

Need for Kafka

A large amount of data is generated by companies that have any form of web-based presence and activity. Data is one of the newer ingredients in these Internet-based systems and typically includes user-activity events corresponding to logins, page visits, clicks, social networking activities such as likes, sharing, and comments, as well as operational and system metrics. Owing to its high throughput (millions of messages per second), this data is typically handled by logging and traditional log aggregation solutions. These traditional solutions are viable for providing logging data to an offline analysis system such as Hadoop, but they are very limiting for building real-time processing systems.

Following the new trends in Internet applications, activity data has become a part of production data and is used to run analytics in real time. These analytics can be:

  • Search based on relevance
  • Recommendations based on popularity, co-occurrence, or sentiment analysis
  • Delivering advertisements to the masses
  • Securing Internet applications against spam or unauthorized data scraping

Real-time usage of these multiple sets of data collected from production systems has become a challenge because of the volume of data collected and processed.

Apache Kafka aims to unify offline and online processing by providing a mechanism for parallel loading into Hadoop systems as well as the ability to partition real-time consumption over a cluster of machines. Kafka can be compared with Scribe or Flume, as it is useful for processing activity stream data; but from an architecture perspective, it is closer to traditional messaging systems such as ActiveMQ or RabbitMQ.

A few Kafka usages

Some of the companies that are using Apache Kafka in their respective use cases are as follows:

  • LinkedIn (www.linkedin.com): Apache Kafka is used at LinkedIn for the streaming of activity data and operational metrics. This data powers various products such as LinkedIn news feed and LinkedIn Today in addition to offline analytics systems such as Hadoop.
  • DataSift (www.datasift.com/): At DataSift, Kafka is used as a collector for monitoring events and as a tracker of users' consumption of data streams in real time.
  • Twitter (www.twitter.com/): Twitter uses Kafka as a part of Storm, its stream-processing infrastructure.
  • Foursquare (www.foursquare.com/): Kafka powers online-to-online and online-to-offline messaging at Foursquare. It is used to integrate Foursquare monitoring and production systems with Foursquare's Hadoop-based offline infrastructures.
  • Square (www.squareup.com/): Square uses Kafka as a bus to move all system events through Square's various datacenters. This includes metrics, logs, custom events, and so on. On the consumer side, it outputs into Splunk, Graphite, or Esper-like real-time alerting.

    The source of the above information is https://cwiki.apache.org/confluence/display/KAFKA/Powered+By.

Summary

In this article, we have seen how companies are evolving their mechanisms for collecting and processing application-generated data, and how they are harnessing the real power of this data by running analytics over it.

About the Author


Nishant Garg

Nishant Garg is a Technical Architect with more than 13 years' experience in various technologies such as Java Enterprise Edition, Spring, Hibernate, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Kafka, Storm, Mahout, and Solr/Lucene; NoSQL databases such as MongoDB, CouchDB, HBase, and Cassandra; and MPP databases such as GreenPlum and Vertica.

He attained his M.S. in Software Systems from the Birla Institute of Technology and Science, Pilani, India, and is currently a part of the Big Data R&D team in the innovation labs at Impetus Infotech Pvt. Ltd.

Nishant has enjoyed working with recognizable names in the IT services and financial industries, employing full software lifecycle methodologies such as Agile and Scrum. He has also undertaken many speaking engagements on Big Data technologies.
