
You're reading from  Modern Big Data Processing with Hadoop

Product type: Book
Published in: Mar 2018
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781787122765
Edition: 1st
Authors (3):

V Naresh Kumar

Naresh has more than a decade of professional experience in designing, implementing, and running very large-scale Internet applications at Fortune 500 companies. He is a full-stack architect with hands-on experience in domains such as e-commerce, web hosting, healthcare, big data and analytics, data streaming, advertising, and databases. He believes in open source and contributes to it actively. He keeps himself up to date with emerging technologies, from Linux systems internals to frontend technologies. He studied at BITS Pilani, Rajasthan, earning a dual degree in computer science and economics.
Read more about V Naresh Kumar

Manoj R Patil

Manoj R Patil is the Chief Architect in Big Data at Compassites Software Solutions Pvt. Ltd., where he oversees the overall platform architecture for Big Data solutions and also contributes hands-on to some assignments. He has been working in the IT industry for the last 15 years. He started as a programmer and, along the way, acquired skills in architecting and designing solutions, managing projects with each stakeholder's interest in mind, and deploying and maintaining solutions on cloud infrastructure. He has been working on the Pentaho-related stack for the last 5 years, providing solutions both for employers and as a freelancer. Manoj has extensive experience in Java EE, MySQL, various frameworks, and business intelligence, and is keen to pursue his interest in predictive analytics. He was also associated with TalentBeat, Inc. and Persistent Systems, where he implemented interesting solutions in logistics, data masking, and data-intensive life sciences.
Read more about Manoj R Patil

Prashant Shindgikar

Prashant Shindgikar is an accomplished big data architect with over 20 years of experience in data analytics. He specializes in data innovation and in resolving data challenges for major retail brands. He is a hands-on architect with an innovative approach to solving data problems. He provides thought leadership and pursues strategies for engagements with senior executives on innovation in data processing and analytics. He currently works for a large US-based retail company.
Read more about Prashant Shindgikar


Designing Real-Time Streaming Data Pipelines

The first three chapters of this book all dealt with batch data. Having learned about the installation of Hadoop, data ingestion tools and techniques, and data stores, let's turn to data streaming. Not only will we look at how we can handle real-time data streams, but also how to design pipelines around them.

In this chapter, we will cover the following topics:

  • Real-time streaming concepts
  • Real-time streaming components
  • Apache Flink versus Spark
  • Apache Spark versus Storm

Real-time streaming concepts

Let's understand a few key concepts relating to real-time streaming applications in the following sections.

Data stream

A data stream is a continuous flow of data from one end to the other: from sender to receiver, from producer to consumer. The speed and volume of the data may vary; it may be 1 GB of data per second, or it may be 1 KB of data per second or per minute.
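In code, a data stream can be modeled as an unbounded sequence that a consumer pulls from, one event at a time. The following is a minimal Python sketch; the event shape and names are made up for illustration:

```python
import itertools

def sensor_stream():
    """A hypothetical unbounded data stream: each item is one event
    flowing from producer to consumer. The stream itself never ends."""
    for i in itertools.count():
        yield {"seq": i, "payload": f"reading-{i}"}

# A consumer pulls events one at a time, as they are produced.
stream = sensor_stream()
first_three = [next(stream) for _ in range(3)]
print(first_three)
```

The generator never terminates on its own; the consumer decides how many events to pull, which mirrors how a streaming consumer reads from a live source.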

Batch processing versus real-time data processing

In batch processing, data is collected in batches and each batch is sent for processing. The batch interval can be anything...

Real-time streaming components

In the following sections, we will walk through some important real-time streaming components.

Message queue

The message queue lets you publish and subscribe to a stream of events/records. There are several alternatives we can use as the message queue in our real-time stream architecture, for example, RabbitMQ, ActiveMQ, and Kafka. Of these, Kafka has gained tremendous popularity due to its various unique features, so we will discuss the architecture of Kafka in detail. A discussion of RabbitMQ and ActiveMQ is beyond the scope of this book.
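Before looking at Kafka itself, the basic publish/subscribe idea behind any message queue can be sketched in a few lines of Python. This toy in-memory broker is an illustration only; real systems such as Kafka add partitioning, replication, consumer offsets, and durable storage on top of this basic idea:

```python
from collections import defaultdict, deque

class MiniBroker:
    """A toy in-memory publish/subscribe broker (illustration only,
    not the Kafka API)."""

    def __init__(self):
        self._topics = defaultdict(list)   # topic -> list of subscriber queues

    def subscribe(self, topic):
        """Register a new subscriber and return its private queue."""
        queue = deque()
        self._topics[topic].append(queue)
        return queue

    def publish(self, topic, record):
        """Deliver a copy of the record to every subscriber of the topic."""
        for queue in self._topics[topic]:
            queue.append(record)

broker = MiniBroker()
clicks = broker.subscribe("clicks")
broker.publish("clicks", {"user": "u1", "page": "/home"})
print(clicks.popleft())
```

The key property shown here is decoupling: producers publish to a topic without knowing who, or how many, the consumers are.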

So what is Kafka?

...

Apache Storm

Apache Storm is a free and open source distributed real-time stream processing framework. At the time of writing this book, the stable release of Apache Storm is version 1.0.5. The Storm framework is predominantly written in the Clojure programming language. It was originally created and developed by Nathan Marz and his team at BackType; the project was later acquired by Twitter.

During one of his talks on the Storm framework, Nathan Marz described stream processing applications built without a framework such as Storm. These applications involve queues and worker threads: some threads read from data sources and write messages to queues, while other threads pick up these messages and write them to target data stores. The main drawback is that the source threads and target threads do not match each other's data load, which results in data pileup. It also results in data loss...
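The pileup problem is easy to demonstrate: if the source writes to the queue faster than the worker can drain it, the backlog grows without bound. A toy Python simulation, with made-up rates:

```python
from collections import deque

# Source emits 10 messages per tick; the worker drains only 6 per tick.
queue = deque()
PRODUCE_RATE, CONSUME_RATE = 10, 6

depth_over_time = []
for tick in range(5):
    queue.extend(range(PRODUCE_RATE))              # source thread writes
    for _ in range(min(CONSUME_RATE, len(queue))):
        queue.popleft()                            # worker thread drains
    depth_over_time.append(len(queue))

print(depth_over_time)  # backlog grows by 4 every tick: [4, 8, 12, 16, 20]
```

In a real deployment, an unbounded backlog eventually exhausts memory or forces messages to be dropped, which is the data loss the text refers to; frameworks like Storm address this with acknowledgements and backpressure.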

Apache Spark versus Storm

Spark uses micro-batches to process events, while Storm processes events one by one. This means that Spark Streaming has a latency of seconds, while Storm provides millisecond latency. Spark Streaming provides a high-level abstraction called a Discretized Stream, or DStream, which represents a continuous sequence of RDDs. (However, the latest version of Spark, 2.4, also supports millisecond data latency.) The latest Spark versions also support DataFrames.
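To make the latency contrast concrete, here is a toy Python simulation, not the actual Spark or Storm APIs; the one-second batch interval and the per-event processing cost are illustrative assumptions:

```python
# Events arrive at t = 0.0, 0.1, 0.2, ... seconds.
arrivals = [round(0.1 * i, 1) for i in range(10)]
BATCH_INTERVAL = 1.0   # micro-batch boundary, Spark-style (seconds)
PER_EVENT_COST = 0.005 # fixed handling cost per event, Storm-style (seconds)

# Micro-batching: an event waits until its batch window closes
# before it is processed, so worst-case latency is the full interval.
microbatch_latency = [BATCH_INTERVAL - (t % BATCH_INTERVAL) for t in arrivals]

# Per-event processing: each event is handled as soon as it arrives.
per_event_latency = [PER_EVENT_COST for _ in arrivals]

print(max(microbatch_latency), max(per_event_latency))
```

The event that arrives just after a batch opens waits nearly the whole interval, which is why micro-batch latency is bounded below by the batch interval, while per-event latency is bounded only by the processing cost itself.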

Almost the same code (API) can be used for Spark Streaming and Spark batch jobs, which helps to reuse most of the codebase across both programming models. Spark also supports machine learning and a graph API, so, again, the same codebase can be used for those use cases as well.

Summary

In this chapter, we started with a detailed look at real-time stream processing concepts, including data streams, batch versus real-time processing, CEP, low latency, continuous availability, horizontal scalability, storage, and so on. Later, we learned about Apache Kafka, a very important component of modern real-time stream data pipelines. The main features of Kafka are scalability, durability, reliability, and high throughput.

We also learned about Kafka Connect: its architecture, data flow, sources, and connectors. We studied case studies on designing a data pipeline with Kafka Connect using the file source, file sink, JDBC source, and file sink connectors.

In the later sections, we learned about various open source real-time stream-processing frameworks, such as Apache Storm, and saw a few practical examples as well. Apache Storm is distributed...

