
You're reading from Apache Spark 2.x for Java Developers

Product type: Book
Published in: Jul 2017
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781787126497
Edition: 1st Edition
Authors (2):
Sourav Gulati

Sourav Gulati has been associated with the software industry for more than 7 years. He started his career with Unix/Linux and Java and then moved towards the big data and NoSQL world. He has worked on various big data projects and has recently started a technical blog called Technical Learning. Apart from the IT world, he loves to read about mythology.

Sumit Kumar

Sumit Kumar is a developer with industry experience in telecom and banking. At different junctures, he has worked as a Java and SQL developer, but it is shell scripting that he finds both challenging and satisfying at the same time. Currently, he delivers big data projects focused on batch and near-real-time analytics and distributed indexed querying systems. Besides IT, he takes a keen interest in human and ecological issues.


Chapter 9. Near Real-Time Processing with Spark Streaming

So far in this book, we have discussed batch analytics with Spark using the core libraries and Spark SQL. In this chapter, we will learn about another vertical of Spark: processing near real-time streams of data. Spark ships with a library known as Spark Streaming, which provides the capability to process data in near real time. This extension is what makes Spark a true general-purpose system.

This chapter will explain the internals of Spark Streaming, show how to read streams of data in Spark from various data sources with examples, and introduce the newer extension of stream processing in Spark known as Structured Streaming.

Introducing Spark Streaming


With the advancement and expansion of big data technologies, most companies have shifted their focus towards data-driven decision making, which has become an essential and integral part of business. In today's world, not only the analytics itself matters, but also how early it is made available. Offline data analytics, also known as batch analytics, provides insights over historical data. Online data analytics, on the other hand, shows what is happening right now and helps organizations take decisions as early as possible to stay ahead of their competitors. Online, or near real-time, analytics is done by reading incoming streams of data, for example user activity on e-commerce websites, and processing those streams to derive valuable results.

The Spark Streaming API is a library that allows you to process data from live streams in near real time. It provides high scalability, fault tolerance, high throughput...

Understanding micro batching


Micro batching is the procedure in which an incoming stream of messages is processed by dividing it into groups of small batches. This achieves the performance benefits of batch processing while keeping the latency of processing each individual message low.

Here, an incoming stream of messages is treated as a flow of small batches of messages and each batch is passed to a processing engine, which outputs a processed stream of batches.

In Spark Streaming, a micro batch is created based on time instead of size; that is, events received in a certain time interval, usually in milliseconds, are grouped together into a batch. This ensures that no message has to wait long before being processed and helps keep processing latency under control.
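As a rough sketch of how the batch interval comes into play (the host, port, and interval values here are placeholder assumptions, not values taken from this book), the following minimal word count job creates a JavaStreamingContext with a one-second batch interval, so every second of socket data becomes one micro batch:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class MicroBatchWordCount {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("MicroBatchWordCount");
    // The second argument is the batch interval: events received each second form one micro batch
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

    // Basic source: a socket stream on a placeholder host and port
    JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

    JavaDStream<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
    JavaPairDStream<String, Integer> counts = words
        .mapToPair(word -> new Tuple2<>(word, 1))
        .reduceByKey((a, b) -> a + b);

    counts.print();            // prints the counts of each micro batch as it is processed

    jssc.start();              // start receiving data and processing micro batches
    jssc.awaitTermination();   // keep the job running until it is stopped or fails
  }
}

Note that the counts here are per batch; every new micro batch starts counting from zero, which is exactly the behaviour revisited in the section on stateful transformations later in this chapter.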

Another advantage of micro batching is that it helps keep the volume of control messages lower. For example, if a system requires an acknowledgement to be sent by the...

Streaming sources


Streaming sources are segregated into two categories in Spark Streaming: basic sources and advanced sources. Sources that are directly available through StreamingContext, such as filesystem and socket streams, are called basic sources, while sources that require dependency linkages, as in the case of Kafka, Flume, and so on, are called advanced sources. Streaming sources can also be classified on the basis of reliability: if an acknowledgement is sent to the source system after receiving and replicating the messages, then such receivers are called reliable receivers, as with the Kafka API. Similarly, if the system does not send an acknowledgement to the source system, then the receivers are termed unreliable.

Some common streaming sources, apart from socket streaming, which was discussed in the previous examples, are explained in the next section.

fileStream

Data files can be read from any directory using the fileStream() API of StreamingContext. The fileStream...
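A short, hedged illustration follows, where the directory path is a placeholder and jssc is a JavaStreamingContext such as the one created in the earlier sketch; new files placed in the monitored directory are picked up as they appear:

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;

// Convenience API: monitors the directory and reads newly created files as lines of text
JavaDStream<String> fileLines = jssc.textFileStream("hdfs://namenode:8020/streaming/input");

// Lower-level fileStream() API: the key, value, and InputFormat classes are given explicitly
JavaPairInputDStream<LongWritable, Text> rawRecords = jssc.fileStream(
    "hdfs://namenode:8020/streaming/input",
    LongWritable.class, Text.class, TextInputFormat.class);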

Kafka


Kafka is a publish-subscribe messaging system that provides a reliable Spark Streaming source. With the latest Kafka direct API, it provides a one-to-one mapping between Kafka partitions and the partitions of the RDDs generated by the DStream, along with access to metadata and offsets. Since Kafka is an advanced streaming source as far as Spark Streaming is concerned, one needs to add its dependency in the build tool of the streaming application. The following is the artifact that should be added in the build tool of one's choice before starting with Kafka integration:

 groupId = org.apache.spark 
artifactId = spark-streaming-kafka-0-10_2.11 
version = 2.1.1 

After adding the dependency, one also needs basic information about the Kafka setup, such as the server(s) on which Kafka is hosted (bootstrap.servers) and some basic configurations describing the messages, such as the serializer, group ID, and so on. The following are a few common properties used to describe a Kafka connection (a hedged sketch of wiring them into a direct stream follows the list):

  • bootstrap.servers...
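The snippet below is a minimal sketch of the Kafka 0.10 direct API wiring such properties into a stream; it assumes the jssc context from the earlier sketch, and the broker addresses, group ID, and topic name are placeholder values:

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

// Connection properties for the Kafka direct API; all values below are placeholders
Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "broker1:9092,broker2:9092");
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", StringDeserializer.class);
kafkaParams.put("group.id", "spark-streaming-demo");
kafkaParams.put("auto.offset.reset", "latest");
kafkaParams.put("enable.auto.commit", false);

Collection<String> topics = Arrays.asList("events");   // hypothetical topic name

// Each partition of the generated RDDs maps one-to-one to a Kafka partition
JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(
    jssc,
    LocationStrategies.PreferConsistent(),
    ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

stream.map(record -> record.value()).print();

With the direct approach there is no separate receiver; each task reads its own offset range directly from the corresponding Kafka partition.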

Streaming transformations


In the previous word count example, we saw that words from each line were counted once, and for the next set of records the counter was reset again. But what if we want to add the previous state's count to the new set of words arriving in the following batch? Can we do that, and how? In Spark Streaming there are two kinds of transformations, stateful and stateless. If we want to preserve the previous state, we have to opt for a stateful transformation rather than the stateless transformations we used in the previous example.
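A minimal sketch of such a stateful transformation follows, assuming the per-batch counts pair DStream and jssc context from the earlier word count sketch and a placeholder checkpoint directory; updateStateByKey() carries the running count forward across batches:

import java.util.List;

import org.apache.spark.api.java.Optional;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.streaming.api.java.JavaPairDStream;

// Stateful transformations require a checkpoint directory to persist the running state
jssc.checkpoint("hdfs://namenode:8020/streaming/checkpoint");   // placeholder path

// Adds the counts of the current micro batch to whatever state was accumulated so far
Function2<List<Integer>, Optional<Integer>, Optional<Integer>> updateCount =
    (newValues, runningCount) -> {
      int sum = runningCount.orElse(0);
      for (Integer value : newValues) {
        sum += value;
      }
      return Optional.of(sum);
    };

// counts is the per-batch (word, count) pair DStream from the earlier word count sketch
JavaPairDStream<String, Integer> runningCounts = counts.updateStateByKey(updateCount);
runningCounts.print();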

Stream processing can be stateless or stateful based on the requirement. Some stream processing problems may require maintaining a state over a period of time, while others may not.

Consider that an airline company wants to process data consisting of the temperature readings of all active flights in real time. If the airline just wants to print or store the reading...

Fault tolerance and reliability


Streaming jobs are designed to run continuously, and a failure in the job can result in a loss of data, state, or both. Making streaming jobs fault tolerant is therefore one of the essential goals of writing the streaming job in the first place. Any streaming job comes with certain guarantees, either by design or by implementing certain configuration features, which mandate how many times a message will be processed by the system (a checkpointing sketch follows this list):

  • At most once guarantee: Records in such systems are either processed once or not at all. These systems are the least reliable as far as streaming solutions are concerned.
  • At least once guarantee: The system will process the record at least once, and hence by design there will be no loss of messages, but messages can be processed multiple times, giving rise to the problem of duplication. This scenario is nevertheless better than the previous case, and there are use cases where duplicate data may not cause any problem or can easily be deduplicated.
  • Exactly once guarantee...
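One building block behind these guarantees in Spark Streaming is checkpointing combined with driver recovery. The following is a minimal sketch, with a placeholder checkpoint directory, of restarting a job from its checkpoint using JavaStreamingContext.getOrCreate():

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class FaultTolerantStreamingJob {

  // Placeholder checkpoint directory; in practice this should be a reliable store such as HDFS
  private static final String CHECKPOINT_DIR = "hdfs://namenode:8020/streaming/checkpoint";

  private static JavaStreamingContext createContext() {
    SparkConf conf = new SparkConf().setAppName("FaultTolerantStreamingJob");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));
    jssc.checkpoint(CHECKPOINT_DIR);
    // define sources and transformations here before returning the context
    return jssc;
  }

  public static void main(String[] args) throws InterruptedException {
    // On restart after a failure, the context and its state are rebuilt from the checkpoint;
    // otherwise createContext() is invoked to build a fresh one
    JavaStreamingContext jssc =
        JavaStreamingContext.getOrCreate(CHECKPOINT_DIR, FaultTolerantStreamingJob::createContext);

    jssc.start();
    jssc.awaitTermination();
  }
}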

Structured Streaming


Structured Streaming is a brand new addition to Apache Spark's stream processing vertical. It is a stream processing engine built on top of the Spark SQL engine. The introduction of Structured Streaming unifies batch processing and stream processing, as it allows us to develop a stream processing application in much the same way as a batch processing application, while remaining scalable and fault tolerant.

As per Apache Spark's documentation,

Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing without the user having to reason about streaming.

Instead of using DStreams, in Structured Streaming the Dataset API is used, and it is the responsibility of the Spark SQL engine to keep the Dataset updated as new streaming data arrives. As the Dataset API is used, all the Spark SQL operations are available; therefore, users can run SQL queries on the stream data using the optimized Spark SQL engine...
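As an indicative sketch (the host and port below are placeholders), a Structured Streaming word count reads a socket stream as an unbounded Dataset and lets the Spark SQL engine maintain the running counts:

import java.util.Arrays;

import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class StructuredSocketWordCount {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder()
        .master("local[2]")
        .appName("StructuredSocketWordCount")
        .getOrCreate();

    // A streaming Dataset representing text lines arriving on a socket (placeholder host/port)
    Dataset<Row> lines = spark.readStream()
        .format("socket")
        .option("host", "localhost")
        .option("port", 9999)
        .load();

    // The usual Dataset operations are available on the streaming Dataset
    Dataset<String> words = lines.as(Encoders.STRING())
        .flatMap((FlatMapFunction<String, String>) line ->
            Arrays.asList(line.split(" ")).iterator(), Encoders.STRING());
    Dataset<Row> wordCounts = words.groupBy("value").count();

    // The Spark SQL engine keeps the result updated as new data arrives on the stream
    StreamingQuery query = wordCounts.writeStream()
        .outputMode("complete")
        .format("console")
        .start();
    query.awaitTermination();
  }
}

The complete output mode re-emits the full counts table on every trigger; append and update are the other available output modes.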

Summary


This chapter focused on handling streaming data from sources such as Kafka, sockets, and the filesystem. We also covered various stateful and stateless transformations of DStreams, along with checkpointing of data. Checkpointing alone, however, does not guarantee fault tolerance, and hence we discussed other approaches to make a Spark Streaming job fault tolerant. We also talked about the transform operation, which comes in handy where operations of the RDD API are not available on DStreams. Spark 2.0 introduced Structured Streaming as a separate module; however, because of its similarity with Spark Streaming, we also discussed the newly introduced APIs of Structured Streaming.

In the next chapter, we will focus on introducing the concepts of machine learning and then move towards its implementation using Apache Spark MLlib libraries. We will also discuss some real-world problems using Spark MLlib.

