Reactive Programming With Java 9

By Tejaswini Mandar Jog

About this book

Reactive programming is an asynchronous programming model that helps you tackle the essential complexity that comes with writing event-driven applications.

Building applications with Reactive programming is not immediately intuitive to a developer who has been writing programs in the imperative paradigm. To tackle the essential complexity, Reactive programming uses the declarative and functional paradigms to build programs. This book sets out to make that paradigm shift easy.

This book begins by explaining what Reactive programming is, the Reactive manifesto, and the Reactive Streams specification. It uses Java 9 to introduce the declarative and functional paradigm, which is necessary to write programs in the Reactive style. It explains Java 9's Flow API, an adoption of the Reactive Streams specification. From this point on, it focuses on RxJava 2.0, covering topics such as creating, transforming, filtering, combining, and testing Observables. It discusses how to use Java's popular framework, Spring, to build event-driven, Reactive applications. You will also learn how to implement resiliency patterns using Hystrix. By the end, you will be fully equipped with the tools and techniques needed to implement robust, event-driven, Reactive applications.

Publication date:
September 2017


Chapter 1. Introduction to Reactive Programming

The world is on the web. Nowadays, everyone uses the internet: for banking, searching for information, online purchasing, marketing of products by companies, personal and professional communication via email, and to present themselves on public platforms such as Twitter. We use the internet for personal as well as professional purposes. Most organizations have websites, mobile applications, or some other presence on the internet. These organizations are wide-ranging: they offer different functionalities, their ways of working differ, and they operate in a variety of domains, each of which needs its applications to serve flexible, resilient, and robust functionalities. Application requirements, from the user's perspective as well as the developer's, have changed dramatically. The most noticeable change, one that even a casual user recognizes, relates to response time. Whether you are a technical person who understands terms such as request, response, and server, or someone who only knows how to browse and has no understanding of the technicalities of web programming, we all want answers on the web as quickly as possible. Today, no one wants to wait even seconds for a result; users demand a quick response. We also process a huge amount of data compared to applications developed in the past, and a lot of hardware-related changes have taken place. Today, instead of deploying an application on many servers, the preferred way is to deploy it on the cloud; a cloud-based deployment makes the application more maintainable by saving hours of maintenance. Now, the market demands responsive, loosely coupled, scalable, resilient, and message-driven systems. The traditional approach to programming can't fulfil these requirements.
The approach needs to be changed. We need to adapt to asynchronous, non-blocking reactive programming!

In this chapter, we will concentrate on the following points so as to get familiar with the terms, situations, and challenges in reactive programming:

  • Reactive Manifesto
  • Reactive Streams
  • Reactive Extensions
  • Reactive in a JVM
  • Reactive across JVMs

In the 90s, when Java started to get a grip on the market, many companies began developing applications with it. The basic reason was its promise: write once, run anywhere. The internet was still young and had comparatively few users. But in the early 2000s, conditions changed dramatically. Billions of users were now using the internet for different reasons, and experts anticipated further growth in those numbers. So applications, specifically web applications, had to handle enormous traffic. From the development perspective, this led to problems of scalability and of handling the pressure of so many requests. Along the way, users also became more demanding with regard to look and feel, and they demanded a quick response.

Consider an application dealing in pharmaceutical products. The application helps find distributor information in an area, the price of a product, product details, stock, and other relevant things. Let's consider one of its functionalities: getting the list of distributors. When users want to find the pharmaceutical distributors in their city or area, what will they do? Yes! They will either enter or select the name of the city to find the distributor list. After entering the data, they just need to click a button to initiate the request.

We have two options as developers:

  • Keep the user waiting until the result has been processed and the list of distributors fetched from the database
  • Allow the user to proceed and use other parts of the application, such as finding product details, checking the availability of products, and so on

Instead of discussing what a developer would do, let me ask: if you were the user, which of the preceding scenarios would you prefer? Obviously, I too would choose not to wait. Instead, I'd enjoy the other functionality of the application while it eventually processes the result. If you haven't recognized it yet, let me tell you that you have just preferred an asynchronous approach over the traditional synchronous approach. What is an asynchronous approach? If you are already aware of it, you can skip the following discussion and move on to the next section.


Asynchronous programming

The term asynchronous programming refers to parallel programming in which a unit of work or functionality runs separately from the main application without blocking it. It's a model that allows you to leverage the multiple cores of a system. Parallel programming uses multiple CPU cores to execute tasks, ultimately increasing the performance of the application. We know very well that each application has at least one thread, which we usually call the main thread. Other functionalities run in separate threads; we may call them child threads, and they keep notifying the main thread about their progress as well as failures.

The major benefit of the asynchronous approach is that the performance and responsiveness of the application improve drastically. Today's application market is very demanding. No one wants to be kept waiting while the application processes the thing they need an answer for. Whether it's a desktop application or the web, users prefer to continue using the application while it computes a complicated result or performs some expensive operation on a resource. This applies to every scenario in which developers don't want to block the user while the application is in use.
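To make this concrete, here is a minimal sketch of the asynchronous approach using Java's standard CompletableFuture. The findDistributors method and the city name are hypothetical stand-ins for a real database lookup:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    // Simulates an expensive lookup that should not block the caller
    static String findDistributors(String city) {
        return "Distributors in " + city;
    }

    public static void main(String[] args) {
        // Run the lookup on a worker thread; the main thread stays free
        CompletableFuture<String> result =
                CompletableFuture.supplyAsync(() -> findDistributors("Pune"));

        // The main thread can keep serving the user here ...
        System.out.println("Main thread keeps working");

        // join() collects the result once the background task completes
        System.out.println(result.join());
    }
}
```

By default, supplyAsync runs the task on the common ForkJoinPool, so the main thread is never blocked while the lookup runs.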

Now, you may be thinking of many questions: Do we need asynchronous programming? Should one choose asynchronous programming over concurrency? If yes, when do we need it? What are the ways? What are its benefits? Are there any flaws? What are the complications? Hold on! We will discuss them before we move ahead. So let's start with one interesting and well-known style of programming.


Concurrency

The term concurrency comes up whenever developers talk about performing more than one task at the same time. Though we keep saying that tasks run simultaneously, on a single core they actually do not. Concurrency takes advantage of CPU time slicing, where the operating system chooses a task and runs a part of it, then moves to the next task, leaving the first task in a waiting state.

The problems encountered in using a thread-based model are as follows:

  • Concurrency may block the processing, keeping the user waiting for the result, and also waste computational resources.
  • Adding threads can improve the performance of an application up to a point; beyond it, more threads degrade performance.
  • Starting and terminating many threads at the same time is an overhead.
  • Sharing limited resources among multiple threads hampers performance.
  • If data is being shared between multiple threads, then it's complicated to maintain its state. Sometimes, it is maintained by synchronization or using locks, which complicates the situation.

Now, you may have understood that concurrency is a good feature, but it's not a way to achieve parallelism. Parallelism is a feature that runs a part of the task or multiple tasks on multiple cores at the same time. It cannot be achieved on a single core CPU.

Parallel programming

Basically, a CPU executes instructions, usually one after another. Parallel programming is a style of programming in which the execution of a process is divided into small parts that run at the same time, making better use of multi-core processors. These small parts, generally called tasks, are independent of each other, and even their order doesn't matter. A single CPU has limits on performance and available memory, which makes parallel programming helpful in solving larger problems faster by utilizing resources efficiently.
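As an illustrative sketch (not from the book), a Java parallel stream splits an independent computation across the available cores; because the chunks are order-independent, the result matches the sequential version:

```java
import java.util.stream.LongStream;

public class ParallelDemo {
    // Sums 1..n sequentially on one core
    static long sequentialSum(long n) {
        return LongStream.rangeClosed(1, n).sum();
    }

    // Sums 1..n with the work split across the available cores;
    // the chunks are independent, so their order does not matter
    static long parallelSum(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println(sequentialSum(1_000_000)); // 500000500000
        System.out.println(parallelSum(1_000_000));   // same result, computed on multiple cores
    }
}
```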

In any application, we have two things to complete as developers. We need to design the functionalities that keep the flow of the application going, helping users use the application smoothly. Every application uses data to serve user requests. Thread-based models only help with the functional flow of the application; to process the data the application needs, we need something more. But what? We need data flow computation: a model through which static or dynamic data can flow with ease, and in which changes in the data are propagated through the data flow. It is reactive programming that fulfils this requirement.


Streams

We are all very familiar with, and frequently use, the Collections framework to handle data. Though this framework enables a user to handle data quite efficiently, the main complexity lies in using loops and performing repeated checks. It also doesn't facilitate efficient use of multi-core systems. The introduction of Streams helped to overcome these problems. Java 8 introduced Streams as a new abstraction layer that enables the processing of data in a declarative manner.

A Stream is a series of elements emitted over a period of time. Are Streams the same as an array? In a way, yes! They are like an array, but not an array; they have distinct differences. The elements of an array are arranged sequentially in memory, while the elements of a Stream are not! Every Stream has a beginning as well as an end.

Let's make it simpler by discussing a mathematical problem: calculating the average of the first six elements, first of an array and then of a Stream. Let's take the array first. What will we, as developers, do to calculate the average? Yes, we will fetch each element of the array and add them up. Once we have the sum, we will apply the formula to calculate the average. Now, let's consider a Stream. Should we apply the same logic we just applied to the array? Actually, no! Before writing the logic, we must understand a very important thing: not every element of the Stream may have been visited yet. Also, the elements of a Stream may not all be emitted at the same speed. So at the moment the calculation is done, some elements may not have arrived at all. The calculated average is therefore not a final value; it is an answer over values in motion. Confused?
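For the array case, the whole computation can be written declaratively in a few lines; a minimal sketch (the sample values are arbitrary):

```java
import java.util.stream.IntStream;

public class AverageDemo {
    // Average of the elements of a fixed array: every element is known up front,
    // so the result is final the moment it is computed
    static double averageOfArray(int[] values) {
        return IntStream.of(values).average().orElse(0.0);
    }

    public static void main(String[] args) {
        int[] firstSix = {4, 8, 15, 16, 23, 42};
        System.out.println(averageOfArray(firstSix)); // 18.0
    }
}
```

With a live Stream, by contrast, any such average is only a snapshot of the elements that have arrived so far.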

Have you ever sat on the banks of a river or flowing water and put your legs in it? Most of us have at least once enjoyed this peaceful experience. The water moves around our legs and moves ahead. Have you ever observed the same flowing water passing through your legs twice? Obviously, not! It's the same when it comes to Streams.

Features of Streams

We just discussed Streams; now let's discuss the features offered by Streams.

The sequential elements

A Stream provides a sequence of elements of a particular type on demand.

Source of a stream

A Stream needs a source of elements. It can take Collections, arrays, files, or any other I/O resource as its input.

Performing operations

Streams strongly support performing various operations, such as filtering, mapping, matching, finding, and many more, on their elements.

Performing automatic operations

Streams don't need explicit iteration to operate on the elements of the source; they perform the iteration implicitly.
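A short sketch of these operations on a hypothetical list of city names; note that there is no explicit loop anywhere:

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamOpsDemo {
    // Filters and transforms city names without any explicit loop;
    // the Stream iterates over the source implicitly
    static List<String> citiesStartingWith(List<String> cities, String prefix) {
        return cities.stream()
                .filter(c -> c.startsWith(prefix)) // operation: filtering
                .map(String::toUpperCase)          // operation: mapping
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> cities = List.of("Pune", "Paris", "Mumbai", "Prague");
        System.out.println(citiesStartingWith(cities, "P")); // [PUNE, PARIS, PRAGUE]
    }
}
```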

Reactive Streams

Streams help in handling data. Today's application demands are oriented more toward live or real-time data operations. Today's world doesn't want just a collection of data; most of the time, collection is followed by modification and filtration. This processing takes time, leading to performance bottlenecks. Streams help process this huge amount of data and provide a quick response; in other words, the ever-changing data in motion needs to be handled. The Reactive Streams initiative was started in late 2013 by engineers from Pivotal, Netflix, and Typesafe. Reactive Streams is a specification developed for library developers: library developers write code against the Reactive Streams APIs, while business developers work with implementations of the Reactive Streams specification.

Business developers have to concentrate on the core business logic required for the application. Reactive Streams provide a level of abstraction to concentrate on the business logic instead of getting involved in low-level plumbing to handle streams.

What do Reactive Streams address?

We have just discussed Reactive Streams. The following are the features around which they are woven.


Today's application users don't like to wait for a response from the server. They don't care how the request is processed--collecting the required information and then generating the response. They are just interested in getting a quick response without being blocked.

Asynchronous systems decouple the components of a system, allowing users to explore and use other parts of the application instead of wasting their time waiting for a response. Asynchrony enables parallel programming: computations can collaborate over the network and use the multiple cores of a CPU on a single machine to enhance performance. In the traditional approach, developers execute one function after another, and the time taken by the complete operation is the sum of the times taken by the individual functions. In the asynchronous approach, operations are carried out in parallel, so the total time taken is that of the longest operation, not the sum of all the operations. This ultimately enhances the performance of the application and generates a quicker response.

Back pressure

The Reactive Streams specification defines a model for back pressure. The elements in the Streams are produced by the producer at one end, and the elements are consumed by the consumer at the other end. The most favorable condition is where the rate at which the elements are produced and consumed is the same. But, in certain situations, the elements are emitted at a higher rate than they are consumed by the consumer. This scenario leads to the growing backlog of unconsumed elements. The more the backlog grows, the faster the application fails. Is it possible to stop the failure? And if yes, how to stop this failure? We can certainly stop the application from failing. One of the ways to do this is to communicate with the source or the publisher to reduce the speed with which elements are emitted. If the speed is reduced, it ultimately reduces the load on the consumer and allows the system to overcome the situation.

Back pressure plays a vital role as a mechanism that facilitates a gradual response to increasing load instead of collapsing under it. Response times may increase, leading to graceful degradation, but back pressure ensures resilience and allows the system to redistribute the increasing load without failure.

The elements are published by the publisher and collected by the subscriber, or consumer, downstream. The consumer signals its demand upstream, which assures the publisher that it is safe to push the demanded number of elements to the consumer. The signal is sent to the publisher asynchronously, so the subscriber is free to send further requests for more elements--a pull strategy.
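Java 9's Flow API makes this pull-based demand explicit. The following is a minimal sketch (using only the JDK's SubmissionPublisher) in which the subscriber requests one element at a time, so the publisher can never outrun it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackPressureDemo {
    // A subscriber that pulls one element at a time: the publisher only
    // pushes as many elements as the subscriber has requested
    static class OneAtATimeSubscriber implements Flow.Subscriber<Integer> {
        final List<Integer> received = new ArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);
        private Flow.Subscription subscription;

        public void onSubscribe(Flow.Subscription s) {
            subscription = s;
            s.request(1); // initial demand: a single element
        }
        public void onNext(Integer item) {
            received.add(item);
            subscription.request(1); // ask for the next element only when ready
        }
        public void onError(Throwable t) { done.countDown(); }
        public void onComplete() { done.countDown(); }
    }

    static List<Integer> publishAndCollect(List<Integer> items) {
        OneAtATimeSubscriber subscriber = new OneAtATimeSubscriber();
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            items.forEach(publisher::submit); // submit blocks if the buffer is full
        } // closing the publisher signals onComplete downstream
        try {
            subscriber.done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return subscriber.received;
    }

    public static void main(String[] args) {
        System.out.println(publishAndCollect(List.of(1, 2, 3, 4, 5))); // [1, 2, 3, 4, 5]
    }
}
```

If the subscriber never called request(1) again in onNext, the publisher's buffer would fill and submit would block, which is exactly the back pressure described above.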

Reactive Programming

The term reactive, or Reactive Programming (RP), has been in use at least since the paper The Reactive Engine was published by Alan Kay in 1969. But the thinking behind Reactive Programming owes much to Conal Elliott and Paul Hudak, who published the paper Functional Reactive Animation in 1997. Their work later developed into what is referred to as functional reactive programming (FRP). FRP is all about behavior: how behavior changes and interacts with events. We will discuss FRP more in the next chapter; for now, let's try to understand Reactive Programming itself.

RP designs code in such a way that a problem is divided into many small steps, where each step or task can be executed asynchronously without blocking the others. Once each task is done, the tasks are composed together to produce a complete flow, without blocking on input or output.

Reactive Manifesto

RP matches the requirements of an ever-changing market and provides a basis for developing scalable, loosely coupled, flexible, and interactive applications. A reactive application focuses on systems that react to events, to varying loads, and to multiple users. It also reacts effectively under all conditions, whether it succeeds or fails to process a request. A Reactive system supports parallel programming to avoid blocking resources, in order to utilize the hardware to its fullest. This is the opposite of the traditional way of programming: Reactive Programming won't keep resources busy and allows them to be used by other components of the system. Reactive systems ensure they provide features such as responsiveness, resilience, elasticity, scalability, and a message-driven approach. Let's discuss these features one by one.


Responsiveness

Responsiveness is the most important feature of an application; it ensures quick and effective responses with consistent behavior. This consistent behavior ensures the application will handle errors as well.

Applications that are slow to respond are regarded by users as malfunctioning, and soon users start ignoring them. Responsiveness is a major requirement in today's applications; it optimizes resource utilization and prepares the system for the next request. When we open a web page that has many images, the images usually take longer to load, but the textual content of the page is displayed before them. What just happened? Instead of keeping users waiting for the whole page, we let them use the information and refer to the images once downloaded. Such an application is said to be more responsive.


Resilience

A resilient application stays responsive even in the event of a failure. Let's consider a situation: we have requested data from the server, and while the request is being processed, the power supply fails. Obviously, all the resources, and the data coming from the server as a response, suddenly become unavailable; we need to wait until the power is restored and the server starts taking load again. But what if the server has an alternative power supply for when the main supply fails? Then no one has to wait: the alternative supply keeps the server running and enables it to work without failure, simply because the alternative replaces the main power supply. It's a kind of clustering. In the same way, an application may be divided into components that are isolated from each other, so that even if one part of the system fails, it recovers without any kind of compromise, providing a complete application experience.


Elasticity

Reactive programs react to changes in the input by allocating resources as per requirements, so as to achieve high performance. A reactive system achieves this elasticity through scaling algorithms. An elastic system has the provision to allocate more resources to fulfil increasing demand: whenever the load increases, the application can be scaled up by allocating more resources, and when demand decreases, it removes them so as not to waste resources.

Let's consider a very simple example. One day, a few of your friends visit your house without prior intimation. You will be surprised and will welcome them happily. Being a good host, you will offer them coffee or cold drinks and some snacks. How will you serve drinks? Normally, we'd use some of the coffee mugs or glasses that we always keep to one side so that we can use them if required. After the use, will you keep them again with your daily crockery? No! We will again keep them aside. This is what we call elasticity. We have the resources, but we will not use them unless required.


Message-driven

Reactive systems use asynchronous message passing between components for communication, to achieve isolation and loose coupling without blocking resources. This facilitates applications that are easy to extend, maintainable, and flexible as well.


Scalability

The market is changing day by day, and so are client demands. The workload on an application cannot be fully predicted. Developers need an application that can handle increasing pressure, and when it can't, they need an application that is easily scalable. Scalability is not only in terms of code; the application must be able to adopt new hardware as well. Is this the same as elasticity? No, it's not; don't confuse the two terms.

We'll consider the same example we discussed for elasticity, but with a change. Your friends inform you that they are coming this weekend. Now you are well aware and want to be prepared for the party. Suddenly you realize you don't have enough glasses to serve the soft drinks. What will you do? Very simple: you will buy some disposable glasses. What did you do? You scaled up by adding more resources.

Benefits of Reactive Programming

The benefits of RP are as follows:

  • It increases the performance of the application
  • It improves the utilization of computing resources on multi-core hardware
  • It provides a more maintainable approach to asynchronous programming
  • It includes back pressure, which plays a vital role in avoiding over-utilization of resources

Reactive Extensions

Reactive Extensions are a set of tools that allow operations on sequential data without considering whether the data is synchronous or asynchronous. They also provide a set of sequence operators that can operate on each item in the sequence. Many libraries implement the Reactive Streams specification; libraries that support Reactive Programming include Akka Streams, Reactor, RxJava, Ratpack, and Vert.x.


RxJava

ReactiveX has been implemented as a library for languages such as JavaScript, Ruby, C#, Scala, C++, Java, and many more. RxJava is the Reactive Extension for the JVM, specifically for Java. It is open source; it was created by Netflix and published in 2014 under the Apache 2.0 license. It was created to simplify concurrency on the server side, and its main goal is to enable a client to invoke a heavy request that will be executed in parallel on the server.

Now we are well aware that in Reactive Programming, the consumer reacts to incoming data. Reactive programming propagates event changes to the registered observers. The following are the building blocks of RxJava:

  • Observable: The observable represents the source of the data. The observable emits elements; there is no fixed number of elements to be emitted. The observable may emit its elements successfully or terminate with an error if something goes wrong. At any time, an observable can have any number of subscribers.
  • Observer or Subscriber: A subscriber listens to an observable and consumes the elements it emits.
  • Methods: A set of methods enables developers to handle, modify, and compose the data.

  The following are the methods used in RxJava programming:

    • onNext(): When a new item is emitted from the observable, the onNext() method is called on each subscriber
    • onComplete(): When the observable finishes the data flow successfully, the onComplete() method gets called
    • onError(): The onError() method will be called in situations where the observable finishes data emission with an error
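To see exactly when each of these callbacks fires, here is a deliberately minimal, synchronous stand-in for an observable written in plain Java. MiniObservable is a hypothetical helper for illustration only, not the RxJava API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class MiniObservable<T> {
    // A toy, synchronous stand-in for an observable, just to show the
    // callback protocol: onNext per element, then onComplete or onError
    private final List<T> items;

    MiniObservable(List<T> items) { this.items = items; }

    void subscribe(Consumer<T> onNext, Runnable onComplete, Consumer<Throwable> onError) {
        try {
            for (T item : items) {
                onNext.accept(item);   // called once per emitted element
            }
            onComplete.run();          // called when the flow finishes successfully
        } catch (RuntimeException e) {
            onError.accept(e);         // called if emission ends with an error
        }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        new MiniObservable<>(List.of("a", "b")).subscribe(
                item -> log.add("onNext:" + item),
                () -> log.add("onComplete"),
                error -> log.add("onError"));
        System.out.println(log); // [onNext:a, onNext:b, onComplete]
    }
}
```

A real RxJava Observable adds asynchrony, operators, and back pressure on top of this protocol, but the ordering of the callbacks is the same.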

Advantages of RxJava

The following are a few of the advantages of RxJava:

  • It allows a chain of asynchronous operations
  • To keep track of state, we usually need to maintain a counter or some variable holding the previously calculated value; in RxJava, however, we don't need to keep track of state ourselves
  • It has a predefined way to handle errors that occur between the processes

Project Reactor

Spring 5 supports Reactive Programming to make applications more responsive and better performing. Spring added Spring Web Reactive and reactive HTTP as new components, along with support for the Servlet 3.1 specification. Pivotal, the company behind Spring, developed Reactor as a framework for asynchronous programming. It enables the writing of high-performance applications that work asynchronously using the event-driven programming paradigm. Reactor uses the reactor design pattern, in which requests received from clients are distributed to different event handlers for processing.

Reactor is a Reactive Programming foundation, specifically for the JVM, and is fully non-blocking. Project Reactor aims to address a very important drawback of the traditional approach by supporting asynchronicity, and it provides an efficient way to handle back pressure. It also focuses on the following aspects, making it more useful for Reactive Programming:

  • It facilitates the use of the rich operators to manipulate a data flow
  • Data does not start flowing until someone subscribes to it
  • It provides a more readable way of coding so as to make it more maintainable
  • It provides a strong mechanism that ensures signalling by the consumer to the producer about what speed it should produce the elements at

The features of Project Reactor

Project Reactor is delivered as a library that focuses on the Reactive Streams specification, targets Java 8, and also supports a few features offered by Java 9. The library exposes operators familiar from the RxJava API and builds on the Reactive Streams core interfaces: Publisher, Subscriber, Subscription, and Processor.

It introduces reactive types that implement Publisher, with the following building blocks:

  • Mono: Mono represents a single element or an empty result. It's a specialized kind of Publisher<T> that emits at most one element. It can also be used to represent an asynchronous process with no value, denoted as Mono<Void>.
  • Flux: Flux, as you may have guessed, represents a sequence of 0...n elements. A Flux<T> is a kind of Publisher<T> that represents a sequence of 0 to n emitted elements, ending in either success or an error. It has the three similar signalling methods, namely onNext(), onComplete(), and onError(). Let's consider a few elements emitted by a Flux and transformed by operators, as shown in the following figure:
  • Operators: The elements emitted by the publisher may undergo transformations before being consumed by the consumer. Operators help in handling, transforming, and filtering the elements.

Akka Streams

A huge amount of data is consumed on the internet daily, through uploading as well as downloading. The growing demand for the internet also demands the handling of large amounts of data over the network. Today, data is not only handled but also analyzed and computed for various reasons, and this heavy processing lowers the performance of an application. Big Data rests on the principle of processing such enormous data as a Stream on single or multiple CPUs.

Akka strictly follows the Reactive Manifesto. It is a JVM-based toolkit and an open source project created by Jonas Bonér, built around an actor-based model. It was developed with one aim in mind: providing a simplified construct for developing distributed, concurrent applications. Akka is written in Scala and provides language bindings for both Java and Scala.

The following are the principles implemented by Akka Streams:

  • Akka doesn't provide business logic but provides strong APIs
  • It provides an extensive model for distributed, bounded Stream processing
  • It provides libraries offering users reusable pieces

Akka handles concurrency with an actor-based model. In an actor-based system, everything is designed around actors that handle the concurrency model, just as everything is an object in object-oriented languages. The actors within the system pass messages to share information. I have used the word actor a number of times, but what does actor mean?


Actors

In this context, an actor is an object in the system that receives messages and takes action to handle them accordingly. It is loosely coupled from the source that generates the messages. The only responsibility of an actor is to recognize the type of message it receives and take action accordingly. When a message is received, the actor takes one or more of the following actions:

  • Executing operations to perform calculations, manipulation, and persistence of data
  • Forwarding the message to some other actor
  • Creating a new actor and then forwarding the message to it

In certain situations, the actor may choose not to take any action.
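The message-handling loop described above can be sketched in plain Java with a mailbox and a single worker thread. This is a toy illustration of the actor idea, not the Akka API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniActor {
    // A toy actor: a mailbox plus a single thread that processes messages
    // one at a time; senders never touch the actor's state directly
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final List<String> processed = new ArrayList<>();
    private final Thread worker;

    MiniActor() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    String message = mailbox.take();
                    if (message.equals("stop")) return;  // poison pill ends the actor
                    processed.add("handled:" + message); // the only place state changes
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();
    }

    void tell(String message) { mailbox.add(message); } // asynchronous, fire-and-forget send

    List<String> stopAndGet() {
        mailbox.add("stop");
        try {
            worker.join(); // wait for the mailbox to drain
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed;
    }

    public static void main(String[] args) {
        MiniActor actor = new MiniActor();
        actor.tell("ping");
        actor.tell("pong");
        System.out.println(actor.stopAndGet()); // [handled:ping, handled:pong]
    }
}
```

Because only the worker thread mutates the actor's state, no locks or synchronized blocks are needed; this is the isolation that message passing buys.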


Ratpack

Ratpack is an easy-to-use, loosely coupled, and strongly typed set of Java libraries for building fast, efficient, and evolvable HTTP applications. It is built on an event-driven networking engine that gives better performance, and it favors an asynchronous, non-blocking API over the traditional blocking Java APIs. The best thing about Ratpack is that any JVM build tool can be used to build a Ratpack-based application.

Aim of Ratpack

Ratpack aims to do the following:

  • To create applications that are fast, scalable, and efficient with non-blocking programming
  • To allow the creation of complex applications with ease and without compromising its functionalities
  • To allow the creation of applications that can be tested easily and thoroughly


Quasar

The Quasar framework provides the basis for developing highly responsive web applications and hybrid mobile applications. It enables developers to create and manage projects by providing a CLI. It provides thorough, well-defined project templates to make things easier for beginners. It defaults to Stylus for CSS and bases its grid system on flexbox. Quasar enables developers to write mobile applications that need web storage, cookie management, and platform detection.


MongoDB

The official MongoDB Reactive Streams Java driver provides asynchronous Stream processing with non-blocking back pressure for MongoDB. The driver implements the Reactive Streams API for reactive Stream operations.

Along with MongoDB, there is Redis, an open source, in-memory data structure store used as a database, cache, and message broker. We also have Cassandra, a highly scalable, high-performance distributed database designed to handle huge amounts of data across multiple servers.


Slick

Slick is a library for Scala that facilitates querying and accessing data. It allows developers to work with stored data much as they would with a Scala collection, and to query a database in Scala instead of SQL. It has a special compiler that can generate code tailored to the underlying database.


Vert.x

Vert.x provides a library of modular components for writing reactive applications in many different languages, such as Java, JavaScript, Groovy, Scala, Ruby, and many more. It can also be used without a container. Being highly modular, Vert.x gives developers a free hand to use only the components the application requires instead of all of them.

The components of Vert.x

The following are the core components of Vert.x and their uses:

  • Core: This provides low-level functionality to support HTTP, TCP, file systems, and many such features.
  • Web Client: This provides the components that enable developers to write advanced HTTP clients with ease.
  • Data access: This provides many different asynchronous clients, enabling you to store a range of data for an application.
  • Mail: This provides a simple library for writing SMTP mail clients, which makes it easy to write applications with the functionality to send mail.
  • Event Bus Bridge: This lets developers interact with Vert.x from any application.
  • Authentication and authorization: This provides the APIs that are useful for the authentication and authorization process.
  • Reactive: This provides components such as Vert.x Rx and Vert.x Sync that enable developers to build more reactive applications.

Reactive Streams in Java 9

In JDK 9, the Flow API corresponds to the Reactive Streams specification. JEP 266 introduced the java.util.concurrent.Flow class, whose nested interfaces provide for publication and subscription. Reactive Streams is a standard specification for building Stream-oriented libraries for reactive, non-blocking, JVM-based systems. Such libraries provide the following:

  • The processing of an enormous number of elements
  • The processing of the elements sequentially
  • The asynchronous passing of elements between components
  • A mechanism to ensure proper communication between the source and the consumer of the elements, so as to avoid an extra processing burden on the consumer

Standard Reactive Streams have the following two components:

  • The Technology Compatibility Kit (TCK)
  • API components

The Technology Compatibility Kit (TCK)

TCK is the standard tool suite that enables developers to test the implementations.

API components

The java.util.concurrent package provides the APIs that developers use to write reactive applications, and it provides interoperability among implementations. The following are the API components that provide the Reactive Streams implementation:

  • interface Publisher<T>: This publishes a sequence of elements according to the demand coming in from its Subscribers.
  • interface Subscriber<T>: Every Subscriber demands elements to consume from the Publisher.
  • interface Subscription: A Subscription represents the communication link between the Publisher and the Subscriber. This communication happens in both situations: when elements are requested by the Subscriber and when the Subscriber no longer requires any elements.
  • interface Processor<T, R>: A Processor represents a processing stage, which acts as both a Subscriber and a Publisher and must obey the contracts of both.

The following diagram describes the flow and overall working of the reactive streams:

The preceding diagram describes the publishing and subscribing of the items as follows:

  • The Publisher publishes the items and pushes them to the Processor. The Processor, in turn, pushes the items to the Subscriber using the Subscriber::onNext() invocation.
  • The Subscriber requests items from the Processor, and the Processor requests items from the Publisher, using the Subscription::request() method invocation.
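JDK 9 also ships SubmissionPublisher, a ready-made Publisher, so the request/onNext cycle above can be tried out with nothing but the standard library. The following is a minimal sketch; the FlowDemo class and its publishAndCollect helper are our own illustration, not part of the JDK:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// A minimal Flow API example: SubmissionPublisher (JDK 9+) pushes items to a
// Subscriber, which controls the pace via Subscription::request (back pressure).
public class FlowDemo {
    static class CollectingSubscriber implements Flow.Subscriber<Integer> {
        final List<Integer> received = new ArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);
        private Flow.Subscription subscription;

        @Override public void onSubscribe(Flow.Subscription s) {
            subscription = s;
            s.request(1);                       // ask for the first element
        }
        @Override public void onNext(Integer item) {
            received.add(item);
            subscription.request(1);            // back pressure: one at a time
        }
        @Override public void onError(Throwable t) { done.countDown(); }
        @Override public void onComplete() { done.countDown(); }
    }

    public static List<Integer> publishAndCollect(int count) throws InterruptedException {
        CollectingSubscriber subscriber = new CollectingSubscriber();
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            for (int i = 1; i <= count; i++) {
                publisher.submit(i);            // publish the elements
            }
        }                                       // close() triggers onComplete
        subscriber.done.await();                // wait for asynchronous delivery
        return subscriber.received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publishAndCollect(3));
    }
}
```

The delivery happens asynchronously on a separate thread, which is why the code waits for onComplete before reading the collected elements.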

Reactive in and across JVM

We are all well aware of threads in Java for handling concurrency. We also know that the Thread class and the Runnable interface both facilitate achieving concurrency. I will not go into further detail on how to create a Thread and the precautions to take while dealing with it. Although developers can create and maintain threads themselves, doing so has the following disadvantages:

  • Each time we create a new thread, it causes some performance overhead
  • When the application has too many threads, it hampers the performance because the CPU needs to switch between the created threads
  • Controlling the number of threads may cause problems, such as running out of memory

The java.util.concurrent package provides improved performance when handling threads. Java 5 introduced the Executor framework to manage a pool of worker threads. The work queue in a thread pool holds the tasks waiting for their turn to execute. The Executor framework has the following three main components:

  • The Executor interface that supports the launching of new tasks
  • The ExecutorService interface, which is a child of Executor, that facilitates the managing of the life cycle of the individual tasks and of the executor
  • ScheduledExecutorService, which is a child of ExecutorService, supports the periodic execution of tasks

ExecutorService facilitates task delegation for an asynchronous execution. The current working thread delegates the task to ExecutorService and continues with its own execution without being bothered about the task it delegated.

We can create ExecutorService in many different ways, as shown here:

    ExecutorService service1 = Executors.newSingleThreadExecutor();
    ExecutorService service2 = Executors.newFixedThreadPool(5);
    ExecutorService service3 = Executors.newScheduledThreadPool(5); 

Once the ExecutorService is created, it's time for task delegation. ExecutorService provides different ways to accomplish this, as discussed in the following list:

  • execute(Runnable): This method takes an instance of the Runnable object and executes it asynchronously.
  • submit(Runnable): This method also takes an instance of the Runnable object; however, it returns an object of Future.
  • submit(Callable): This method takes an object of Callable. The call() method of Callable can return a result, which can be obtained via the object of Future.
  • invokeAny(...): This method accepts a collection of Callable objects. As opposed to submit(Callable), it doesn't return an object of Future; instead, it returns the result of one of the Callable objects that completed successfully.
  • invokeAll(...): This method accepts a collection of Callable objects. It returns a list of Future objects, one per task.
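The submit(Callable) variant can be sketched as follows. The DelegationDemo class and its sumAsync helper are illustrative names of our own; the point is that the calling thread delegates the computation and later collects the result through the Future:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Delegating a Callable to an ExecutorService: submit() returns a Future,
// and Future.get() blocks until the asynchronously computed result is ready.
public class DelegationDemo {
    public static int sumAsync(List<Integer> numbers)
            throws InterruptedException, ExecutionException {
        ExecutorService service = Executors.newFixedThreadPool(2);
        try {
            Callable<Integer> task =
                    () -> numbers.stream().mapToInt(Integer::intValue).sum();
            Future<Integer> future = service.submit(task); // delegate and continue
            return future.get();                           // wait for the result
        } finally {
            service.shutdown();                            // always release the pool
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumAsync(List.of(1, 2, 3, 4)));
    }
}
```

Between submit() and get(), the calling thread is free to do other work; only the call to get() blocks.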

We just discussed the ExecutorService interface; now let's discuss its implementation.


ThreadPoolExecutor implements the ExecutorService interface. ThreadPoolExecutor executes each submitted task using one of the threads obtained from its internal thread pool. The number of threads in the pool can vary between the bounds set by the corePoolSize and maximumPoolSize parameters.
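Constructing a ThreadPoolExecutor directly makes these two parameters explicit instead of hiding them behind an Executors factory method. The PoolSizingDemo class and its specific sizes below are illustrative choices, not prescribed values:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Constructing a ThreadPoolExecutor directly, so that corePoolSize and
// maximumPoolSize are explicit rather than hidden behind an Executors factory.
public class PoolSizingDemo {
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                              // corePoolSize: threads kept alive
                4,                              // maximumPoolSize: upper bound
                60, TimeUnit.SECONDS,           // idle time before extra threads die
                new LinkedBlockingQueue<>());   // work queue holding waiting tasks
                // note: with an unbounded queue like this, the pool never
                // actually grows past corePoolSize
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```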


ScheduledThreadPoolExecutor implements the ScheduledExecutorService interface and, as the name suggests, it can schedule tasks to run after a particular time interval. The scheduled tasks are executed by a worker thread, not by the thread that handed the task to the ScheduledThreadPoolExecutor.
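Scheduling works much like submitting, except that a delay is attached. A minimal sketch follows; the ScheduleDemo class, the delayedGreeting helper, and the 100 ms delay are our own illustrative choices:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Scheduling a task to run after a delay; schedule() returns a ScheduledFuture
// whose get() blocks until the delayed task has produced its result.
public class ScheduleDemo {
    public static String delayedGreeting() throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        try {
            ScheduledFuture<String> future =
                    scheduler.schedule(() -> "done", 100, TimeUnit.MILLISECONDS);
            return future.get();               // waits for the worker thread
        } finally {
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(delayedGreeting());
    }
}
```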

We delegate tasks to multiple threads, which obviously improves performance; however, one problem remains: an idle worker thread cannot take over subtasks that are queued up for another, busy thread. The Fork/Join framework addresses this problem with its work-stealing algorithm.

The Fork/Join framework

Java 7 came up with the Fork/Join framework. It provides an alternative to the traditional thread pool or thread/Runnable approach for computing a task in parallel. It provides a two-step mechanism:

  1. Fork: The first step is to split the main task into smaller subtasks, which can be executed concurrently. Each split subtask can be executed in parallel, either on the same or on different CPUs.
  2. Join: Once the subtasks finish execution, the result obtained from each of the subtasks is merged into one; the forking task waits for its subtasks to complete before joining their results.

The following code creates a ForkJoinPool; tasks submitted to it can be split into subtasks, executed in parallel, and their results combined:

    ForkJoinPool forkJoinPool = new ForkJoinPool(4);

In this code, the number 4 represents the desired level of parallelism, typically the number of available processors.

We can learn the number of available processors using the following code:

    int processors = Runtime.getRuntime().availableProcessors();
The Collection framework's parallel Streams internally use the common ForkJoinPool to support parallelism. We can obtain it as follows:

    ForkJoinPool pool = ForkJoinPool.commonPool();  

Alternatively, we can obtain it this way:

    ForkJoinPool pool = new ForkJoinPool(10);
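To see fork and join in action, a task is typically written as a RecursiveTask. The SumTask class below is a small sketch of our own (not from the JDK) that sums an array by splitting the range in half until a threshold is reached:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// A classic Fork/Join sketch: a RecursiveTask splits its range in half (fork),
// computes the halves in parallel, and merges their results (join).
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] numbers;
    private final int start, end;

    public SumTask(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {         // small enough: compute directly
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;
        }
        int mid = (start + end) / 2;
        SumTask left = new SumTask(numbers, start, mid);
        SumTask right = new SumTask(numbers, mid, end);
        left.fork();                            // run the left half asynchronously
        long rightSum = right.compute();        // compute the right half here
        return left.join() + rightSum;          // merge the two partial results
    }

    public static void main(String[] args) {
        long[] numbers = new long[10_000];
        for (int i = 0; i < numbers.length; i++) numbers[i] = i + 1;
        ForkJoinPool pool = new ForkJoinPool(4);
        System.out.println(pool.invoke(new SumTask(numbers, 0, numbers.length)));
        pool.shutdown();
    }
}
```

Note the idiom of forking one half and computing the other in the current thread, which keeps the worker busy instead of merely waiting.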

We discussed various ways to achieve parallelism while working with Reactive Programming. The combination of the classes we discussed and the Flow APIs helps in achieving non-blocking reactive systems.



Summary

In this chapter, we discussed the traditional approach of synchronous application development, its drawbacks, and the Reactive Programming approach, which enhances application performance with the help of parallel programming. We also discussed asynchronous Reactive Programming and its benefits in depth. The whole story starts with the problem of handling enormous amounts of data flowing to and fro. We discussed applications that must not only handle this data but also process and manipulate it. Streams offer a handy, abstract way to handle huge amounts of data; however, they are not the ultimate solution, as we are dealing not just with data, but with data in motion. This is where Reactive Streams play a vital role. We also discussed the Reactive Manifesto and its features. We continued the discussion with libraries such as RxJava, Slick, and Project Reactor as extensions for Reactive Programming. We also briefly discussed the interfaces involved in the reactive APIs of Java 9.

In the next chapter, we will discuss the application development approach in depth: how the approach has changed and why adapting to the new approach is important. We will look at examples to understand concepts such as declarative programming, functional programming, composition, and many more.

About the Author

  • Tejaswini Mandar Jog

    Tejaswini Mandar Jog is a passionate and enthusiastic Java trainer. She has more than nine years of experience in the IT training field, specializing in Java, J2EE, Spring, and relevant technologies. She has worked with many renowned corporate companies on training and skill enhancement programs. She is also involved in the development of projects using Java, Spring, and Hibernate. Tejaswini has written two books. In her first book, Learning Modular Java Programming, the reader explores the power of modular programming to build applications with Java and Spring. The second book, Learning Spring 5.0, explores building an application using the Spring 5.0 framework with the latest modules such as WebFlux for dealing with reactive programming.

