Java EE 8 and Angular

By Prashant Padmanabhan

About this book

The demand for modern, high-performing web enterprise applications is growing rapidly. A basic HTML frontend is no longer enough to meet customer demands. This book will be your one-stop guide to building outstanding enterprise web applications with Java EE and Angular. It will teach you how to harness the power of Java EE to build sturdy backends while applying Angular on the frontend. Your journey to building modern web enterprise applications starts here!

The book starts with a brief introduction to the fundamentals of Java EE and all the new APIs offered in the latest release. Armed with the knowledge of Java EE 8, you will go over what it's like to build an end-to-end application, configure a database connection for JPA, and build scalable microservices using RESTful APIs running in Docker containers. Taking advantage of the Payara Micro capabilities, you will build an Issue Management System, which will have various features exposed as services using the Java EE backend. With a detailed coverage of Angular fundamentals, the book will expand the Issue Management System by building a modern single-page application frontend. Moving forward, you will learn to fit both pieces together, that is, the frontend Angular application with the backend Java EE microservices. As each unit in a microservice promotes high cohesion, you will learn different ways in which independent units can be tested efficiently.

Finishing off with concepts on securing your enterprise applications, this book is a hands-on guide for building modern web applications.

Publication date:
January 2018
Publisher
Packt
Pages
348
ISBN
9781788291200

 

What's in Java EE 8?

Java in general has enjoyed a successful run in the enterprise space for nearly two decades, but we all understand that being successful today doesn't guarantee success tomorrow. Businesses have become more demanding compared to how things used to be 10 years ago. The need for flexible, robust, and scalable solutions delivered over the internet via the web and mobile is only growing. While Java addresses most of these needs, change is inevitable for it to adapt to newer challenges. Fortunately for Java, with a large community of developers around it, there are a plethora of tools, libraries, and architectural patterns being established to deliver solutions for these business complexities. Java EE standardizes these solutions and allows Java developers to leverage their existing skills in building enterprise applications.

Just like a hammer can't be the solution for every problem, using the same technology stack can't be the solution to every business challenge. With the web becoming faster, there's been a rise in client-side frameworks that are very responsive. These web client frameworks rely on enterprise services to utilize the underlying business capabilities of an enterprise. Java EE enables teams to deliver cloud-ready solutions using architectural patterns such as microservices.

Java EE, short for Java Platform, Enterprise Edition, can be considered an umbrella specification defining the entire Java EE platform. EE 8 is the latest specification; it relies upon several other specs and groups them together into a unified offering. These changes are meant to simplify, standardize, and modernize the technical stack used by developers, making them more productive in building next-generation applications.

The enterprise space for business applications has never been more vibrant than now. Java EE 8 brings with it newer APIs and improvements to existing ones. This chapter will try to provide you with a clear understanding of what this release train of Java comprises. There's a fair bit to cover, so brace yourself as we dive into the world of Java EE.

We will cover the following topics in this chapter:

  • Improvements in EE 8
  • Overview of Java SE 8
  • CDI 2.0
  • JSON Processing 1.1
  • JSON Binding 1.0
  • JAXRS 2.1
  • Servlet 4.0
  • JSF 2.3
  • Bean Validation 2.0
  • Java EE Security API 1.0
 

Improvements in EE 8

Java EE has always tried to move common infrastructure tasks to container-based models. In recent times, these have been further simplified, allowing developers to focus on the business logic rather than worry about ceremonious code necessities. Java EE 7 focused on WebSockets and JSON, which helped build HTML5 support. Java EE 8 continues to build upon EE 7, with a focus on building modern cloud-ready web applications with ease of development in mind.

Here's a quick summary of changes for the impatient. But don't get overwhelmed, as we will be going over these in more detail in the follow-up sections. So, what has changed, you may ask? Well, let's begin with JSON. Just like you can process XML documents and map XML to objects or objects to XML, now you can do the same with JSON too by using JSON-P and JSON-B. Java EE 8 now supports HTTP/2 with the Servlet 4.0 update and brings with it some exciting options to use. REST APIs are only growing stronger; now we have the support for server-sent events and we can use concurrency utilities available with SE 8 along with a reactive client API. Authentication and authorization support gained a standard way of doing things with the introduction of the new Java EE Security API. Bean validation now leverages SE 8 features to extend its range of options. CDI is no longer confined to the boundaries of EE, as it's now going to be made available for SE as well, along with new capabilities such as Async events, observer ordering, and more.

In the next few sections to follow, we will go over these changes in more detail, and what they mean when building an application.

 

Overview of Java SE 8

One of the goals of Java EE 8 was better alignment with Java SE 8. SE 8 was a major update; it was released in March 2014 and brought with it some major changes to the language and APIs. Lambdas, streams, default methods, and functional-style programming were introduced and were the highlights of the release. With these capabilities, the method of writing code was no longer going to be the same. A few other noteworthy additions in this release were optionals, repeating annotations, the date/time APIs, type annotations, and CompletableFutures.

If you would like to dig deeper into this release, then consider reading a book specific to Java 8. Here, we will cover just enough to get to grips with some of the language features.

Lambdas, streams, and default methods

Lambdas have been the biggest change in the language since generics were introduced in Java 5. This was a fundamental change that impacted many of the APIs to follow. Anonymous classes are very useful for passing code around, but they come at the cost of readability, as they lead to some boilerplate code (think Runnable or ActionListener). Those wanting to write clean code that is readable and devoid of any boilerplate will appreciate what lambda expressions have to offer.

In general, lambda expressions can only be used where they will be assigned to a variable whose type is a functional interface. The arrow token (->) is called the lambda operator. A functional interface is simply an interface having exactly one abstract method:

Runnable run = new Runnable() {
    @Override
    public void run() {
        System.out.println("anonymous inner class method");
    }
};

With lambdas, the preceding code can be rewritten as follows, where the empty parentheses stand in for the no-args method:

Runnable runWithLambda = () -> System.out.println("hello lambda");

To understand some of the enhancements, let us look at an example. Consider the Hero class, which is a plain Java object with two properties, telling us the name of the Hero and whether the hero can fly or not. Well, yes there are a few who can't fly, so let's keep the flag around:

class Hero {
    String name;
    boolean canFly;

    Hero(String name, boolean canFly) {
        this.name = name;
        this.canFly = canFly;
    }
    // Getters & Setters omitted for brevity
}

Now, it's typical to see code that iterates over a collection and does some processing with each element. Most such methods repeat the code for iterating over the list; what varies is usually the condition and the processing logic. Imagine you had to find all heroes who can fly, and also all heroes whose name ends with man. You would probably end up with two methods—one for finding flying heroes and another for the name-based filter. Both methods would have the looping code repeated in them, which isn't terrible, but we can do better. Anonymous inner classes could solve this, but they are too verbose and obscure the code's readability. Since we are talking about lambdas, you must have guessed by now what solution we can use. The following sample iterates over our Hero list, filtering the elements by some criteria and then processing the matching ones:

List<String> getNamesMeetingCondition(List<Hero> heroList,
                                      Predicate<Hero> condition) {
    List<String> foundNames = new ArrayList<>();
    for (Hero hero : heroList) {
        if (condition.test(hero)) {
            foundNames.add(hero.name);
        }
    }
    return foundNames;
}

Here, Predicate<T> is a functional interface new to Java 8; it has one abstract method called test, which returns a boolean. So, you can assign a lambda expression to the Predicate type. We have just made the condition a behavior that can be passed dynamically.
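Predicates also compose: the interface provides the default methods and, or, and negate, so small conditions can be combined before being passed around. Here is a minimal, self-contained sketch (class and method names are illustrative, not from the book's project):

```java
import java.util.function.Predicate;

public class PredicateComposeDemo {
    public static class Hero {
        public final String name;
        public final boolean canFly;
        public Hero(String name, boolean canFly) {
            this.name = name;
            this.canFly = canFly;
        }
    }

    // and() builds a combined condition from two smaller predicates,
    // so we never need a dedicated method per pairing of filters
    public static boolean isFlyingMan(Hero h) {
        Predicate<Hero> canFly = x -> x.canFly;
        Predicate<Hero> endsWithMan = x -> x.name.endsWith("man");
        return canFly.and(endsWithMan).test(h);
    }

    public static void main(String[] args) {
        System.out.println(isFlyingMan(new Hero("Superman", true)));  // true
        System.out.println(isFlyingMan(new Hero("Batman", false)));   // false
    }
}
```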

Given a list of heroes, our code can now take advantage of lambdas without having to write the verbose, anonymous inner classes:

List<Hero> heroes = Arrays.asList(
        new Hero("Hulk", false),
        new Hero("Superman", true),
        new Hero("Batman", false));

List<String> result = getNamesMeetingCondition(heroes, h -> h.canFly);
result = getNamesMeetingCondition(heroes, h -> h.name.contains("man"));

And finally, we could print the hero names using the new forEach method available for all collection types:

result.forEach(s -> System.out.println(s));

Moving on to streams, these are a new addition alongside the core collection library changes. The Stream interface comes with many methods that are helpful in dealing with stream processing, and you should try to familiarize yourself with a few of them. To establish the value of streams, let's solve the earlier flow using them. Taking our hero list, say we want to filter the heroes by the ability to fly and output the filtered hero names. Here's how it's done in the stream world of Java:

heroes.stream().filter(h -> h.canFly)
        .map(h -> h.name)
        .forEach(s -> System.out.println(s));

The preceding code is using the filter method, which takes a Predicate and then maps each element in the collection to another type. Both filter and map return a stream, and you can use them multiple times to operate on that stream. In our case, we map the filtered Hero objects to the String type, and then finally we use the forEach method to output the names. Note that forEach doesn't return a stream and thus is also considered a terminal method.
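If, instead of printing, we want to gather the filtered names into a new list, the terminal operation collect with Collectors.toList() does exactly that. A small self-contained sketch, reusing the Hero shape from above (class names here are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamCollectDemo {
    public static class Hero {
        public final String name;
        public final boolean canFly;
        public Hero(String name, boolean canFly) {
            this.name = name;
            this.canFly = canFly;
        }
    }

    // filter -> map -> collect replaces the hand-written loop entirely:
    // collect is a terminal operation that accumulates the stream into a List
    public static List<String> flyingHeroNames(List<Hero> heroes) {
        return heroes.stream()
                .filter(h -> h.canFly)
                .map(h -> h.name)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Hero> heroes = Arrays.asList(
                new Hero("Hulk", false),
                new Hero("Superman", true),
                new Hero("Batman", false));
        System.out.println(flyingHeroNames(heroes)); // prints [Superman]
    }
}
```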

If you hadn't noticed earlier, then look again at the previous examples in which we already made use of default methods. Yes, we have been using the forEach method on a collection which accepts a lambda expression. But how did they add this method without breaking existing implementations? Well, it's now possible to add new methods to existing interfaces by means of providing a default method with its own body. For collection types, this method has been defined in the Iterable interface.
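To see how an interface can gain a method without breaking its implementors, here is a minimal sketch of a default method; the interface and names are illustrative, not from the JDK:

```java
public class DefaultMethodDemo {
    interface Greeter {
        String name();

        // Existing implementations inherit this body automatically,
        // so adding it later does not break any of them
        default String greet() {
            return "Hello, " + name();
        }
    }

    public static String greetHero() {
        Greeter g = () -> "Superman"; // lambda supplies the single abstract method
        return g.greet();             // the default method comes for free
    }

    public static void main(String[] args) {
        System.out.println(greetHero()); // prints Hello, Superman
    }
}
```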

These capabilities of Java 8 are now powering many of the EE 8 APIs. For example, the Bean Validation 2.0 release is now more aligned to language constructs such as repeatable annotations, date and time APIs, and optionals. This allows for using annotations to validate both the input and output of various APIs. We will learn more about this as we explore the APIs throughout the book.

 

CDI 2.0

What would the world be like if there was only a single object with no dependencies? Well, it certainly wouldn't be called object-oriented, to begin with. In programming, your objects normally depend on other objects, and the responsibility of obtaining these dependencies is owned by the owning object itself. In Inversion of Control (IoC), the container is responsible for handing these dependencies to the object during its creation. Contexts and Dependency Injection (CDI) allows us to set the dependencies on an object without having to manually instantiate them; this is commonly termed injection. It does this with the added advantage of type-safe injection: no string matching is done to get the dependency; instead, it's done based on the existing Java object model. Most of the CDI features have been driven by the community and input from expert group members, and many have been influenced by existing Java frameworks such as Seam, Guice, and Spring.

While Java EE developers have enjoyed this flexible yet powerful API, SE developers were deprived of it, as CDI was part of Java EE alone. That's changed since this version, as this powerful programming model is now going to be available for Java SE as well. As of the 2.0 release, CDI can be used in both Java SE and Java EE. To make use of CDI in SE, you can pick a reference implementation such as Weld to get started. CDI can be broken down into three parts:

  • Core CDI
  • CDI for Java SE
  • CDI for Java EE

Given how important CDI has become to the Java EE platform, it's a key programming model to familiarize oneself with. It can be considered the glue between the other specifications and is heavily used in JAXRS and Bean Validation specs. It's important to note that CDI is not just a framework but a rich programming model with a focus on loose coupling and type safety. A reference implementation for CDI is Weld, which is an open source project developed by JBoss/Red Hat. The primary theme for this release was to add support for Java SE, as earlier versions were targeted specifically at Java EE alone. CDI provides contexts, dependency injection, events, interceptors, decorators, and extensions. CDI services provide for an improved life cycle for stateful objects, bound to well-defined contexts. Messaging between objects can be facilitated by using event notifications. If all this sounds a little overwhelming then don't worry, as we will be covering all of it and much more in the next chapter.

If there's any feature that might draw developers towards CDI, then in all probability it must be the event notification model. CDI events are pretty much what you would refer to as an implementation of the observer pattern. This feature largely drives the decoupling of code, allowing for greater flexibility in object communication. When we talk about events in CDI, there are mainly two functions: one is to raise an event and the other is to catch an event. Now isn't that simple? Events can be synchronous or asynchronous in nature. Event firing is supported by the Event interface; the earlier version only supported firing synchronous events, but with the 2.0 release you can now fire async events as well. In case you are wondering what this event is and why we would use it, an event is just a Java object that can be passed around. Consider a plain old Java object called LoginFailed. Based on a certain scenario or method invocation, we want to notify an observer of what just happened. So, here's how you can put this together in code:

public class LoginFailed {}

public class LoginController {
    @Inject
    Event<LoginFailed> loginFailedEvent;

    public void loginAttempt() {
        loginFailedEvent.fire(new LoginFailed());
    }
}

We will discuss the specifics of events and more in the next chapter, which is dedicated to CDI and JPA. For now, we have just scratched the surface of what CDI has to offer, but nevertheless this should serve as a good starting point for our journey into the exciting world of CDI-based projects.
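For completeness, the catching side of the event is just a method with a parameter annotated with @Observes; the container wires it to matching fire() calls. A hedged sketch (the class name and log messages are illustrative; this needs a CDI container to actually run):

```java
import javax.enterprise.event.Observes;
import javax.enterprise.event.ObservesAsync;

public class LoginAlert {

    // Invoked synchronously whenever Event<LoginFailed>.fire(...) runs
    public void onLoginFailed(@Observes LoginFailed event) {
        System.out.println("Login failed, notifying admin");
    }

    // Invoked for events fired with fireAsync(...), new in CDI 2.0
    public void onLoginFailedAsync(@ObservesAsync LoginFailed event) {
        System.out.println("Async handling of login failure");
    }
}
```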

 

JSON Processing 1.1

Most languages provide support for reading and writing text files. But when it comes to special types of documents such as XML, CSV, or JSON, processing requires handling them differently to traditional text files. Java has historically had support for XML-based documents, but support for JSON was provided via third-party libraries. JSON itself is a lightweight data-interchange format, a well-documented standard that has become extremely successful; it is now the default format for many systems. Java has long supported processing XML documents using the Java API for XML Processing (JAXP); with JSON-P, introduced in Java EE 7, you can process JSON documents as well. So, JSON-P does for JSON what JAXP does for XML. The 1.1 version is an update to the earlier JSON-P 1.0 specification, keeping it aligned with the JSON IETF standards. While this might sound like the other JSONP (notice the lack of hyphen), which stands for JSON with Padding, this is not that. JSONP is a format used to deal with cross-origin AJAX calls using GET, while JSON-P is the specification defined within Java EE, used for JSON Processing and written as JSON-P.

When dealing with any Java EE API, you would have a public API and a corresponding reference implementation. For JSON-P, here are some useful references:

JSON-P official web site

https://javaee.github.io/jsonp

JSR-374 page on the JCP site

https://jcp.org/en/jsr/detail?id=374

API and reference implementation

https://github.com/javaee/jsonp

The API includes support for parsing, generating, and querying JavaScript Object Notation data. This is made possible using the object model or the streaming model provided by the JSON-P API. You can consider this a low-level API, different from the higher-level declarative JSON Binding API, which is also part of Java EE 8. The streaming model can be considered similar to StAX for XML, creating and reading JSON in a streaming manner, while the object model can be used to work with JSON in memory, similar to DOM for XML.

Working with JSON documents

To get an understanding of how this works, consider the following JSON document saved in a file called demo.json (it can be any file), which contains an array of JSON objects with the name and priority key-value pairs:

[
    {
        "name": "Feature: Add support for X",
        "priority": 1
    },
    {
        "name": "Bug: Fix search performance",
        "priority": 2
    },
    {
        "name": "Feature: Create mobile page",
        "priority": 3
    }
]

Now, before looking at the API, it is important to understand how we need to perceive this JSON document. JSON defines only two data structures: an array, which contains a list of values, and an object, which is a set of name-value pairs. There are six value types in JSON, namely:

  • String
  • Number
  • Object
  • Array
  • Boolean
  • Null

In the previous document, the square brackets around the content denote an array that has multiple objects as its values. Let's take a look at the first JSON object, as follows:

{
    "name": "Feature: Add support for X",
    "priority": 1
}

The curly braces, {}, denote a JSON object that contains a key-value pair. The key must be a string in quotes followed by the value, which must be a valid JSON data type. In the previous case, we have the string in quotes, "Feature: Add support for X", and this maps to a String data type. The value of the "priority" key is a number data type, given as 1. Since the value can be any JSON data type, you could also have nested objects and arrays as values of the JSON object. Here's an example of that, showing the "ticket" key having an array as its value, which contains objects:

{
    "name": "Feature: Add support for X",
    "priority": 1,
    "ticket": [
        {
            "name": "Feature: add new ticket",
            "priority": 2
        },
        {
            "name": "Feature: update a ticket",
            "priority": 2
        }
    ]
}

Having built an understanding of this document structure, let's look at the API.

JSON Processing API

JSON-P can be considered as having two core APIs:

  • javax.json: the JSON Object Model API, a simpler way of working with JSON documents in memory
  • javax.json.stream: the JSON Streaming API, which parses a document and emits events without loading the entire document in memory

Let's look at what the parsing API looks like when trying to parse the previous sample document. First we need to obtain a parser using the Json class. This class is a factory class for creating JSON processing objects:

JsonParser parser = Json.createParser(Main.class.getResourceAsStream("/demo.json"));

Next, we use the returned JsonParser object to loop over all the entries using the hasNext() method, similar to an iterator, and in turn invoke the next() method, which emits a JsonParser.Event instance. This instance will hold the current JSON entry which can be a key or value:

while (parser.hasNext()) {
    JsonParser.Event e = parser.next();
    System.out.print(e.name());
    switch (e) {
        case KEY_NAME:
            System.out.print(" - " + parser.getString());
            break;
        case VALUE_STRING:
            System.out.print(" - " + parser.getString());
            break;
        case VALUE_NUMBER:
            System.out.print(" - " + parser.getString());
    }
    System.out.println();
}

Using the preceding loop, we will be able to parse and examine each entry as we go through the entire document.

Apart from creating parsers, the Json class can also give us a JsonObjectBuilder instance, which is used to create JSON object models from scratch. A one-line demo follows, creating a JsonObject:

JsonObject json = Json.createObjectBuilder().add("name", "Feature ABC").build();

More advanced usage is possible by nesting calls to the add(...) method, which we will look at later on. There have been many noteworthy enhancements with JSON-P 1.1, such as:

  • JSON Pointer: Allows for finding specific values in a JSON document
  • JSON Patch: Allows for modifying operations on a JSON document
  • JSON Merge Patch: Allows for using patch operations with merge
  • Addition of JSON Collectors: Allows for accumulating JSON values from streams

Additionally, JsonReader has a new method called readValue() which returns a JSON value from the underlying input source. Similarly, JsonWriter was updated with another new method called write(JsonValue value), which allows for writing the JSON value to an output source. These additions were possible without breaking the earlier APIs because default methods were introduced in Java 8. We will go through more details about parsing and various other APIs in another chapter, but for now this should give you a starting point to begin exploring the APIs further.
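As a taste of the JSON Pointer addition, the pointer expression /0/name selects the name of the first array element. A hedged sketch using the javax.json API (the array content mirrors the sample document; this needs a JSON-P 1.1 implementation on the classpath):

```java
import javax.json.Json;
import javax.json.JsonArray;
import javax.json.JsonPointer;
import javax.json.JsonValue;

public class PointerDemo {
    public static void main(String[] args) {
        JsonArray issues = Json.createArrayBuilder()
                .add(Json.createObjectBuilder()
                        .add("name", "Feature: Add support for X")
                        .add("priority", 1))
                .build();

        // "/0/name" means: take the first element of the array,
        // then the value of its "name" key
        JsonPointer pointer = Json.createPointer("/0/name");
        JsonValue value = pointer.getValue(issues);
        System.out.println(value);
    }
}
```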

 

JSON Binding 1.0

As the version number 1.0 suggests, this is one of the new additions to the Java EE specification group. It's also probably the most welcomed addition, as it brings with it the much-awaited ability to bind any Java object to a JSON string in a standard way. As JSON is now the predominant way of exchanging information, most developers would look for solutions to convert their APIs' input and output values to and from JSON. JSON-B does for JSON what JAXB did for XML—it acts as a binding layer for converting objects to JSON and JSON strings to objects. While the default mapping mechanism should serve us well, we all know that there's always a need for customization. Thus, there are customization options available when the defaults aren't good enough for a use case. This can be done using annotations on your Java classes.

The Yasson project is the reference implementation for JSON-B. For JSON-B, here are some useful references:

JSON-B official web site

http://json-b.net

JSR-367 page on the JCP site

https://jcp.org/en/jsr/detail?id=367

API and spec project

https://github.com/javaee/jsonb-spec

Yasson RI project

https://github.com/eclipse/yasson

One of the reasons why JAXB, and now JSON-B, are so popular is because they almost hide the complexity of working with the document. As a developer, you get to focus on the business objects or entities while letting these binding layers take care of the complexities of mapping an object to/from their document representation. The API provides a class called Jsonb, which is a high-level abstraction over the JSON Binding framework operations. There are mainly two operations that you would perform using this class; one is to read JSON input and deserialize to a Java object and the other is to write a JSON output by serializing an object. To get an instance of the Jsonb class, you need to obtain it from a JsonbBuilder. An example of its usage follows:

Jsonb jsonb = JsonbBuilder.create();

The builder also allows for passing in custom configurations that can change the processing behavior. Once you have obtained the Jsonb instance, you can use any of the toJson or fromJson overloaded methods for performing any operation. This instance is thread-safe and can be cached for reuse. Consider this sample to see the API in action:

class Ticket {
    public String name;
    public Integer priority;
}

Here are the lines of code required for converting Java objects to/from JSON:

Ticket t = new Ticket();
t.name = "Feature ABC";
t.priority = 2;

/* Create instance of Jsonb using builder */
Jsonb jsonb = JsonbBuilder.create();

/* Ticket to this {"name":"Feature ABC","priority":2} */
String jsonString = jsonb.toJson(t);

/* {"name":"Feature ABC","priority":2} to a Ticket */
Ticket fromJson = jsonb.fromJson(jsonString, Ticket.class);

As you can see, this is very different from working with the low-level JSON-P APIs; the JSON-B API allows for working with JSON through a much simpler interface. There are times when you can even combine the two APIs (JSON-P and JSON-B) to perform certain operations. Imagine you are given a large JSON file from which you need to selectively extract a nested object for use in your code. You could use the JSON-P API with a JSON Pointer to extract the needed object, and then use the JSON-B API to deserialize it to a Java object.

When working with JSON, single object types aren't enough—you often run into a collection of objects that you need to work with. In our sample, think of many tickets instead of one: you may be reading and writing a collection of tickets. As you might expect, JSON-B has built-in support for collections too, and in fact it also supports generic collections. Generics, as you may recall, provide compile-time type checking but are implemented by the compiler using a technique called type erasure; thus, the type information is not present at runtime. Hence, to correctly perform deserialization, the runtime type of the object needs to be passed to JSON-B.
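The runtime-type workaround just mentioned is usually expressed by subclassing a generic type so that its type argument survives erasure. A hedged sketch using the Ticket class from before (this needs a JSON-B implementation such as Yasson on the classpath):

```java
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.List;
import javax.json.bind.Jsonb;
import javax.json.bind.JsonbBuilder;

public class TicketListDemo {
    public static class Ticket {
        public String name;
        public Integer priority;
    }

    public static void main(String[] args) {
        Jsonb jsonb = JsonbBuilder.create();
        String json = "[{\"name\":\"Feature ABC\",\"priority\":2}]";

        // The anonymous subclass pins down List<Ticket> at compile time,
        // so its generic superclass carries the element type to JSON-B
        Type ticketListType = new ArrayList<Ticket>() {}
                .getClass().getGenericSuperclass();
        List<Ticket> tickets = jsonb.fromJson(json, ticketListType);
        System.out.println(tickets.size());
    }
}
```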

JSON-B also offers some options in the form of compile-time customization using annotations and runtime customization using the JsonbConfig class. The annotations can be placed on the classes that need custom mapping, and that's it. The customization options don't end there, though, as there might be times when you don't have access to the source code for some reason. In such cases, you can make use of an adapter, which allows you to write custom code to perform the mapping; this gives more fine-grained control over data processing. These options are very handy for a developer to have at their disposal in today's age, where JSON is prevalent.

 

JAXRS 2.1

Java EE has always been good at supporting a variety of over-the-network communication options, from binary RPC and XML-based RPC to, now, XML- and JSON-based communication via JAXRS over the HTTP/1 and HTTP/2 protocols. In recent times, REST-based web services have become the de facto choice for developing web APIs. There are broadly two standards that can be followed when developing web services: SOAP and REST. Initially, SOAP-style APIs dominated the enterprise space and it seemed there was no room for any other, but the REST style had its own appeal. As mobile applications grew, so did the demand for server-side APIs. Clients required more flexibility and weren't exactly fans of the verbosity of SOAP XMLs.

REST APIs are less strict and thus more flexible and simpler to use, which added to their appeal. While a SOAP document primarily focused on XML and had to conform to SOAP standards, REST APIs enjoyed freedom of choice in terms of data format. The dominant choice of data format for REST communication is JSON, but it can be XML too, or any other format. The published APIs in REST are known as resources. There are suggested design guidelines for building a REST API, and while they aren't going to prevent you from doing it your way, it's best to stick to them for the most part. Think of them as a design pattern that you may follow. REST in itself is an architectural pattern and is widely adopted in the industry.

JAXRS 2.0 was designed around a single client request and a single server response. But with 2.1 and the underlying HTTP/2 updates, you can now think of single requests as having multiple responses. The new API update allows for non-blocking interceptors and filters as well.

A JAXRS project would typically have one or more resources along with some providers. Building a REST endpoint or resource, as they are called, is as simple as creating a class with a few annotations and writing a resource method. There will be one class to bootstrap the REST resources, and then you will define the actual resource and providers that are needed by your application. Bootstrapping is done by creating a subclass of the Application class, which serves to configure your REST resources. This is similar to the following snippet:

/**
 * To bootstrap the REST APIs
 */
@ApplicationPath("/resources")
public class JaxrsActivator extends Application { }

@Path("heroes")
public class HeroResource {
    @GET
    @Path("{id}")
    public Hero getSingleHero(@PathParam("id") String id) {
        return new Hero(id);
    }
}

With those annotations added, the HeroResource class has just been transformed into a REST resource. The class will be deployed within a web application (WAR file) and can be run locally on any Java EE-compliant server. As REST resources are accessed via http(s), and given that the application is called heroapp, this resource will be available at a URL. Notice that /resources is actually defined by the subclass of the javax.ws.rs.core.Application class; you define the prefix for all the REST endpoints using the @ApplicationPath annotation. Typical naming conventions include /api, /resources, or /rest:

http://localhost:8080/heroapp/resources/heroes/1

The matching of requests to resource methods is done internally by the container using an implementation-specific algorithm, but the output is defined by the specification. What this means for developers is that they can rely on the standard and not worry about differences between implementations.

The 2.1 release, updated from the earlier JAXRS 2.0 version, brought a Reactive Client API. The reactive style of programming is for the client side and allows for a more reactive way of handling a request/response. This is done not by replacing the existing Client API, but by extending it to support this new style of programming. A noticeable change is the rx() method on the client's fluent API. Additionally, there's better async support, as the API embraces the concurrency features of Java 8. CDI updates have also been leveraged, along with underlying Java 8 alignment. Server-Sent Events (SSE) is a popular web transport technique used for pushing one-way asynchronous updates to the browser, and it is supported in both the client and server APIs. With SSE, a communication channel can be kept open from the server to its clients, so that subsequent messages from the server can be pushed to connected clients. With SSE and WebSockets, it's time to stop polling. While polling occasionally isn't that bad an idea, there are better alternatives at our disposal; polling generally adds unnecessary resource usage and undue complexity, which we can now avoid. The growing need for real-time push has led to standards such as SSE, an HTTP-based solution for one-way communication, and WebSockets, an exciting standard allowing for bidirectional communication between client and server.

The idea of SSE can be applied whenever a client needs to subscribe to updates from the server, such as a stock quote update that the server may send to the client whenever the price changes. WebSockets, on the other hand, can be used for more complex use cases, as it supports two-way communication, such as messaging or collaboration tools that require updates going in both directions. Needless to say, these can be used to replace the age-old polling solutions that always fall short. Now that we understand the differences between SSE and WebSockets, it's also worth noting that HTTP/2 Push is unrelated to the two. Simply put, HTTP/2 Push is a mechanism to push assets to the web browser in advance, to avoid multiple round trips.
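To make the stock-quote idea concrete, here's a minimal JAX-RS 2.1 SSE endpoint sketch (the resource path and event payload are made up for illustration):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.sse.Sse;
import javax.ws.rs.sse.SseEventSink;

@Path("quotes")
public class QuoteResource {

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public void subscribe(@Context SseEventSink eventSink, @Context Sse sse) {
        // A real endpoint would hold on to the sink and push events
        // whenever the price changes; here we send one event and close.
        try (SseEventSink sink = eventSink) {
            sink.send(sse.newEvent("price-update", "ACME: 101.5"));
        }
    }
}
```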

JAX-RS uses providers, which are classes that you annotate with the @Provider annotation. These classes are discovered at runtime and can be used to register filters, interceptors, and more. You may think of these as layers that sit between the originating request and your REST resource. They can be used to intercept the incoming request, and thus allow for applying cross-cutting concerns across the application. This is a good feature to make use of, as it promotes the separation of concerns: imagine polluting your code with redundant checks or validations for each request that are really part of the infrastructure or protocol-related logic. Providers allow us to separate that infrastructure code from the actual business logic your component should focus on. We will go over more details in later chapters, but this should serve as a good reference regarding what JAX-RS has to offer.
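A minimal sketch of such a provider, assuming we simply want to log every incoming request before it reaches a resource:

```java
import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;

@Provider
public class RequestLoggingFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // Runs before the matched resource method; a natural home for
        // cross-cutting concerns such as logging, auth checks, or validation
        System.out.println(requestContext.getMethod() + " "
                + requestContext.getUriInfo().getPath());
    }
}
```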

 

Servlet 4.0

For the majority of developers, this may not impact the way you write servlet code, but it does offer some performance benefits, along with new abilities such as server push. HTTP/2 is a binary protocol based on frames and is the new standard for the web. The HTTP/2 standard was approved around February 2015 and is supported by most modern browsers. While the web has been evolving at a fast pace, the same can't be said about HTTP itself. For years, developers had to work around the limitations of HTTP/1.x, but the wait is finally over, as this version is better aligned with modern-day demands. Some of the HTTP/2 benefits include the ability to reuse the same TCP connection for multiple requests, more compact header information, priority and packet streaming, and server push to send resources from the server to the client. This results in reduced latency and faster content downloads. For most applications, this won't be a disruptive change; they will continue to function as they did before, with the added benefit of faster performance.

So, there are no new HTTP methods, headers, or URL changes that you need to worry about. Since Java EE servlets are primarily based on the HTTP protocol, it was only logical for the spec to be updated to meet the changes in the HTTP standards. The 4.0 update is mainly focused on adding support for HTTP/2, and is thus a fully compliant implementation of the HTTP/2 specification. What this update should bring with it is increased performance.

Some of the features of HTTP/2 are:

  • Request/response multiplexing (bi-directional support)
  • Optimized headers (uses HPACK header compression)
  • Binary frames (this solves the head-of-line (HOL) blocking problem present in HTTP/1.1)
  • Server Push
  • Stream prioritization
  • Upgrade from HTTP/1.1

Servlet 4.0 serves as an abstraction of the underlying protocol, allowing us to focus on the high-level APIs that shield us from the intricacies of HTTP. It's also interesting to note that the servlet specification itself is relied upon by other specs, such as JSF, which will be utilizing these updates to their benefit. Typically, you can think of an HTTP request/response cycle as one request and one response, but that has just changed: now one request can be used to send out multiple responses. To put this into perspective, remember the earlier HTTP/1.1 workarounds, such as domain sharding, or techniques that tried to combine multiple requests to reduce TCP connection overhead, such as CSS sprites (multiple images combined into a single image); well, those are no longer needed.

Server Push

There's a new Push builder API that can be used for server push features. Armed with the server push ability, a server can push a resource to the client. This doesn't mean that there's no request needed in the first place. You need to obtain a PushBuilder from the request object and then use this for constructing a push request. Thus, there's always a request, based on which, the push feature is enabled.

A sample of this is as follows:

PushBuilder pushBuilder = httpServletRequest.newPushBuilder();

Once a pushBuilder instance is obtained from the request, you can use it to set the required URI path, which is to be used for sending the push request. A sample is shown here:

request.newPushBuilder()
       .path("/assets/images/product.png")
       .push();

Here, the paths beginning with / are considered absolute paths. Without the / prefix, the path would be considered to be relative to the context of the request used to create the instance of PushBuilder. While the short code shown here is handy, it must be used with caution since there's a possibility that the call to newPushBuilder() may return null if push is not supported.

If you are wondering how we put that newPushBuilder method on the request object, remember that Java 8 has default methods. So, the signature of the method looks like the following:

default PushBuilder newPushBuilder()

Building a push request involves setting the request method to GET and setting the path explicitly, as that won't be set by default. Calling the push method generates the push request from the server and sends it to the client, unless the push feature is not available for some reason. You may add headers or query strings to the push request by using the addHeader or queryString methods.

With the preceding code, the server will send a push to the client that made this request. The client may already have the resource and thus can tell the server that it has this cached from a previous request, and in turn will inform the server to not bother sending this resource over the wire. You might have guessed by now that it's the client who can dictate whether a resource should be pushed or not. Thus, the client can explicitly disable the server push.

Let's imagine we need to push the logo to the client from our servlet. Here's how we might write this code:

protected void processRequest(HttpServletRequest request,
        HttpServletResponse response) throws ServletException, IOException {
    PushBuilder pushBuilder = request.newPushBuilder();
    if (pushBuilder != null) {
        pushBuilder.path("images/logo.png")
                   .addHeader("Content-Type", "image/png")
                   .push();
    }
    try (PrintWriter writer = response.getWriter()) {
        writer.write(new StringBuilder()
                .append("<html><body>")
                .append("<img src='images/logo.png'>")
                .append("</body></html>").toString());
    }
}

Beyond the Servlet API, Java SE 9 also provides HTTP/2 support in the form of a new HTTP client API, shipped as an incubator module. There are broadly just two main classes, HttpClient and HttpRequest. These are enough to solve the most common use cases, but not exhaustive enough to replace a more established HTTP client library. The client supports both the earlier HTTP/1.1 version and the newer HTTP/2 version.
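As a hedged sketch of that Java SE 9 client (the API lived in the jdk.incubator.http module in JDK 9 and was later standardized as java.net.http in JDK 11 with slightly different names):

```java
import java.net.URI;
import jdk.incubator.http.HttpClient;
import jdk.incubator.http.HttpRequest;
import jdk.incubator.http.HttpResponse;

public class Http2ClientSample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("http://localhost:8080/heroapp/resources/heroes/1"))
                .GET()
                .build();
        // Blocks until the response arrives; sendAsync() is also available
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandler.asString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```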

 

JSF 2.3

There has been a rise in frontend frameworks, each competing to become dominant on the web. JavaServer Faces (JSF), while it isn't the new kid on the block, still has a fairly large community and is the primary framework in the Java EE space for building UIs, which makes it a force to be reckoned with.

JSF is the standard user interface framework for building applications with Java EE. It takes a component-based approach to building the UI, which is different from the traditional request-based model. While it has been around for over a decade, it arguably didn't gain much traction until the 2.x release. There have been supporting frameworks and libraries built around JSF, and thus it enjoys good community support. Frameworks such as PrimeFaces, RichFaces, and IceFaces, along with libraries such as OmniFaces and more, have made it a popular choice among developers. That doesn't mean there aren't any critics; a framework this old is bound to have an opinionated community. With new client-side solutions making their mark, including Angular and React, the competition has only grown tougher. That's good news for developers, as it leads to a richer selection of choices for building your next web application.

The latest update of 2.3 brings with it many enhancements and refinements and makes it aligned with Java 8. Some of the major features include:

  • CDI integration, which makes it easy for injecting JSF artefacts into classes and EL expressions
  • The confusion of Managed Bean annotations is finally resolved
  • Supports Java 8 date/time APIs
  • f:websocket, which allows for easy usage of the WebSocket protocol
  • Validation and conversion enhancements
  • Lots of API updates and fixes

When writing a JSF application, it's fairly routine to obtain references to certain context-based objects. Earlier versions didn't have an easy way to obtain these, and developers had to look up the instances using statically chained methods, such as the following:

FacesContext.getCurrentInstance().getExternalContext().getRequestMap();
// ...and similarly getRequestHeaderMap(), getRequestParameterMap(), and more

This issue is solved by CDI, as it allows for injecting these artefacts directly in your classes. Additionally, it's also possible to use these via the EL expression. All of this is possible because JSF now provides some default providers for common use cases. A few handy ones are listed in the following table:

Before                                                                          | EL variable available | Using @Inject
FacesContext.getCurrentInstance()                                               | #{facesContext}       | @Inject FacesContext facesContext;
FacesContext.getCurrentInstance().getExternalContext().getRequestMap()          | #{requestScope}       | @Inject @RequestMap Map<String, Object> map;
FacesContext.getCurrentInstance().getExternalContext().getRequestHeaderMap()    | #{header}             | @Inject @HeaderMap Map<String, Object> map;
FacesContext.getCurrentInstance().getExternalContext().getRequestParameterMap() | #{param}              | @Inject @RequestParameterMap Map<String, Object> map;
It's important to note that the general reference types, such as Map or others, would require specifying a qualifier (RequestMap, HeaderMap, and so on), to assist in resolving the required type. With CDI integration support, it's also possible to inject your own custom validator and converter, too.

JSF 2.0 brought its own set of annotations, but as soon as CDI arrived, those annotations had to be revisited. Since CDI has universal appeal in terms of managed beans, it conflicted with JSF's own annotations. It was finally decided with the 2.3 release to deprecate the JSF-defined annotations in favor of the more flexible and universal CDI annotations. Thus, the Managed Bean annotations were deprecated in favor of CDI annotations.
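So a bean that might earlier have been declared with the JSF-specific @ManagedBean would now, as a sketch, use the CDI equivalents (the bean and property names here are illustrative):

```java
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named              // exposes the bean to EL as #{ticketBean}
@RequestScoped      // CDI scope, replacing the deprecated JSF-specific one
public class TicketBean {

    private String title;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}
```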

There's support for Java 8 date/time APIs in the 2.3 release, with an update to the existing converter tag, <f:convertDateTime>. The type attribute now accepts new values, such as localDate, localTime, localDateTime, offsetTime, offsetDateTime, and zonedDateTime, along with the earlier both, date, and time. If we have a bean with a LocalDate property, then the same can be referenced in the facelets view, as follows:

<h:outputText value="#{ticketBean.createdDate}">
    <f:convertDateTime type="localDate" pattern="MM/dd/yyyy" />
</h:outputText>

The WebSocket protocol offers full bidirectional communication, and developers' desire to utilize these abilities has led to the inclusion of WebSocket integration in the JSF standard. It's now possible to register a WebSocket with the client using the f:websocket tag, and to push messages from the server to the client using PushContext. You can get this running with very little code: all you need to do is name the channel, which is a required attribute for this tag, and then register a JavaScript callback listener through the onmessage attribute. That's it for the client side. This callback will be invoked once the server sends a message to the client. In case you are wondering, the message is encoded as JSON before being sent to the client. Here are a few snippets to help you understand this better.

This is the JSF view part, which registers the WebSocket:

<f:websocket channel="jsf23Channel" onmessage="function(message){alert(message)}" />

Then, on the server side, the PushContext is injected and later used for sending the push messages to the client:

@Inject
@Push
private PushContext jsf23Channel;

public void send() {
    jsf23Channel.send("hello websocket");
}

A few other enhancements include support for importing constants for page authors using the <f:importConstants/> tag. There is also support for c:forEach-style iteration using ui:repeat. While we are on the topic of iteration, it's worth mentioning that support for map-based iteration has also been added, which means you can now use ui:repeat, c:forEach, and h:dataTable to iterate over the entries in a map. The @FacesDataModel annotation allows for supplying your own custom, registrable DataModel objects, which can then be used in ui:repeat or h:dataTable. This can be utilized by library providers to add more flexibility to their components. An example of ui:repeat using a map is shown here:

<ui:repeat var="anEntry" value="#{ticketMapOfFeatures}">
    key: #{anEntry.key} - value: #{anEntry.value}
</ui:repeat>

AJAX method calls are now supported: you can invoke a JavaScript function which in turn will invoke a server-side bean method in an Ajax call. Those familiar with PrimeFaces' p:remoteCommand can relate to this feature, the difference being that it's now included as a standard. This is done using the h:commandScript component tag. Similar to invoking server-side code from JavaScript, you can also invoke JavaScript from server-side code, which is made possible by an API enhancement: you reference the PartialViewContext and invoke its getEvalScripts method to add your JavaScript code to the response. With so many additions, JSF has once again become worth adding to a developer's arsenal when building web applications for Java EE.
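The server-to-JavaScript direction could, for example, be sketched like this inside a bean action method (the bean name and the script itself are just illustrations):

```java
import javax.faces.context.FacesContext;

public class ScriptPushingBean {

    public void onSave() {
        // ... business logic ...
        // Queue a script to run on the client when the response is rendered
        FacesContext.getCurrentInstance()
                .getPartialViewContext()
                .getEvalScripts()
                .add("alert('Saved successfully');");
    }
}
```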

 

Bean Validation 2.0

With the advent of so many technical choices, such as microservices, rich frontend applications, and NoSQL data stores, and with a plethora of systems always communicating with each other and exchanging data, it's vital to get data validation done right. There's a growing need for data validation services; most APIs typically have some input and output as part of their contract, and these are usually the candidates for applying validation.

Imagine you are trying to register a user in the system, but the client didn't send the username or email required by your business logic. You would want to validate the input against the constraints defined by your application. In an HTML-based client, if you were building an HTML form, you would want the input to meet certain criteria before passing it down for further processing. These validations might be handled on the client side and/or in your server-side processing. Validation is such a common requirement for any API that there's room for standardizing these constraints and applying them in an intuitive way to your APIs. The Bean Validation specification defines a set of built-in validations that can be used on your APIs in a declarative way using annotations. It would be naive to think this covers every possible case, so there's also a way to plug in your own custom validators when the built-in ones just won't do.
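When the built-in constraints won't do, a custom one can be plugged in. Here's a minimal sketch of a hypothetical @ValidTeamName constraint and its validator (the annotation name and format rule are made up for illustration):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

@Constraint(validatedBy = ValidTeamName.Validator.class)
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
public @interface ValidTeamName {
    String message() default "invalid team name";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};

    class Validator implements ConstraintValidator<ValidTeamName, String> {
        @Override
        public boolean isValid(String value, ConstraintValidatorContext ctx) {
            // Leave null-checking to @NotNull; only validate the format here
            return value == null || value.matches("[A-Za-z ]{3,30}");
        }
    }
}
```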

As you might have guessed, this kind of validation is not restricted to JAX-RS web services, but can be applied across various specs, such as JSF, JPA, and CDI, and even third-party frameworks such as Spring, Vaadin, and many more. Bean Validation allows for writing expressive APIs with constraints defined in a declarative manner, which get validated on invocation.

Now, if you are familiar with the earlier version, then you might be wondering what's changed in 2.0. Well, the main driving factor for the 2.0 release involved leveraging the language changes brought in by Java 8 for the purpose of validation. We have new types, such as LocalTime or LocalDate, as well as the possibility to repeat annotations or use lambda expressions. So, an update to support and leverage these changes was only logical.

Let's assume we have a REST resource (web service) that takes a team as input to be added to the system and outputs the updated list of teams. Here, we want a name to be provided for a team, and this can't be null. So, here's the code for doing just that:

public class Team {
    private Long id;

    // NotNull suggests the name of a team can't be null
    @NotNull
    private String name;

    // Rest of the code can be ignored
    ...
}

@Path("teams")
public class TeamResource {
    /* A method to add a new team, which requires the input
       Team to be valid */
    @POST
    @Produces(MediaType.APPLICATION_JSON)
    public List<Team> add(@Valid Team team) {
        // Rest of the code can be ignored
        ...
    }
}

Let's assume we have the preceding JAX-RS resource running on a server. If you invoke this API and supply team data as input, then it must have a name in order for the input to pass the validation constraint. In other words, a valid team input has the name field satisfying the NotNull constraint. Similarly, it's possible to put a constraint on the result as well. A rewritten method signature is shown as follows, which puts a NotNull constraint on the response:

@POST
@Produces(MediaType.APPLICATION_JSON)
public @NotNull List<Team> add(@Valid Team team) { ... }

With Bean Validation 2.0, a whole new set of possibilities have been added. One of the biggest features is validating collections. It's now possible to validate the contents of a type-safe collection. We could, for instance, add type annotations to validate the contents of generic collections such as List<@NotNull Team>, or even better, List<@NotNull @Valid Team>:

@POST
@Produces(MediaType.APPLICATION_JSON)
public @NotNull List<@NotNull @Valid Team> add(@Valid Team team) { ... }

You could also use the @Email annotation on a collection, like this: List<@Email String>, to ensure the emails present within the list conform to the email validation constraint. The annotation also allows you to supply your own regex to further restrict the input. It's interesting to note that @Email validation doesn't mean the value cannot be null; what it means is that if a string is present, then it must be a valid email, but it can also be null. It's best to keep the concerns separate: the core validation of the email, and a NotNull validation for the input.
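Combining the two constraints keeps those intents separate; as a small sketch (field and domain names are illustrative):

```java
import javax.validation.constraints.Email;
import javax.validation.constraints.NotNull;

public class Contact {

    // @Email alone would accept null; @NotNull rules that out separately.
    // The optional regexp attribute narrows which well-formed addresses pass.
    @NotNull
    @Email(regexp = ".+@example\\.org")  // domain restriction purely for illustration
    private String email;
}
```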

A few more examples are:

  • List<@Positive Integer> positiveNumbers;
  • Map<@Valid Team, @Positive Integer> teamSizeMap;

In the preceding example, we want our map to have valid team instances as the key, and the value must be a positive integer:

@Size(max=10) 
private List<String> only10itemsWillBeAllowed;

In the preceding case, we want to have a list containing a maximum of 10 items.

A more complex case would be validating a player list with a maximum of 11, and each player must be a valid instance. Valid would mean meeting all validation constraints put on a player class:

@Size(max=11) 
private List<@Valid Player> players;

The preceding constraints provide a very natural way to put constraints declaratively on your code, which is much more readable and closer to the definition where it's used.

Another way of validating a collection of items is to put the validation constraint near the type parameter. So, while both the following approaches would work, the latter is preferred:

@Valid
private List<Player> players;

It can also be:

private List<@Valid Player> players; //Preferred

Now, Bean Validation also makes use of the Optional class, where you can put validation on the type it holds. For example:

public Optional<@Valid Team> getTeam() { ... }

A few more built-in constraints worth checking out are @NotBlank, @Future, @Past, @Negative, @Pattern, @Min, and @Max. By default, the Bean Validation API has very nice integration with key life cycle events of other specifications. This allows for all of the validation to happen at key stages of an object's life cycle, such as that of JPA or JSF managed beans.
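As a quick illustrative sketch of a few of those built-in constraints on a hypothetical Player class:

```java
import java.time.LocalDate;
import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.Past;

public class Player {

    @NotBlank            // non-null and contains at least one non-whitespace char
    private String name;

    @Past                // must be a date strictly before today
    private LocalDate dateOfBirth;

    @Min(1) @Max(99)     // jersey numbers constrained to a sensible range
    private int jerseyNumber;
}
```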

 

Java EE Security API 1.0

Security, while arguably not close to every developer's heart, sooner or later becomes a critical topic that needs attention. In Java EE, security has generally been supported by servers and also provided by third-party solutions. This is also one of the reasons why Java EE security is considered confusing or non-portable at times. Security is not something new to any of us, and put simply, web applications in general need to establish the identity of the user and then decide if the user is allowed to see or perform an operation. This is called authentication and authorization of resources. Security has evolved over the years from simple form-based authentication or BASIC auth to LDAP and OAuth-based solutions.

If you are wondering why there's a need for security standards in Java EE, then a couple of reasons are to standardize the security mechanism, avoiding vendor-specific configurations when working with security, and to meet modern-day demands. This being a new specification, and owing to various other reasons, it doesn't change things drastically, but instead focuses on standardizing existing security features offered by various Java EE vendors. To ensure that what is already out there doesn't break, the enhancements provide alternative options for configuring security, rather than replacing what might already be in use.

This initiative simplifies the API by allowing for sensible defaults where applicable, and doesn't require server configuration changes, which become a challenge with today's PaaS or cloud-based delivery models. Other features include annotation defaults and integration with other specs such as CDI. There is now an API for authentication and authorization, along with an identity store API. An identity store can take the form of a database, LDAP, or some other custom store. If you haven't heard of LDAP, it's just a protocol to access data from a directory server, which basically stores users. From an API perspective, IdentityStore is an abstraction of a user store. It is used by HttpAuthenticationMechanism implementations to authenticate users and find their groups. Here, a group is used to denote a role to which the user belongs, but unlike a role, think of a group as a more flexible option to map users in and out of. Two methods are provided by the IdentityStore API:

  • validate(Credential)
  • getGroupsByCallerPrincipal(CallerPrincipal)

Support may be provided for either one or both, based on the underlying implementation. So, if the implementation only supports authentication but not authorization, then only the validate(Credential) method would be supported.
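A minimal, purely illustrative IdentityStore sketch supporting only validation (the hard-coded credentials are obviously just for demonstration; a real store would consult a database or LDAP directory):

```java
import java.util.Collections;
import javax.enterprise.context.ApplicationScoped;
import javax.security.enterprise.credential.Credential;
import javax.security.enterprise.credential.UsernamePasswordCredential;
import javax.security.enterprise.identitystore.CredentialValidationResult;
import javax.security.enterprise.identitystore.IdentityStore;

@ApplicationScoped
public class InMemoryIdentityStore implements IdentityStore {

    @Override
    public CredentialValidationResult validate(Credential credential) {
        if (credential instanceof UsernamePasswordCredential) {
            UsernamePasswordCredential login = (UsernamePasswordCredential) credential;
            // Hard-coded check purely for illustration
            if ("admin".equals(login.getCaller())
                    && login.getPassword().compareTo("secret")) {
                return new CredentialValidationResult(
                        "admin", Collections.singleton("admins"));
            }
        }
        return CredentialValidationResult.INVALID_RESULT;
    }
}
```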

The feature list also includes additions related to password aliasing and role mapping, along with CDI support. The reference implementation for security in Java EE is provided by the Soteria project.

The link to GitHub project is https://github.com/javaee-security-spec/soteria.

 

Summary

We have covered quite a few aspects of the Java EE 8 release train, while catching up on new and existing changes. As a developer working on Java EE solutions, these capabilities provide a big boost to one's productivity. The growing adoption of REST APIs, with JSON as the preferred data-interchange format, has led to better support for JSON in the platform. JSON now enjoys the same status as XML in Java EE. The widely used Servlet API has been updated to align with the HTTP/2 standard and its offerings. Other noteworthy enhancements include updates to JAX-RS, JSF, WebSockets, and Server-Sent Events, and a new reactive client API for JAX-RS. Many of these APIs are shaped by Java 8, whose changes have played an influential role in the updates.

The CDI standard is now very well integrated into all the other specifications offered by Java EE. The influence of CDI is not limited to EE alone; it's a welcome entry into Java SE as well. The Bean Validation updates have made adding constraints to code easy to work with. Additionally, there have been maintenance updates to existing APIs, and new additions, including the Java EE Security API and JSON-B, are now available under the umbrella of the Java EE spec.

About the Author

  • Prashant Padmanabhan

    Prashant Padmanabhan is a professional Java developer and solutions architect. He has been developing software since 2002 and is still loving it. Professionally, he has over a decade of experience and considers himself a coding architect, building enterprise-scale software using Java, JEE, and open source technologies put together.


