This chapter includes 18 problems meant to cover in detail Java records, introduced in JDK 16 (JEP 395), and record patterns, introduced as a preview feature in JDK 19 (JEP 405) and as a second preview feature in JDK 20 (JEP 432).

We start by defining a simple Java record. We continue by analyzing a record's internals, what it can and cannot contain, how to use records in streams, how they improve serialization, and so on. We are also interested in how we can use records in Spring Boot applications, including the JPA and jOOQ technologies.

Next, we focus on record patterns for instanceof and switch. We will talk about nested record patterns, guarded record patterns, handling null values in record patterns, and so on.

At the end of this chapter, you'll master Java records. This is great because records are a must-have for any Java developer who wants to adopt the coolest Java features.

Problems

Use the following problems to test your programming prowess on Java records. I strongly encourage you to give each problem a try before you turn to the solutions and download the example programs:

  1. Declaring a Java record: Write an application that exemplifies the creation of a Java record. Moreover, provide a short description of the artifacts generated by the compiler for a record behind the scenes.
  2. Introducing the canonical and compact constructors for records: Explain the role of a record's built-in canonical and compact constructors. Provide examples of when it makes sense to declare such constructors explicitly.
  3. Adding more artifacts in a record: Provide a meaningful list of examples of adding explicit artifacts to Java records (for instance, adding instance methods, static artifacts, and so on).
  4. Iterating what we cannot have in a record: Exemplify what we cannot have in a record (for instance, we cannot have explicit private fields) and explain why.
  5. Defining multiple...

88. Implementing interfaces in records

Java records cannot extend another class, but they can implement any interface, exactly like a typical class. Let's consider the following interface:

public interface PestInspector {
  public default boolean detectPest() {
    return Math.random() > 0.5d;
  }
  public void exterminatePest();
}

The following snippet of code is a straightforward usage of this interface:

public record MelonRecord(String type, float weight)  
       implements PestInspector {
      
  @Override
  public void exterminatePest() { 
     
    if (detectPest()) {
      System.out.println("All pests have been exterminated");
    } else {
      System.out.println(
        "This melon is clean, no pests have been found");
    }
  }
}

Notice that the code overrides the abstract method exterminatePest() and calls the default method detectPest().
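
A quick usage sketch (the type and weight values are illustrative):

MelonRecord melon = new MelonRecord("Cantaloupe", 2200f);
melon.exterminatePest();

This prints one of the two messages, depending on the random pest detection.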

89. Understanding records serialization

In order to understand how Java records are serialized/deserialized, let's draw a parallel between classical code based on plain Java classes and the same code expressed via the Java record's syntactic sugar.

So, let's consider the following two plain Java classes (we have to explicitly implement the Serializable interface because, in the second part of this problem, we want to serialize/deserialize these classes):

public class Melon implements Serializable {
  private final String type;
  private final float weight;
  public Melon(String type, float weight) {
               
    this.type = type;
    this.weight = weight;
  }
  // getters, hashCode(), equals(), and toString()
}

And here is the MelonContainer class, which uses the previous Melon class:

public class MelonContainer implements Serializable {
   
  private final LocalDate expiration;
  private final String batch;
  private final Melon melon;
  public MelonContainer...
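
For comparison, the record-based variant of the Melon class collapses to a one-liner (a sketch; note that a record must still opt in to serialization by implementing Serializable explicitly):

public record MelonRecord(String type, float weight)
    implements Serializable {}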

Summary

The goal of this chapter was to deeply cover Java records and record patterns. We have assigned the same importance to both the theoretical and the practical parts so that, in the end, there are no secrets regarding the use of these two topics.

110. Summing two arrays unrolled via the Vector API

In this problem, we take the example of summing two arrays from the previous problem and rewrite the loop in an unrolled fashion.

Loop unrolling can be applied manually (as we will do here) or by the compiler. It is an optimization technique meant to reduce the loop iteration count.

In our case, in order to reduce the loop iteration count, we use multiple vectors per iteration, repeating the sequence of loop-body statements responsible for summing the items. If we know that our arrays are long enough to always require at least 4 loop iterations, then rewriting the code as follows will reduce the number of loop iterations by a factor of 4:

public static void sumUnrolled(int x[], int y[], int z[]) {
 int width = VS256.length();
 int i = 0;
 for (; i <= (x.length - width * 4); i += width * 4) {
  IntVector s1 = IntVector.fromArray(VS256, x, i)
      .add(IntVector.fromArray(VS256, y, i));
  IntVector s2 = IntVector.fromArray(VS256...
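
A possible completion of this method (a sketch, assuming VS256 is declared as IntVector.SPECIES_256 and the jdk.incubator.vector imports are in place; a scalar tail loop handles any leftover elements):

private static final VectorSpecies<Integer> VS256
  = IntVector.SPECIES_256;
public static void sumUnrolled(int x[], int y[], int z[]) {
  int width = VS256.length();
  int i = 0;
  for (; i <= (x.length - width * 4); i += width * 4) {
    // four independent vector additions per iteration
    IntVector s1 = IntVector.fromArray(VS256, x, i)
        .add(IntVector.fromArray(VS256, y, i));
    IntVector s2 = IntVector.fromArray(VS256, x, i + width)
        .add(IntVector.fromArray(VS256, y, i + width));
    IntVector s3 = IntVector.fromArray(VS256, x, i + 2 * width)
        .add(IntVector.fromArray(VS256, y, i + 2 * width));
    IntVector s4 = IntVector.fromArray(VS256, x, i + 3 * width)
        .add(IntVector.fromArray(VS256, y, i + 3 * width));
    s1.intoArray(z, i);
    s2.intoArray(z, i + width);
    s3.intoArray(z, i + 2 * width);
    s4.intoArray(z, i + 3 * width);
  }
  // scalar tail for the remaining elements
  for (; i < x.length; i++) {
    z[i] = x[i] + y[i];
  }
}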

111. Benchmarking the Vector API

Benchmarking the Vector API can be accomplished via JMH. Let’s consider the Java arrays x, y, z, w, and k, each of 50,000,000 integers, and the following computation:

z[i] = x[i] + y[i];
w[i] = x[i] * z[i] * y[i];
k[i] = z[i] + w[i] * y[i];

So, the final result is stored in a Java array named k. Now, let’s consider the following benchmark, which contains four different implementations of this computation (using a mask, using no mask, unrolled, and plain scalar Java with arrays):

@OutputTimeUnit(TimeUnit.MILLISECONDS)
@BenchmarkMode({Mode.AverageTime, Mode.Throughput})
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 5, time = 1)
@State(Scope.Benchmark)
@Fork(value = 1, warmups = 0, 
    jvmArgsPrepend = {"--add-modules=jdk.incubator.vector"})
public class Main {
  private static final VectorSpecies<Integer> VS 
    = IntVector.SPECIES_PREFERRED;
  ...
  @Benchmark
  public void computeWithMask(Blackhole blackhole...
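
The full benchmark is in the bundled code; as an illustration only, a masked implementation of the computation could look like this (a sketch, not the book's exact code; the mask guards the tail where fewer than a full vector of lanes remains):

@Benchmark
public void computeWithMask(Blackhole blackhole) {
  for (int i = 0; i < x.length; i += VS.length()) {
    VectorMask<Integer> m = VS.indexInRange(i, x.length);
    IntVector xv = IntVector.fromArray(VS, x, i, m);
    IntVector yv = IntVector.fromArray(VS, y, i, m);
    IntVector zv = xv.add(yv);          // z = x + y
    IntVector wv = xv.mul(zv).mul(yv);  // w = x * z * y
    IntVector kv = zv.add(wv.mul(yv));  // k = z + w * y
    kv.intoArray(k, i, m);
  }
  blackhole.consume(k);
}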

112. Applying the Vector API to compute FMA

In a nutshell, Fused Multiply Add (FMA) is the mathematical computation (a*b) + c, which is heavily exploited in matrix multiplications. That’s all we need to cover for this problem, but if you need a primer on FMA, consider Java Coding Problems, First Edition, Chapter 1, problem 38.

Implementing FMA via the Vector API can be done via the fma(float b, float c) or fma(Vector<Float> b, Vector<Float> c) operation; the latter is the one you’ll see in an example shortly.

Let’s assume that we have the following two arrays:

float[] x = new float[]{1f, 2f, 3f, 5f, 1f, 8f};
float[] y = new float[]{4f, 5f, 2f, 8f, 5f, 4f};

Computing FMA(x, y) accumulates each product into a running sum: (1 x 4) + 0 = 4 → (2 x 5) + 4 = 14 → (3 x 2) + 14 = 20 → (5 x 8) + 20 = 60 → (1 x 5) + 60 = 65 → (8 x 4) + 65 = 97. So, FMA(x, y) = 97. Expressing this sequence via the Vector API can be done as shown in the following code:

private static...
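
A sketch of this computation (assuming FloatVector.SPECIES_PREFERRED, with a scalar Math.fma() tail for elements that don't fill a full vector):

private static final VectorSpecies<Float> VS
  = FloatVector.SPECIES_PREFERRED;
public static float vectorFma(float[] x, float[] y) {
  FloatVector acc = FloatVector.zero(VS);
  int i = 0;
  int upperBound = VS.loopBound(x.length);
  for (; i < upperBound; i += VS.length()) {
    FloatVector xv = FloatVector.fromArray(VS, x, i);
    FloatVector yv = FloatVector.fromArray(VS, y, i);
    acc = xv.fma(yv, acc); // acc = (xv * yv) + acc, per lane
  }
  // sum the lanes, then fold in the scalar tail
  float result = acc.reduceLanes(VectorOperators.ADD);
  for (; i < x.length; i++) {
    result = Math.fma(x[i], y[i], result);
  }
  return result;
}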

113. Multiplying matrices via the Vector API

Let’s consider two 4x4 matrices denoted as X and Y. Z = X * Y is computed as follows:

Figure 5.10: Multiplying two matrices (X * Y = Z)

Multiplying X by Y means computing, for each element z(i, j) of Z, the dot product of row i from X and column j from Y. For instance, (1 x 3) + (2 x 7) + (5 x 5) + (4 x 5) = 3 + 14 + 25 + 20 = 62. Basically, we repeatedly apply the FMA computation and fill up Z with the results.

In this context, and based on the previous problem about computing FMA, we can produce the following code for multiplying X with Y:

private static final VectorSpecies<Float> VS 
  = FloatVector.SPECIES_PREFERRED;
public static float[] mulMatrix(
    float[] x, float[] y, int size) {
  final int upperBound = VS.loopBound(size);
  float[] z = new float[size * size];
  for (int i = 0; i < size; i++) {
    for (int k = 0; k < size; k++) {
      float elem = x[i * size + k]...
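
A possible completion of the inner loops (a sketch: it broadcasts x[i][k], uses FMA to accumulate into row i of z, and finishes each row stripe with a scalar tail):

for (int i = 0; i < size; i++) {
  for (int k = 0; k < size; k++) {
    float elem = x[i * size + k];
    FloatVector ev = FloatVector.broadcast(VS, elem);
    int j = 0;
    for (; j < upperBound; j += VS.length()) {
      FloatVector yv = FloatVector.fromArray(VS, y, k * size + j);
      FloatVector zv = FloatVector.fromArray(VS, z, i * size + j);
      // z[i][j..] += x[i][k] * y[k][j..]
      ev.fma(yv, zv).intoArray(z, i * size + j);
    }
    // scalar tail for the remaining columns
    for (; j < size; j++) {
      z[i * size + j] = Math.fma(elem, y[k * size + j], z[i * size + j]);
    }
  }
}
return z;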

114. Hooking the image negative filter with the Vector API

An image is basically a matrix of pixels represented in the Alpha, Red, Green, Blue (ARGB) spectrum. For instance, an image of 232x290 can be represented as a matrix of 67,280 pixels. Applying specific filters (sepia, negative, grayscale, and so on) to an image typically requires processing each pixel from this matrix and performing certain calculations. For instance, the algorithm for applying the negative filter to an image works as follows:

Figure 5.11: Apply the negative filter effect to an image

For each pixel, we extract the color components A, R, G, and B. We subtract the R, G, and B values from 255, and finally, we set the new value to the current pixel.

Let’s assume that we have an array (pixel[]) containing all pixels of an image. Next, we want to pass pixel[] as an argument to a method powered by the Vector API capable of applying the negative filter and setting the new values directly...
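
Before vectorizing, the scalar per-pixel computation is worth seeing (a sketch assuming the standard TYPE_INT_ARGB packing, where each int holds the four 8-bit components):

int p = pixel[i];
int a = (p >> 24) & 0xFF;          // alpha stays untouched
int r = 255 - ((p >> 16) & 0xFF);  // invert red
int g = 255 - ((p >> 8) & 0xFF);   // invert green
int b = 255 - (p & 0xFF);          // invert blue
pixel[i] = (a << 24) | (r << 16) | (g << 8) | b;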

115. Dissecting factory methods for collections

Using factory methods for collections is a must-have skill. It is very convenient to be able to quickly and effortlessly create and populate unmodifiable/immutable collections before putting them to work.

Factory methods for maps

For instance, before JDK 9, creating an unmodifiable map could be accomplished like this:

Map<Integer, String> map = new HashMap<>();
map.put(1, "Java Coding Problems, First Edition");
map.put(2, "The Complete Coding Interview Guide in Java");
map.put(3, "jOOQ Masterclass");
Map<Integer, String> imap = Collections.unmodifiableMap(map);

This is useful if, at some point in time, you need an unmodifiable map from a modifiable one. Otherwise, you can take a shortcut as follows (this is known as the double-brace initialization technique and, generally, an anti-pattern):

Map<Integer, String> imap = Collections.unmodifiableMap(
  new HashMap...
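
Starting with JDK 9, the same immutable map can be created in a single expression via the Map.of() factory method (which accepts up to 10 key-value pairs; beyond that, use Map.ofEntries()):

Map<Integer, String> imap = Map.of(
  1, "Java Coding Problems, First Edition",
  2, "The Complete Coding Interview Guide in Java",
  3, "jOOQ Masterclass");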

116. Getting a list from a stream

Collecting a Stream into a List is a popular task that occurs all over the place in applications that manipulate streams and collections.

In JDK 8, collecting a Stream into a List can be done via the toList() collector as follows:

List<File> roots = Stream.of(File.listRoots())
  .collect(Collectors.toList());

Starting with JDK 10, we can rely on the toUnmodifiableList() collector (for maps, use toUnmodifiableMap(), and for sets, toUnmodifiableSet()):

List<File> roots = Stream.of(File.listRoots())
  .collect(Collectors.toUnmodifiableList());

Obviously, the returned list is an unmodifiable/immutable list.

JDK 16 has introduced the following toList() default method in the Stream interface:

default List<T> toList() {
  return (List<T>) Collections.unmodifiableList(
    new ArrayList<>(Arrays.asList(this.toArray())));
}

Using this method to collect a Stream into an unmodifiable/immutable...
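
For instance, the earlier example shrinks to:

List<File> roots = Stream.of(File.listRoots()).toList();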

117. Handling map capacity

Let’s assume that we need a List capable of holding 260 items. We can do it as follows:

List<String> list = new ArrayList<>(260);

The array underlying ArrayList is created directly to accommodate 260 items. In other words, we can insert 260 items without worrying about resizing or enlarging the list several times in order to hold these 260 items.

Following this logic, we can reproduce it for a map as well:

Map<Integer, String> map = new HashMap<>(260);

So, now we can assume that we have a map capable of accommodating 260 mappings. Actually, no, this assumption is not true! A HashMap works on the hashing principle and is initialized with an initial capacity (16 if no explicit initial capacity is provided) representing the number of internal buckets and a default load factor of 0.75. What does that mean? It means that when a HashMap reaches 75% of its current capacity, it is doubled in size and a rehashing...
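
So, to hold 260 mappings without triggering a resize, the initial capacity must account for the load factor. A sketch (JDK 19+ also offers HashMap.newHashMap(int numMappings), which does this computation for you):

// capacity must satisfy: numMappings <= capacity * loadFactor
Map<Integer, String> map = new HashMap<>((int) Math.ceil(260 / 0.75));
// or, starting with JDK 19:
Map<Integer, String> map19 = HashMap.newHashMap(260);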

118. Tackling Sequenced Collections

The Sequenced Collections API was added as a final feature in JDK 21 under JEP 431. Its main goal is to make the navigation of Java collections easier by providing a common API to all collections having a well-defined encounter order.

A Java collection with a well-defined encounter order has a well-defined first element, second element, and so on, up to the last element. The encounter order is the order in which an Iterator traverses the elements of a collection (a list, a set, a sorted set, a map, and so on). The encounter order may be stable over time (as for lists) or not (as for sets).

This API consists of three interfaces named SequencedCollection (valid for any collection that has a well-defined encounter order), SequencedSet (extends SequencedCollection and Set to provide support for Java sets), and SequencedMap (extends Map to give support to any Java map that has a well-defined encounter order). In the following diagram, you can see...
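
For instance, SequencedCollection exposes symmetric operations for both ends, plus a reversed view (a quick sketch; in JDK 21, ArrayList qualifies because List extends SequencedCollection):

SequencedCollection<Integer> numbers = new ArrayList<>(List.of(1, 2, 3));
numbers.addFirst(0);  // [0, 1, 2, 3]
numbers.addLast(4);   // [0, 1, 2, 3, 4]
numbers.getFirst();   // 0
numbers.getLast();    // 4
SequencedCollection<Integer> reversed = numbers.reversed(); // [4, 3, 2, 1, 0]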

119. Introducing the Rope data structure

Prerequisite: Starting with this problem, we will cover a bunch of complex data structures that require previous experience with binary trees, lists, heaps, queues, stacks, and so on. If you are a novice in the data structure field, then I strongly recommend you postpone the following problems until you manage to read The Complete Coding Interview Guide in Java, which provides deep coverage of these preliminary topics.

When we need to handle large amounts of text (for instance, if we were developing a text editor or a powerful text search engine), we have to deal with a significant number of complex tasks. Among these tasks, we have to consider appending/concatenating strings and memory consumption.

The Rope data structure is a special binary tree that aims to improve string operations while using memory efficiently (which is especially useful for large strings). Its Big O goals are listed in the following figure:

Figure 5...
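
The backbone of a Rope is a binary-tree node whose weight is the length of the string held in its left subtree (a sketch; the book's exact layout may differ, and only leaves hold text fragments):

private final class Node {
  private Node left;
  private Node right;
  private int weight;    // length of the string in the left subtree
  private String value;  // non-null only in leaf nodes
}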

120. Introducing the Skip List data structure

The Skip List data structure is a probabilistic data structure built on top of a linked list. A Skip List uses an underlying linked list to keep a sorted list of items, but it also provides the capability to skip certain items in order to speed up operations such as insert, delete, and find. Its Big O goals are listed in the following figure:

Figure 5.17: Big O for Skip List

A Skip List has two types of layers. The base layer (or the lower layer, or layer 0) consists of a regular linked list that holds the sorted list of all items. The rest of the layers contain sparse items and act as "express lines" meant to speed up searching, inserting, and deleting items. The following figure helps us to visualize a Skip List with three layers:

Figure 5.18: Skip List sample

So, this Skip List holds on layer 0 the items 1, 2, 3, 4, 5, 8, 9, 10, 11, and 34 and has two express lines (layer 1 and layer 2) containing...
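
A Skip List node can be sketched as a value plus one forward pointer per layer it participates in (illustrative, not the book's exact code):

private final class Node {
  private final int value;
  private final Node[] forward; // forward[i] = next node on layer i
  Node(int value, int levels) {
    this.value = value;
    this.forward = new Node[levels];
  }
}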

121. Introducing the K-D Tree data structure

A K-D Tree (also referred to as a K-dimensional tree) is a flavor of Binary Search Tree (BST) dedicated to holding and organizing points/coordinates in a K-dimensional space (2-D, 3-D, and so on). Each node of a K-D Tree holds a point in that multi-dimensional space. The following snippet shapes a node of a 2-D Tree:

private final class Node {
  private final double[] coords;
  private Node left;
  private Node right;
  public Node(double[] coords) {
    this.coords = coords;
  }
  ...
}

Instead of a double[] array, you may prefer java.awt.geom.Point2D, which is dedicated to representing a location in (x, y) coordinate space.
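
For intuition, insertion into a 2-D Tree alternates the comparison axis at each level; a minimal sketch (depth % 2 selects x or y):

private Node insert(Node node, double[] coords, int depth) {
  if (node == null) {
    return new Node(coords);
  }
  int axis = depth % 2; // 0 compares x, 1 compares y
  if (coords[axis] < node.coords[axis]) {
    node.left = insert(node.left, coords, depth + 1);
  } else {
    node.right = insert(node.right, coords, depth + 1);
  }
  return node;
}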

Commonly, K-D Trees are useful for performing different kinds of searches such as nearest-neighbor searches and range queries. For instance, let’s assume a 2-D space and a bunch of (x, y) coordinates in this space:

double[][] coords = {
  {3, 5}, {1, 4}, {5, 4...

122. Introducing the Zipper data structure

The Zipper data structure is meant to facilitate cursor-like navigation capabilities over another data structure such as a tree. Moreover, it may provide capabilities for manipulating the tree like adding nodes, removing nodes, and so on.

The Zipper is created on top of a tree and is characterized by the current position of the cursor and the current range (the current visibility area). At any moment, the Zipper doesn’t see or act on the entire tree; its actions are available only on a subtree or a range of the tree relative to its current position. Modifications accomplished via the Zipper are visible only in this range, not in the entire tree.

In order to navigate and determine the current range, a Zipper must be aware of the tree structure. For instance, it must be aware of all the children of each node, which is why we start from an interface that must be implemented by any tree that wants to take advantage of a...
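
Such an interface can be as simple as exposing a node's children (a hypothetical sketch; the book's actual interface may differ):

public interface Zippable {
  // the children of this node, in encounter order
  List<Zippable> children();
}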

123. Introducing the Binomial Heap data structure

A Binomial Heap data structure is a set composed of Binomial Trees. Each Binomial Tree is a Min Heap, which means that it follows the min-heap property: every parent is less than or equal to its children, so the minimum item sits in the root (more details are available in The Complete Coding Interview Guide in Java book).

In a nutshell, a Binomial Tree is ordered and typically defined in a recursive fashion. It is denoted as Bk, where k implies the following properties:

  • A Binomial Tree Bk has 2^k nodes.
  • The height of a Binomial Tree is equal to k.
  • The root of a Binomial Tree has the degree k, which is the greatest degree.

A B0 Binomial Tree has a single node. A B1 Binomial Tree has two B0 Trees, one of which is a left subtree of the other. A B2 Tree has two B1 Trees, one of which is the left subtree of the other. In general, a Bk Binomial Tree contains two Bk-1 Binomial Trees...
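
A classical node layout for a Binomial Heap links each node to its parent, its leftmost child, and its next sibling (a sketch following the textbook representation):

private final class Node {
  private int key;
  private int degree;   // number of children
  private Node parent;
  private Node child;   // leftmost child
  private Node sibling; // next sibling (or next root in the root list)
}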

124. Introducing the Fibonacci Heap data structure

A Fibonacci Heap is a flavor of Binomial Heap with excellent performance in amortized time for operations such as insert, extract minimum, and merge. It is an optimal choice for implementing priority queues. A Fibonacci Heap is made of trees, and each tree has a single root and multiple children arranged in a heap-ordered fashion. The root node with the smallest key is always placed at the beginning of the list of trees.

It is called a Fibonacci Heap because each tree of order k has at least F(k+2) nodes, where F(k+2) is the (k+2)th Fibonacci number.

In the following figure, you can see a Fibonacci Heap sample:

Figure 5.39: Fibonacci Heap sample

The main operations in a Fibonacci Heap are (Big O represents the amortized time): insert (O(1)), decrease key (O(1)), find the minimum (O(1)), extract minimum (O(log n)), deletion (O(log n)), and merge (O(1)). You can find an implementation of these operations in the bundled...

125. Introducing the Pairing Heap data structure

The Pairing Heap is a flavor of Binomial Heap with the capability of self-adjusting/rearranging to keep itself balanced. It has very good performance in amortized time and is a good fit for the task of implementing priority queues.

A Pairing Heap is a pairing tree with a root and children. Each heap of a Pairing Heap represents a value and has a set of children that are also heaps. The value of a heap is always less than (min-heap property) or greater than (max-heap property) the value of its children heaps.

In the following figure, you can see a Min Pairing Heap:

Figure 5.40: A Min Pairing Heap sample

The main operations in a Pairing Heap are: insert (O(1)), decrease key (actual time: O(1), amortized time: O(log n)), find the minimum (O(1)), extract the minimum (actual time: O(n), amortized time: O(log n)), and merge (actual time: O(1), amortized time: O(log n)). You can find an implementation of these operations...

126. Introducing the Huffman Coding data structure

The Huffman Coding algorithm was developed by David A. Huffman in the early 1950s and can easily be understood via an example. Let’s assume that we have the string shown in the following figure.

Figure 5.41: Initial string

Let’s assume that each character needs 8 bits to be represented. Since we have 14 characters, we can say that we need 8*14=112 bits to send this string over a network.

Encoding the string

The idea of Huffman Coding is to compress (shrink) such strings to a smaller size. For this, we create a tree of character frequencies. A Node of this tree can be shaped as follows:

public class Huffman {
  private Node root;
  private String str;
  private StringBuilder encodedStr;
  private StringBuilder decodedStr;
  private final class Node {
    private Node left;
    private Node right;
    private final Character character;
    private final Integer frequency;
    //  constructors
  }
  ...
}...
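
The first step is counting how often each character occurs (a quick sketch):

Map<Character, Integer> frequency = new HashMap<>();
for (char c : str.toCharArray()) {
  frequency.merge(c, 1, Integer::sum);
}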

127. Introducing the Splay Tree data structure

A Splay Tree is a flavor of Binary Search Tree (BST). Its particularity is that it is a self-adjusting tree that places recently accessed items at the root level.

The splaying operation or splaying an item is a process that relies on tree rotations meant to bring the item to the root position. Every operation on the tree is followed by splaying.

So, the goal of splaying is to bring the most recently used item closer to the root. This means that subsequent operations on these items will be performed faster.

The splaying operation relies on six rotations:

  • Zig rotation – the tree rotates to the right (every node rotates to the right)
  • Zag rotation – the tree rotates to the left (every node rotates to the left)
  • Zig-Zig rotation – double Zig rotation (every node moves twice to the right)
  • Zag-Zag rotation – double Zag rotation (every node moves twice to...

128. Introducing the Interval Tree data structure

An Interval Tree is a flavor of Binary Search Tree (BST). Its particularity is that it holds intervals of values. Besides the interval itself, a Node of an Interval Tree holds the maximum value of the current interval and the maximum value of the subtree rooted at this Node.

In code form, an Interval Tree is shaped as follows:

public class IntervalTree {
  private Node root;
  public static final class Interval {
    private final int min, max;
    public Interval(int min, int max) {
      this.min = min;
      this.max = max;
    }
    ...
  }
  private final class Node {
    private final Interval interval;
    private final Integer maxInterval;
    private Node left;
    private Node right;
    private int size;
    private int maxSubstree;
    Node(Interval interval, Integer maxInterval) {
      this.interval = interval;
      this.maxInterval = maxInterval;
      this.size = 1;
      this.maxSubstree...
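
The fundamental query on such a tree is the overlap test: two intervals overlap when each one starts before the other one ends (a sketch, as a method of IntervalTree):

private boolean overlaps(Interval a, Interval b) {
  return a.min <= b.max && b.min <= a.max;
}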

129. Introducing the Unrolled Linked List data structure

An Unrolled Linked List is a flavor of a linked list that stores arrays (multiple items). Each node of an Unrolled Linked List can store an array. It is like combining the powers of an array with those of a linked list. In other words, an Unrolled Linked List is a data structure with a low memory footprint and high performance on insertion and deletion.

Insertion and deletion from an Unrolled Linked List have different implementations.

For instance, we can insert arrays (insert(int[] arr)), which means that for each insertion, we create a new node and insert that array into it.

Deleting an item is equivalent to removing the item from the specified index in the proper array. If, after deletion, the array is empty, then it is removed from the list as well.

Another approach assumes that the Unrolled Linked List has a fixed capacity (each node holds an array of this capacity). Further, we insert items one by one...
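
In the fixed-capacity approach, a node can be sketched like this (illustrative field names):

private final class Node {
  private int[] items;  // block of up to 'capacity' elements
  private int numItems; // slots currently in use
  private Node next;
}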

130. Implementing join algorithms

Join algorithms are typically used in databases, mainly when we have two tables in a one-to-many relationship and we want to fetch a result set containing this mapping based on a join predicate. In the following figure, we have the author and book tables. An author can have multiple books and we want to join these tables to obtain a result set as the third table.

Figure 5.47: Joining two tables (author and book)

There are three popular join algorithms for solving this problem: Nested Loop Join, Hash Join, and Sort Merge Join. While databases are optimized to choose the most appropriate join for the given query, let’s try to implement them in plain Java on the following two tables expressed as records:

public record Author(int authorId, String name) {}
public record Book(int bookId, String title, int authorId) {}
List<Author> authorsTable = Arrays.asList(
  new Author(1, "Author_1"), new Author(2, "Author_2...
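
The simplest of the three, the Nested Loop Join, scans every author/book pair and keeps the rows matching the join predicate, which costs O(n * m). A sketch over the tables above (the ResultRow record is illustrative):

public record ResultRow(String authorName, String title) {}
public static List<ResultRow> nestedLoopJoin(
    List<Author> authors, List<Book> books) {
  List<ResultRow> result = new ArrayList<>();
  for (Author a : authors) {              // outer table
    for (Book b : books) {                // inner table
      if (a.authorId() == b.authorId()) { // join predicate
        result.add(new ResultRow(a.name(), b.title()));
      }
    }
  }
  return result;
}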

Summary

In this chapter, we covered a lot of interesting topics. We started with the new Vector API for parallel data processing, and then we continued with a bunch of cool data structures such as Zipper, K-D Tree, Skip List, and Binomial Heap. We finished with a nice overview of the three main join algorithms. Moreover, we covered the JDK 21 Sequenced Collections API.
