What is Hazelcast?

by Mat Johns | August 2013 | Open Source Web Development

This article by Mat Johns, the author of Getting Started with Hazelcast, gives a brief introduction to Hazelcast. By the end of this article, you will see that Hazelcast is a radical new approach to data, designed from the ground up around distribution, and that it embraces a new scalable way of thinking. Its major feature is its masterless nature; each node is configured to be functionally the same.

Most, if not all, applications need to store some data, some applications far more than others. If you are holding this article in your eager hands and starting to flip through its pages, it is probably safe to assume that you have previously worked to architect, develop, or support applications towards the latter end of that scale. We could imagine that you are all too painfully familiar with the common pitfalls and issues that tend to crop up around scaling or distributing your data layer. But to make sure we are all up to speed, in this article, we shall examine:

  • Traditional approaches to data persistence
  • How caches have helped improve performance, but bring about their own problems
  • Hazelcast's fresh approach to the problem
  • A brief overview of its generic capabilities
  • Summary of what type of problems we might solve using it


Starting out as usual

In most modern software systems, data is the key. For more traditional architectures, the role of persisting and providing access to your system's data tends to fall to a relational database. Typically this is a monolithic beast, perhaps with a degree of replication, although this tends to be more for resilience rather than performance.

For example, here is what a traditional architecture might look like (and hopefully it looks rather familiar):

This presents us with an issue in terms of application scalability, in that it is relatively easy to scale our application layer by throwing more hardware at it to increase the processing capacity. But the monolithic constraints of our data layer only allow us to do this so far before diminishing returns or resource saturation stunt further performance increases; so what can we do to address this?

In the past and in legacy architectures, the only solution would be to increase the performance capability of our database infrastructure, potentially by buying a bigger, faster server or by further tweaking and fettling the utilization of currently available resources. Both options are drastic, in terms of financial cost, manpower, or both; so what else could we do?

Data deciding to hang around

In order for us to gain a bit more performance out of our existing setup, we can hold copies of our data away from the primary database and use these in preference wherever possible. There are a number of different strategies we could adopt, from transparent second-level caching layers to external key-value object storage. The detail and exact use of each varies significantly depending on the technology and its place in the architecture, but the main aim of these systems is to sit alongside the primary database infrastructure and protect it from excessive load. This tends to improve the performance of the primary database by reducing the overall dependency on it. However, this strategy tends to be valuable only as a short-term solution, effectively buying us a little more time before the database once again starts to reach saturation. The other downside is that it only protects our database from read-based load; if our application is predominantly write-heavy, this strategy has very little to offer.
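
To make that concrete, a minimal cache-aside read might look like the following sketch; the in-memory map stands in for whichever caching technology is chosen, and fetchFromDatabase is a hypothetical placeholder for a real data access call:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CacheAsideExample {
        // Local in-memory copy sitting alongside the primary database
        private final Map<String, String> cache = new ConcurrentHashMap<String, String>();

        public String findUser(String userId) {
            // Use the cached copy in preference wherever possible...
            String user = cache.get(userId);
            if (user == null) {
                // ...and only fall back to the primary database on a miss
                user = fetchFromDatabase(userId);
                cache.put(userId, user);
            }
            return user;
        }

        private String fetchFromDatabase(String userId) {
            // Placeholder for a real JDBC/ORM lookup against the primary database
            return "user-" + userId;
        }
    }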

So our expanded architecture could look a bit like the following figure:

Therein lies the problem

However, in insulating the database from the read load, we have introduced a new problem in the form of cache consistency: how does our local data cache deal with data changing underneath it within the primary database? The answer is rather depressing: it can't! The exact manifestation of any issues will largely depend on the data needs of the application and how frequently the data changes; but typically, caching systems will operate in one of the following two modes to combat the problem:

  • Time-bound cache: Holds entries for a defined period (time-to-live or TTL)
  • Write-through cache: Holds entries until they are invalidated by subsequent updates

Time-bound caches almost always have consistency issues, but at least the amount of time for which an issue can persist is limited to the expiry time of each entry. However, we must consider the application's access to this data, because if a particular entry is accessed less frequently than its cache expiry time, the cache is providing no real benefit.

Write-through caches are consistent in isolation and can be configured to offer strict consistency, but if multiple write-through caches exist within the overall architecture, there will be consistency issues between them. We can avoid this by having a more intelligent cache that features a communication mechanism between nodes, allowing them to propagate entry invalidations to each other.

In practice, an ideal cache would feature a combination of both: entries would be held for a known maximum time, but invalidations would also be passed around as changes are made.
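
As a rough illustration only (deliberately not any particular product's API), such a combined cache might stamp each entry with a creation time to enforce the time bound, while also exposing an invalidation hook that peer caches can call as changes are made:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CombinedCache<K, V> {
        private static class Entry<V> {
            final V value;
            final long createdAt = System.currentTimeMillis();

            Entry(V value) {
                this.value = value;
            }
        }

        private final Map<K, Entry<V>> entries = new ConcurrentHashMap<K, Entry<V>>();
        private final long ttlMillis;

        public CombinedCache(long ttlMillis) {
            this.ttlMillis = ttlMillis;
        }

        public V get(K key) {
            Entry<V> entry = entries.get(key);
            // Time bound: expire entries after a known maximum time
            if (entry == null || System.currentTimeMillis() - entry.createdAt > ttlMillis) {
                entries.remove(key);
                return null;
            }
            return entry.value;
        }

        public void put(K key, V value) {
            entries.put(key, new Entry<V>(value));
        }

        // Invalidation hook: called when another cache node reports a change
        public void invalidate(K key) {
            entries.remove(key);
        }
    }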

So our evolved architecture would look a bit like the following figure:

So far we've had a look through the general issues in scaling our data layer and introduced strategies to help combat the trade-offs we will encounter along the way; however, the real world isn't quite as simple. There are various cache servers and in-memory database products in this area, but most of these are stand-alone single instances, perhaps with some degree of distribution bolted on or provided by other supporting technologies. This tends to bring about the same issues we experienced with just our primary database: if the product is a single instance we could encounter resource saturation or capacity issues, and if the distribution doesn't provide consistency control, we could end up with inconsistent data, which might harm our application.

Breaking the mould

Hazelcast is a radical new approach to data, designed from the ground up around distribution. It embraces a new scalable way of thinking; in that data should be shared around for both resilience and performance, while allowing us to configure the trade-offs surrounding consistency as the data requirements dictate.

The first major feature to understand about Hazelcast is its masterless nature; each node is configured to be functionally the same. The oldest node in the cluster is the de facto leader and manages the membership, automatically deciding which node is responsible for which data. As new nodes join or drop out, the process is repeated and the cluster rebalances accordingly. This makes Hazelcast incredibly simple to get up and running, as the system is self-discovering, self-clustering, and works straight out of the box.
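
If the defaults suit, getting a node up and running is genuinely trivial; every JVM that executes the same few lines discovers the others and the cluster forms itself. A minimal sketch using the default configuration:

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class StartNode {
        public static void main(String[] args) {
            // Every node runs exactly the same code; there is no master to configure
            HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());

            // Start this class in several JVMs and watch the membership grow
            System.out.println("Cluster members: " + hz.getCluster().getMembers());
        }
    }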

However, the second feature to remember is that we are persisting data entirely in-memory; this makes it incredibly fast, but the speed comes at a price. When a node is shut down, all the data that was held on it is lost. We combat this risk to resilience through replication, by holding enough copies of each piece of data across multiple nodes so that, in the event of a failure, the overall cluster suffers no data loss. By default, the standard backup count is 1, so we can immediately enjoy basic resilience. But don't pull the plug on more than one node at a time until the cluster has reacted to the change in membership and re-established the appropriate number of backup copies of the data.
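
The backup count is configurable per data structure. As a hedged sketch (the map name here is purely illustrative), raising a map's backup count from the default of 1 to 2 means two other nodes each hold a copy of every entry:

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class BackupConfigExample {
        public static void main(String[] args) {
            Config config = new Config();

            // Keep two backup copies of each entry in the "capitals" map on other
            // nodes, so losing a node (or even two at once) costs us no data
            config.getMapConfig("capitals").setBackupCount(2);

            HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
            hz.getMap("capitals").put("GB", "London");
        }
    }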

So when we introduce our new masterless distributed cluster, we get something like the following figure:

We previously identified that multi-node caches tend to suffer from either saturation or consistency issues. In the case of Hazelcast, each node owns a number of partitions of the overall data, so the load is fairly spread across the cluster. Hence, any saturation would occur at the cluster level rather than on any individual node, and we can address it simply by adding more nodes. In terms of consistency, by default the backup copies of the data are internal to Hazelcast and not directly used, so we enjoy strict consistency. This does mean that we have to interact with a specific node to retrieve or update a particular piece of data; however, exactly which node that is remains an internal operational detail that can vary over time, and we as developers never actually need to know it.

If we imagine that our data is split into a number of partitions, that each partition slice is owned by one node and backed up on another, we could then visualize the interactions like the following figure:

This means that for data belonging to Partition 1, our application will have to communicate with Node 1, with Node 2 for data belonging to Partition 2, and so on. The slicing of the data into partitions is dynamic; in practice, where there are more partitions than nodes, each node will own a number of different partitions and hold backups of others. As we have mentioned before, all of this is an internal operational detail that our application does not need to know, but it is important that we understand what is going on behind the scenes.
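
Should we ever be curious, Hazelcast does let us peek at that detail through its partition service. The following is only a sketch (the map name and key are invented, and the exact package of the Partition class varies slightly between Hazelcast versions):

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.Partition;

    public class PartitionPeek {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());
            hz.getMap("capitals").put("GB", "London");

            // Which partition does this key hash to, and which member owns it right now?
            Partition partition = hz.getPartitionService().getPartition("GB");
            System.out.println("Key 'GB' lives in partition " + partition.getPartitionId()
                    + ", currently owned by " + partition.getOwner());
        }
    }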

Moving to new ground

So far we have been talking mostly about simple persisted data and caches, but in reality, we should not think of Hazelcast as purely a cache; it is much more powerful than that. It is an in-memory data grid that supports a number of distributed collections and features. We can load data from various sources into differing structures, send messages across the cluster, take out locks to guard against concurrent activity, and listen to the goings-on inside the workings of the cluster. Most of these implementations correspond to a standard Java collection, or function in a manner comparable to other similar technologies, but all with the distribution and resilience capabilities already built in (a brief usage sketch follows the list below).

  • Standard utility collections
    • Map: Key-value pairs
    • List: Collection of objects
    • Set: Non-duplicated collection
    • Queue: Offer/poll FIFO collection
  • Specialized collection
    • Multi-Map: Key-list of values collection
  • Lock: Cluster wide mutex
  • Topic: Publish/subscribe messaging
  • Concurrency utilities
    • AtomicNumber: Cluster-wide atomic counter
    • IdGenerator: Cluster-wide unique identifier generation
    • Semaphore: Concurrency limitation
    • CountdownLatch: Concurrent activity gate-keeping
  • Listeners: Application notifications as things happen
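
The following sketch pulls a few of these structures from a running instance; the names used ("capitals", "tasks", and so on) are made up for illustration, and each call returns a distributed, cluster-backed implementation rather than a purely local one:

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ILock;
    import com.hazelcast.core.IMap;
    import com.hazelcast.core.IQueue;
    import com.hazelcast.core.ITopic;

    public class CollectionsTour {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());

            // Distributed key-value map, visible to every node in the cluster
            IMap<String, String> capitals = hz.getMap("capitals");
            capitals.put("GB", "London");

            // Distributed FIFO queue: offer on one node, poll on another
            IQueue<String> tasks = hz.getQueue("tasks");
            tasks.offer("resize-image-42");

            // Publish/subscribe messaging across the cluster
            ITopic<String> news = hz.getTopic("news");
            news.publish("cluster is up");

            // Cluster-wide mutex guarding a critical section
            ILock jobLock = hz.getLock("nightly-job");
            jobLock.lock();
            try {
                // Only one node at a time can be in here
            } finally {
                jobLock.unlock();
            }
        }
    }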

In addition to data storage collections, Hazelcast also features a distributed executor service, allowing tasks to be created that can run anywhere on the cluster to obtain, manipulate, and store results. We could have a number of collections containing source data, then spin up a number of tasks to process the disparate data (for example, averaging or aggregating) and output the results into another collection for consumption.
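
As a hedged sketch of that idea (the task, its word-counting logic, and the executor name are all invented for illustration), note that the task must be serializable so Hazelcast can ship it to whichever node ends up running it:

    import java.io.Serializable;
    import java.util.concurrent.Callable;
    import java.util.concurrent.Future;

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class DistributedTaskExample {
        // Serializable so the task can travel to whichever member executes it
        public static class WordCount implements Callable<Integer>, Serializable {
            private final String text;

            public WordCount(String text) {
                this.text = text;
            }

            @Override
            public Integer call() {
                return text.trim().isEmpty() ? 0 : text.trim().split("\\s+").length;
            }
        }

        public static void main(String[] args) throws Exception {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());

            // Hazelcast decides which member actually executes the task
            Future<Integer> result = hz.getExecutorService("default")
                    .submit(new WordCount("the quick brown fox"));
            System.out.println("Words counted: " + result.get());
        }
    }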

Again, just as we could scale up our data capacity by adding more nodes, we can also increase the execution capacity in exactly the same way. This essentially means that by building our data layer around Hazelcast, if our application's needs rapidly increase, we can continually add nodes to satisfy seemingly extensive demands, all without having to redesign or re-architect the actual application.

With Hazelcast, we are dealing with a technology rather than a server product: a library to build a system around rather than something to bolt on retrospectively or blindly connect to as an off-the-shelf commercial system. While it is possible (and in some simple cases quite practical) to run Hazelcast as a separate server-like cluster and connect to it remotely from our application, some of the greatest benefits come when we develop our own classes and tasks to run within it and alongside it.

With such a large range of generic capabilities, there is an entire world of problems that Hazelcast can help solve. We can use the technology in many ways: in isolation to hold data such as user sessions, alongside a more long-term persistent data store to increase capacity, or to perform high-performance, scalable operations on our data. By moving more and more responsibility away from monolithic systems to such a generic scalable one, there is no limit to the performance we can unlock.

This will allow us to keep our application and data layers separate, while enabling us to scale them up independently as our application grows. This should stop our application becoming a victim of its own success while, hopefully, it takes the world by storm.

Summary

In this article, we learned what Hazelcast is and how its masterless, in-memory, distributed approach addresses the scaling and consistency problems of traditional data layers. With such a large range of generic capabilities, Hazelcast can help solve an entire world of problems.


About the Author

Mat Johns

Mat Johns is an agile software engineer, hands-on architect, and general technologist based in London. His experience with the Web reaches all the way back to his misspent youth and some rather hacktastic code, but eventually he grew up to graduate from the University of Southampton with a Master's in Computer Science with Distributed Systems and Networks. He has worked for a number of startups on various web projects and systems since then, and nowadays he specializes in designing and creating high-performance and scalable web services, currently in the Internet TV world.

Away from technology, he is an avid explorer and endeavors to seek out new destinations and adventures as much as possible. He is also a qualified yacht skipper and regularly races in, around, and beyond the Solent.

You can follow him on Twitter at @matjohns.
