Configuring Load Balancers and High Availability in GlassFish

by Xuekun Kou | December 2009 | Java Open Source

In this article by Xuekun Kou, we will discuss how to use a load balancer to distribute load across the server instances in the cluster. We will also discuss the High Availability (HA) options supported by GlassFish, and how to enable HA.

Configuring load balancers

For a cluster with multiple server instances, using a load balancer in front of the server instances not only simplifies the client's view of the system through a single address, but also improves the overall system's reliability. If one server fails, the load balancer can detect the failure and distribute the load to other live server instances. In this section, we will discuss the load balancer support in GlassFish, and how to configure it.

GlassFish can work with a variety of load balancers, both hardware and software based. For example, the GlassFish project maintains and releases a load balancer plug-in, which works with Apache, Sun Java System Web Server, and Microsoft IIS. The load balancer plug-in is freely downloadable at http://download.java.net/javaee5/external/<os>/aslb/jars, where <os> indicates the operating system on which the web server is running.

Also, GlassFish can be load balanced by Apache using mod_jk. However, mod_jk based load balancing is not officially supported in GlassFish v2. In this article, we will focus on the GlassFish load balancer plug-in.

The mod_jk based load balancing will be supported in GlassFish 3.

The GlassFish load balancer plug-in accepts HTTP and HTTPS requests and forwards them to one of the GlassFish instances in the cluster. It uses a health checker to monitor the status of the server instances. If a server instance fails, requests are redirected to the remaining available instances. Once a failed server instance comes back online, the load balancer recognizes the recovery and redistributes the load accordingly.

Like most load balancer solutions for web applications, the GlassFish load balancer plug-in implements session affinity, also known as sticky sessions. On the first request from a client, a new session is established on the server instance that processes the request. The load balancer then routes all subsequent requests for this session to that particular instance.

Configuring the load balancer plug-in involves two steps. First, we configure the web server to enable the load balancer plug-in; we then configure the GlassFish Server to specify how the load should be balanced for a cluster, or for an application in a cluster.

In the following section, let's discuss how to configure the Apache web server for the load balancer plugin.

Configuring the load balancer plug-in for Apache web server

The Apache web server is the most popular web server. Its modular architecture makes it easy to extend for additional functionality. The GlassFish load balancer plugin for Apache is configured as an additional module. In this section, we discuss how to configure this module.

In this section, we discuss the configuration of the load balancer plug-in for the default Apache server, version 2.2, with SSL enabled on port 443, installed on Solaris x86. The configuration process for other operating systems is very similar; the only difference is the directory structure of the Apache server. For more detailed information, you can refer to the GlassFish High Availability Administration Guide, located at http://docs.sun.com/app/docs/doc/821-0182.

Complete the following steps to configure the load balancer plug-in on the Apache web server:

  1. Download the latest load balancer plug-in from http://download.java.net/javaee5/external/<os>/aslb/jars. Extract the JAR file, and then extract the two ZIP files, SUNWaspx.zip and SUNWaslb.zip, to a directory, which we will refer to as $lbplugin_root.
  2. Copy the errorpages directory from $lbplugin_root/lib/webserver-plugin/<os>/apache2.2 to /usr/apache2/2.2/modules/errorpages. The errorpages directory contains the web pages used by the load balancer plug-in to render load balancer error messages.
  3. Copy the two files, LBPluginDefault_root.res and LBPlugin_root.res, from $lbplugin_root/lib/webserver-plugin/<os>/apache2.2 to the /usr/apache2/2.2/modules/resource directory.
  4. Copy the three files, cert8.db, key3.db, and secmod.db, from $lbplugin_root/lib/webserver-plugin/<os>/apache2.2 to the /usr/apache2/2.2/sec_db_files directory.
  5. Copy the file mod_loadbalancer.so from $lbplugin_root/lib/webserver-plugin/<os>/apache2.2 to the /usr/apache2/2.2/libexec directory.
  6. Copy the file loadbalancer.xml.example from $lbplugin_root/lib/install/templates to /etc/apache2/2.2/loadbalancer.xml. The loadbalancer.xml file is the primary configuration file that defines the load balancer. We will discuss this file in the next section; for now, we can treat it as a placeholder.
  7. Copy the file sun-loadbalancer_1_2.dtd from $lbplugin_root/lib/dtds to /etc/apache2/2.2.
  8. Append the following directives to /etc/apache2/2.2/conf.d/modules-32.load:
    LoadModule apachelbplugin_module libexec/mod_loadbalancer.so
    <IfModule apachelbplugin_module>
      config-file /etc/apache2/2.2/loadbalancer.xml
    </IfModule>
  9. Add the following definition of the LD_LIBRARY_PATH variable to /etc/apache2/2.2/envvars:
    LD_LIBRARY_PATH=/usr/lib/mps:$lbplugin_root/lib:/usr/apache2/2.2/libexec:$LD_LIBRARY_PATH
  10. Copy the mpm.conf file from the /etc/apache2/2.2/samples-conf.d directory to /etc/apache2/2.2/conf.d, and change the values of the StartServers and MaxClients variables in the file to 1. This change is necessary; otherwise, every new session request spawns a new Apache process in which the load balancer plug-in is initialized again, resulting in requests landing on the same instance.
  11. Restart the Apache server.
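
How you restart Apache depends on how it was installed. On Solaris, the bundled Apache 2.2 instance is managed by SMF; the service name in the following sketch is the usual one, but verify it with svcs, or simply use apachectl restart for a manually installed server:

    # restart the bundled Apache 2.2 instance managed by SMF
    svcadm restart svc:/network/http:apache22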

Once we have applied these steps to the Apache web server, it is ready to work as a load balancer for our GlassFish cluster. As we discussed earlier, the essential configuration file for the load balancer plug-in is loadbalancer.xml. This file captures the essential information about the server instances being load balanced, and the load distribution configuration. This information is actually generated on the GlassFish Server, and then transferred to the web server. In the next section, let's discuss how we can configure the load balancer information on GlassFish.
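
To give a concrete feel for this file, the following is a minimal, illustrative sketch of a loadbalancer.xml for a two-instance cluster. The element and attribute names follow the sun-loadbalancer_1_2.dtd grammar, but the cluster, instance, host, and port values are hypothetical, and a file exported from your own domain will differ:

    <loadbalancer>
      <cluster name="cluster1" policy="round-robin">
        <!-- one entry per server instance; listeners point at the instance's HTTP/HTTPS listeners -->
        <instance name="instance1" enabled="true" disable-timeout-in-minutes="60"
                  listeners="http://host1:38080" weight="100"/>
        <instance name="instance2" enabled="true" disable-timeout-in-minutes="60"
                  listeners="http://host2:38080" weight="100"/>
        <!-- applications that participate in load balancing -->
        <web-module context-root="myapp" enabled="true" disable-timeout-in-minutes="30" error-url=""/>
        <!-- URL polled to decide whether an unhealthy instance has recovered -->
        <health-checker url="/" interval-in-seconds="30" timeout-in-seconds="10"/>
      </cluster>
      <property name="response-timeout-in-seconds" value="60"/>
      <property name="reload-poll-interval-in-seconds" value="60"/>
      <property name="https-routing" value="false"/>
      <property name="route-cookie-enabled" value="true"/>
    </loadbalancer>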

Configuring GlassFish for load balancing

Once the web server has been configured, we can define a GlassFish load balancer configuration. This can be done using the Admin Console, or the create-http-lb command of the asadmin CLI utility. To create a load balancer configuration using the Admin Console, click the HTTP Load Balancer node in the navigational panel, and then click New in the content panel. The following screenshot shows the input form for creating a load balancer configuration.

[Screenshot: Creating a new HTTP load balancer configuration]

The important input parameters for the load balancer configuration are explained as follows:

  • Name: A unique name of the new load balancer configuration.
  • All instances: Whether all the server instances of the selected target will be load balanced.
  • All applications: Whether all the applications deployed to the target will be load balanced.
  • Device host and admin port: The load balancing device's server information. GlassFish relies on this information to automatically push new load balancer configuration information to the device. For software load balancers, the device is the server that runs the load balancing software, and the admin port is the port through which administration tasks can be performed on the device. The port must be SSL enabled, and if the device requires authentication, you should configure the device to support client certificate authentication. This information is used purely for pushing the load balancer configuration; if you have a different mechanism for transferring it, the actual device host and admin port values are unimportant, even though the fields are required.

  • Automatically apply changes: Whether GlassFish pushes the new load balancer configuration information immediately to the device.
  • Targets: The clusters and server instances that will participate in the load balancing.
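
The same configuration can also be created from the asadmin CLI. The following is a minimal sketch that assumes a cluster named cluster1 and a configuration named mylb-config; the option names are those of GlassFish v2, so confirm them with asadmin create-http-lb-config --help on your installation:

    # create the load balancer configuration
    asadmin create-http-lb-config --responsetimeout 60 --httpsrouting=false --monitor=false mylb-config

    # make cluster1 a target of the new configuration
    asadmin create-http-lb-ref --config mylb-config cluster1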

Once the load balancer configuration is created, we can modify the settings for the load balancer, including the following parameters:

  • Response timeout: Time in seconds within which a server instance must return a response. If no response is received within the time period, the server is considered unhealthy. The default is 60.
  • HTTPS routing: Whether HTTPS requests to the load balancer result in HTTPS or HTTP requests to the server instance. For more information, see Configuring HTTPS Routing.
  • Reload poll interval: Interval between checks for changes to the load balancer configuration file, loadbalancer.xml. When a check detects changes, the configuration file is reloaded. A value of 0 disables reloading.
  • Monitoring: Whether monitoring is enabled for the load balancer.
  • Route cookie: The name of the cookie the load balancer plug-in uses to record the session routing information.
  • Target: Target for the load balancer configuration. If you specify a target, it is the same as adding a reference to it. Targets can be clusters or stand-alone instances.

In addition, we can configure the load balancing algorithm and the health checker of the load balancer plug-in. By default, the load balancer uses a simple round-robin algorithm to distribute the load across server instances. The load balancer plug-in also supports a weighted round-robin algorithm, which allows us to favor certain instances based on their relative weights. If neither algorithm satisfies the requirements, we can develop a custom algorithm.
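
Relative weights for the weighted algorithm can be assigned from the asadmin CLI. The sketch below uses hypothetical instance names; the weighted-round-robin policy itself is selected with the --lbpolicy option when the cluster is referenced from the load balancer configuration:

    # give instance1 twice the share of traffic that instance2 receives
    asadmin configure-lb-weight --cluster cluster1 instance1=2:instance2=1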

By default, the health checker of the load balancer plug-in periodically polls all unhealthy GlassFish instances at a specified URL to determine whether they have returned to a healthy state. If the health checker finds that an unhealthy instance has become healthy again, that instance is added to the list of healthy instances, and the load balancer resumes distributing load to it.

To configure the health checker, click the Target tab of the load balancer configuration form, and click the Edit Health Check link of a specific target. The load balancing algorithm and health checker configuration form is shown in the following screenshot.

[Screenshot: Load balancing algorithm and health checker configuration]
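
A health checker can also be attached to a target from the asadmin CLI with the create-http-health-checker subcommand. The configuration and target names below are hypothetical:

    # poll the context root every 30 seconds, with a 10 second timeout, for the instances of cluster1
    asadmin create-http-health-checker --url "/" --interval 30 --timeout 10 --config mylb-config cluster1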

If we click the Export tab in the load balancer editing form, we get the option to export the load balancer configuration as a loadbalancer.xml file. This file can then be copied to the load balancer host, which effectively updates the load balancer. Alternatively, we can click Apply Changes Now to push the configuration, provided that the load balancer device is configured appropriately.
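
The CLI equivalent of the Export tab is the export-http-lb-config subcommand; the configuration name and output path below are placeholders:

    # write the generated load balancer configuration to a file that can be copied to the web server
    asadmin export-http-lb-config --config mylb-config /tmp/loadbalancer.xml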


Disabling (Quiescing) targets and applications

In a production environment, when we need to shut down a target, such as a server instance or a cluster, we would like the target to shut down gracefully; that is, it stops accepting new requests but keeps working until it has finished processing all the existing requests. This mechanism is called quiescing. The load balancer plug-in supports quiescing, and it also allows us to specify a timeout. If the target is still processing existing requests when the timeout is reached, it is shut down anyway.

Similarly, in GlassFish, we can quiesce an application. For example, before we undeploy a web application, we may want the application to finish serving its current requests. We can also apply a timeout value to the applications being quiesced.
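
From the asadmin CLI, quiescing maps to the disable-http-lb-server and disable-http-lb-application subcommands. A brief sketch, with the cluster and application names being placeholders:

    # stop sending new requests to cluster1; in-flight requests are given the specified timeout to finish
    asadmin disable-http-lb-server --timeout 30 cluster1

    # quiesce a single application deployed to cluster1, for example before undeploying it
    asadmin disable-http-lb-application --name myapp --timeout 30 cluster1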

Now that we understand clusters, let's discuss how we can configure a GlassFish cluster for high availability in the following section.

Configuring high availability

Availability is typically measured in the form of system down time. The single most important mechanism to achieve high availability (HA) is through redundancy and replication. For example, by running multiple server instances on different hardware, we are eliminating a single point of failure. However, in many cases, hardware redundancy alone is not sufficient, especially when the application maintains a conversation state, or session, with the client. In this case, in order to deliver high availability for the application, we must be able to replicate the session data so that in the event of a server instance failure, the session state information maintained in that server instance is not lost, and a different server instance can restore the session from the replicated session data store.

For session replication, GlassFish supports two options: in-memory replication, and the High Availability Database (HADB) for more reliable state persistence. In the following sections, we discuss both of these options and show you how to configure them.

Working with in-memory replication

In-memory session replication is enabled by default for GlassFish running in the cluster profile. It provides a lightweight, high-performance failover mechanism that is easy to configure. The high-level architecture of in-memory replication is illustrated in the following figure.

In this figure, a cluster of four server instances is running on two different server hosts. Each server instance not only maintains a data store for the sessions established on itself, it also maintains a replica of the session store of one other server instance in the cluster. For example, the session store of server instance 1 is replicated in server instance 2, whose session store is replicated in server instance 3, and so on. Effectively, the server instances form a replication ring.

In order to take advantage of in-memory session replication, make sure all the server instances of a cluster belong to the same subnet.

[Figure: In-memory replication ring formed by four server instances on two hosts]

As we can see, in the normal state, every session is maintained by one server instance, and replicated in another server instance. If server instance 1 fails, a session originally established on server instance 1 can be failed over to server instance 2 without interruption.

If two adjacent server instances in the ring fail at the same time, some session state information will be lost. For example, if both server instance 1 and 2 fail, those sessions established on instance 1 will be lost. Due to this, in-memory replication may not be as reliable as more sophisticated mechanisms. However, if server instances are running on different physical hardware, the likelihood of both server instances failing can be greatly reduced.

In addition, in-memory replication provides very high performance, and its configuration is extremely simple. In practice, unless the system availability requirement is very high, we strongly recommend that you use in-memory replication.

Now, let's see how easy it is to configure in-memory replication.

Configuring in-memory replication

Configuring in-memory replication is very straightforward. We can configure it using the Admin Console as follows:

  1. Log on to the Admin Console, expand the Configurations node, and then expand the cluster's configuration node.
  2. Click Availability Service. The Availability Service settings of the cluster are shown in the Admin Console. You should see that the Availability Service for the cluster is enabled, which is the default value.
  3. Note that the form contains several additional properties, such as MQ Store Pool Name, HA Store Name, and so on. The MQ Store Pool Name property allows us to provide a JDBC resource as a message persistence mechanism (the default persistence mechanism for Open MQ is the filesystem). The other HA related properties relate to the HADB configuration.

  4. Click the Web Container Availability tab.
    Now we can configure the web container for session replication, as shown in the following figure. Among other parameters, the most important one is Persistence Type. It specifies the session persistence mechanism for web applications that have availability enabled. For the cluster profile of GlassFish, the default value is replicated, meaning the web container session state is replicated in memory.
  5. Click the EJB Container Availability tab, and we can configure the availability service for stateful session beans. The parameters for EJB container's availability are similar to those for web container availability.
  6. Click the JMS Availability tab, and we can configure the Open MQ based JMS service for high availability. We can specify a JDBC resource as the JMS message persistence mechanism (the default message persistence for Open MQ uses the filesystem).
  7. Open MQ supports message broker clusters that can be deployed as an enterprise-level, high-performance, and highly available messaging backbone. GlassFish's JMS resources can take full advantage of a clustered Open MQ installation. Discussing Open MQ broker clustering is beyond the scope of this article; refer to the Open MQ documentation and the GlassFish High Availability Administration Guide for more information.

    [Screenshot: Availability Service configuration tabs]
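
The same availability settings can be inspected and changed with the asadmin get and set subcommands. The dotted names below assume the cluster's configuration is named cluster1-config (the default naming convention); verify the exact names with a wildcard get on your installation:

    # confirm that the availability service is enabled for the cluster's configuration
    asadmin get cluster1-config.availability-service.availability-enabled

    # use in-memory replication for web container session state
    asadmin set cluster1-config.availability-service.web-container-availability.persistence-type=replicated

    # applications must also be deployed with availability enabled to participate in session failover
    asadmin deploy --target cluster1 --availabilityenabled=true myapp.war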

Configuring the Group Management Service (GMS)

The in-memory state replication in GlassFish relies on the Group Management Service (GMS) to keep track of server instance status. GMS is based on the open source project Shoal (https://shoal.dev.java.net). It uses an event-driven mechanism to track group membership. For example, when a server instance of a cluster fails, GMS uses an event to notify the cluster about the state change, and the remaining server instances re-form the replication ring (this is called a reshape).

By default, GMS is enabled for a cluster. We can disable/enable it using the Admin Console by following these steps:

  1. Log on to the Admin Console.
  2. Click Clusters in the navigation pane.
  3. Click the name of the cluster.
  4. Click the General tab. Check the Heartbeat Enabled box to enable GMS. We can also change the default port and IP address, as shown in the following screenshot.
  5. Click Save.

[Screenshot: Cluster General tab with the Heartbeat Enabled setting]
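
The heartbeat settings can also be changed from the asadmin CLI. The dotted names below are indicative for a cluster named cluster1, and the port and multicast address are placeholders; confirm the names with asadmin get "cluster1.*":

    # enable the GMS heartbeat for the cluster, and optionally adjust the multicast port and address
    asadmin set cluster1.heartbeat-enabled=true
    asadmin set cluster1.heartbeat-port=9090
    asadmin set cluster1.heartbeat-address=228.8.1.10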

Once GMS is enabled for the cluster, the cluster's configuration will contain the parameters for the Group Management Service, and we can change these settings to suit our needs. To configure GMS in the Admin Console, complete the following steps:

  1. Log on to the Admin Console.
  2. Expand the Configurations node in the navigation pane, and then expand the configuration for the cluster.
  3. Click Group Management Service. The Group Management Service parameters should be displayed in the content pane, as shown in the following screenshot.

    [Screenshot: Group Management Service configuration parameters]

  4. Set the parameters to appropriate values, and click Save.

The configurable parameters for GMS are as follows:

  • Protocol Maximum Trial: Maximum number of attempts before GMS confirms that a failure is suspected in the group
  • Protocol Timeout: Period of time between monitoring attempts to detect failure
  • Maximum Interval: Maximum amount of time to wait to collect sub-group information before performing a merge
  • Minimum Interval: Minimum amount of time to wait to collect sub-group information before performing a merge
  • Ping Timeout: Amount of time that GMS waits for discovery of other members in this group
  • Verified Timeout: After this timeout, a suspected failure is marked as verified
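
These parameters correspond to attributes of the group-management-service element in the cluster's configuration, so they can also be read and set with asadmin. The dotted names below are indicative for GlassFish v2 and should be verified with a wildcard get:

    # list the current GMS settings for the cluster's configuration
    asadmin get "cluster1-config.group-management-service.*"

    # for example, raise the number of missed heartbeats tolerated before a failure is suspected
    asadmin set cluster1-config.group-management-service.fd-protocol-max-tries=5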

HADB-based session failover

HADB-based session persistence is another option, supported for the GlassFish Server running in the enterprise profile. With this approach, session state is maintained by the high availability database. Compared to in-memory session replication, the HADB approach is decidedly heavyweight: not only do we need to manage the GlassFish cluster, we also need to configure HADB, and there is an overhead in using HADB to persist session data. However, HADB-based session persistence is a proven technology that can deliver very high system availability. For systems where availability is the number one priority, using HADB as the session persistence mechanism is an excellent choice.

The HADB implementation is not open source; therefore, you won't be able to download the GlassFish Server with HADB support from the GlassFish website. However, you can download it from the Sun download site at http://www.sun.com/software/products/appsrvr.

This version of the GlassFish Server can be considered the "enterprise edition". In fact, not only does the software distribution include the HADB support, it also includes a tested GlassFish load balancer plug-in, and it is bundled with several additional resources, such as JDBC drivers.

Installing the GlassFish Server with HADB software is relatively easy, as its graphical installer is very simple. The only caveat is that the installer will ask whether you want to install the load balancer plug-in; in order to install it, a supported web server must already be installed on the system. If you don't have a supported web server installed, you can simply skip this step. As we have seen earlier in this article, we can always configure the load balancer manually later.

Configuring HADB enabled clusters is much more involved than configuring clusters using in-memory session replication.
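
To give a rough idea of what is involved, the enterprise distribution provides the asadmin configure-ha-cluster subcommand, which converts an existing cluster to use HADB for session persistence. The host names below are hypothetical, and the full set of options is described in the High Availability Administration Guide:

    # back cluster1 with an HADB instance spanning two hosts
    asadmin configure-ha-cluster --hosts hadbhost1,hadbhost2 cluster1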

Summary

In this article, we showed you how to configure high availability for GlassFish. We also showed you how to use a load balancer to distribute the load across multiple server instances. At this point, you should have a clear understanding of GlassFish clusters and the high availability configuration options.

To learn more about how to configure clusters in GlassFish, read Configuring Clusters in GlassFish.


About the Author

Xuekun Kou

Xuekun Kou has been architecting, designing, and building enterprise Java applications since the early days of J2EE. He also trains architects and developers on the Java technology, software engineering, and software architecture. He has extensive experience with most application server products, and his experience with the GlassFish application server dates back to its ancestor, Sun Microsystems' application server series: iPlanet, Sun ONE, and Sun Java System application server. He holds degrees from the Florida State University and the University of Science and Technology of China.
