Configuring Clusters in GlassFish

by Xuekun Kou | December 2009 | Java Open Source

In this two-part article series by Xuekun Kou, we will discuss how to configure clusters for the GlassFish Server, and use a load balancer to distribute load across the server instances in the cluster. We will also discuss the High Availability (HA) options supported by GlassFish, and how to enable HA. The goal of this article is to help you gain the knowledge necessary for planning and creating a production-ready GlassFish Server deployment.

Configuring clusters for GlassFish

To deliver the required performance, throughput, and reliability, a production environment typically hosts enterprise applications on multiple running application server instances. To make these server instances easy to configure and maintain, most application server products, including GlassFish, allow them to be grouped into a cluster and administered together. In this section, we first review the core concepts of the GlassFish cluster, and then show you how to configure and manage clusters.

Understanding GlassFish clusters

A GlassFish cluster is a logical entity that groups multiple GlassFish Server instances. The server instances within a cluster can run on different physical machines or on the same machine. The cluster is administered by the Domain Administration Server (DAS). The server instances in a cluster share the same configuration, and they host all applications and resources deployed to the cluster.

The main benefit of a cluster is that it significantly simplifies the administration of server instances. Instead of configuring these server instances and deploying applications to them individually, a cluster provides a one-stop administration facility that enforces the homogeneity of its server instances. In addition, a cluster provides very good support for horizontal scalability. For example, if the production environment no longer has sufficient processing power, we can dynamically create a GlassFish Server instance and add it to the existing cluster without extensive reconfiguration. Finally, with the help of a load balancer and appropriate HA configuration, a cluster can be made resilient to the failure of individual server instances.

We will focus on the clustering aspect of GlassFish in this section. Load balancers and HA will be discussed later in this article.

The following figure illustrates the main components of a cluster from the administration perspective.

[Figure: The main components of a GlassFish cluster from the administration perspective]

The components illustrated in the figure are described as follows:

  • The Domain Administration Server (DAS): The DAS is a special server instance responsible for the administration of a domain. All administrative operations are routed to the DAS. Upon receiving an administrative request, the DAS either sends it to an individual server instance or broadcasts it to all the server instances in a cluster. The DAS can administer server instances running on remote hosts as well.
  • Node agent: A node agent is a light-weight process running on the physical server that hosts GlassFish Server instances. The node agent is responsible for managing the life cycle of these server instances. It can perform the following tasks:
    • Start, stop, restart, create, and delete server instances
    • Provide a view of the log files of a failed server instance
  • If the node agent crashes, it does not affect the server instances and user applications that are currently running. However, a failed node agent can no longer manage and monitor those server instances.

  • Server instance: With the exception of the DAS, all other server instances must be created with a reference to a node agent. A server instance can be stand-alone, or it can belong to a cluster. A stand-alone instance maintains and uses its own configuration, while a clustered instance inherits the majority of its configuration from the cluster.
  • In our experience, stand-alone server instances are rarely used. Even if you only need one server instance to host applications, we recommend that you define a cluster with only one instance. This approach keeps the option open to scale out later by adding server instances to the cluster, and the overhead of a clustered instance is negligible.

  • The central repository and repository cache: The central repository is maintained by the DAS. It contains the server instance configuration data and the applications deployed to the GlassFish domain. Each server instance and cluster synchronizes the central repository to its local repository cache. Keep in mind that the repository cache is a subset of the central repository, because each server instance and cluster only synchronizes the information pertinent to itself.
  • The local repository cache makes it possible to keep stand-alone and clustered server instances running while the DAS is shut down. In fact, many organizations shut down the DAS in production environments and only start it up when new applications or resources need to be deployed. Without the DAS running, the GlassFish configuration cannot be changed using common administrative tasks, as illustrated below.
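
For example, the DAS can be stopped and started again without disturbing the running instances. The following is a minimal sketch, assuming the default domain name domain1 (substitute your own domain name):

# cd $AS_INSTALL/bin
# ./asadmin stop-domain domain1
# ./asadmin start-domain domain1

While the DAS is stopped, clustered and stand-alone instances keep serving requests from their local repository caches; the DAS must be running again before new applications or configuration changes can be pushed out.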

A node agent is associated with a particular domain when it is created, and it can service only a single domain. If a physical machine hosts server instances that belong to multiple domains, it must run multiple node agents, one for each domain. DAS only needs the node agent to perform administrative operations on the server instances. The synchronization between DAS and server instances takes place directly through the JMX API remote connector.

The server instance or node agent synchronizes its state with the central repository in two ways: a full synchronization at instance or node agent creation and start-up time, and an incremental synchronization as configuration changes are made to the central repository.

Now let's dive into the process of configuring a GlassFish cluster.

Configuring clusters

In this section, we discuss the necessary steps to configure a GlassFish cluster, and along the way we will discuss more features of the GlassFish cluster.

GlassFish clusters can be created on most operating system and hardware platforms. The notable exceptions are Microsoft Windows running the 64-bit JDK software and Mac OS X.

Obtaining cluster support

The very first thing we need to do in order to configure a cluster is to make sure that the GlassFish Server we are working with has cluster support. Earlier, we introduced the concept of the usage profile of GlassFish. Clustering support is available in the cluster and enterprise profiles, but not in the developer profile by default. However, even if we originally installed GlassFish with the developer profile, we can easily upgrade the GlassFish Server to enable clustering support. To do this, complete the following steps:

  1. Log on to the GlassFish Admin Console.
  2. Click the Application Server node in the navigation pane.
  3. Click the General tab in the main content pane.
  4. Click Add Cluster Support, as shown in the following screenshot.
  5. Click OK to confirm the choice.
  6. Restart the GlassFish Server.

[Screenshot: The Add Cluster Support option on the Application Server's General tab]

Once GlassFish restarts, we can log on to the Admin Console. We can confirm the cluster support by examining the updated Admin Console, as shown in the following screenshot.

[Screenshot: The Admin Console after cluster support has been enabled]

As we can see, in the navigation panel the previous Application Server node is replaced by the Domain node. If we click this node, we will see just a few configuration options, such as managing the administrator password. Most of the other management options, such as the JVM settings, are no longer there, because those settings apply to individual server instances. In the cluster profile, these properties are therefore associated with the server instances themselves.

For a GlassFish Server that is upgraded from the developer profile, the original server instance now becomes the DAS instance and GlassFish treats the DAS as a stand-alone server instance. When we log on to the Admin Console, this server is listed under the Stand-Alone Instances node.
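
Alternatively, if we are setting up a new environment from scratch rather than upgrading an existing installation, we can create a domain with clustering support directly from the asadmin CLI by selecting the cluster profile at domain creation time. The following is a minimal sketch; the domain name cluster-domain is hypothetical, and it assumes that the create-domain command of your GlassFish version accepts the --profile option (as in GlassFish v2):

# cd $AS_INSTALL/bin
# ./asadmin create-domain --adminport 4848 --profile cluster cluster-domain

The resulting domain starts with clustering support already enabled.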

Once we have enabled clusters for GlassFish, we can start creating clusters. As all server instances are managed through node agents, the next step in creating a cluster is to create and start the node agents.

Creating node agents

We can create a node agent in two ways: using the Admin Console, or the create-node-agent command of the asadmin CLI. The following screenshot shows the Admin Console user interface for creating a node agent.

[Screenshot: Creating a node agent in the Admin Console]

As you can see in the screenshot, to create a node agent using the Admin Console, simply click Node Agents in the navigation panel, enter the name of the node agent in the content panel, and click OK.

The node agent created using the Admin Console is merely a placeholder. The Admin Console shows the status of the newly created node agent as "Waiting for rendezvous". In other words, at this point the node agent exists from the administration perspective and we can go on to perform additional configuration, such as defining server instances for the node agent. However, the node agent has not yet been materialized on a target server machine. This is sometimes called offline node agent creation.

We can also create a node agent using the asadmin CLI. For example, to physically create a node agent, we use the following command:

# cd $AS_INSTALL/bin
# ./asadmin create-node-agent -H <admin-host> <node-agent-name>

This command must be executed on each machine that will host server instances. Creating a node agent using the asadmin CLI is sometimes called online node agent creation, because this command creates a materialized agent. In the above command, the -H <admin-host> option indicates the DAS host name. This parameter is necessary because each node agent must know which DAS it should communicate with.

A node agent must be materialized before it can be started. Therefore, if we have created a node agent using the Admin Console, we still need to use the asadmin CLI on the target machine to create an agent with the same name as the placeholder specified in the Admin Console.
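
For instance, if we created a placeholder named osdev in the Admin Console, we can materialize it on the target machine under the same name. This is only a sketch; the DAS host name das.example.com is hypothetical and should be replaced with your actual DAS host:

# cd $AS_INSTALL/bin
# ./asadmin create-node-agent -H das.example.com osdev

Once this agent is started and contacts the DAS, the placeholder and the physical agent are bound together, completing the rendezvous.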

After creating the node agent, we can start it by using the start-node-agent command of the asadmin CLI on the machine where the node agent is defined, for example:

# cd $AS_INSTALL/bin
# ./asadmin start-node-agent osdev
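
After the agent is started, we can check its status from the DAS and stop it again when needed. A brief sketch, reusing the agent name osdev from the example above (add the -H and port options to list-node-agents if the DAS is not running locally):

# ./asadmin list-node-agents
# ./asadmin stop-node-agent osdev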

The next step is to define a cluster, create several server instances, and add the server instances to the cluster.

Creating clusters

We can create a cluster using either the Admin Console or the asadmin CLI's create-cluster command. To use the Admin Console, complete the following steps:

  1. Log on to the Admin Console.
  2. Click Clusters in the navigation panel.
  3. Click New in the content panel.
  4. Enter appropriate information, as shown in the following screenshot, and click OK.

[Screenshot: The New Cluster page in the Admin Console]

The parameters shown in the above screenshot are explained as follows:

  • Name: Each cluster must have a unique name.
  • Configuration: For GlassFish running in the cluster or enterprise profile, the GlassFish Server provides two configuration templates. The server-config template holds the configuration of the DAS itself, while default-config provides the default configuration for other server instances and clusters. Typically, we select default-config and make a copy of the selected configuration. For clusters, GlassFish makes a copy of default-config and saves it under the name <cluster-name>-config.
  • Server instances to be clustered: All the server instances of a cluster must be managed through node agents. Therefore, for each server instance we want to add to the cluster, we need to specify a name and associate it with a defined node agent.

We can create server instances of a cluster using the offline node agent placeholders. Server instances can also be created or deleted after the cluster has been created.

Clusters can also be created using the create-cluster command of the asadmin CLI.
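
A minimal sketch of the CLI approach, using a hypothetical cluster name cluster1 and assuming the DAS is reachable at das.example.com on the default admin port 4848:

# cd $AS_INSTALL/bin
# ./asadmin create-cluster --host das.example.com --port 4848 cluster1

Instances can then be added to the new cluster with the create-instance command, as shown later in this article.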

Administering clusters

Once the cluster has been created and one or more server instances have been added to it, we can start administering the cluster. The easiest way to administer the cluster is to use the Admin Console. To do this, complete the following steps:

  1. Log on to the Admin Console.
  2. Expand the Clusters node, and click the target cluster name in the navigation panel.

The following screenshot shows the cluster administration interface.

[Screenshot: The cluster administration interface]

The General tab of the cluster management page allows us to do the following:

  • Start and stop the server instances of the cluster (a CLI equivalent is sketched after this list).
  • Enable the Group Management Service (GMS) for in-memory replication. We will discuss GMS in more detail later in this article.
  • Configure EJB timer migration.
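
The start and stop operations are also available from the asadmin CLI as the start-cluster and stop-cluster commands, which start or stop every instance in the cluster in one step (the node agents must be running). A brief sketch, reusing the hypothetical cluster name cluster1:

# ./asadmin start-cluster cluster1
# ./asadmin stop-cluster cluster1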

The Applications tab allows us to track applications deployed to the cluster, and it also allows us to enable, disable, deploy, and remove applications. In addition, when we use a load balancer to distribute the processing across multiple server instances of the cluster, we can enable or disable one of the cluster-deployed applications for load balancing. The Resources tab allows us to track the Java EE resources used in our environment. It also allows us to deploy new resources to GlassFish. Other tabs of the cluster management page are very similar to the tabs for the Application Server node for GlassFish running in the developer profile. For example, if we want to create a new JDBC resource, we can click the Resources tab, and it will allow us to create the desired JDBC resource. The Physical Destinations tab tracks the MQ destinations created for the new cluster.

Creating server instances for the cluster

The Instances tab of the cluster configuration page allows us to manage the server instances defined for the cluster. It also allows us to create new server instances for the cluster. The create-instance command of the asadmin CLI provides the same functionality.
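
A minimal sketch of adding an instance from the CLI, reusing the hypothetical cluster cluster1 and node agent osdev from the earlier examples, with a new instance named instance2:

# cd $AS_INSTALL/bin
# ./asadmin create-instance --host das.example.com --port 4848 --nodeagent osdev --cluster cluster1 instance2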

We can also configure each server instance's weight in the cluster. As we will see later in this article, the weight value will affect how a load balancer distributes load among the server instances.
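
The weight can also be set from the CLI with the asadmin set command. The following is a sketch only; it assumes the load balancing weight is exposed as the lb-weight attribute under the instance's dotted name, as in GlassFish v2, and reuses the hypothetical instance2 created above:

# ./asadmin set instance2.lb-weight=200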

Summary

In this article, we showed you how to configure clusters.

Now that we have created node agents, clusters, and server instances, let's examine another piece of the puzzle—the load balancer.

About the Author:

Xuekun Kou

Xuekun Kou has been architecting, designing, and building enterprise Java applications since the early days of J2EE. He also trains architects and developers in Java technology, software engineering, and software architecture. He has extensive experience with most application server products, and his experience with the GlassFish application server dates back to its ancestors, Sun Microsystems' application server series: iPlanet, Sun ONE, and the Sun Java System Application Server. He holds degrees from Florida State University and the University of Science and Technology of China.
