Various subsystem configurations

by Anders Welén and Arnold Johansson | June 2014 | Open Source

This article by Arnold Johansson and Anders Welén, the authors of WildFly Performance Tuning, talks about the various subsystem configurations available in WildFly.


In a high-performance environment, every costly resource instantiation needs to be minimized. This can be done effectively using pools. The different subsystems in WildFly often use various pools of resources to minimize the cost of creating new ones. These resources are often threads or various connection objects. Another benefit is that the pools work as gatekeepers, keeping the underlying system from being overloaded by preventing client calls from reaching their target once a configured limit has been reached.

In the upcoming sections of this article, we will provide an overview of the different subsystems and their pools.

The thread pool executor subsystem

The thread pool executor subsystem was introduced in JBoss AS 7. Other subsystems can reference thread pools configured in this one. This makes it possible to normalize and manage the thread pools via native WildFly management mechanisms, and it allows you to share thread pools across subsystems.

The following code is an example taken from the WildFly Administration Guide (https://docs.jboss.org/author/display/WFLY8/Admin+Guide) that describes how the Infinispan subsystem may use the threads subsystem, setting up four different pools:

<subsystem xmlns="urn:jboss:domain:threads:1.0"> <thread-factory name="infinispan-factory" priority="1"/> <bounded-queue-thread-pool name="infinispan-transport"> <core-threads count="1"/> <queue-length count="100000"/> <max-threads count="25"/> <thread-factory name="infinispan-factory"/> </bounded-queue-thread-pool> <bounded-queue-thread-pool name="infinispan-listener"> <core-threads count="1"/> <queue-length count="100000"/> <max-threads count="1"/> <thread-factory name="infinispan-factory"/> </bounded-queue-thread-pool> <scheduled-thread-pool name="infinispan-eviction"> <max-threads count="1"/> <thread-factory name="infinispan-factory"/> </scheduled-thread-pool> <scheduled-thread-pool name="infinispan-repl-queue"> <max-threads count="1"/> <thread-factory name="infinispan-factory"/> </scheduled-thread-pool> </subsystem> ... <cache-container name="web" default-cache="repl"listener-executor=
"infinispan-listener" eviction-executor=
"infinispan-eviction"replication-queue-executor
="infinispan-repl-queue"> <transport executor="infinispan-transport"/> <replicated-cache name="repl" mode="ASYNC" batching="true"> <locking isolation="REPEATABLE_READ"/> <file-store/> </replicated-cache> </cache-container>

The following thread pools are available:

  • unbounded-queue-thread-pool

  • bounded-queue-thread-pool

  • blocking-bounded-queue-thread-pool

  • queueless-thread-pool

  • blocking-queueless-thread-pool

  • scheduled-thread-pool

The details of these thread pools are described in the following sections:

unbounded-queue-thread-pool

The unbounded-queue-thread-pool thread pool executor has a maximum size and an unlimited queue. If the number of running threads is less than the maximum size when a task is submitted, a new thread will be created. Otherwise, the task is placed in a queue. This queue is allowed to grow indefinitely.

The configuration properties are as follows:

  • max-threads: The maximum number of threads allowed to run simultaneously.

  • keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)

  • thread-factory: The thread factory to use to create worker threads.
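As a hedged example, an unbounded-queue-thread-pool like the test pool used in the Monitoring section later in this article could be created with a CLI command along these lines (the pool name and value are only illustrations):

/subsystem=threads/unbounded-queue-thread-pool=test:add(max-threads=100)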

bounded-queue-thread-pool

The bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task is put in the queue. If the queue has reached its maximum size and the maximum number of threads has not been reached, a new thread is created as well. Once max-threads is hit, the call will be sent to the handoff-executor. If no handoff-executor is configured, the call will be discarded.

The configuration properties are as follows:

  • core-threads: Optional and should be less than max-threads.

  • queue-length: The maximum size of the queue.

  • max-threads: The maximum number of threads allowed to run simultaneously.

  • keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)

  • handoff-executor: An executor to which tasks will be delegated in the event that a task cannot be accepted.

  • allow-core-timeout: Whether core threads may time out; if false, only threads above the core size will time out.

  • thread-factory: The thread factory to use to create worker threads.
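To illustrate the handoff mechanism, the following CLI sketch first creates a small queueless pool and then a bounded pool that delegates rejected tasks to it (all names and values here are hypothetical, and the exact parameters may vary between WildFly versions):

/subsystem=threads/queueless-thread-pool=overflow-pool:add(max-threads=5)
/subsystem=threads/bounded-queue-thread-pool=example-pool:add(core-threads=5,queue-length=50,max-threads=20,handoff-executor=overflow-pool)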

blocking-bounded-queue-thread-pool

The blocking-bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task is put in the queue. If the queue has reached its maximum size and the maximum number of threads has not been reached, a new thread is created. Once max-threads has been reached, the call is blocked until the task can be accepted.

The configuration properties are as follows:

  • core-threads: Optional and should be less than max-threads.

  • queue-length: The maximum size of the queue.

  • max-threads: The maximum number of threads allowed to run simultaneously.

  • keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)

  • allow-core-timeout: Whether core threads may time out; if false, only threads above the core size will time out.

  • thread-factory: The thread factory to use to create worker threads.

queueless-thread-pool

The queueless-thread-pool thread pool is a thread pool executor without any queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created; otherwise, the handoff-executor will be called. If no handoff-executor is configured, the call will be discarded.

The configuration properties are as follows:

  • max-threads: The maximum number of threads allowed to run simultaneously.

  • keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)

  • handoff-executor: An executor to which tasks will be delegated in the event that a task cannot be accepted.

  • thread-factory: The thread factory to use to create worker threads.

blocking-queueless-thread-pool

The blocking-queueless-thread-pool thread pool executor has no queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created. Otherwise, the caller will be blocked.

The configuration properties are as follows:

  • max-threads: The maximum number of threads allowed to run simultaneously.

  • keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)

  • thread-factory: The thread factory to use to create worker threads.

scheduled-thread-pool

The scheduled-thread-pool thread pool is used by tasks that are scheduled to trigger at a certain time.

The configuration properties are as follows:

  • max-threads: The maximum number of threads allowed to run simultaneously.

  • keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)

  • thread-factory: The thread factory to use to create worker threads.

Monitoring

All of the pools just mentioned can be administered and monitored using both the CLI and JMX (the Admin Console can be used to administer the pools, but it does not show any live data). The following example and screenshots show access to an unbounded-queue-thread-pool called test.

Using CLI, run the following command:

/subsystem=threads/unbounded-queue-thread-pool=test:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{ "outcome" => "success", "result" => { "active-count" => 0, "completed-task-count" => 0L, "current-thread-count" => 0, "keepalive-time" => undefined, "largest-thread-count" => 0, "max-threads" => 100, "name" => "test", "queue-size" => 0, "rejected-count" => 0, "task-count" => 0L, "thread-factory" => undefined } }

Using JMX, the same information is available under the following object name (query and result shown in the JConsole UI):

jboss.as:subsystem=threads,unbounded-queue-thread-pool=test

An example thread pool by JMX is shown in the following screenshot:

An example thread pool by JMX

The following screenshot shows the corresponding information in the Admin Console:

Example thread pool—Admin Console

The future of the thread subsystem

According to the official JIRA case WFLY-462 (https://issues.jboss.org/browse/WFLY-462), the central thread pool configuration has been targeted for removal in future versions of the application server. It is, however, uncertain whether all subprojects will adhere to this. The actual configuration will then be moved into each subsystem itself. This seems to be the way the general architecture of WildFly is moving in terms of pools: away from generic ones and toward subsystem-specific ones. The different types of pools described here are still valid, though.

Note that, contrary to previous releases, stateless EJBs are no longer pooled by default. More information on this is available in the JIRA case WFLY-1383 at https://issues.jboss.org/browse/WFLY-1383.

Java EE Connector Architecture and resource adapters

The Java EE Connector Architecture (JCA) defines a contract for Enterprise Information Systems (EIS) to use when integrating with the application server. An EIS can be a database, a messaging system, or any other server/system external to the application server. The purpose is to provide a standardized API for developers and to integrate application server services such as transaction handling.

The EIS provides a so-called Resource Adapter (RA) that is deployed in WildFly and configured in the resource-adapters subsystem. The RA is normally realized as one or more Java classes and configuration files stored in a Resource Archive (RAR) file. This file has the same characteristics as a regular Java Archive (JAR) file, but uses the .rar suffix.

The following code is a dummy example of how a JCA connection pool setup may appear in a WildFly configuration file:

<subsystem xmlns="urn:jboss:domain:resource-adapters:2.0"> <resource-adapters> <resource-adapter> <archive>eisExample.rar</archive> <!-- Resource adapter level config-property --> <config-property name="Server"> localhost </config-property> <config-property name="Port"> 6666 </config-property> <transaction-support> LocalTransaction </transaction-support> <connection-definitions> <connection-definitionclass-name="ManagedConnectionFactory"
jndi-name="java:/eisExample/ConnectionFactory"pool-name=
"EISExampleConnectionPool"> <pool> <min-pool-size>10</min-pool-size> <max-pool-size>100</max-pool-size> <prefill>true</prefill> </pool> </connection-definition> </connection-definitions> </resource-adapter> </resource-adapters> </subsystem>

By default in WildFly, these pools will not be populated until they are used for the first time. By setting prefill to true, the pool will be populated during deployment. Retrieving and using a connection as a developer is easy: just perform a JNDI lookup of the factory at java:/eisExample/ConnectionFactory and then get a connection from that factory. Consumers that hold a connection for a long time will not benefit from pooling and instead create their connection directly from the RA. An example of this is a Message Driven Bean (MDB) that listens on an RA for messages.
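If prefill has not been set in the XML, it could also be enabled at runtime with a CLI command along the following lines (a sketch; a reload may be required for the change to take effect):

/subsystem=resource-adapters/resource-adapter=eisExample.rar/connection-definitions=EISExampleConnectionPool:write-attribute(name=pool-prefill,value=true)
reload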

The settings for this connection pool can be fetched at runtime by running the following command in the CLI:

/subsystem=resource-adapters/resource-adapter=eisExample.rar/connection-definitions=EISExampleConnectionPool:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{ "outcome" => "success", "result" => { "allocation-retry" => undefined, "allocation-retry-wait-millis" => undefined, "background-validation" => false, "background-validation-millis" => undefined, "blocking-timeout-wait-millis" => undefined, "capacity-decrementer-class" => undefined, "capacity-decrementer-properties" => undefined, "capacity-incrementer-class" => undefined, "capacity-incrementer-properties" => undefined, "class-name" => "ManagedConnectionFactory", "enabled" => true, "enlistment" => true, "flush-strategy" => "FailingConnectionOnly", "idle-timeout-minutes" => undefined, "initial-pool-size" => undefined, "interleaving" => false, "jndi-name" => "java:/eisExample/ConnectionFactory", "max-pool-size" => 100, "min-pool-size" => 10, "no-recovery" => false, "no-tx-separate-pool" => false, "pad-xid" => false, "pool-prefill" => false, "pool-use-strict-min" => false, "recovery-password" => undefined, "recovery-plugin-class-name" => undefined, "recovery-plugin-properties" => undefined, "recovery-security-domain" => undefined, "recovery-username" => undefined, "same-rm-override" => undefined, "security-application" => false, "security-domain" => undefined, "security-domain-and-application" => undefined, "sharable" => true, "use-ccm" => true, "use-fast-fail" => false, "use-java-context" => true, "use-try-lock" => undefined, "wrap-xa-resource" => true, "xa-resource-timeout" => undefined, "config-properties" => undefined } }

Using JMX (object name and result in the JConsole UI):

jboss.as:subsystem=resource-adapters,resource-adapter=eisExample.rar,connection-definitions=EISExampleConnectionPool

An example connection pool for an RA is shown in the following screenshot:

An example connection pool for an RA

Besides the connection pool, the JCA subsystem in WildFly uses two internal thread pools:

  • short-running-threads

  • long-running-threads

These thread pools are of the type blocking-bounded-queue-thread-pool, and the behavior of this type is described earlier in The thread pool executor subsystem section.

The following command is an example of a CLI command to change queue-length for the short-running-threads pool:

/subsystem=jca/workmanager=default/short-running-threads=default:write-attribute(name=queue-length,value=100)

These pools can all be administered and monitored using both the CLI and JMX. The following example and screenshot show access to the short-running-threads pool:

Using CLI, run the following command:

/subsystem=jca/workmanager=default/short-running-threads=default:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{ "outcome" => "success", "result" => { "allow-core-timeout" => false, "core-threads" => 50, "current-thread-count" => 0, "handoff-executor" => undefined, "keepalive-time" => { "time" => 10L, "unit" => "SECONDS" } "largest-thread-count" => 0, "max-threads" => 50, "name" => "default", "queue-length" => 50, "queue-size" => 0, "rejected-count" => 0, "thread-factory" => undefined } }

Using JMX (object name and result in the JConsole UI):

jboss.as:subsystem=jca,workmanager=default,short-running-threads=default

The JCA thread pool can be seen in the following screenshot:

The JCA thread pool

If your application depends heavily on JCA, these pools should be monitored, and perhaps tuned as needed, to provide improved performance.
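As a hedged example, the maximum number of threads in the default work manager's short-running-threads pool could be raised in the same way as queue-length was changed earlier (the value is only an illustration):

/subsystem=jca/workmanager=default/short-running-threads=default:write-attribute(name=max-threads,value=100)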

The Batch API subsystem

The Batch API is new in Java EE 7 and is implemented in WildFly by the Batch subsystem. Internally, it uses an unbounded-queue-thread-pool (see the description earlier in this article). If the application uses the Batch API extensively, the pool settings may need adjustment.

The configuration can be fetched using the CLI or by JMX.

Using CLI, run the following command:

/subsystem=batch/thread-pool=batch:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{ "outcome" => "success", "result" => { "keepalive-time" => { "time" => 100L, "unit" => "MILLISECONDS" }, "max-threads" => 10, "name" => "batch", "thread-factory" => undefined } }

Using JMX (object name and result in the JConsole UI):

jboss.as:subsystem=batch,thread-pool=batch

The Batch API thread pool is shown in the following screenshot:

The Batch API thread pool
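If the pool's default of 10 threads turns out to be a bottleneck, max-threads could be raised with a CLI command such as the following (the value is only an illustration):

/subsystem=batch/thread-pool=batch:write-attribute(name=max-threads,value=20)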

The Remoting subsystem

The Remoting subsystem exposes a connector that allows inbound communication with JNDI, JMX, and the EJB subsystem through multiplexing over the HTTP port (8080 by default).

What happens is that the web container (the Undertow subsystem in WildFly) uses a feature called HTTP Upgrade to redirect, for example, EJB3 calls to the Remoting subsystem where applicable. This new feature in WildFly makes life easier for administrators, as all the scattered ports from earlier versions are now narrowed down to two: one for the application (8080) and one for management (9990).

All this is based on the Java NIO API and utilizes a framework called XNIO (http://www.jboss.org/xnio).

The XNIO-based implementation uses a bounded-queue-thread-pool (see the description earlier in this article) with the following attributes:

  • task-core-threads: The number of core threads for the Remoting worker task thread pool.

  • task-max-threads: The maximum number of threads for the Remoting worker task thread pool.

  • task-keepalive: The number of milliseconds to keep noncore Remoting worker task threads alive.

  • task-limit: The maximum number of Remoting worker tasks to allow before rejecting.

The settings can be managed using CLI by running the following command:

/subsystem=remoting:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{ "outcome" => "success", "result" => { "worker-read-threads" => 1, "worker-task-core-threads" => 4, "worker-task-keepalive" => 60, "worker-task-limit" => 16384, "worker-task-max-threads" => 8, "worker-write-threads" => 1, "connector" => undefined, "http-connector" => {"http-remoting-connector" => undefined}, "local-outbound-connection" => undefined, "outbound-connection" => undefined, "remote-outbound-connection" => undefined } }

The Transactions subsystem

The Transactions subsystem has a fail-safe transaction log. By default, it stores data on disk at ${jboss.server.data.dir}/tx-object-store. For a standalone server instance, this points to the $WILDFLY_HOME/standalone/data/tx-object-store/ directory. The disk you choose to store your transaction log on must be fast and reliable. A good choice would be a local RAID configured with write-through cache. Even though remote disk storage is possible, the network overhead can become a performance bottleneck.

One way to specify another path for this object store is to use the following CLI commands with an absolute path:

/subsystem=transactions:write-attribute(name=object-store-path,value="/mount/diskForTx")
reload
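After the reload, the new setting can be verified with a simple read (a sketch):

/subsystem=transactions:read-attribute(name=object-store-path)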

XA – Two Phase Commit (2PC)

The use of XA is somewhat costly, and it shouldn't be used unless distributed transactions between two or more resources (often databases, but also resources such as JMS) are actually needed. If they are needed, we strongly recommend using XA instead of trying to build something yourself, such as compensating transactions, to guarantee consistency between the resources. Such solutions can very quickly become quite advanced, and the result will probably not outperform the XA protocol anyway.

Even though WildFly supports Last Resource Commit Optimization (LRCO), it shouldn't be used for performance optimization. It is only intended as a workaround that provides limited support for using one non-XA resource within an XA transaction.

These are some of the key subsystem configurations that can be tuned in WildFly.


