Oracle SOA Suite 11g Performance Tuning Cookbook

By Matt Brasier, Nicholas Wright

About this book

Oracle SOA Suite 11g forms the heart of many organisations’ Service Oriented Architecture. Yet for such a core component, simple information on how to tune and configure SOA Suite and its infrastructure is hard to find. Because Oracle SOA Suite 11g builds on top of a variety of infrastructure components, up until now there has been no one single complete reference that brings together all the best practices for tuning the whole SOA stack.

Oracle SOA Suite 11g Performance Tuning Cookbook contains plenty of tips and tricks to help you get the best performance from your SOA Suite infrastructure. From monitoring your environment so you know where bottlenecks are, to tuning the Java Virtual Machine, WebLogic Application Server, and BPEL and BPMN mediator engines, this book will give you the techniques you need in an easy-to-follow, step-by-step guide.

Starting with how to identify problems, and building on that with sections on monitoring, testing, and tuning, the recipes in this book will take you through many of the options available for performance tuning your application.

There are many considerations to take into account when trying to get the best performance out of the Oracle SOA Suite platform. This cookbook will teach you the whole process of tuning JVM garbage collection and memory, tuning BPEL and BPMN persistence settings, and tuning the application server. This book focuses on bringing together tips on how to identify the key bottlenecks in the whole SOA Suite infrastructure, and how to alleviate them.

The Oracle SOA Suite 11g Performance Tuning Cookbook will ensure that you have the tools and techniques to get the most out of your infrastructure, delivering reliable, fast, and scalable services to your enterprise.

Publication date:
July 2013
Publisher
Packt
Pages
328
ISBN
9781849688840

 

Chapter 1. Identifying Problems

This chapter looks at some of the tools and techniques that you can use to identify the areas where any performance bottlenecks are in your SOA Suite application. It covers the following recipes:

  • Identifying new size problems with jstat

  • Identifying permanent generation problems with jstat

  • Monitoring garbage collection with jstat

  • Identifying locking issues with jstack

  • Identifying performance problems with jstack

  • Identifying performance problems using VisualVM on HotSpot

  • Identifying performance problems using JRMC on JRockit

  • Using JRockit Flight Recorder to identify problems

  • Monitoring JDBC connections with the WebLogic console

  • Identifying slow running database queries

  • Identifying slow running components with the Enterprise Manager

 

Introduction


When asked to look into performance issues with a SOA Suite installation, we commonly start by collecting log files from the various system components, looking for clues on any errors or points of contention. The more advanced among us may even have collected Java garbage collection logs and runtime data, and then reviewed it for the same reasons. We encourage these actions as a starting point, but what do we do when the information isn't available in log files? The goal of this chapter is to bridge the gap between knowing there's an issue and identifying it, by introducing a range of tools and techniques for looking at scenarios, before thinking about how to take a remedial action.

The remainder of this book contains a number of recipes that will help you improve the performance of your SOA Suite application by tuning various aspects of the infrastructure, but the changes will generally only make a difference in the area in which the application is slow.

The first step in being able to performance tune your SOA Suite application is to be able to identify where the performance bottlenecks are. Performance is always governed by the availability of resources, and there is always some bottleneck in an application, which prevents it from performing faster. These resources can include the following:

  • CPU time

  • Network bandwidth

  • Input/Output (IO) time

  • Memory

  • Availability of data to process

Memory is not generally a significant bottleneck on its own, but not having enough memory available to your SOA Suite application results in excessive Java garbage collection, which is a CPU-intensive activity, resulting in contention on the CPU. The most common bottlenecks in SOA Suite applications are IO time (waiting for data to be written to some resource, such as a disk or the network) and availability of data (waiting for data to be provided by some external system or user). However, all of the resources mentioned in the preceding list can cause bottlenecks in a SOA Suite application.

It is worth noting that different use cases can have different bottlenecks, so you need to tune for a realistic set of use cases; this is something that is discussed in more detail in a later performance tuning chapter.

We'll start by looking at some tools bundled in the various SOA Suite Java Virtual Machines for diagnosing Java issues, and look at some other common causes of resource contention in SOA Suite. The recipes in this chapter offer suggestions on next steps for common issues, but as mentioned earlier, later chapters will target specific areas of the SOA infrastructure in more detail, once you get over the hurdle of knowing where to start looking.

 

Identifying new size problems with jstat


jstat is a Java Virtual Machine (JVM) command-line tool, which shows you a number of statistics regarding how the JVM is performing. In this recipe, we will use it to understand how large the new size is, and how frequently it is filling up.

Getting ready

You will need the SOA Suite installed for this recipe, and will need permission to execute the domain control scripts, as well as the JVM tools. This recipe assumes that your SOA Suite application is running under a normal load.

The tools jps and jstat are included with both the HotSpot and JRockit JVMs. For brevity, this recipe assumes that you have the relevant bin directory on your path. If you do not, simply use the fully qualified paths to the relevant bin directory.

How to do it…

  1. Use JPS to determine the process ID of your SOA Suite server.

    jps -v

    The process ID is the number in the first column. The SOA Suite server can be identified by the command-line options that the JVM has. If you have multiple SOA Suite servers running on the same machine, you can identify the relevant server by the -Dweblogic.Name parameter, as shown in the following screenshot:

  2. Use the jstat command to view the sizes of the survivor spaces and Eden:

    jstat -gc -h10 <pid> 2000

    The metrics we are interested in are S0C, S1C, EC, and OC. These are the sizes (capacity) of the two survivor spaces, the Eden space and the old generation, all in KB.

  3. Check that S0C and S1C have values of 10000 or higher.

  4. Check that EC has a size of between 20 percent and 50 percent of OC.
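The checks in steps 3 and 4 can be scripted against captured jstat output. The following sketch assumes the JDK 6 HotSpot column layout for jstat -gc, and the sample line is fabricated for illustration:

```shell
# Apply the checks from steps 3 and 4 to one line of 'jstat -gc' output.
# Column layout assumed (JDK 6 HotSpot):
#   S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
# The sample line is fabricated for illustration.
sample="13056.0 13056.0 0.0 8904.1 104960.0 60123.7 393216.0 201544.9 262144.0 231089.5 142 3.531 9 4.042 7.573"

s0c=$(echo "$sample" | awk '{print $1}')
ec=$(echo "$sample" | awk '{print $5}')
oc=$(echo "$sample" | awk '{print $7}')

# Step 3: survivor spaces should be around 10000 (KB) or larger
survivor_ok=$(awk -v s="$s0c" 'BEGIN {print ((s >= 10000) ? "yes" : "no")}')

# Step 4: Eden should be roughly 20 to 50 percent of the old generation
eden_pct=$(awk -v e="$ec" -v o="$oc" 'BEGIN {printf "%d", (e / o) * 100}')

echo "survivor_ok=$survivor_ok eden_pct=$eden_pct"
```

If either check fails, the recipes listed in the See also section cover how to adjust the new size and survivor ratio.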

How it works…

jstat is a JVM tool that can be used to view a number of runtime statistics regarding the JVM. More detail on the JVM and its memory layouts is available in the recipes in Chapter 5, JVM Garbage Collection Tuning. The option -gc prints the statistics about garbage collection, including the capacity and utilization of each memory pool. In the preceding example, the parameter -h10 prints the headers every 10 lines, to make the output easier to read, and 2000 is the time in milliseconds between each sample (2 seconds).

What we are looking for are problems caused by the new generation memory spaces (Eden and the two survivor spaces) having capacities that are too small. This will cause garbage collection to occur more frequently than necessary, resulting in poor performance. Due to their nature, SOA Suite applications usually involve a lot of object allocation, but many of these objects generally do not last very long (the duration of a request), so they can be collected while they are still in the Eden or one of the survivor spaces. For this reason, we generally want a SOA Suite application to have an Eden size that is between 10 percent to 33 percent of the total heap size. From our experience, this is a good starting point for an Eden space that needs to cope with lots of short-lived objects. The survivor spaces should be large enough to hold the objects generated by one or two large requests, and we have found that values smaller than 10 MB (around 10000 in jstat) can cause problems.

There's more…

If you have set the new size (or new ratio) and survivor ratio values on the command line, then the values you see in the output of jstat should match those you specified.

See also

  • The Setting the survivor ratio and Setting the JVM new size recipe in Chapter 5, JVM Garbage Collection Tuning

 

Identifying permanent generation problems with jstat


The permanent generation is a special part of the HotSpot JVM's memory, which stores "class templates" for the JVM class loader to use when creating and removing objects.

jstat is a JVM command-line tool, which shows you a number of statistics regarding how the JVM is performing.

In this recipe, we use jstat to understand the size of the permanent generation.

Getting ready

You will need SOA Suite installed for this recipe, and will need permission to execute the domain control scripts, as well as the HotSpot JVM tools. This recipe assumes that your SOA Suite application is running, so that we can run jstat against it.

The tools jps and jstat are included with both the HotSpot and JRockit JVMs. For brevity, this recipe assumes that you have the relevant bin directory on your path. If you do not, simply use the fully qualified paths to the relevant bin directory.

This recipe assumes you are using HotSpot, as JRockit does not have a permanent generation.

How to do it…

  1. Use JPS to determine the process ID of your SOA Suite server; see step 1 of the Identifying new size problems with jstat recipe.

  2. Use the jstat command to view the size and utilization of the permanent generation:

    jstat -gc -h10 <pid> 2000

    The metrics we are interested in are PC and PU. PC is the capacity (size) of the permanent generation (in KB), and PU is the utilization of the permanent generation (in KB).

  3. We want to see that PU is around 80 percent or less of PC.
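As a quick sketch of step 3, the utilization percentage can be computed from a captured jstat -gc line with awk. The column positions (PC ninth, PU tenth) assume the JDK 6 HotSpot layout, and the sample values are fabricated:

```shell
# Compute permanent generation utilization from one line of 'jstat -gc'
# output. PC is the 9th column and PU the 10th in the JDK 6 HotSpot
# layout; the sample values are fabricated.
sample="13056.0 13056.0 0.0 8904.1 104960.0 60123.7 393216.0 201544.9 262144.0 231089.5 142 3.531 9 4.042 7.573"

perm_pct=$(echo "$sample" | awk '{printf "%d", ($10 / $9) * 100}')
echo "Permanent generation is ${perm_pct}% full"
```

A result above about 80 percent suggests the permanent generation needs enlarging, as the How it works… section explains.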

How it works…

jstat is a JVM tool that can be used to view a number of runtime statistics regarding the JVM. The option -gc prints the statistics about garbage collection, including the capacity and utilization of each memory pool. In the preceding example, the parameter -h10 prints the headers every 10 lines, to make the output easier to read, and 2000 is the time in milliseconds between each sample (2 seconds).

Permanent generation is the section of JVM memory that is used to store classes and other objects that cannot be represented as Java types. The larger a SOA Suite application is, the more classes it will contain, and the more permanent generation it will need. If the permanent generation is too small, classes will need to be unloaded and reloaded in the permanent generation, causing full garbage collections to occur, and a performance hit. We want permanent generation to be large enough to hold all the classes in your SOA Suite application, with a little overhead for other resources, so if the utilization of the permanent generation is around 80 percent of its capacity, we should have sufficient overhead to prevent classes needing to be unloaded.
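If the permanent generation is consistently more than about 80 percent full, it can be enlarged with the HotSpot PermSize flags. The fragment below is an illustrative sketch only; the 512 MB figure is an example, not a recommendation, and the recipe listed in the See also section covers this properly:

```shell
# Example only: set initial and maximum permanent generation size.
# These would typically be appended to the managed server's start
# arguments (for example, in setSOADomainEnv.sh); size them to your
# own jstat measurements rather than copying these values.
JAVA_OPTIONS="$JAVA_OPTIONS -XX:PermSize=512m -XX:MaxPermSize=512m"
```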

See also

  • The Setting the permanent generation size recipe

 

Monitoring garbage collection with jstat


jstat is a JVM command-line tool, which shows you a number of statistics regarding how the JVM is performing. In this recipe, we use jstat to monitor how frequently garbage collection occurs, and how long it takes.

Getting ready

You will need the SOA Suite installed for this recipe, and will need permission to execute the domain control scripts, as well as the JVM tools. This recipe assumes that your SOA Suite application is running under a normal load.

The tools jps and jstat are included with both the HotSpot and JRockit JVMs. For brevity, this recipe assumes that you have the relevant bin directory on your path. If you do not, simply use the fully qualified paths to the relevant bin directory.

This recipe assumes that you are using HotSpot, as the jstat statistics described here are HotSpot-specific.

How to do it…

  1. Use JPS to determine the process ID of your SOA Suite server; see step 1 of the Identifying new size problems with jstat recipe.

  2. Use the jstat command to view the garbage collection counts and times:

    jstat -gc -h10 <pid> 2000

    The metrics we are interested in are YGC, YGCT, FGC, and FGCT. These are the young garbage collection count, young garbage collection time, full garbage collection count, and full garbage collection time, respectively.
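Because these counters are cumulative, the useful number is the delta between samples. As a sketch (with fabricated values), the proportion of an interval spent in garbage collection can be derived from GCT, the total GC time column, across two consecutive samples:

```shell
# GCT (total GC time, in seconds) is cumulative, so the interesting
# figure is its growth between two samples. With 'jstat -gc <pid> 2000'
# consecutive lines are 2 seconds apart; the values here are fabricated.
gct_first=7.573
gct_second=7.731
interval=2

gc_overhead=$(awk -v a="$gct_first" -v b="$gct_second" -v t="$interval" \
  'BEGIN {printf "%.1f", (b - a) / t * 100}')
echo "Time spent in GC over the interval: ${gc_overhead}%"
```

A sustained overhead of more than a few percent is a sign that the garbage collection tuning recipes in Chapter 5 are worth a look.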

How it works…

jstat is a JVM tool that can be used to view a number of runtime statistics regarding the JVM. The option -gc prints statistics about garbage collection, including the capacity and utilization of each memory pool. In the preceding example, the parameter -h10 prints the headers every 10 lines, to make the output easier to read, and 2000 is the time in milliseconds between each sample (2 seconds).

Since garbage collection is a "stop the world" activity, any time spent garbage collecting is time that is not spent executing your business logic. We therefore want to minimize the amount of time spent performing garbage collection. See the chapter on garbage collection for more details on what we are looking to achieve with garbage collection tuning.

There's more…

In Chapter 5, JVM Garbage Collection Tuning, we also pass the JVM a startup option to enable verbose GC logging output, so we can capture this data all the time.

We can then run free tools such as GCViewer (http://www.tagtraum.com/gcviewer.html) on this data output to visualize the data trends. Later in this chapter, we will introduce real-time monitoring tools bundled with the JVM to view this same data.
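For reference, verbose GC logging on HotSpot is typically enabled with flags along the following lines. The log path is an example only; Chapter 5's Turning on verbose garbage collection recipe covers the options properly:

```shell
# Illustrative HotSpot flags for verbose GC logging; append to the
# managed server's start arguments. The log file path is an example.
JAVA_OPTIONS="$JAVA_OPTIONS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps -Xloggc:/tmp/soa_gc.log"
```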

See also

  • The Identifying performance problems using VisualVM on HotSpot and Identifying performance problems using JRMC on JRockit recipes

  • The Tuning to reduce the number of major garbage collections and Turning on verbose garbage collection recipes in Chapter 5, JVM Garbage Collection Tuning

 

Identifying locking issues with jstack


SOA Suite is a heavily multi-threaded Java application, in which locks are used to synchronize access to shared resources. If threads encounter issues sharing these locks, the impact on performance can be severe.

jstack is a HotSpot JVM tool that allows you to view thread dumps, which can be a useful way of identifying locking problems in your application.

Getting ready

You will need SOA Suite installed for this recipe, and will need permission to execute the domain control scripts, as well as the JVM tools. This recipe assumes that your SOA Suite application is running, and under a normal load.

The tools jps and jstack are included with the HotSpot JVM. If you're using JRockit, then see the There's more.. section of this recipe for the alternative command to use. For brevity, this recipe assumes that you have the relevant JVM bin directory on your command line path. If you do not, simply use the fully qualified paths to the relevant bin directory.

How to do it…

  1. Use JPS to determine the process ID of your SOA Suite server; see step 1 of the Identifying new size problems with jstat recipe.

  2. Use the jstack command to create a thread dump for the process. We specify the -l parameter to display additional information about locks, and redirect the output to a file, so we can inspect it more easily:

    jstack -l <pid> >jstack-output.txt

    This will generate a file called jstack-output.txt. Alternatively, kill -3 <pid> can be used on Linux or Ctrl + Break in the console in Windows.

  3. Open jstack-output.txt in your favorite text editor. There are two main types of locking problem we are looking for. The first is a deadlock, where two threads each own one or more locks, but are both waiting on each other's locks. This will look something like the following screenshot:

    Note that each thread is waiting on the lock held by the other thread. If you see thread deadlocks, this generally requires that you redesign your application to acquire locks in the correct order.

    The second type of problem we are looking for is lock contention. This occurs when many threads are all waiting to get access to the same lock. When this occurs, you will see many threads all waiting on the same lock. This usually also requires application code changes to resolve.
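Lock contention can also be spotted mechanically: counting how many threads are blocked on each monitor quickly surfaces a hotspot. The sketch below runs against a fabricated miniature dump; on a real system you would point it at jstack-output.txt:

```shell
# Fabricated miniature thread dump; a real one comes from 'jstack -l <pid>'.
cat > jstack-sample.txt <<'EOF'
"ExecuteThread-1" prio=10 tid=0x01 nid=0x1 waiting for monitor entry
    - waiting to lock <0x00000000e0012345> (a java.lang.Object)
"ExecuteThread-2" prio=10 tid=0x02 nid=0x2 waiting for monitor entry
    - waiting to lock <0x00000000e0012345> (a java.lang.Object)
"ExecuteThread-3" prio=10 tid=0x03 nid=0x3 runnable
    - locked <0x00000000e0012345> (a java.lang.Object)
EOF

# Count blocked threads per monitor; a high count marks a contention hotspot
hot_lock=$(grep "waiting to lock" jstack-sample.txt | awk '{print $5}' \
  | sort | uniq -c | sort -rn | head -1)
echo "$hot_lock"
```

Here two threads are queued on the same monitor, which is the lock to investigate first.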

How it works…

The jstack tool comes with the HotSpot JVM, and is used to produce stack dumps for a running JVM. The output of stack dumps contains information on any locks held by each thread, which we can use to diagnose common locking-related problems.

There's more…

There are two ways of performing locking in a Java application. The first is by using the synchronized keyword, and is the most common. Locks obtained using this method appear in the thread dumps as threads marked as - locked <object reference>, and threads waiting to enter a synchronized block that they do not own the lock to are marked as - waiting to lock <object reference>. The second mechanism for obtaining locks is using the java.util.concurrent.locks package, and locks obtained in this way do not appear in a standard thread dump. However, by specifying the -l parameter to jstack, we are able to view the information on which threads hold locks. This information is displayed in the Locked ownable synchronizers section below each thread dump.

Free tools are available to examine lock dumps, but we will not discuss them in this book.

The techniques in this recipe can be replicated on the JRockit JVM with the command jrcmd <pid> print_threads, where pid can be identified using JRockit's jps in the same way as with the HotSpot JVM.

 

Identifying performance problems with jstack


jstack is a JVM tool that will generate thread dumps for a running JVM. We can use it as a way to do some lightweight profiling of our running SOA Suite application, to identify where the performance bottlenecks are.

Getting ready

You will need the SOA Suite installed for this recipe, and will need permission to execute the domain control scripts, as well as the JVM tools. This recipe assumes that your SOA Suite application is running under a normal load.

The tools jps and jstack are included with the HotSpot JVM. If you're using JRockit then see the There's More… section of this recipe for the alternative command to use. For brevity, this recipe assumes that you have the relevant bin directory on your path. If you do not, simply use the fully qualified paths to the relevant bin directory.

How to do it…

  1. Use JPS to determine the process ID of your SOA Suite server; see step 1 of the Identifying new size problems with jstat recipe.

  2. Execute a slow performing use case on your SOA Suite server.

  3. While the use case is executing, use the jstack command to create a thread dump for the process. We redirect the output to a file to inspect it more easily:

    jstack <pid> >jstack-output-1.txt

    This will generate a file called jstack-output-1.txt. Alternatively, kill -3 <pid> can be used on Linux or Ctrl + Break in the console in Windows.

  4. Repeat step 3 a number of times, changing the number of the generated file each time, to create a sequence of thread dumps. We suggest aiming to create thread dumps at intervals of one or two seconds, for a period of 30 seconds, or for the amount of time that it takes to execute your slow use case.

  5. Open the output files in your favorite text editor.

To identify the poorly performing code, we are looking for thread stacks that appear frequently across the files. These will either be operations that are executed multiple times, or operations that are executed once but take a long time to complete.
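A simple way to spot recurring stacks is to count stack frames across all the dump files. The sketch below uses two fabricated miniature dumps; on a real system, the jstack-output-*.txt files would be the ones generated in steps 3 and 4:

```shell
# Two fabricated miniature dumps; real ones come from repeated 'jstack <pid>'
cat > jstack-output-1.txt <<'EOF'
"ExecuteThread-1" runnable
    at java.net.SocketInputStream.socketRead0(Native Method)
    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java)
EOF
cat > jstack-output-2.txt <<'EOF'
"ExecuteThread-1" runnable
    at java.net.SocketInputStream.socketRead0(Native Method)
EOF

# Frames that recur across dumps are where the application spends its time
top_frame=$(grep -h " at " jstack-output-*.txt | sort | uniq -c \
  | sort -rn | head -1)
echo "$top_frame"
```

Here the most frequent frame is a socket read, which matches the common case described below of waiting on an external system.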

How it works…

A thread dump is a snapshot of what every thread in the application is doing at a particular moment in time. By taking multiple thread dumps over a period of time, we can build up a picture of where the application is spending its time. This approach is essentially how a sampling profiler works, so we are effectively profiling the application manually.

It is common to find that the cause of poor performance is waiting on data from external systems, such as a database or web service. In this case, what we will see in our thread dumps is that there will be many instances where a thread is in JVM code that is reading data from a socket. This is the thread waiting on a response from the remote system, and we know that we should target the external system in order to improve the performance.

We can quickly look through large numbers of stacks by noting that, in general, the threads of interest to us will be those that are running the application code. We can identify these threads because they will generally have two properties: they will be executing code from the packages used in your application, and they will have stack traces that are longer than those of threads that are currently idle.

There's more…

What we are doing here is essentially what a sampling profiler tool would do, although we are doing it manually. While there may be cases in which a sampling profiler would be a preferable way of doing this, we have found that doing the profiling manually gives a number of benefits. Firstly, it uses only tools that already exist on the server—there is nothing that needs installing, and given that we may be doing this on a live system, this usually means less paperwork. Secondly, we have more control over how and when we generate the thread dumps, which helps reduce the overhead on the application.

If we find this technique is something that we do frequently, it may be worth writing a script that will generate the thread dumps for us. We have found this to be a good compromise between using a sampling profiler and manually sampling, using thread dumps.
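A minimal version of such a script might look like the following. The collect_dumps name is ours, and the defaults (15 dumps, 2 seconds apart) match the suggestion in step 4; the dump command can be overridden, which is useful on JRockit (jrcmd) or for testing:

```shell
# A helper (the name 'collect_dumps' is our own invention) that takes a
# thread dump every few seconds. DUMP_CMD defaults to jstack; override
# it to use jrcmd on JRockit, or for testing.
collect_dumps() {
  pid=$1
  count=${2:-15}
  interval=${3:-2}
  outdir=${4:-/tmp/dumps}
  dump_cmd=${DUMP_CMD:-jstack}
  mkdir -p "$outdir"
  i=1
  while [ "$i" -le "$count" ]; do
    "$dump_cmd" "$pid" > "$outdir/jstack-output-$i.txt"
    sleep "$interval"
    i=$((i + 1))
  done
}

# Example: collect_dumps <pid>    (15 dumps, 2 seconds apart, in /tmp/dumps)
```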

It is worth noting that WebLogic can report Java threads as "stuck" if they run for a longer period of time than a configurable timeout. This itself is not indicative of a problem, as long-running activities (such as database or file polling) can be valid things for application components to do. Observing thread dumps will tell us definitively whether we need to take further action.

Generating heap dumps is also a good technique for investigating performance issues, and free tools such as the Memory Analyzer Toolkit can generate reports on object allocation and aid with detecting memory leaks. We do not discuss heap dumps explicitly in this book, because SOA Suite is an Oracle-supported application, so issues with the core components themselves should be raised with Oracle support, but there are plenty of good resources online to help you use this technique if you desire.

See also

  • The Identifying performance problems using VisualVM on HotSpot, Identifying performance problems using JRMC on JRockit, and Using JRockit Flight Recorder to identify problems recipes

 

Identifying performance problems using VisualVM on HotSpot


VisualVM is a powerful graphical tool that comes with the HotSpot JVM. It has a number of views, which can be useful for diagnosing performance problems with SOA Suite. This recipe looks at a number of these features.

Getting ready

You will need SOA Suite installed for this recipe, and will need permission to execute the domain control scripts, as well as the JVM tools. This recipe assumes that your SOA Suite application is running under a normal load, or a load sufficient to demonstrate any performance problems.

The tool jvisualvm is included with the HotSpot JVM; JRockit has a different tool, described in another recipe. For brevity, this recipe assumes that you have the relevant JVM bin directory on your path. If you do not, simply use the fully qualified paths to the relevant bin directory.

How to do it…

  1. Use JPS to determine the process ID of your SOA Suite server; see step 1 of the Identifying new size problems with jstat recipe. This will let us identify a target Java process on a host running multiple JVMs. We're trying to identify the WebLogic admin server to use with VisualVM.

  2. Start VisualVM by using the jvisualvm command:

    jvisualvm

    If this is the first time you have run VisualVM, it will first perform calibration on your server. Once the calibration is completed, the VisualVM homepage will open as shown:

  3. Select the WebLogic process that has the process ID you identified in step 1. Either double-click on it, or right-click and select Open.

  4. VisualVM will open the Overview page for the WebLogic Server instance. You should check that the server name matches the one you were expecting.

  5. Switch to the Monitor tab, which provides an overview of the performance of the application.

  6. There are a number of things that we want to check on this tab:

    • The CPU usage should be relatively low (below 30 percent, ideally) and the amount of CPU usage spent on the GC activity should also be low.

    • When garbage collection occurs, Used Heap should drop by a significant amount.

  7. VisualVM contains two profilers that can be used to identify performance problems. We prefer to use the sampling profiler, which is available on the Sampler tab.

  8. Click on the CPU button to begin profiling your application. The table will show which Java methods are taking the most time.

How it works…

VisualVM hooks into the JVM to read the statistics about memory usage, garbage collection, and other internals, and displays them graphically for the user. Many of these metrics are useful in identifying where a performance problem lies.

An application with high CPU usage may be CPU bound, but before just putting it on a host with more or faster CPUs, we need to identify what it is that is using the CPU. Garbage collection is a common culprit, so the graph showing the amount of CPU time spent running the garbage collector is particularly useful. If garbage collection is not the culprit, then the sampling profiler will show us where the CPU time is being spent. This is essentially a graphical version of using the jstat and jstack tools mentioned in other recipes.

There's more…

Additional plugins can be installed for VisualVM to display the information in more convenient ways, or to display additional information.

The instrumenting profiler, on the Profiler tab, can also be used to profile your application, but it imposes a much higher overhead, and so is more likely to affect the results.

See also

  • The Identifying performance problems using JRMC on JRockit recipe

 

Identifying performance problems using JRMC on JRockit


JRockit Mission Control is a graphical management console for the JRockit JVM, which comes with SOA Suite 11g. We can use it to help identify a number of performance problems that we might encounter with SOA Suite when running on JRockit.

Getting ready

You will need SOA Suite installed for this recipe, and will need permission to execute the domain control scripts, as well as the JVM tools. This recipe assumes that your SOA Suite application is running, and under a normal load or a load sufficient to demonstrate any performance problems.

The tool jrmc is included with the JRockit JVM; HotSpot has a different tool described in a different recipe. For brevity, this recipe assumes that you have the relevant JVM bin directory on your path. If you do not, simply use the fully qualified paths to the relevant bin directory.

How to do it…

  1. Use JPS to determine the process ID of your SOA Suite server; see step 1 of the Identifying new size problems with jstat recipe. This will let us identify a target Java process in a host running multiple JVMs. We're trying to identify the WebLogic Admin server to use with JRockit Mission Control.

  2. Launch JRockit Mission Control by executing the command jrmc from the JVM bin directory:

    jrmc

    If this is the first time you have run jrmc, it will open the JRockit Mission Control Welcome page:

  3. Click on the X next to the Welcome tab in the top-left to close the welcome screen. This will display the bird's-eye view, which can be selected at any time from the Window menu.

  4. Select the WebLogic instance that you want to connect to, based on the process ID that you established in step one. Right-click on the instance and select Start Console.

  5. This will display the JRockit Mission Control console for the selected WebLogic instance:

  6. There are a number of things that we are looking for on this page:

    • JVM CPU Usage should be low, not higher than 30 percent ideally.

    • Used Java Heap (%) should increase as memory is used, but should come down again when garbage collection occurs.

How it works…

JRockit Mission Control gathers management metrics made available by JRockit at runtime, and makes them visible graphically. The Mission Control console gives us an overview of the CPU and memory usage of the application, with graphs of historical data, which we can use to determine whether applications are having memory- or CPU-related performance problems.

Mission Control can also monitor JVMs remotely; refer to the JRockit documentation for guidelines on achieving this at http://www.oracle.com/technetwork/middleware/jrockit/documentation/index.html.

See also

  • The Using JRockit Flight Recorder to identify problems recipe

 

Using JRockit Flight Recorder to identify problems


JRockit Flight Recorder is a component of JRockit Mission Control that can be used to record detailed statistics about your application, to be viewed later.

Getting ready

You will need SOA Suite installed for this recipe, and will need permission to execute the domain control scripts, as well as the JVM tools. This recipe assumes that your SOA Suite application is running under a normal load or a load sufficient to demonstrate any performance problems.

The tool jrmc (of which Flight Recorder is a component) is included with the JRockit JVM; HotSpot has a different tool called VisualVM described in a different recipe. For brevity, this recipe assumes that you have the relevant JVM bin directory on your path. If you do not, simply use the fully qualified paths to the relevant bin directory.

How to do it…

  1. Start JRockit Mission Control by following steps from 1 through 3 of the Identifying performance problems using JRMC on JRockit recipe.

  2. Select the WebLogic instance that you want to connect to, based on the process ID that you established in step 1. Right-click on the instance and select Start Flight Recording. This will display the flight recording settings dialog box.

  3. Accept the default Template and Filename settings, and set Name to be something descriptive. Recording Time should be long enough to encapsulate a use case that is considered to be slow. Once you are happy with the settings, click on OK and the recording will begin.

  4. While recording is in progress, you can see the remaining time from the Flight Recorder Control view at the bottom of the screen.

  5. Once the flight recorder finishes recording, it will open the Overview screen as shown:

    This screen displays the overview of the recorded data. The event strip at the top of the screen can be used to select a specific part of the timeline to view. Clicking on Synchronize Selection will keep the selected time range the same across all the viewed screens.

  6. Select the Memory tab; this tab shows a number of interesting statistics about JVM memory usage at runtime. For more detailed information, you can select one of the subtabs at the bottom of the screen.

    On this page, we want to check a number of things:

    • That the average and maximum Main Pause times are reasonable; for example, below 10 seconds

    • That the heap usage comes down by a reasonable amount when garbage collection occurs, and that it is not just "bouncing around near the top of the graph"

  7. Select the CPU/Threads tab. This tab shows the statistics regarding CPU usage and threads. We want to check that the Application + JVM CPU usage does not make up the majority of the total CPU usage, and that the total CPU usage is low (ideally at 30 percent or below).
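The heap checks described in step 6 can also be scripted. The following is a minimal, illustrative sketch; it assumes you have exported the garbage collection events from a recording into tuples of (pause in ms, heap before in MB, heap after in MB), which is an invented format for this example, not a real JRockit API:

```python
# Hypothetical post-processing of GC events exported from a Flight Recorder
# file. The (pause_ms, heap_before_mb, heap_after_mb) tuple format is an
# assumption for illustration.

def check_gc_health(events, max_pause_ms=10000, min_reclaim_ratio=0.2):
    """Flag long pauses and heaps that just 'bounce around near the top'."""
    pauses = [e[0] for e in events]
    # Fraction of the heap reclaimed by each collection.
    reclaim = [(before - after) / before for _, before, after in events]
    return {
        "avg_pause_ms": sum(pauses) / len(pauses),
        "max_pause_ms": max(pauses),
        "pauses_ok": max(pauses) <= max_pause_ms,
        # At least the largest collection should reclaim a reasonable
        # fraction of the heap; if not, usage is stuck near the top.
        "reclaim_ok": max(reclaim) >= min_reclaim_ratio,
    }

if __name__ == "__main__":
    sample = [(120, 900, 400), (95, 880, 350), (200, 910, 450)]
    print(check_gc_health(sample))
```

The thresholds mirror the two bullets above; tune them to what is "reasonable" for your own environment.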

How it works…

JRockit Flight Recorder collects a large number of detailed statistics about an application's execution, and stores them so that they can be analyzed later. This allows a more detailed analysis than trying to observe the statistics in real time, because you can go back and "replay" the gathered statistics to focus on particular time periods. Flight Recorder can be executed with a number of profiles; the Normal profile has the lowest overhead, so it is the most suitable for use in production environments. The other profiles gather more information but have a higher overhead, so they are more suitable when you can reproduce the problem in a test environment. One of the advertised strengths of JRockit is that the Normal collection profile adds virtually no overhead to JVM execution time, as the collected metrics are already used internally by JRockit for runtime optimization.

It can be helpful to take Flight Recorder profiles of healthy systems, so that if a future release introduces a performance problem, you have a baseline to compare it to. This can make it much easier to identify the area in which performance is likely to be suffering. With SOA Suite in mind, it is worthwhile using Flight Recorder when running load tests (see Chapter 3, Performance Testing) against the WebLogic servers.

See also

  • The Identifying performance problems with JRockit Mission Control recipe

 

Monitoring JDBC connections with the WebLogic console


One of the things that can cause poor application performance is not having enough database connections available. Since Oracle SOA Suite is a heavily database-intensive application, a shortage of connections can cause significant performance problems, particularly if you use lots of transient SOA BPEL processes (see Chapter 8, BPEL and BPMN Engine Tuning, for more on this topic). In this recipe, we will look at how to use the WebLogic console to view the number of available database connections.

Getting ready

For this recipe, you will need to know the URL and login credentials for the WebLogic administration console that runs on the admin server in your SOA Suite domain. The default URL for the console is http://<servername>:7001/console. The username and password will have been specified when you created the domain.

You will need SOA Suite installed and running, and to have loaded it with representative load or live data, or be able to create a realistic load on the application.

How to do it…

  1. Connect to the WebLogic administration console at http://localhost:7001/console; replace localhost and 7001 with the hostname and port of the server if it is not running locally on 7001. If this is the first time you have accessed the WebLogic console since starting the server, WebLogic will first deploy the console application, which may take a few minutes.

  2. Log in to the console by using your administration credentials.

  3. Open the Services category on the left-hand navigation pane, and select Data Sources.

  4. This will display a list of all the data sources in the main panel.

  5. Select the first of these data sources, which will load its properties page. Select the Monitoring tab and the Statistics subtab.

  6. This will display the statistics for the data source. If you see nothing on this page, then it is likely that none of the servers on which this data source exists are running.

  7. Click on the Customize this table button, and add the following columns to the view: Active Connections Current Count, Active Connections High Count, Current Capacity, Number Available, Waiting for Connection Current Count, and Waiting For Connection High Count. Click on Apply.

  8. Now, when you view the Statistics panel, it should show the following new statistics:

  9. Check the statistics for any problems; see the How it works… section of this recipe for more details about what to look for.

  10. Repeat steps from 5 through 9 for each data source.

How it works…

The Oracle WebLogic application server exposes management and monitoring metrics using the JMX standard, and the Oracle WebLogic console makes use of these to display information on components such as the database connection pools. It is worth noting that most connection pools have an initial size of zero, so if you view the pool size before the application has been used, you will see zero for all of the statistics we are monitoring. If this is the case, you should run some load through your application and check again.

These connection pools are used by SOA Suite components to talk to the database for many purposes, including storing process instance state, restoring state, and storing logging information. If WebLogic runs out of these connections, requests have to queue while waiting for a connection to become available, eventually timing out if none does. By monitoring the number of connections in use, you can spot when the pool is running out, and increase its maximum size accordingly.

We are looking for a number of different things in the connection statistics:

  • Active Connections Current Count equal to Current Capacity, which indicates that every connection from the pool is currently in use. This means that there are no free connections. If the Current Capacity is the maximum number of connections that the pool will hold, then new requests will have to wait for connections.

  • Active Connections High Count equal to Current Capacity, which indicates that at some point, every connection from the pool was in use. Active Connections High Count gives the highest value that Active Connections Current Count has had since the server was restarted (or the statistics reset), so can be useful for diagnosing the cause of performance problems that happened in the past, and the system has now recovered.

  • Number Available equal to zero, which indicates that the pool is empty (unless the pool has not been initialized yet). This will usually occur when all the other conditions are true.

  • Waiting for Connection Current Count or Waiting for Connection High Count not equal to zero, indicating that at some point there have been threads waiting to get a connection from the pool. As threads will only wait for connections if there are none in the pool, then we would expect to only see this if all of the other conditions are also true.

These are all indicators of the same underlying problem, which is that there are insufficient available connections to the database. This is usually caused by one of three things—either a sudden burst of traffic has used up more than normal, the database is running slower than normal, or there is a connection leak in the application. To find out which of these is the case, you should look at the output of the thread dumps and database statistics.
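The indicators above can be combined into a simple health check. The sketch below is illustrative only: the dictionary keys mirror the column names added in step 7, but the snapshot structure itself is an assumption for this example, not a WebLogic API.

```python
def diagnose_pool(stats):
    """Return a list of warnings for one JDBC pool statistics snapshot.

    `stats` is a plain dict keyed by the column names added in step 7
    (an assumed structure for illustration).
    """
    warnings = []
    if stats["Active Connections Current Count"] >= stats["Current Capacity"]:
        warnings.append("Pool exhausted: every connection is currently in use")
    if stats["Active Connections High Count"] >= stats["Current Capacity"]:
        warnings.append("Pool was exhausted at some point since restart")
    if stats["Number Available"] == 0 and stats["Current Capacity"] > 0:
        warnings.append("No free connections in the pool")
    if (stats["Waiting for Connection Current Count"] > 0
            or stats["Waiting For Connection High Count"] > 0):
        warnings.append("Threads have had to wait for a connection")
    return warnings

if __name__ == "__main__":
    snapshot = {
        "Active Connections Current Count": 50,
        "Active Connections High Count": 50,
        "Current Capacity": 50,
        "Number Available": 0,
        "Waiting for Connection Current Count": 3,
        "Waiting For Connection High Count": 7,
    }
    for warning in diagnose_pool(snapshot):
        print(warning)
```

As the bullets note, these conditions tend to occur together; a snapshot that triggers all four is a strong sign that the pool, the database, or the application's connection handling needs attention.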

There's more…

As well as being able to see this information via the WebLogic console, it is also made available via the JMX standard and via the WebLogic scripting tool, which is based on Jython, allowing you to use many tools to obtain and process connection pool sizes and more. We will discuss more about JMX in Chapter 2, Monitoring Oracle SOA Suite, and look at some of the tools that can be used for obtaining JMX metrics.
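As a sketch of the WLST approach, the following Jython script walks the JDBC data source runtime MBeans and prints their pool statistics. It must be run with the wlst.sh tool shipped with WebLogic, not a standard Python interpreter, and the URL and credentials shown are placeholders for your own environment; verify the MBean attribute names against your WebLogic version.

```python
# Illustrative WLST (Jython) script; run with wlst.sh.
# Replace the credentials and t3 URL with your own.
connect('weblogic', 'welcome1', 't3://localhost:7001')
domainRuntime()
for server in cmo.getServerRuntimes():
    jdbc = server.getJDBCServiceRuntime()
    for ds in jdbc.getJDBCDataSourceRuntimeMBeans():
        # The same statistics that step 7 adds to the console table.
        print server.getName(), ds.getName(), \
            'active:', ds.getActiveConnectionsCurrentCount(), \
            'high:', ds.getActiveConnectionsHighCount(), \
            'capacity:', ds.getCurrentCapacity(), \
            'waiting:', ds.getWaitingForConnectionCurrentCount()
disconnect()
```

A script like this can be run from cron or a monitoring tool to record pool usage over time, rather than checking the console by hand.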

See also

  • The Identifying slow-running database queries and Identifying performance problems with JStack recipes

  • Chapter 2, Monitoring Oracle SOA Suite

 

Identifying slow-running database queries


SOA Suite applications make heavy use of the database. Identifying the database queries that are running slowly can allow you to identify areas where performance improvements can be made by tuning or reducing the amount of database access.

Getting ready

You will need SOA Suite installed and running, and have loaded it with representative load or live data.

This recipe requires that you are able to log in to the SOA Suite database as an administrator and are able to execute the necessary SQL statements.

How to do it…

  1. Log in to the SQL database as an administrator.

  2. Execute the following SQL query:

    SELECT * FROM
    (SELECT
        sql_fulltext,
        sql_id,
        child_number,
        disk_reads,
        executions,
        elapsed_time,
        first_load_time,
        last_load_time
    FROM v$sql
    ORDER BY elapsed_time DESC)
    WHERE ROWNUM <= 10
    /

    This will return the 10 slowest SQL statements, ordered by total elapsed time (the elapsed_time column is included in the output so that you can see the value the results are ordered by). Note that statements are eventually aged out of the v$sql view, so the results only cover statements that have been executed recently.
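A useful variation on this query, assuming the same administrator access to v$sql, is to normalize the elapsed time per execution, so that a statement that is slow every time it runs stands out from one that has simply been executed many times:

```sql
SELECT * FROM
(SELECT
    sql_id,
    executions,
    elapsed_time,
    elapsed_time / executions AS avg_elapsed_time,
    sql_fulltext
FROM v$sql
WHERE executions > 0
ORDER BY elapsed_time / executions DESC)
WHERE ROWNUM <= 10
/
```

Statements near the top of both orderings are usually the best candidates for tuning.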

How it works…

Oracle SOA Suite makes very heavy use of the database, so being able to identify the queries that take the longest to run allows these to be targeted for optimization. The v$sql view in Oracle is a virtual table that contains statistics about recently executed SQL statements, so it can be used to identify the statements that take the longest to execute.

There are a number of ways of discovering slow-running database queries, but we have selected the preceding method because it is simple and easy to understand. More complex statistics can be used to identify exactly what it is that is causing queries to run slowly.

Once you have identified slow-running database queries, the steps you need to take to resolve them depends upon what the queries are. If they are application-related queries, then you can focus on changing the application code, but if they relate to Oracle SOA Suite itself, then you will have to look at things like reducing the amount of logging and persistence in your application, or rearchitecting it to behave differently.

See also

 

Identifying slow-running components with the Enterprise Manager


We can use Oracle Enterprise Manager Fusion Middleware Control to view the slow-performing components of our SOA Suite middleware infrastructure, allowing us to target our performance improvements at those components.

Getting ready

This recipe assumes that you have Oracle SOA Suite installed and running, and are able to generate sufficient load against it to demonstrate any performance problems.

You must have the Oracle Enterprise Manager component installed in your domain, and you will need to know the administration username and password for the domain.

How to do it…

  1. Connect to Oracle Enterprise Manager running on the admin server on the URL http://<hostname>:<port>/em; by default, this will be http://localhost:7001/em if you are connecting from the same host.

  2. Log in to Oracle Enterprise Manager by using your administration credentials.

  3. Open Application Deployments, right-click on soa_infra, and select Performance Summary.

  4. Check that request processing time is not high.

How it works…

Oracle Enterprise Manager is an Oracle product that is used to manage and monitor components of the SOA Suite infrastructure. Some of its features are licensed separately from Oracle SOA Suite, so if you are unsure, check with your Oracle account manager whether you are licensed to use them.

Oracle Enterprise Manager can be used to manage many components within Oracle SOA Suite, and here we use it to view the performance summary for the SOA infrastructure components. In this instance, we view the request processing time, which tells us how long each request is taking to be processed. It is averaged across a large number of components, so you are really looking to see that it is not higher than usual. This of course requires you to know what usual is, so like all of these statistics, it is worth understanding how the system behaves when there are no performance problems.

There's more…

If you get a 404 error when attempting to connect to the Oracle Enterprise Manager Fusion Middleware Control console, it is likely that you did not add Enterprise Manager to your domain when you created it. You can extend your domain later by running the domain configuration wizard, selecting your current domain, and extending it to add Oracle Enterprise Manager, although you should be aware that this may overwrite any configuration changes that you have made to the startup scripts.

About the Authors

  • Matt Brasier

    Matt Brasier is a professional services consultant with 10 years of experience tuning Enterprise Java applications. He has 9 years of experience working with Enterprise Java middleware, and has always focused on the performance aspects of application servers and SOA platforms. Matt has spoken at numerous technical conferences, including The Serverside Java Symposium and JUDCon. Matt is the head of consulting at C2B2 Consulting Ltd., an independent middleware consultancy specializing in SOA and JEE, providing professional services on the leading Java middleware platforms. C2B2 is an Oracle Gold Partner with Application Grid Specialization.

  • Nicholas Wright

    Nicholas Wright is a Java middleware consultant and technical evangelist with 9 years of experience from Embedded C real-time systems to Enterprise Java. He has served on a Java Community Process Expert Group, and spoken at conferences across the world. Nick is a senior consultant at C2B2 Consulting Ltd., where he works with a wide range of commercial and open source JEE and SOA middleware platforms.

