In this article, we'll focus on recipes for designing high performance SOA Suite 11g applications. These recipes look at how you can design your applications for high performance and scalability, where high performance is defined as providing low response times even under load, and scalability is defined as the ability to expand to cope with large numbers of requests.
While many of the recipes in other articles can be applied after the application has been designed and written, those in this article need to be applied while the application is being written, and may require that your application is implemented in a certain way. Designing an application with performance as a requirement from the start is much easier than trying to add performance to an application that is already live. So, the recipes in this article provide some of the best value for money in terms of getting the most performance out of your SOA Suite infrastructure. However, while this book focuses on decisions that should be made during the design stages of a development process, this article is not a list of general SOA Suite design patterns.
As with many of the recipes in other articles, much of the focus in this article is on reducing the amount of time your application spends waiting on external services and on the SOA Suite database tables.
There are many aspects to the performance of a SOA Suite application, and the design guidelines depend very much on the particular business problems that your application is designed to solve. Factors such as payload size, number of external systems being orchestrated, data transformation complexity, and persistence requirements, all have an impact on the performance of your application. Performance is a relative term, with each application and use-case having its own requirements, but there are a number of basic principles that can help ensure that your application will have a good chance of meeting its goals.
- Design for peak loads, not average loads. Average loads can be very misleading; there are many situations in which the average load on a system is a poor indicator of the expected peak. A good example is a tax return system, where usage for most of the year is very low, building to a peak in the 30 or so days before people's tax returns are due.
- Smaller payloads are faster. When designing your application, try to limit the amount of payload data that flows through your composites and processes. It is often better to store the data in a database and pass only the key and metadata through the processes, retrieving the full data only when required.
- Understand your transaction boundaries. Many applications suffer performance problems because their transaction boundaries are in the wrong places, causing work to be redone unnecessarily when failures happen, or leaving data in an inconsistent state.
- Understand what causes your application to access the database, and why. Much of the performance overhead of Oracle SOA Suite applications is in repeated trips to the database. These trips add value by persisting state between steps or within processes, but the overuse of steps that cause database persistence is a common cause of performance problems.
- Follow standard web service design patterns, such as using asynchronous callbacks and stateless invocations, where you are using web services.
Using BPEL process parallelization
By having your BPEL process execute steps in parallel when there are no dependencies between them, you can increase performance by spending less time waiting for external systems to respond.
You will need JDeveloper installed and a BPEL project open.
How to do it...
Follow these steps to use BPEL process parallelization:
- Expand the BPEL Constructs section in the component palette.
- Drag Flow from the palette onto the process.
- Click on the + icon next to the flow to expand it.
- Populate the flow with the process steps.
How it works...
If you have a number of tasks that do not depend on each other, you can improve performance by executing those tasks in parallel. This is most effective with partner links, where you know you are waiting on an external system to produce a response. Note that the default behavior of these flows is still to use a single thread to execute the branches, even when external systems are invoked. See the Using non-blocking service invocations in BPEL flows recipe to learn how to execute flows that contain partner links in parallel.
It is possible to include a limited amount of synchronization between branches of a flow, so that tasks on one branch will wait for tasks on another branch to complete before proceeding. This is best used with caution, but it can provide benefits, and allow tasks that would not otherwise easily lend themselves to parallelization to be run in parallel.
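In the .bpel source, the steps above produce a flow activity with one branch per independent task. The following is a minimal sketch; the activity, variable, and partner link names are illustrative, not from a real project:

```xml
<!-- Two branches with no mutual dependencies; each <sequence> is one branch
     of the flow, and both branches complete before the flow exits -->
<flow name="FetchCustomerAndOrders">
  <sequence name="CustomerBranch">
    <invoke name="InvokeCustomerService" partnerLink="CustomerService"
            operation="getCustomer"
            inputVariable="custRequest" outputVariable="custResponse"/>
  </sequence>
  <sequence name="OrdersBranch">
    <invoke name="InvokeOrderService" partnerLink="OrderService"
            operation="getOrders"
            inputVariable="orderRequest" outputVariable="orderResponse"/>
  </sequence>
</flow>
```

Activities placed after the closing flow tag only execute once every branch has completed, which is where you would typically collate the two responses.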
Using non-blocking service invocations in BPEL flows
By assigning a thread to each branch of a flow, we can reduce the latency of forked external service invocations in a BPEL process to the execution time of the longest-running branch.
You'll need a composite open in JDeveloper for this recipe. The composite will need a flow that invokes external services through partner links.
How to do it...
Follow these steps to use non-blocking service invocations:
- Right-click on each partner link that is being executed in your BPEL process flow, and select Edit.
- In the Property tab, select the green + icon and add nonBlockingInvoke as a property name. In the Value box at the bottom, enter true.
How it works...
This recipe causes flow branches to be executed in parallel, with a new thread used for each branch.
For multiple service invocations that each have a high latency, this can greatly improve the total BPEL execution time. For example, assume we have a BPEL process that calls two web services, one that takes four seconds to respond and one that takes six. Applying this change stops the BPEL process from making the calls serially, which would take 10 seconds in total, and enforces parallel service calls in separate threads, reducing the execution time to just over six seconds: the latency of the longest call, plus the time to collate the results in the main BPEL process execution thread.
While this may sound like a silver bullet, this recipe will not necessarily improve the execution time of your BPEL process. Consider that we may now be at the mercy of greater thread context switching on the CPU; for every invocation of our process, a larger number of threads will be spawned. If each service invocation has a low latency, the overhead of creating threads and collating callbacks may be greater than the cost of invoking the services in a single thread. The example in this explanation is contrived, so be sure to test the response time and profile of your composite under operational load (which may result in many threads spawning), as these may well differ once the configuration is applied.
This recipe used an alternative to the way we've set property values elsewhere in the book. Previously, we edited composite files directly; here, we used the JDeveloper BPEL graphical editor to achieve the same end result. If you check the composite.xml source, you'll see a property named partnerLink.[your service name].nonBlockingInvoke added for each service you edited.
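As a sketch, the resulting entries in composite.xml look something like the following; the component and partner link names are illustrative:

```xml
<!-- BPEL component entry in composite.xml; one nonBlockingInvoke property
     per partner link that should be invoked on its own thread -->
<component name="OrderProcess">
  <implementation.bpel src="OrderProcess.bpel"/>
  <property name="partnerLink.CreditCheckService.nonBlockingInvoke">true</property>
  <property name="partnerLink.StockCheckService.nonBlockingInvoke">true</property>
</component>
```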
Turning off payload validation and composite state monitoring
Payload validation checks all inbound and outbound message data, adding an overhead that is especially noticeable for large message types. Composite state monitoring allows administrators to view the results of all instance invocations. We can disable both of these features to improve performance.
You will need to know the administration credentials for your Oracle SOA Suite WebLogic domain, and have access to the Oracle Enterprise Manager console.
How to do it...
By following these steps, we can turn off payload validation:
- Log in to Enterprise Manager.
- Open the SOA tab, right-click on soa_infra, and select SOA Administration and then Common Properties.
- Un-tick the checkbox for Payload Validation to disable this feature.
- Un-tick the checkbox for Capture Composite Instance State.
How it works...
In this recipe, we globally disabled payload validation. This instructs SOA Suite not to check inbound and outbound message payloads against the schemas associated with our services. Disabling validation globally is most appropriate when payloads come from trusted sources. A common alternative is to add steps that manually validate a payload at the point where we first receive a request from an untrusted source, while not validating messages that have come from internal or trusted sources.
Payload validation can be applied at several levels of granularity; it is available at the SOA engine (BPEL) and composite levels to allow fine-grained application of this property. You can access these settings via the Enterprise Manager console right-click menu on the SOA engines and deployed composites. For performance, we would recommend disabling payload validation in all environments above development.
Composite state management is responsible for tracking and representing the health of running composites. This is a powerful administration feature, but it is expensive; anecdotal testing suggests it can be responsible for up to 30 percent of processing time. For high-throughput applications, the value of this feature should be weighed against its cost.
See the recipes on audit logging to further control composite recording activities at runtime.
Check the payload validation settings at the engine and composite levels to make sure they meet your performance requirements.
Designing BPEL processes to reduce persistence
Every step in a BPEL process adds an overhead. In this recipe we'll suggest some design options to consider when deciding how to construct your processes.
You will need an understanding of SOA Suite programming concepts.
How to do it...
The following steps cover some of the techniques for reducing process persistence:
- If you have lots of variable assignment steps, consider a call out to a Business Rules component to check and set multiple values in response to the user input.
- If complex logic is required, consider embedding a call to a Java class in the composite if this can reduce the number of steps.
How it works...
The default approach to BPEL composite design is to keep adding steps to a process until the business logic is satisfied. This makes it easy to end up with monolithic processes that have too many steps. For example, processes that make decisions and then fork on the results require an ever-increasing number of steps to deal with the varying logic flows.
While it sounds easy to recommend simply reducing the number of steps, in practice this requires careful analysis of the composite's requirements and consideration of where process dehydration will occur. SOA Suite offers two powerful options that can be leveraged to reduce persistence between BPEL process steps.
Using business rules transfers the process execution to an in-memory evaluation of the business logic. This can speed up forming and mutating the composite's output payload.
Embedded calls to Java can replace forking process logic, reducing multiple BPEL process steps to a few if/else blocks in a method. Note that to keep execution fast, we should resist the urge to perform slow external work in the method, such as blocking calls to external databases.
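As an illustrative sketch, Oracle BPEL's Java embedding (the bpelx:exec extension activity) can collapse several decision and assignment steps into a single activity. The variable names and the routing logic below are hypothetical, not from a real process:

```xml
<bpelx:exec name="EvaluateOrder" language="java" version="1.5">
  <![CDATA[
    // Read a BPEL variable, apply the branching logic in-process,
    // and write the result back - replacing several switch/assign
    // steps (and their associated persistence points)
    String status = (String) getVariableData("orderStatus");
    String route;
    if ("GOLD".equals(status)) {
        route = "priority";
    } else if ("PENDING".equals(status)) {
        route = "review";
    } else {
        route = "standard";
    }
    setVariableData("routeDecision", route);
  ]]>
</bpelx:exec>
```

Because the logic runs entirely in memory within one activity, none of the intermediate decisions generate separate dehydration or audit steps.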
Using parallel routing rules in Mediator components
Mediator routing rules can be set to parallel or sequential execution. Using parallel execution can improve performance.
You will need JDeveloper installed and a good understanding of SOA Suite programming concepts.
How to do it...
These steps show us how to configure parallel routing rules:
- Using JDeveloper, open the Mediator file that contains the rules that you wish to execute in parallel.
- If the routing rules are not visible, click on + to expand them.
- From the drop-down list, select Parallel.
- Save the file.
How it works...
The Mediator can execute routing rules either in parallel or sequentially. Parallel rules are executed at the same time in separate threads, using an algorithm that ensures that no rule can use up all of the threads, and starve another rule from executing. Each parallel rule is initiated in a new transaction, and that transaction is committed (or rolled back) by the Mediator process, once the rule has executed.
You can set the priority of a parallel routing rule to a value between zero and nine. Higher priority rules will take precedence when threads are checking whether any parallel routing rules need executing.
The Oracle SOA Suite Mediator component is the old Oracle Enterprise Service Bus (OESB), which is being gradually replaced by Oracle Service Bus (OSB). OSB provides many more tuning options and is often more efficient than the Mediator; so, for new SOA applications, we would recommend using OSB to perform the process mediation role, rather than the SOA Suite Mediator component.
Setting HTTP timeouts for external services
Composite services often call out to external HTTP web services. If these external services are unavailable, or slow to respond, application performance can suffer. Tuning the connection and read timeouts appropriately can improve performance.
You will need to be familiar with SOA application development principles for this recipe.
How to do it...
These steps will set the timeout for an external HTTP service:
- Open the composite.xml file in JDeveloper.
- Select the source view at the bottom of the main pane.
- Locate the <reference> section for the external service that you want to set the timeout for.
- Inside the <binding> tags, add the property oracle.webservices.httpReadTimeout of type xs:string with a value high enough to allow the service to respond, such as 60000 (60 seconds).
- Add the property oracle.webservices.httpConnTimeout with a type xs:string, and a low value, such as 5000 (5 seconds).
- Save the composite.
- Deploy the application.
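The steps above result in a reference entry along the following lines; the reference name, WSDL, and endpoint are illustrative:

```xml
<reference name="ExternalPaymentService"
           ui:wsdlLocation="PaymentService.wsdl">
  <binding.ws port="http://example.com/payment#wsdl.endpoint(PaymentService/PaymentPort)">
    <!-- Wait up to 60s for a slow-but-alive service to respond -->
    <property name="oracle.webservices.httpReadTimeout"
              type="xs:string" many="false">60000</property>
    <!-- But give up after 5s if we cannot even establish a connection -->
    <property name="oracle.webservices.httpConnTimeout"
              type="xs:string" many="false">5000</property>
  </binding.ws>
</reference>
```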
How it works...
When calling external services over HTTP, the application needs to wait for a response before it can continue. If the external service is slow to respond and Oracle SOA Suite gives up too soon, the request will fail and roll back to the most recent transactional save point. This is often compounded by retry logic, which causes the request to be retried, increasing the load on an already busy system. By setting the read timeout to a high value, we ensure that we give an external application plenty of time to respond. In the preceding steps, we used a value of 60000 (60 seconds), but you can increase this if you have a service that takes longer.
If the external service is not available, then waiting for a response is not going to work, and in fact we will usually be completely unable to establish the initial connection. By setting the connection timeout to a low value (we use 5000 or 5 seconds), we can return a fault quickly to the SOA Suite application and not wait for a response that will not arrive.
By tuning both of these parameters, it should be possible to come up with settings that will fail quickly if the server is not there, but will wait long enough for a response if the service is busy, while still timing out if the external service has stopped responding completely.
Tuning BPEL adapter properties
Each of the SOA Suite adapters has a number of properties that can be tuned depending upon your application requirements. This high-level recipe describes some of the options available, and how to tune them for your application.
You will need a good understanding of SOA Suite programming principles for this recipe.
How to do it...
The following steps explain how to tune the properties of the commonly used BPEL adapters:
- Tune inbound FTP and file adapters to use a dedicated thread pool by setting the ThreadCount property to a positive value, such as 10. If your file adapter is only being used to detect the arrival of a file, setting UseHeaders to true prevents the whole file payload from being passed into the process. If files do not arrive frequently, PollingFrequency can be increased above the default of one minute. If many small files arrive, MaxRaiseSize can be used to increase the number of files that are read in at a time.
- Tune the outbound FTP and file adapters by setting the ConcurrentThreshold property to a higher value (for example, 50), and setting the UseStaging property to false.
- Tune the JCA adapter by ensuring that it uses a connection pool to manage the connections, rather than setting up and tearing down a connection for each request.
- For socket adapters, ensure that KeepAlive is set to true.
- For JMS adapters, increase the value of the adapter.jms.receive.threads property to 5.
- For Oracle AQ adapters, set the adapter.aq.dequeue.threads property to 5.
- Tune the Oracle MQ adapter by setting the InboundThreadCount property to 5.
How it works...
The preceding sets of recipes are the basics of configuring the adapters available in Oracle SOA Suite. There are many other properties that can be tuned, but their effectiveness is situational, so we have chosen to focus on the ones that we feel give improvement for the most projects.
The ThreadCount property on the FTP and file adapters controls the number of threads that are used to poll for new files and process them. It defaults to -1, which causes the default thread pool to be used, rather than a dedicated pool. This can lead to situations where the FTP and file adapters are starved of threads to process new files. It can also be set to 0, which causes the behavior to be the same as the Single Thread Model, where files are not placed on an in-memory queue between processing steps.
For outbound FTP and file adapters, ConcurrentThreshold controls the maximum number of translation activities for a particular scenario that can occur at any one time. Because translation is CPU-intensive, very large numbers of concurrent translations can use up all of the available CPU cycles, so it is sensible to limit the number that can occur; however, the default of 20 is rather low given the parallelism available in modern CPUs.
The UseStaging property for file and FTP adapters determines whether an intermediate file is written between the translation of a file and later processing steps. The default is true, but if sufficient memory is available, this can be set to false to improve performance.
Setting KeepAlive to true on socket adapters will ensure that the TCP socket connections are kept open for multiple requests, rather than closing and reopening connections for each request. This can improve performance, as setting up and tearing down TCP connections is a high overhead.
The adapter.jms.receive.threads, adapter.aq.dequeue.threads, and InboundThreadCount properties set the number of threads available for processing JMS, AQ, and MQ messages respectively. These default to a single thread, and performance can be improved by using more threads to dequeue messages from these providers.
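As a sketch of how the inbound file adapter properties from the steps above are applied, the binding properties can be set in composite.xml; the service name and .jca filename here are illustrative:

```xml
<!-- Inbound file adapter service in composite.xml -->
<service name="InboundOrderFiles">
  <binding.jca config="OrderFiles_file.jca">
    <!-- Use a dedicated pool of 10 threads instead of the
         global default pool (ThreadCount defaults to -1) -->
    <property name="ThreadCount">10</property>
    <!-- Raise up to 10 files per polling cycle when many
         small files arrive together -->
    <property name="MaxRaiseSize">10</property>
  </binding.jca>
</service>
```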
See http://docs.oracle.com/cd/E14571_01/core.1111/e10108/adapters.htm for more information on tuning the BPEL adapter properties.
In this article, we saw how to design your applications for high performance and scalability: providing low response times even under load, and expanding to cope with large numbers of requests.