Welcome to our exploration of some of the advanced topics in BPMN. When we set out to write this book, we chose the areas where we see the most confusion and difficulty in understanding how to use BPMN. Over the next five chapters, we will look at how process instances can communicate, how exceptions are handled and propagated, and how to deal with data in arrays. We will present theory and also build practical exercises together so that you can see how the theory is applied. Let's start our journey by building an understanding of inter-process communication.
Inter-process communication refers to the ability for instances of processes to communicate with other instances of the same process, with instances of other processes, and with services. Such communication is usually implemented so that process instances can work collaboratively to achieve a given goal. Common scenarios when this may occur include:
When common logic is extracted from a number of processes into a reusable "utility" process
When the occurrence of an event in one process means that another, perhaps separate, process needs to be started—this is often seen where the second process is an audit or investigation process
Where a process has a set of data, needs to run a common set of logic over each item in that data set, and then consolidate the results
Through normal decomposition of a business process into various levels of granularity, resulting in the need for the process to invoke one or more sub-processes to accomplish its work
There are different mechanisms available for processes to communicate with each other. In this chapter, we will explore the options and when we should employ each.
A conversation is a set of message exchanges. The message exchanges can be synchronous or asynchronous, but they should all be about the same subject matter, for example, a particular order, customer, case, and so on. The set of messages that forms the conversation is typically a request and a response, or a request and several (possible) responses.
The collaboration diagram allows you to visualize the process in the context of its conversations. You can access the collaboration diagram using the Collaboration Diagram tab at the bottom of the process editor in JDeveloper. An example of a collaboration diagram is shown in the following diagram:

This example includes a number of features that will be discussed in this book. The small, disconnected process that begins with Order Over Limit is an event sub-process. These will be discussed in detail in Chapter 4, Handling Exceptions. Briefly, they are invoked if a particular event (set of circumstances) occurs at any time during the execution of the process they belong to, the ProcessOrder process in this example. If at any time it is determined that the order is over some predefined limit, then an audit is required. The event sub-process sends a message to start the Audit process using a throw message event, which we will discuss later in this chapter. The collaboration diagram allows us to see both of the processes that are involved in this collaboration and shows us visually where the interaction between them occurs (with the dotted arrow from the throw message event to the start of the Audit process).
Conversations may also be scoped; this means that they may be defined in a smaller scope than the process as a whole. For example, you can define a conversation inside an embedded sub-process. To define a scoped conversation, you must do so in the Structure pane so that the conversation is placed in the correct scope. If you do not define the conversation in the Structure pane, it will inherit the process scope. The following image shows a process with two conversations defined: myconv1 at the process (global) scope, and the scoped conversation scopeConv, which is inside an embedded sub-process:

In addition to defining conversations for communication with other processes, each service that you want to interact with will also require a conversation. When implementing your process, you need to create a conversation for each service, choose Service Call as the type, and then select the service you wish to interact with.
Each process has a default conversation. The default conversation is used to expose services provided by the process, and it therefore controls the interface for invocation of the process. This interface manifests itself as the WSDL port type.
The default conversation can be defined "top down" by starting with the WSDL (service contract) for the process and creating the conversation from that, or "bottom up" by defining the arguments for the process and having the service interface (WSDL) generated from the process.
If we are using the bottom-up approach, the interface is defined by adding arguments to the start node, as shown in the following screenshot. You need to select Define Interface as the message exchange type to use the bottom-up approach. The arguments can have simple types (String, Date, Integer, and so on) or complex types, that is, they can be based on a business object (which in turn can be based on an element or type definition in an XSD).

Correlation is the mechanism that is used to associate a message with the conversation that it belongs to. There are two main classes of correlation:
Automatic correlation refers to mechanisms where the correlation is handled automatically. BPM uses mechanisms like WS-Addressing and JMS Message IDs to achieve automatic correlation.
Message-based correlation refers to mechanisms where the process developer needs to define some keys, which can be extracted from the message in order to determine which conversation a message belongs to. Examples are given in the next section.
There are some occasions when message-based correlation is necessary because automatic correlation is not available, for example:
When the other participant does not support WS-Addressing, or
When a participant joins the conversation part way through and has only the data values, with no other information about the conversation
If you do not specify any settings for message-based correlation, the runtime engine will attempt to use automatic correlation. If it is not possible to do so, then you will get a correlation fault. The engine checks to see if the called process or service supports WS-Addressing, in which case it will insert a WS-Addressing header into the call. It will then wait for a matching reply. Similarly, if JMS is being used to transport the message, it will look for a reply message with a JMS correlation ID that matches the JMS message ID of the message it sent.
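For readers more comfortable with code than with the engine's internals, the following sketch shows the standard JMS request/reply correlation pattern that the engine applies automatically: the reply carries the request's message ID as its correlation ID, and the caller selects on that value to find its matching reply. This is purely illustrative of the pattern, not the engine's own code; the class and method names here are hypothetical.

```java
import javax.jms.*;

// A minimal sketch of JMS request/reply correlation: the responder copies the
// request's JMSMessageID into the reply's JMSCorrelationID, and the caller
// uses a message selector on that value to pick out its own reply.
public class JmsCorrelationSketch {

    public static void respond(Session session, Queue requestQueue, Queue replyQueue) throws JMSException {
        MessageConsumer consumer = session.createConsumer(requestQueue);
        MessageProducer producer = session.createProducer(replyQueue);

        TextMessage request = (TextMessage) consumer.receive();

        TextMessage reply = session.createTextMessage("result for " + request.getText());
        // The request's message ID becomes the reply's correlation ID.
        reply.setJMSCorrelationID(request.getJMSMessageID());
        producer.send(reply);
    }

    public static Message waitForReply(Session session, Queue replyQueue, String requestMessageId) throws JMSException {
        // The selector restricts this consumer to the one matching reply.
        String selector = "JMSCorrelationID = '" + requestMessageId + "'";
        MessageConsumer consumer = session.createConsumer(replyQueue, selector);
        return consumer.receive(30_000); // wait up to 30 seconds
    }
}
```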
Correlation is especially important inside a loop construct, as there may be multiple threads/receives waiting at once, and the engine needs a way to know which reply belongs with which receive.
When using message-based correlation, you define a set of keys that are used to determine which conversation a message belongs to. This set of keys is called a correlation set.
A correlation set is a list of the (minimum) set of attributes that are needed to uniquely identify the conversation. An example of a correlation set may be orderNumber plus customerNumber.
When the runtime engine sees a conversation that uses message-based correlation, which has a correlation set attached to the start activity, it will create an MD5 hash from the values of the correlation keys and use that to identify the correct reply message if and when it arrives.
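To make the idea concrete, here is a minimal Java sketch of the concept (this is not the engine's actual code): the values of the correlation keys, orderNumber plus customerNumber from the earlier example, are combined and reduced to an MD5 hash that both the waiting instance and the arriving message can compute, so the message finds its conversation.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative only: the general idea of collapsing a correlation set into a
// single hash that can be used to look up the waiting conversation.
public class CorrelationKeySketch {

    public static String correlationHash(String orderNumber, String customerNumber)
            throws NoSuchAlgorithmException {
        // Combine the key values in a fixed order with a separator so that
        // ("12", "3") and ("1", "23") cannot collide.
        String combined = orderNumber + "|" + customerNumber;

        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest(combined.getBytes(StandardCharsets.UTF_8));

        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // The waiting instance and the arriving message produce the same hash
        // from the same key values.
        System.out.println(correlationHash("ORD-1001", "CUST-42"));
    }
}
```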
When you are using message-based correlation, only the called process needs to be aware of correlation, not the calling process. The runtime engine will take care of the details for the calling process, so you do not need to include any correlation details in the process model for the calling process.
Note
It is important to understand that these rules do not apply when the calling process wants to call the called process more than once, as is the case when the call is inside a loop, for example. This scenario will be discussed shortly.
In the called process, you need to include the correlation set definition, and specify that the appropriate events or tasks use correlation. Let's look at an example in the following diagram:

The receive task in this process has correlation specified in its properties. It has a correlation set identified, which contains a single key called ck_number, and the mode is set to Initiates, as shown in the following screenshot. This tells the runtime engine that this process instance is going to use message-based correlation. It also has the Create Instance property set. This tells the runtime engine that an inbound message will start an instance of this process.

If there are other receive tasks or message catch events in this process, they need to have correlation defined with the same correlation set and the mode set to Uses. These are called mid-point receives, that is, places where the process instance can receive another message after it has already started executing. These could be used by the calling process to send a "cancel" message to tell the running instance of the called process to stop work, for example.
You do not need to define any correlation properties on the outputs of the process, for example, its send task, or any end (message) nodes or throw message events. Only inputs have correlation properties defined.
There are some occasions when you will want to call a service or process several times from the same instance of a process. This commonly occurs when you want to call the service for every item in a collection, for example.
In this scenario, you need to place the send task and receive task (or throw and catch events) inside an embedded sub-process and define a scoped conversation inside the embedded sub-process. As mentioned previously, you will not need to define correlation information in the calling process, just the called process.
Here is an example of a process that contains a multi-instance embedded sub-process that iterates over an array of input data, calling another process to carry out some work on each element in that array, in parallel.

There is a scoped conversation defined inside the embedded sub-process as we see in the following image. The send and receive tasks each use this conversation, rather than the default conversation. We will build this process in the next chapter.

Throw and catch events provide a mechanism to communicate with another process or service. Specifically, you can use throw events to invoke:
Another BPMN process
A BPEL process
An adapter
A mediator that is exposed as a service
Any other component that is exposed as a service
Throw events are usually asynchronous. As soon as the throw event is executed, the process continues with the next task. It does not wait for a response. It is possible for a throw event to be synchronous, in the sense that you can invoke a synchronous service with a throw event and it can reply on the same connection, as opposed to sending a callback later. You can specify that you want to wait for a synchronous reply using the Synchronous property on the throw event. If you want to invoke a synchronous service or process, you could alternatively use a service task.
It is also important to understand that processes invoked through throw/catch events (and also those invoked through send/receive tasks) are not child processes of the invoking process, they are peers. This will be important later on when we discuss exception handling.
You can throw a message or a signal using a throw event. Throwing a message is the equivalent of sending a SOAP message to a service. Throwing a signal is the equivalent of publishing an event into the Event Delivery Network. You can use a throw event to invoke a process that starts with a receive task, but only if that receive task has the Create Instance property set.
The send task allows you to send a message to a receive task in another process, and the receive task allows you to receive a message from a send task in another process. The send task is similar to the throw message event; however, you cannot use the send task to invoke a process that starts with a message start event. There are no send and receive tasks for signals, only for messages. Send and receive tasks also allow you to attach boundary events (which will be discussed in Chapter 4, Handling Exceptions) to them. This is an important difference.
You can use the receive task to start a process, however, in this case, you must set the Create Instance property and there must be a single start node of type "none" immediately before the receive task.
The following diagram shows three processes that use the methods we have discussed to communicate with each other. The dotted arrows indicate where throw and catch message events are used by Process3 to invoke Process1, and by Process1 to return some data to Process3 when it is finished. The red arrows indicate where send and receive message tasks are used by Process1 to invoke Process2, and by Process2 to return some data to Process1 when it is finished.

Let us consider what happens when an instance of Process3 is executed:
Process3 starts.
Process3 throws a message event to start Process1.
Right away, Process3 goes on to Activity.
At the same time (more or less), Process1 starts.
Process1 sends a message to start Process2.
Right away, Process1 goes on to Do something.
At the same time (more or less), Process2 starts.
Process2 goes on to Do something else.
While all of this is going on, when Process3 finishes doing Activity, it goes on to CatchEvent and pauses there, waiting for a response back from Process1.
Similarly, when Process1 finishes Do something, it goes on to ReceiveTask and pauses there, waiting for a response back from Process2.
When Process2 finishes Do something else, it sends a response (in this case by sending a message) back to Process1.
Process1 wakes up upon receiving the response (message) from Process2 and then sends its own response (by throwing a message event) back to Process3.
Process3 wakes up upon receiving the response (catching the message event) from Process1 and then moves on to its end.
The following table is a quick guide to which kind of inter-process communication mechanism you should use in various circumstances:
| | Throw/catch message events | Throw/catch signal events | Send/receive tasks |
| --- | --- | --- | --- |
| Ability to attach a boundary event to catch errors | No | No | Yes |
| Asynchronous | Either | Yes | Yes |
| Invoked process becomes a ... | Child | Child | Peer |
| The process you want to invoke starts with a ... | Catch message event or receive task that creates an instance | Catch signal event | Receive task |
| You know who the receiver is at design time | Yes | No | Yes |
| You want to send the 'message' to ... receivers | One | Any number | One |
| Failure of called process propagates to calling process* | No | No | Yes |
Note
Propagation of failures will be covered in Chapter 4, Handling Exceptions.
Throw and catch events come in several types including messages, signals, and errors. Let us consider these different types and when we might use each.
A message is a set of data based on some type definition (a data structure), which is sent from a sender to a receiver. The sender knows who the receiver is and addresses the message to the receiver. If the message cannot be delivered, the sender is informed and can then take appropriate action; for example, it might retry sending the message later. In the context of the runtime environment, a message is a SOAP message sent from a service consumer to a service provider (or vice versa). The type definition is normally placed in an XSD for easy reuse; however, it may be in a WSDL file, as is often the case for pre-existing services.
A signal is a set of data, based on some type definition, which is broadcast from a sender and enters the Event Delivery Network as an event. If there are any subscribers for that particular type of event, the EDN will (most likely) deliver the event to them. We say "most likely" because the EDN does not offer the same guarantees about delivery as, for example, SOAP over JMS does.
The EDN does allow you to configure once and only once delivery, which is transactional—it is delivered in a global transaction—but it is not possible to create a durable subscriber. This means that if there is a system failure, signals may be lost and may not be delivered when the system is restarted.
Neither rollback nor retry mechanisms are provided by the EDN, except in the case of once and only once delivery. For this reason, signals are normally used when delivery is time sensitive, that is, when it no longer matters whether a signal is delivered once a certain period of time has passed. The signal's type definition is also defined in an XSD. Note that the sender (broadcaster) does not know whether there are any receivers (subscribers), how many there are, or whether the signals are ever delivered to them.
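The following toy sketch (plain Java, not the EDN API) illustrates this fire-and-forget character: the broadcaster publishes to whoever happens to be subscribed at that moment and receives no acknowledgement, no delivery report, and no retry.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// A toy publish/subscribe analogy for signals: the broadcaster does not know
// who the subscribers are, how many there are, or whether delivery succeeded.
public class SignalAnalogy {

    private final List<Consumer<String>> subscribers = new CopyOnWriteArrayList<>();

    public void subscribe(Consumer<String> subscriber) {
        subscribers.add(subscriber);
    }

    public void broadcast(String event) {
        // Fire and forget: no acknowledgement and no retry.
        for (Consumer<String> subscriber : subscribers) {
            try {
                subscriber.accept(event);
            } catch (RuntimeException ignored) {
                // A failing subscriber does not affect the broadcaster.
            }
        }
    }
}
```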
Note
The Event Delivery Network is a feature of the Oracle BPM Suite that provides a mechanism to publish events, optionally take various actions on them (such as pattern matching), and subscribe to events so that they will be delivered to the subscriber when they are generated. An in-depth discussion of its capabilities is beyond the scope of this volume.
Errors are exceptions. These would normally manifest as SOAP faults in the runtime environment. Exceptions are discussed in detail in Chapter 5, Handling Exceptions in Practice.
There are two methods available to invoke a sub-process—the embedded sub-process and the reusable sub-process. The embedded sub-process also contains a special case called the multi-instance embedded sub-process, which as the name suggests, allows us to run multiple instances of the embedded sub-process. Let us take a look at the differences and when we might use each.
An embedded sub-process is included in the same process model as its parent process. It is, in fact, included in the flow of the parent process. The embedded sub-process can be expanded to show its contents, or collapsed, in which case it is shown as a single task in the parent process as we can see in the following diagram:

Embedded sub-processes provide a number of capabilities that make them useful:
They establish scope for conversations, variables, and exceptions. This means that we can define a conversation or a variable inside an embedded sub-process and it will only be visible inside that embedded sub-process. This is particularly useful if we need to deal with a large amount of data for a short time during the process. By placing that data in variables that are scoped (defined) inside an embedded sub-process, we will only force the runtime environment to persist them while the embedded sub-process is running, thereby improving performance and minimizing our storage needs.
They also set the boundary for exceptions. We can attach boundary events to an embedded sub-process (these will be discussed in detail in Chapter 4, Handling Exceptions) so that we can localize the exception handling for anything that goes wrong during the embedded sub-process. This can be useful if we want to be able to catch an error and then retry the logic inside the embedded sub-process. In this case, you can think of the embedded sub-process as being similar to the try/catch structure in many common programming language environments (a rough analogy is sketched after this list).
Embedded sub-processes can see and manipulate their parent's variables, unlike reusable sub-processes.
Embedded sub-processes can be placed inside each other to create hierarchies of scopes, each with their own variables, conversations, and exception handling if desired.
They provide a mechanism to loop or repeat. You can specify an arbitrary number of times to repeat, or you can use an expression to calculate how many times to repeat the embedded sub-process. These expressions are able to reference variables and can also use XPath functions. You can evaluate the expression before or after the loop execution, giving you the equivalent of do...while and while semantics. You can also set a maximum number of iterations to prevent infinite loops.
They also provide a mechanism that iterates over a collection, which is discussed in the next section.
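As a rough analogy only (Java rather than BPMN, with a hypothetical doWork method standing in for the contents of the sub-process), the following sketch combines two of the ideas above: a do...while style repeat with a maximum number of iterations, and try/catch handling that keeps failures local to the scope so the work can simply be retried.

```java
// A programming analogy for an embedded sub-process with boundary-style error
// handling and a bounded repeat. doWork() is a hypothetical stand-in for the
// tasks inside the sub-process.
public class EmbeddedSubProcessAnalogy {

    static final int MAX_ITERATIONS = 3; // analogous to the loop's maximum

    public static void run() {
        int attempt = 0;
        boolean done = false;
        do {                              // do...while: evaluate after the body
            attempt++;
            try {
                doWork();                 // the "contents" of the sub-process
                done = true;
            } catch (Exception e) {
                // The error is caught here, inside this scope, instead of
                // propagating to the caller; the loop then retries the work.
                System.out.println("Attempt " + attempt + " failed: " + e.getMessage());
            }
        } while (!done && attempt < MAX_ITERATIONS);
    }

    static void doWork() throws Exception {
        // Hypothetical unit of work that may fail.
    }
}
```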
The multi-instance embedded sub-process is a special case that allows you to iterate over a collection of data. This will be covered in detail in Chapter 3, Working with Arrays, but for now let's discuss the main characteristics of the multi-instance embedded sub-process:
The multiple instances can be run sequentially (one after the other) or in parallel.
You can specify how many instances to run at runtime, based either on the cardinality of an object (like an array) or by iterating over a collection. Loops based on cardinality resemble a for loop, while those based on a collection resemble a foreach loop.
You can additionally specify a completion condition so that you are able to "short circuit" the iteration if you find that you are finished before all of the iterations are complete (a rough analogy is sketched after this list). This may be the case, for example, when you are searching for a single item in the collection that you want to do something to or with. Once you find that item, it is no longer necessary to continue iterating over the rest of the collection.
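Again as a rough analogy only (Java rather than BPMN, with hypothetical handle and isTheOneWeWant methods), the sketch below contrasts cardinality-based iteration with collection-based iteration that runs in parallel and short-circuits as soon as the completion condition is satisfied.

```java
import java.util.List;

// A programming analogy for the multi-instance variants: a fixed number of
// instances (cardinality) versus one instance per element of a collection,
// run in parallel and stopped early once the completion condition is met.
public class MultiInstanceAnalogy {

    // Cardinality-based: run a fixed number of instances, like a for loop.
    static void byCardinality(int count) {
        for (int i = 0; i < count; i++) {
            handle("item-" + i);
        }
    }

    // Collection-based with a completion condition: iterate over each element
    // (foreach) in parallel, stopping as soon as a match is found.
    static boolean byCollection(List<String> items) {
        return items.parallelStream().anyMatch(MultiInstanceAnalogy::isTheOneWeWant);
    }

    static void handle(String item) { /* hypothetical per-item work */ }

    static boolean isTheOneWeWant(String item) { return item.endsWith("-42"); }
}
```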
Multi-instance embedded sub-processes also share the characteristics of "normal" embedded sub-processes. They establish scope for conversations, variables, and exception handling, can be placed inside each other, and can access their parent's variables.
An interesting case to consider is iteration over lists of lists. Using a multi-instance embedded sub-process, you can iterate over the items in the outer list in parallel, while a second multi-instance embedded sub-process iterates sequentially over the items in the inner list (the current element of the outer list).
Note
A good example of when this might happen is performing pathology tests. Often a series of tests can be performed one after the other on a single sample, but other tests require different samples. If there were n series of tests to be performed, this could be represented as a list of lists and modeled in this fashion.
This is illustrated in the following process model, which also includes a final review and possible repeating of one or more series of tests:

Reusable sub-processes are included in the same project as their parent process(es), but in a separate process model. They must start with a catch none event and end with a throw none event.
Any process in the same project (composite) as the reusable sub-process is able to call it; however, reusable sub-processes are not exposed as services, they are not shown in the composite, and there is no way to invoke them directly from outside of the composite in which they are defined. Additionally, at runtime a reusable sub-process is shown as executing inline, within the flow of the process that invoked it, even though it was modeled in a separate process model.
Reusable sub-processes are invoked using the call activity. Variables of the parent (calling) process are not available to the reusable sub-process unless you pass them to the reusable sub-process as arguments.
The following table is a quick guide to which kind of sub-process you should use in various circumstances.
| | Embedded | Multi-instance | Reusable |
| --- | --- | --- | --- |
| Want access to parent's variables | Yes | Yes | Must pass them |
| Need looping | Yes | No | No* |
| Need to iterate over a collection | No | Yes | No* |
| Need to call it from more than one parent | No | No | Yes |
| Want parallel execution | No | Yes | No* |
| Want to establish a new scope | Yes | Yes | Yes |
| Want short-circuit completion | No | Yes | No* |
In this chapter, we have seen how to use send and receive tasks and throw and catch events to enable inter-process communication. We have explored the important role that conversations and correlation play in ensuring that replies are delivered to the correct instances, and even threads within instances. We have also considered when to use different kinds of inter-process communication options and when to use different kinds of processes.
In the next chapter, we will put this new knowledge into practice by building a number of example processes that demonstrate inter-process communication in action.