The Trivadis Integration Architecture Blueprint: Implementation scenarios

For SOA professionals, this is the classic guide to implementing integration architectures with the help of the Trivadis Blueprint. It takes you deep into the blueprint's structure and components with great clarity.


Service-oriented integration scenarios

These scenarios show how the service-oriented integration business patterns can be implemented. These business patterns are as follows:

  • Process integration: The process integration pattern extends the 1:N topology of the broker pattern. It simplifies the serial execution of business services, which are provided by the target applications.
  • Workflow integration: The workflow integration pattern is a variant of the serial process pattern. It extends the capability of simple serial process orchestration to include support for user interaction in the execution of individual process steps.

Implementing the process integration business pattern

In the scenario shown in the following diagram, the process integration business pattern is implemented using BPEL.

The Trivadis Integration Architecture Blueprint

Trigger:

An application places a message in the queue.

Primary flow:

  1. The message is extracted from the queue through JMS and a corresponding JMS adapter.
  2. A new instance of the BPEL integration process is started and the message is passed to the instance as input.
  3. The integration process orchestrates the integration and calls the systems that are to be integrated in the correct order.
  4. A content-based router in the mediation layer is responsible for ensuring that the correct one of the two systems is called. However, from a process perspective, this is only one stage of the integration.
  5. In the final step, a "native" integration of an EJB session bean is carried out using an EJB adapter.
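The content-based router in step 4 can be sketched as follows. This is an illustrative Python sketch of the routing logic only, not vendor code; the `region` field and the target system names are assumptions:

```python
# Minimal sketch of a content-based router: the message content alone
# decides which of the two target systems receives the call.
def route(message: dict) -> str:
    """Pick the target system based on the message content (assumed field)."""
    if message.get("region") == "EU":
        return "system_a"
    return "system_b"

order = {"order_id": 42, "region": "EU"}
target = route(order)  # routes the EU order to system_a
```

In the scenario, the same decision is made by a router building block in the mediation layer, so the BPEL process sees only a single integration stage.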

Variant with externalized business rules in a rule engine

A variant of the previous scenario has the business rules externalized in a rule engine, in order to simplify the condition logic in the integration process. This corresponds to the external business rules variant of the process integration business pattern, and is shown in the form of a scenario in the following diagram:

The Trivadis Integration Architecture Blueprint

Trigger:

The JEE application sends a SOAP request.

Primary flow:

  1. The SOAP request initiates a new instance of the integration process.
  2. The integration process is implemented as before, except that in this case, a rule engine is called before the condition is evaluated. The call to the rule engine from BPEL takes the form of a web service call through SOAP.
  3. Other systems can be integrated via a DB adapter as shown here, for example to enable them to write to a table in an Oracle database.
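The effect of externalizing the business rules can be sketched as follows. In this illustrative Python sketch, a local function stands in for the SOAP call to the rule engine; the rule name and the order fields are assumptions:

```python
# The process no longer hard-codes the condition logic; it asks a rule
# service, so rules can change without redeploying the process.
RULES = {
    # Assumed business rule: large orders take the approval route.
    "needs_approval": lambda order: order["amount"] > 10_000,
}

def evaluate_rule(name: str, order: dict) -> bool:
    """Stand-in for the web service call to the rule engine."""
    return RULES[name](order)

def integration_step(order: dict) -> str:
    # The process only consumes the boolean result of the rule.
    if evaluate_rule("needs_approval", order):
        return "approval_route"
    return "direct_route"
```

The point of the pattern is that `RULES` lives outside the process definition: changing a threshold is a rule change, not a process change.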

Variant with batch-driven integration process

In this variant, the integration process is initiated by a time-based event. In this case, a job scheduler added before the BPEL process triggers an event at a specified time, which starts the process instance. The process is started by the scheduler via a web service call. The following diagram shows the scenario:

The Trivadis Integration Architecture Blueprint

Trigger:

The job scheduler building block issues a web service request at a specified time.

Primary flow:

  1. The call from the job scheduler via SOAP initiates a new integration process instance.
  2. As in the previous variants, the BPEL process executes the necessary integration steps and, depending on the situation, integrates one system via a database adapter, and the other directly via a web service call.
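The batch-driven trigger can be sketched as follows. In this illustrative sketch, a simple loop stands in for the job scheduler, and the instance naming is an assumption; in the real scenario, the scheduler would issue a SOAP call at each configured time:

```python
# Time-based trigger: each scheduler tick starts a new process instance.
started = []

def start_process_instance(tick: int) -> None:
    """Stand-in for the web service call that starts a BPEL instance."""
    started.append(f"instance-{tick}")

def scheduler(ticks: int) -> None:
    # A real job scheduler fires at fixed times; here we just simulate
    # three consecutive firings.
    for t in range(ticks):
        start_process_instance(t)

scheduler(3)  # three independent process instances are started
```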

Implementing the workflow business pattern

In this scenario, additional user interaction is added to the integration process scenario. As a result, the integration process is no longer fully automated. It is interrupted at a specific point by interaction with the end user, for example to obtain confirmation for a certain procedure. This scenario is shown in the following diagram:

The Trivadis Integration Architecture Blueprint

Trigger:

An application places a message in the queue.

Primary flow:

  1. The message is removed from the queue by the JMS adapter and a new instance of the integration process is started.
  2. The user interaction takes place through the asynchronous integration of a task service. It creates a new task, which is displayed in the user's task list.
  3. As soon as the user has completed the task, the task service returns a callback to the relevant instance of the integration process, and by that, informs the process of the user's decision.
  4. The integration process responds to the decision and executes the remaining steps.
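The asynchronous task service interaction in steps 2 and 3 can be sketched as follows. This is illustrative Python: the class and method names are assumptions, and the asynchronous callback is modeled as a direct method call:

```python
# Sketch of the workflow pattern: the process pauses on a user task and
# resumes when the task service calls back with the user's decision.
class IntegrationProcess:
    def __init__(self):
        self.state = "running"
        self.decision = None

    def reach_user_task(self, task_service):
        # Asynchronous integration of the task service: create a task
        # and pause until the callback arrives.
        task_service.create_task(self)
        self.state = "waiting"

    def callback(self, decision: str):
        # The task service informs the process of the user's decision.
        self.decision = decision
        self.state = "running"

class TaskService:
    def __init__(self):
        self.tasks = []

    def create_task(self, process):
        self.tasks.append(process)  # task appears in the user's task list

    def complete(self, decision: str):
        # User finishes the task; call back the waiting process instance.
        self.tasks.pop(0).callback(decision)

svc = TaskService()
proc = IntegrationProcess()
proc.reach_user_task(svc)   # the process is now waiting on the user
svc.complete("approved")    # the callback resumes the process
```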


Modernizing an integration solution

This section uses an example to illustrate how an existing integration solution that has grown over time can be modernized using SOA methods.

The example is a simplified version of a specific customer project in which an existing solution was modernized with the help of SOA.

The task of the integration solution is to forward orders entered in the central ERP system to the external target applications.

Initial situation

The current solution is primarily based on a file transfer mechanism that sends the new and modified orders at intervals to the relevant applications, in the form of files in two possible formats (XML and CSV). The applications are responsible for processing the files independently.

At a later date, another application (IT App in the following diagram) was added to the system using a queuing mechanism, because queuing guarantees message delivery: the new orders are read, and the corresponding messages are sent through the queue, within a single transaction.

The following diagram shows the initial situation before the modernization process took place:

The Trivadis Integration Architecture Blueprint

The extraction and file creation logic is written in PL/SQL. A Unix shell script sends the files through the File Transfer Protocol (FTP), as no direct FTP call was possible in PL/SQL. Both the shell script and the PL/SQL logic are responsible for orchestrating the integration process.

Oracle Advanced Queuing (AQ) is used as the queuing infrastructure. As PL/SQL supports sending AQ messages through an API (package), this special variant of the business case could be implemented entirely in PL/SQL, without any call to a shell script. In this case, the integration is bi-directional: when the order has been processed by the external system, the application must send a feedback message to the ERP system. A second queue, implemented in the integration layer using PL/SQL, is used for this purpose.

Sending new orders

New orders added to the master system (ERP-App) are periodically sent to interested external systems.

Trigger:

The job scheduler triggers an event every 30 minutes for each external system that has to be integrated.

Flow:

  1. The event triggered by the job scheduler starts a shell script, which is responsible for part of the orchestration.
  2. The shell script first starts a PL/SQL procedure that creates the files, or writes the information to the queue.
  3. The PL/SQL procedure reads all the new orders from the ERP system's database, and enriches them with additional information about the product ordered and the customer.
  4. Depending on the external target system, a decision is made as to whether the information about the new order should be sent in the form of files, or messages in queues.
  5. The target system can determine in which format (XML or CSV) the file should be supplied. A different PL/SQL procedure is called depending on the desired format.
  6. The PL/SQL procedure writes the file in the appropriate format to the database server, using the built-in UTL_FILE package. The database server is used only for interim storage of the files, as these are uploaded to the target systems in the next step.
  7. The main shell script starts the process of uploading the files to the external system, and another shell script completes the task.
  8. The files are made available on the external system and are processed in different ways depending on the application in question.
  9. A PL/SQL procedure is called to send the order information through the queue. The procedure is responsible for formatting and sending the message.
  10. The document is now in the output queue (send) ready to be consumed.
  11. The application (IT App) consumes the messages from the queue immediately and starts processing the order.
  12. When the order has been processed, the external application sends a message to the feedback queue (receive).
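Steps 4 to 6 (choosing the format per target system) can be sketched as follows. This is illustrative Python, not the PL/SQL of the actual solution; the field names and target names are assumptions:

```python
# The target system determines the file format; a different formatting
# routine is invoked depending on the desired format.
def to_xml(order: dict) -> str:
    return f"<order><id>{order['id']}</id><qty>{order['qty']}</qty></order>"

def to_csv(order: dict) -> str:
    return f"{order['id']},{order['qty']}"

# Assumed mapping of target system to its preferred format routine.
TARGET_FORMATS = {"app_a": to_xml, "app_b": to_csv}

def export_order(order: dict, target: str) -> str:
    return TARGET_FORMATS[target](order)
```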

Receiving the confirmation

The processed orders are periodically sent back to the ERP system for invoicing.

Trigger:

The job scheduler triggers an event every 15 minutes.

Flow:

  1. The job scheduler event starts a PL/SQL procedure, which processes the feedback message.
  2. The message is consumed from the feedback queue (receive).
  3. A SQL UPDATE command updates the status of the order in the ERP database.

Evaluation of the existing solution

Evaluating the existing solution led us to the following conclusions:

  • This is an integration solution that has grown over time, using a wide variety of technologies.
  • It is a batch solution, which does not support real-time integration.
  • Exchanging information in files is not really a state-of-the-art solution:
    • Data cannot be exchanged reliably, as FTP does not support transactions.
    • Error handling and monitoring are difficult and time-consuming. (For example, it is not easy to detect when the IT App fails to send a response.)
    • Files must be read and processed by the external applications, all of which use different methods.
  • Integrating new distribution channels (such as web services) is difficult, as neither PL/SQL nor shell scripts are the ideal solution in this case.
  • Many different technologies are used. The integration logic is distributed, which makes maintenance difficult:
    • Job scheduler (for orchestration)
    • PL/SQL (for orchestration and mediation)
    • Shell script (for orchestration and mediation)
  • Different solutions are used for files and queues.

Many of these disadvantages are purely technical. From a business perspective, only the lack of real-time integration represents a real problem: the period of up to 30 minutes between the data being entered in the ERP system and the external systems being updated is clearly too long. From a technical point of view, this time cannot be reduced, as each batch run carries significant overhead, and shorter cycles would make the total overhead too large.

Therefore, the decision was made to modernize the existing integration solution and to transform it into an event-driven, service-oriented integration solution based on the processing of individual orders.


Modernizing — integration with SOA

The main objective of the modernization process, from a business perspective, is the real-time integration of orders.

From a technical standpoint, there are other objectives, including continued support for the batch-mode file connections. At the same time, the new solution must completely replace the old one; the two solutions should not be left running in parallel. A further technical objective is improved supportability, achieved by introducing a suitable infrastructure.

On the basis of these considerations, a new SOA-based integration architecture was proposed and implemented, as shown in the following diagram:

The Trivadis Integration Architecture Blueprint

Trigger:

Each new order is published to a queue in the ERP database, using the Change Data Capture functionality of the ERP system.

Flow:

  1. The business event is consumed from the queue by an event-driven consumer building block in the ESB. The corresponding AQ adapter is used for this purpose.
  2. A new BPEL process instance is started for the integration process. This instance is responsible for orchestrating all the integration tasks for each individual order.
  3. First, the important order information concerning the products and the customer must be gathered, as the ERP system only sends the primary key for the new order in the business event. A service is called on the ESB that uses a database adapter to read the data directly from the ERP database, and compiles it into a message in canonical format.
  4. A decision is made about the system to which the order should be sent, and about whether feedback on the order is expected.
  5. In the right-hand branch, the message is placed in the existing output queue (send). A message translator building block converts the order from the canonical format, to the message format used so far, before it is sent. The AQ adapter supports the process of sending the message. The BPEL process instance will be paused until the callback from the external applications is received.
  6. The message is processed by the external application in the same way as before. The message is retrieved, the order is processed, and, at a specified time, a feedback message is sent to the feedback queue (receive).
  7. The paused BPEL process instance is reactivated and consumes the message from the feedback queue.
  8. An invoke command is used to call another service on the ESB, which modifies the status of the ERP system in a similar way to the current solution. This involves a database adapter making direct modifications to a table or record in the ERP database.
  9. In the other case, which is shown in the branch on the left, only a message is sent to the external systems. Another service is called on the ESB for this purpose, which determines the target system and the target format based on some information passed in the header of the message.
  10. The ESB uses a header-based router to support the content-based forwarding of the message.
  11. Depending on the target system, the information is converted from the canonical format to the correct target format.
  12. The UK App already has a web service, which can be used to pass the order to the system. For this reason, this system is connected via an SOAP adapter.
  13. The two other systems continue to use the file-based interface. Therefore, an FTP adapter creates and sends the files through FTP in XML or CSV format.
  14. In order to ensure that the external application (labeled GE App in the diagram) still receives the information in batch mode, with several orders combined in one file, an aggregator building block is used. This collects the individual messages over a specific period of time, and then sends them together in the form of one large message to the target system via the FTP adapter.
  15. An aggregation process is not needed for the interface to the other external application (labeled CH App in the image), as this system can also process a large number of small files.

Evaluation of the new solution

An evaluation of the new solution shows the following benefits:

  • The orchestration is standardized and uses only one technology.
  • One BPEL instance is responsible for one order throughout the entire integration process:
    • This simplifies the monitoring process, because the instance continues running until the order is completed; in one of the two branches, this means until the feedback message from the external system has been processed.
  • The orchestration is based only on the canonical format. The target system formats are generated at the last possible moment in the mediation layer:
    • Additional distribution channels can easily be added on the ESB, without having to modify the orchestration process.
    • The solution can easily support other protocols or formats that are not yet known, simply by adding an extra translator building block.
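The last benefit can be sketched as follows: the orchestration works only on the canonical format, and adding a distribution channel means adding one translator entry. This is illustrative Python; the canonical fields and the channel name are assumptions:

```python
# Orchestration works only on the canonical format; translators map it
# to each target format at the last possible moment in the mediation layer.
CANONICAL = {"order_id": 7, "customer": "ACME", "amount": 99.0}

def to_legacy(msg: dict) -> str:
    return f"{msg['order_id']}|{msg['customer']}|{msg['amount']}"

# A new channel only needs a new entry here; the process is untouched.
TRANSLATORS = {"legacy_file": to_legacy}

def dispatch(msg: dict, channel: str) -> str:
    return TRANSLATORS[channel](msg)
```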

Summary

You have now seen how the Trivadis Integration Architecture Blueprint can be used to illustrate integration scenarios that implement the various business patterns.

Most of the scenarios have been kept independent of specific vendor products on the integration level, and are based solely on the building blocks that form part of the different layers of the blueprint.

