Communication between enterprise systems is an essential part of an organization's architecture. How you link these systems, and by which criteria you distribute data among them, are questions you will face time and again. In this article by Richard Seroter, co-author of Applied Architecture Patterns on the Microsoft Platform, we will look at how to send data messages to the correct target system.
McKeever Technologies is a medium-sized business that manufactures latex products. They have recently grown in size through a series of small acquisitions of competitor companies. As a result, the organization has a mix of both home-grown applications and packaged line-of-business systems. They have not standardized their order management software and still rely on multiple systems, each of which houses details about a specific set of products. Their developers are primarily oriented towards .NET, but some parts of the organization have deep Java expertise.
Until now, orders placed with McKeever Technologies were faxed to a call center and manually entered into the order system associated with the particular product. Also, when customers want to check the state of a submitted order, they must contact McKeever Technologies' call center and ask an agent to look it up. The company realizes that in order to increase efficiency, reduce data entry errors, and improve customer service, they must introduce some automation to their order intake and query processes.
McKeever Technologies receives fewer than one thousand orders per day and does not expect this number to increase dramatically in the coming years. Their current order management systems have either Oracle or SQL Server database backends, and some of them offer SOAP service interfaces for basic operations. These systems do not all maintain identical service-level agreements, so the solution must be capable of handling expected or unexpected downtime of a target system gracefully.
The company is looking to stand up a solution in less than four months while not introducing too much additional management overhead to an already over-worked IT maintenance organization. The solution is expected to live in production for quite some time and may only be revisited once a long-term order management consolidation strategy can be agreed upon.
The following are key requirements for a new software solution:
- Accept inbound purchase requests and determine which system to add them to based on which product has been ordered
- Support a moderate transaction volume and reliable delivery to target systems
- Enable communication with diverse systems through either web or database protocols
The technology team has acquired the following additional facts that will shape their proposed solution:
- The number of order management systems may change over time as consolidation occurs and new acquisitions are made.
- A single customer may have orders in multiple systems. For example, a paint manufacturer may need different types of latex for different products. Customers will want a single view of all orders, regardless of which order entry system they reside in.
- The lag between entry of an order and its appearance on a customer-facing website should be minimal (less than one hour).
- All order entry systems are on the same network. There are no occasionally connected systems (for example, remote locations that may potentially lose their network connectivity).
- Strategic direction is to convert Oracle systems to Microsoft SQL Server and Java to C#.
- The new order tracking system does not need to integrate with order fulfillment or other systems at launch.
- There are priorities for orders (for example, "I need it tomorrow" requires immediate processing and overnight shipment versus "I need it next week").
- Legacy SQL Servers are SQL Server 2005 or 2008. No SQL Server 2000 systems.
The organization is trying to streamline data entry into multiple systems that perform similar functions. They wish to take in the same data (an order), but depending on attributes of the order, it should be loaded into one system or another. This looks like a content-based routing scenario.
What is content-based routing? In essence, it is distributing data based on the values it contains. You would typically use this sort of pattern when you have a single capability (for example, ADD ORDER, LOOKUP EMPLOYEE, DELETE RESERVATION) spread across multiple systems. Unlike a publish/subscribe pattern where multiple downstream systems may all want the same message (that is, one-to-many), a content-based routing solution typically helps you steer a message to the system that can best handle the request.
What is an alternative to implementing this routing pattern? You could define distinct channels for each downstream system and force the caller to pick the service they wish to consume. That is, for McKeever Technologies, the customer would call one service if they were ordering products A, B, or C, and a different service for products D, E, or F. This clearly violates the SOA principles of abstraction and encapsulation, and forces clients to maintain knowledge of the backend processing.
The biggest remaining question is how best to implement this pattern. We want routing rules that are easily maintained and can be modified without expensive redeployments or refactoring. Our routing criteria should be rich enough to let us make decisions based on the content itself, header information, or metadata about the transmission.
A team of technologists has reviewed the use case and drafted three candidate solutions. Each candidate has its own strengths and weaknesses, but one of them will prove to be the best choice.
Candidate architecture #1–BizTalk Server
A BizTalk Server-based solution seems to be a good fit for this customer scenario. McKeever Technologies is primarily looking to automate existing processes and communicate with existing systems, which are both things that BizTalk does well.
Solution design aspects
We are dealing with a fairly low volume of data (1,000 orders per day and, at most, 5,000 order status queries) and small individual messages. A particular order or status query should be no larger than 5 KB, which falls right into the sweet spot of BizTalk data processing.
This proposed system is responsible for accepting and processing new orders, which means that reliable delivery is critical. BizTalk provides built-in quality of service, guaranteed through its store-and-forward engine, which discards a message only after it has successfully reached its target endpoint. Our solution also needs to communicate with multiple line-of-business systems through a mix of web service and database interfaces. BizTalk Server offers a wide range of database adapters and natively communicates with SOAP-based endpoints.

We are building a new solution that automates a formerly manual process, so we should be able to design a single external interface for publishing new orders and querying order status. But if we do have to support multiple external-facing contracts, BizTalk Server makes it easy to transform data to canonical messages at the point of entry into the BizTalk engine. The internal processing can then be built against a single data format, while clients are still allowed to transmit slight variations of the message. Similarly, each target system accepts a distinct data format at its interface. Our solution will apply all of its business logic to the canonical format and transform the data to the target system's format at the last possible moment. This makes it easier to add new downstream systems without unsettling existing endpoints and business logic.
From a security standpoint, BizTalk allows us to secure the inbound transport channel and message payload on its way into the BizTalk engine. If transport security is adequate for this customer, then an SSL channel can be set up on the external facing interface.
To assuage any fears of the customer that system or data errors can cause messages to get lost or "stuck", it is critical to include a proactive exception handling aspect. BizTalk Server surfaces exceptions through an administrator console. However, this does not provide a business-friendly way to discover and act upon errors. Fortunately for us, BizTalk enables us to listen for error messages and either re-route those messages or spin up an error-specific business process. For this customer, we could recommend either logging errors to a database where business users leverage a website interface to view exceptions, or, we can publish messages to a SharePoint site and build a process around fixing and resubmitting any bad orders. For errors that require immediate attention, we can also leverage BizTalk's native capability to send e-mail messages.
We know that McKeever Technologies will eventually move to a single order processing system, so this solution will undergo changes at some point in the future. Besides this avenue of change, we could also experience changes to the inbound interfaces, existing downstream systems, or even the contents of the messages themselves. BizTalk has a strong "versioning" history that allows us to build our solution in a modular fashion and isolate points of change.
Solution delivery aspects
McKeever Technologies is not currently a BizTalk shop, so they will need to both acquire and train resources to effectively build their upcoming solution. Their existing developers, who are already familiar with Microsoft's .NET Framework, can learn how to construct BizTalk solutions in a fairly short amount of time. The tools to build BizTalk artifacts are hosted within Visual Studio, and BizTalk projects can reside alongside other .NET project types.
Because the BizTalk-based messaging solution follows a design paradigm (publish/subscribe messaging, distributed components chained together) different from that of a typical custom .NET solution, understanding the toolset alone will not ensure delivery success. If McKeever Technologies decides to bring in a product like BizTalk Server, it will be vital for them to engage an outside expert to act as solution architect and apply existing BizTalk experience to building this solution.
Solution operation aspects
Operationally, BizTalk Server provides a mature, rich interface for monitoring solution health and configuring runtime behavior. There is also a strong underlying set of APIs that can be leveraged using scripting technologies so that automation of routine tasks can be performed.
While BizTalk Server has tools that will feel familiar to a Windows Administrator, the BizTalk architecture is unique in the Microsoft ecosystem and will require explicit staff training.
BizTalk Server would be a new technology for McKeever Technologies, so there is definitely risk involved. They would need to purchase licenses, provision environments, train users, and hire experts. While these are all responsible things to do when new technology is introduced, they add up to a fairly high startup cost for this solution.

That said, McKeever Technologies will need a long-term integration solution as they attempt to modernize their IT landscape and be in better shape to absorb new organizations and quickly integrate with new systems. An investment in an enterprise service bus like BizTalk Server will pay long-term dividends even if initial costs are high.
Candidate architecture #2–SQL Server 2008 R2
It is possible to build a solution that meets our needs based on SQL Server tools.
Solution design aspects
The basis of this solution is a master repository that stores order information. Orders arrive into McKeever Technologies and get placed in the new Orders database. Each order is then routed to the appropriate target system based on routing rules.
If the target system is SQL Server-based then we can use SQL Server Service Broker (SSSB) to transmit data and return acknowledgements to the master repository. When the target system has an underlying Oracle database store, then we will leverage SQL Server Integration Services (SSIS) to move data between the systems.
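To make the SSSB half of this design concrete, the following T-SQL sketches the kind of declarative Service Broker setup the master repository and a SQL Server-based target could share, plus a single send operation. This is a minimal illustration, not a production script: all object names (message type, contract, queues, services) and the sample order payload are assumptions invented for this sketch.

```sql
-- Declarative Service Broker objects (created once in each participating database).
-- All names below are illustrative assumptions.
CREATE MESSAGE TYPE [//McKeever/Order] VALIDATION = WELL_FORMED_XML;

CREATE CONTRACT [//McKeever/OrderContract]
    ([//McKeever/Order] SENT BY INITIATOR);

-- Target side: a queue and a service bound to the contract.
CREATE QUEUE OrderTargetQueue;
CREATE SERVICE [//McKeever/OrderTargetService]
    ON QUEUE OrderTargetQueue ([//McKeever/OrderContract]);

-- Initiator side (master repository): a queue and a service that can
-- only start conversations, since it lists no contract.
CREATE QUEUE OrderSourceQueue;
CREATE SERVICE [//McKeever/OrderSourceService] ON QUEUE OrderSourceQueue;

-- Sending one order from the master repository to the target system:
DECLARE @dialog UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [//McKeever/OrderSourceService]
    TO SERVICE '//McKeever/OrderTargetService'
    ON CONTRACT [//McKeever/OrderContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//McKeever/Order]
    (N'<Order><ProductId>1234</ProductId></Order>');
```

Because Service Broker queues are transactional, the send participates in the local transaction and the message survives restarts of either side, which is what gives this candidate its reliable-delivery story for the SQL Server targets.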
There is a lot of value in establishing this master data repository. This will allow a single customer to enter an order for multiple products delivered from multiple legacy systems and remove the potential for improper routing of orders based on human error. Also, this makes it possible for orders to be queried from a single location instead of federating the query across multiple order management systems.
The major issue is that there will be significant data lag when using SSIS and master data management tools. These tools work as batch processes, not as real-time systems; while SSIS can be made to run in near real time, doing so requires extensive customization.
Solution delivery aspects
This solution would make heavy use of McKeever's existing SQL Server expertise. The team is well-versed in building bulk data movement jobs in SSIS, but has only limited experience authoring SSSB conversations.
Solution operation aspects
The main operational advantage of SSIS and SSSB is that they are included with McKeever's existing SQL Server licenses, avoiding the additional cost of BizTalk licenses. McKeever also has staff with hands-on experience using SQL Server and working with SQL statements.
While McKeever Technologies hopes to move to a centralized order management system at some point in the future, they are currently more focused on extracting value from existing systems. Creating a master order management repository now moves the organization in the right direction, but there are significant hurdles in gaining the consensus needed to make it happen. In a federated model, each order management system can still receive and process orders using existing procedures. We may be creating more work by trying to define and synchronize a master data store. This solution also requires more effort when the company inevitably absorbs new companies and tries to integrate their processing systems.
Candidate architecture #3–WCF and Windows Server AppFabric
One of the key new features of WCF in .NET 4.0 is the ability to do content-based routing. This means that data inside the message can be used by the WCF framework at runtime to determine what service endpoint needs to be called. Furthermore, if nothing matches the content pattern for routing, a default endpoint can be selected to handle these requests. Routing in .NET 4.0 is accomplished using the Routing service, which can be configured and hosted inside IIS. Using Windows Server AppFabric for hosting this service will ensure the best possible execution with all the AppFabric benefits.
Let us walk through the decision framework to see if .NET 4.0 and Windows Server AppFabric would work for this scenario.
Solution design aspects
Solving this problem with .NET 4.0 and Windows Server AppFabric would have two parts. The first part would handle the order messages and the second part would handle order status messages. Both parts would follow similar patterns.
- Order Messages: Order messages will be routed to one of the backend systems using .NET 4.0 content-based routing. Each backend system will be fronted by a workflow service. This service will ensure guaranteed delivery of the message to the backend system and allow for various communication protocols between the frontend and various backend systems. Routing occurs based on the product being ordered, and product identifiers below a certain value go to one system, while product identifiers above a certain value go to the other. This means we can use the routing service to inspect the message content and choose which Workflow service to invoke.
- Order Status: Order status query messages will be routed to one of the backend systems using .NET 4.0 content-based routing. Each backend system will be fronted by a WCF service to allow for various communication protocols to these systems. If one of the backend systems is down, the WCF service will return a "not available" message to the client.
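The routing behavior described above can be expressed almost entirely in configuration. The fragment below is a minimal sketch of a .NET 4.0 Routing Service configuration hosted behind a single external endpoint: an XPath filter sends orders whose product identifier falls below a threshold to one backend service, and a MatchAll filter at lower priority acts as the default for everything else. The XML namespace, the threshold value of 5000, and the endpoint addresses are assumptions for illustration only.

```xml
<system.serviceModel>
  <services>
    <!-- The out-of-the-box Routing Service, exposed on one external endpoint. -->
    <service name="System.ServiceModel.Routing.RoutingService"
             behaviorConfiguration="routingBehavior">
      <endpoint address=""
                binding="basicHttpBinding"
                contract="System.ServiceModel.Routing.IRequestReplyRouter" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="routingBehavior">
        <routing filterTableName="orderRoutingTable" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <routing>
    <namespaceTable>
      <!-- Hypothetical namespace for the inbound order message. -->
      <add prefix="ord" namespace="http://mckeever.example/orders" />
    </namespaceTable>
    <filters>
      <!-- Content-based rule: low product identifiers go to system A. -->
      <filter name="lowProductIdFilter" filterType="XPath"
              filterData="number(//ord:Order/ord:ProductId) &lt; 5000" />
      <!-- Catch-all rule acting as the default endpoint. -->
      <filter name="defaultFilter" filterType="MatchAll" />
    </filters>
    <filterTables>
      <filterTable name="orderRoutingTable">
        <add filterName="lowProductIdFilter" endpointName="OrderSystemA" priority="1" />
        <add filterName="defaultFilter" endpointName="OrderSystemB" priority="0" />
      </filterTable>
    </filterTables>
  </routing>
  <client>
    <!-- Backend workflow services fronting each order management system. -->
    <endpoint name="OrderSystemA" address="http://systema.example/OrderService.svc"
              binding="basicHttpBinding" contract="*" />
    <endpoint name="OrderSystemB" address="http://systemb.example/OrderService.svc"
              binding="basicHttpBinding" contract="*" />
  </client>
</system.serviceModel>
```

Because the filters and filter table live in configuration, the routing rules can be adjusted (for example, when an order system is consolidated away) without recompiling or redeploying code, which addresses the maintainability requirement noted earlier.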
Solution delivery aspects
McKeever Technologies currently has a staff of .NET developers and has experience in developing complex solutions with .NET technologies and managing them in production. While learning some of the new features of .NET 4.0 and Windows Server AppFabric would take a little bit of time, this is something the existing resources should be able to handle.
The timeline is less than four months and given the preceding solution outline, this should be easily accomplished in that timeline.
Solution operation aspects
McKeever Technologies currently has a large pool of .NET developers and, hence, IT support resources in this area. Further solutions based on .NET should be easily supported by the existing operations staff. Windows Server AppFabric is a new technology for the McKeever administrators, but its integration with IIS 7 makes it fairly easy to understand.

Given the relatively low load on the system (1,000 orders a day), performance is not a huge concern. That said, a Windows Server AppFabric-based solution offers extensive control over persistence points and tracking, allowing for only a minimal performance impact.

As the solution deals with orders, it must be able to receive them and retry delivery in the event that a backend system is down. Windows Server AppFabric would need a frontend network load balancer to ensure a highly available service endpoint, in addition to a clustered SQL Server data store for the persistence information. Both of these should already be available to a client with existing .NET applications in production.
Organizationally, McKeever Technologies will be taking a slight risk by using Windows Server AppFabric simply because it is a new technology and employees will need to learn it. On the other hand, investing in the new technology now will yield a solution that will be around for years to come, which speaks to a key point: maintainability, and the need for this solution to survive many years.
Each solution has benefits and risks that we can use to make a final decision.
BizTalk Server

| Benefits | Risks |
| --- | --- |
| Out-of-the-box adapters for multiple target formats, including SQL Server and Oracle databases | Lack of in-house expertise will require extensive training of both the developer and operations staff |
| Reliable messaging infrastructure which can guarantee message delivery | Would take longer to build and deploy than solutions that are purely code-based |
| Architecture built to support routing messages based on content | |

SQL Server 2008 R2

| Benefits | Risks |
| --- | --- |
| Leverages in-house expertise with SQL Server tools | Incapable of processing real-time data requests |
| Encourages a master data approach which provides a unified frontend to clients | Different implementation techniques based on the type of target database |
| Can natively communicate with our target database formats | |

WCF and Windows Server AppFabric

| Benefits | Risks |
| --- | --- |
| Rapid, lightweight way to build service-oriented solutions | New technologies at the development (.NET 4.0) and operations (Windows Server AppFabric) tiers |
| Built-in capability to do content-based routing in real time | Routing rules are relatively primitive and won't support complex conditions |
| Includes durable messaging to promote guaranteed delivery | |
| Leverages existing .NET skill sets | |
With all that said, the best choice for this scenario is the Windows Server AppFabric solution. It gives us a lightweight means to rapidly deploy a flexible, easy-to-maintain content-based routing solution that still provides the quality of service of an enterprise product such as BizTalk Server. In the long term, this organization will seriously consider investing in an enterprise service bus, but for this scenario, a Windows Server AppFabric host can meet the current and future needs of the company.
In this article, we saw how to route messages to specific backend systems. If this fictitious organization had already owned BizTalk Server and employed a staff of highly trained developers, we would have gone down a different path. As it stands, the .NET 4.0 Routing Service offers a compelling way to hide downstream endpoints and apply simple content-distribution filters.
In the next article, Building the Content Based Routing Solution on Microsoft Platform, we will construct a working version of the proposed solution.
About the Author:
Richard Seroter is a solutions architect for an industry-leading biotechnology company, a Microsoft MVP for BizTalk Server, and a Microsoft Connected Systems Advisor. He has spent the majority of his career consulting with customers as they planned and implemented their enterprise software solutions. Richard worked first for two global IT consulting firms, which gave him exposure to a diverse range of industries, technologies, and business challenges. Richard then joined Microsoft as a SOA/BPM technology specialist where his sole objective was to educate and collaborate with customers as they considered, designed, and architected BizTalk solutions. One of those customers liked him enough to bring him onboard full time as an architect after they committed to using BizTalk Server as their enterprise service bus. Once the BizTalk environment was successfully established, Richard transitioned into a solutions architect role where he now helps identify enterprise best practices and applies good architectural principles to a wide set of IT initiatives.
Richard maintains a semi-popular blog of his exploits, pitfalls, and musings with BizTalk Server and enterprise architecture at http://seroter.wordpress.com.
The authors have provided a website with further information about the book here: http://appliedarchitecturepatterns.com/