So what exactly is a service? A service is essentially a well-defined interface to an autonomous chunk of functionality, one that usually corresponds to a specific business process. That might sound a lot like a regular old object-oriented component to you. While both services and components expose discrete interfaces of functionality, a service is more focused on the capabilities offered than on the packaging. Services are meant to be higher-level, business-oriented offerings that provide technology abstraction and interoperability within a multipurpose "services" tier of your architecture.
What makes up a service? Typically you'll find:
- Contract: Describes the operations the service exposes, the message types and exchange patterns it supports, and any policies that govern how the service is used.
- Messages: The data payload exchanged between the service consumer and provider.
- Implementation: The portion of the service which actually processes the requests, executes the expected business functionality, and optionally returns a response.
- Service provider: The host of the service which publishes the interface and manages the lifetime of the service.
- Service consumer: Ideally, a service has someone using it. The service consumer is aware of the available service operations and knows how to discover the provider and determine what type of messages to transmit.
- Facade: Optionally, a targeted facade may be offered to particular service consumers. This sort of interface may present a simplified perspective on the service, or provide a coarse-grained avenue for service invocation.
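Abstracting away from BizTalk specifics, these parts can be sketched in plain Python. All names here are hypothetical, purely to illustrate the roles each part plays:

```python
from dataclasses import dataclass

@dataclass
class OrderMessage:                  # Message: the data payload exchanged
    product_id: str
    quantity: int

class OrderContract:                 # Contract: the operations the service exposes
    def submit_order(self, msg: OrderMessage) -> str:
        raise NotImplementedError

class OrderService(OrderContract):   # Implementation: processes the request
    def submit_order(self, msg: OrderMessage) -> str:
        # ...execute the expected business functionality...
        return f"order accepted: {msg.quantity} x {msg.product_id}"

class OrderFacade:                   # Facade: simplified, coarse-grained entry point
    def __init__(self, provider: OrderContract):
        self._provider = provider

    def order(self, product_id: str, quantity: int = 1) -> str:
        return self._provider.submit_order(OrderMessage(product_id, quantity))

# The service consumer knows only the contract/facade, never the implementation.
print(OrderFacade(OrderService()).order("widget-42"))
```

The key point is that the consumer at the bottom touches only the facade and the message shape; the provider could be swapped for any other `OrderContract` implementation without the consumer noticing.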
What is the point of building a service? I'd say it's to construct a reusable asset: a discrete, discoverable, self-describing entity that can be accessed regardless of platform or technology.
Service-oriented architecture is defined as an architectural discipline based on loosely-coupled, autonomous chunks of business functionality which can be used to construct composite applications. Through the rest of this article we get a chance to flesh out many of the concepts that underlie that statement. Let's go ahead and take a look at a few of the principles and characteristics that I consider most important to a successful service-oriented BizTalk solution. As part of each one, I'll explain the thinking behind the principle and then call out how it can be applied to BizTalk Server solutions.
Many of the fundamental SOA principles actually stem from this particular one. In virtually all cases, some form of coupling between components is inevitable. The only way we can effectively build software is to have interrelations between the various components that make up the delivered product. However, when architecting solutions, we have distinct design decisions to make regarding the extent to which application components are coupled. Loose coupling is all about establishing relationships with minimal dependencies.
What would a tightly-coupled application look like? In such an application, we'd find components that maintained intimate knowledge of each others' working parts and engaged in frequent, chatty synchronous calls amongst themselves. Many components in the application would retain state and allow consumers to manipulate that state data. Transactions that take place in a tightly coupled application probably adhere to a two-phase commit strategy where all components must succeed together in order for each data interaction to be finalized. The complete solution has its ensemble of components compiled together and singularly deployed to one technology platform. In order to run properly, these tightly-coupled components rely on the full availability of each component to fulfill the requests made of them.
On the other hand, a loosely-coupled application employs a wildly different set of characteristics. Components in this sort of application share only a contract and keep their implementation details hidden. Rarely preserving state data, these components rely on less frequent communication where chunky input containing all the data the component needs to satisfy its requestors is shared. Any transactions in these types of applications often follow a compensation strategy where we don't assume that all components can or will commit their changes at the same time. This class of solution can be incrementally deployed to a mix of host technologies. Asynchronous communication between components, often through a broker, enables a less stringent operational dependency between the components that comprise the solution.
What makes a solution loosely coupled then? Notably, the primary information shared by a component is its interface. The consuming component possesses no knowledge of the internal implementation details. The contract relationship suffices as a means of explaining how the target component is used. Another trait of loosely coupled solutions is coarse-grained interfaces that encourage the transmission of full data entities as opposed to fine-grained interfaces, which accept small subsets of data. Because loosely-coupled components do not share state information, a thicker input message containing a complete impression of the entity is best. Loosely-coupled applications also welcome the addition of a broker which proxies the (often asynchronous) communication between components. This mediator permits a rich decoupling where runtime binding between components can be dynamic and components can forgo an operational dependency on each other.
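The broker-mediated, coarse-grained style described above can be sketched in a few lines of Python. The `Broker` class and topic names here are invented for illustration; a real bus would queue and deliver messages asynchronously:

```python
from collections import defaultdict

class Broker:
    """A toy message broker: publishers and subscribers share only a topic
    name and a message shape, never direct references to each other."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subs[topic]:
            handler(message)  # a real bus would dispatch asynchronously

broker = Broker()
received = []

# The subscriber registers against the topic, not against any publisher.
broker.subscribe("order.created", received.append)

# The publisher sends a chunky, self-describing message: everything the
# consumer needs is in the payload, so no shared state is required.
broker.publish("order.created", {"id": 7, "sku": "widget", "qty": 3})
print(received)
```

Because neither side holds a reference to the other, either component can be replaced, rehosted, or versioned independently, which is exactly the operational decoupling the paragraph above describes.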
Let's take a look at an example of loose coupling that sits utterly outside the realm of technology.
Completely non-technical loose coupling example
When I go to a restaurant and place an order with my waiter, he captures the request on his pad and sends that request to the kitchen. The order pad (the contract) contains all the data needed by the kitchen chef to create my meal. The restaurant owner can bring in a new waiter or rotate his chefs and the restaurant shouldn't skip a beat as both roles (services) serve distinct functions where the written order is the intersection point and highlight of their relationship.
Why does loose coupling matter? By designing a loosely-coupled solution, you provide a level of protection against the changes that the application will inevitably require over its life span. We have to reduce the impact of such changes while making it possible to deploy necessary updates in an efficient manner.
How does this apply to BizTalk Server solutions?
A good portion of the BizTalk Server architecture was built with loose coupling in mind. Think about the BizTalk MessageBox which acts as a broker facilitating communication between ports and orchestrations while limiting any tight coupling. Receive ports and send ports are very loosely coupled and in many cases, have absolutely no awareness of each other. The publish-and-subscribe bus thrives on the asynchronous transfer of self-describing messages between stateless endpoints. Let's look at a few recommendations of how to build loosely-coupled BizTalk applications.
Orchestrations are a prime place where you can either go with a tightly-coupled or loosely-coupled design route. For instance, when sketching out your orchestration process, it's sure tempting to use that Transform shape to convert from one message type to another. However, a version change to that map will require a modification of the calling orchestration. When mapping to or from data structures associated with external systems, it's wiser to push those maps to the edges (receive/send ports) and not embed a direct link to the map within the orchestration.
BizTalk easily generates schemas for line-of-business (LOB) systems and consumed services. To interact with these schemas in a very loosely coupled fashion, consider defining stable entity schemas (i.e. "canonical schemas") that are used within an orchestration, and only map to the format of the LOB system in the send port. For example, if you need to send a piece of data into an Oracle database table, you can certainly include a map within an orchestration which instantiates the Oracle message. However, this will create a tight coupling between the orchestration and the database structure. To better insulate against future changes to the database schema, consider using a generic intermediate data format in the orchestration and only transforming to the Oracle-specific format in the send port.
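As a rough illustration of this edge-of-bus mapping idea (the field and column names below are invented, not actual Oracle adapter output):

```python
# The orchestration works only with the canonical shape; the Oracle-specific
# shape exists only inside the "send port" map.
canonical_order = {"customer": "ACME", "sku": "widget", "quantity": 5}

def map_to_oracle(order: dict) -> dict:
    """Edge transformation to the (assumed) Oracle table columns."""
    return {
        "CUST_NAME": order["customer"],
        "ITEM_CODE": order["sku"],
        "QTY": order["quantity"],
    }

# If the Oracle table changes, only this map changes; the orchestration's
# canonical message is untouched.
print(map_to_oracle(canonical_order))
```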
How about those logical ports that we add to orchestrations to facilitate the transfer of messages in and out of the workflow process? When configuring those ports, the Port Configuration Wizard asks you if you want to associate the port to a physical endpoint via the Specify Now option. Once again, pretty tempting. If you know that the message will arrive at an orchestration via a FILE adapter, why not just go ahead and configure that now and let Visual Studio.NET create the corresponding physical ports during deployment? While you can independently control the auto-generated physical ports later on, it's a bad idea to embed transport details inside the orchestration file.
On each subsequent deployment from Visual Studio.NET, the generated receive port will have any out-of-band changes overwritten by the deployment action.
Chaining orchestrations together is a tricky endeavor and one that can leave you in a messy state if you are too quick with a design decision. By "chaining orchestrations", I mean using multiple orchestrations to implement a single business process. There are a few options at your disposal, listed here and ordered from most coupled to least coupled.
- Call Orchestration or Start Orchestration shape: An orchestration uses these shapes in order to kick off an additional workflow process. The Call Orchestration is used for synchronous connection with the new orchestration while the Start Orchestration is a fire-and-forget action. This is a useful tactic for sharing state data (for example variables, messages, ports) from the source orchestration to the target. However, both options require a tight coupling of the source orchestration to the target. Version changes to the target orchestration would likely require a redeployment of the source orchestration.
- Partner direct bound ports: These provide you the capability to communicate between orchestrations using ports. In the forward partner direct binding scenario, the sender has a strong coupling to the receiver, while the receiver knows nothing about the sender. This works well in situations where there are numerous senders and only one receiver. Inverse partner direct binding means that there is a tight coupling between the receiver and the sender. The sender doesn't know who will receive the command, so this scenario is intended for cases where there are many receivers for a single sender. In both cases, you have tight coupling on one end, with loose-coupling on the other.
- MessageBox direct binding: This is the most loosely-coupled way to share data between orchestrations. When you send a message out of an orchestration through a port marked for MessageBox direct binding, you are simply placing a message onto the bus for anyone to consume. The source orchestration has no idea where the data is going, and the recipients have no idea where it's been.
MessageBox direct binding provides a very loosely-coupled way to send messages between different orchestrations and endpoints.
While MessageBox direct binding is great, you do lose the ability to share the additional state data that a Call Orchestration shape provides. So, as with all architectural decisions, you need to decide whether the loose coupling you gain (at the cost of higher latency) is worth giving up those additional capabilities.
Decisions can be made during BizTalk messaging configuration that promote a loosely-coupled BizTalk landscape. For example, both receive ports and send ports allow for the application of maps to messages flying past. In each case, multiple maps can be added. This does NOT mean that all the maps will be applied to the message, but rather, it allows for sending multiple different message types in, and emitting a single type (or even multiple types) out the other side. By applying transformation at the earliest and latest moments of bus processing, you loosely couple external formats and systems from internal canonical formats. We should simply assume that all upstream and downstream systems will change over time, and configure our application accordingly.
Another means for loosely coupling BizTalk solutions involves the exploitation of the publish-subscribe architecture that makes up the BizTalk message bus. Instead of building solely point-to-point solutions and figuring that a SOAP interface makes you service oriented, you should also consider loosely coupling the relationship between the service input and where the data actually ends up. We can craft a series of routing decisions that take into account message content or context and direct each message to one or more relevant processes/endpoints. While point-to-point solutions may be appropriate for many cases, don't neglect a more distributed pattern where the data publisher does not need to explicitly know exactly how their data will be processed and routed by the message bus.
When identifying subscriptions for our send ports, we should avoid tight coupling to metadata attributes that might limit the reuse of the port. For instance, you should try to create subscriptions on either the message type or the message content instead of context attributes such as the inbound receive port name. Ports should be coupled to the MessageBox and the messages it stores, not to the attributes of a particular publisher. That said, there are clearly cases where a subscriber is specifically looking for data that corresponds to a targeted piece of metadata, such as the subject line of an email received by BizTalk. As always, design your solution in a way that solves your business problem in an efficient manner.
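The difference between subscribing on message type/content versus publisher context can be sketched like this (the attribute names are hypothetical stand-ins for BizTalk's promoted properties):

```python
messages = [
    {"type": "PurchaseOrder", "total": 1200, "via_port": "ReceivePort_A"},
    {"type": "Invoice",       "total": 80,   "via_port": "ReceivePort_B"},
    {"type": "PurchaseOrder", "total": 30,   "via_port": "ReceivePort_B"},
]

# Reusable: the subscription filters on message type and content, so it keeps
# working no matter which publisher or port the message arrived through.
large_pos = [m for m in messages
             if m["type"] == "PurchaseOrder" and m["total"] > 100]

# Brittle: the subscription is coupled to the publisher's receive port name,
# so renaming or adding a port silently changes what this subscriber sees.
from_port_a = [m for m in messages if m["via_port"] == "ReceivePort_A"]

print(len(large_pos), len(from_port_a))
```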
The SOA concept of abstraction is all about making your service a black box to consumers. All that the consumers see is an interface while possessing no visibility into the soft meaty center of the service. The underlying service could be very simple or mind-numbingly complex. It could have a very stable core, or be undergoing consistent upgrades. The service logic could integrate with a single backend system, or choreograph communication across ten applications. None of these things should matter to a service consumer who has an interface that provides an abstract perspective of the service itself.
This is where the art of service contract design plays an immense role. The contract needs to strike the right balance: hiding implementation information while still requiring the information material to an effective service. Consider operation granularity. I have an application that requires a series of API calls in order to insert a new order for a product. First I need to check the available stock, then decrement the stock, and then add the new order to the system. If I were a brand new SOA developer, I might take that API, slap a SOAP interface on it, and declare our application to be service-oriented. Wrong answer! We don't always need to expose that level of granularity to the consumer. Let's bestow upon them a nice coarse-grained interface that hides the underlying system API messiness and simply accepts the product order through a SubmitOrder operation.
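A minimal sketch of that coarse-grained SubmitOrder facade, assuming an invented three-call inventory API:

```python
stock = {"widget": 10}
orders = []

# Fine-grained internal API -- hidden from service consumers.
def check_stock(sku, qty):
    return stock.get(sku, 0) >= qty

def decrement_stock(sku, qty):
    stock[sku] -= qty

def add_order(sku, qty):
    orders.append((sku, qty))

def submit_order(sku: str, qty: int) -> bool:
    """The single coarse-grained operation exposed to consumers; it sequences
    the three internal calls so the consumer never has to."""
    if not check_stock(sku, qty):
        return False
    decrement_stock(sku, qty)
    add_order(sku, qty)
    return True

print(submit_order("widget", 3), stock["widget"])
```

The consumer calls one operation with the full order; the check/decrement/add choreography stays an implementation detail that can change freely.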
Completely non-technical abstraction example
When my order is taken at a restaurant, I don't have the opportunity (or desire!) to outline the sequence of steps I wish the chef to take in preparing my meal. Instead, I am asked a simple series of questions that are recorded and forwarded on to the kitchen. Inside the kitchen, a swift, complex set of actions are taken to get the food ready all at once. From my perspective while seated at the table, I simply made a single request and will get back what I expect. If the chef decides to try preparing a meal in a brand new way, that's of no consequence to me (unless it tastes bad). The underlying service may undergo mild or fundamental changes, but the ordering interface provided to me will remain fairly static.
Why does abstraction matter? A well-defined interface that successfully hides the service logic provides a way to change implementation details over time, while still respecting the original contract. Just because a service undergoes plumbing modifications doesn't mean that service consumers must take note of those changes or behave any differently. As long as the interface remains consistent, the service itself can accommodate either simple or radical changes. A nicely abstracted interface promotes loose coupling between the service sender and receiver while a contract that too deeply reveals implementation details can lead to tight coupling.
How does this apply to BizTalk Server solutions?
When thinking about abstraction and information hiding in BizTalk Server, I'd like to focus on how BizTalk functionality is exposed to the outside world. Here I'll highlight two ways to respect the abstraction principle in BizTalk Server.
First, let's talk about how orchestrations are consumed by outside parties. In truth, they are never directly exposed to a service consumer. It's impossible to instantiate a BizTalk orchestration without going through the adapter layer. So when we develop external service interfaces that front orchestrations, we should be diligent and not reveal aspects of our orchestration that the service consumer shouldn't know about. We can accomplish this in part by always starting our projects in a contract-driven manner: building the schema first, and then building the orchestration. If we design in reverse, it is likely that the orchestration's implementation logic will seep into the schema design. For instance, let's say that my orchestration sends employee data to a SQL Server database and also interacts with a web service exposed by a TIBCO messaging server. If I built my orchestration first, and then built up my schema along the way, I might be tempted to add fields to my employee schema where I can store a TIBCO_Response and capture and store a SQL_Exception.
Then if I used the BizTalk WCF Service Publishing Wizard to expose my orchestration as a service, I'd have an externally-facing schema polluted with information about my technical implementation. My service consumer should have no knowledge about what my orchestration does to complete its task.
Another critical way to show regard for the abstraction principle is by thoughtfully considering how to expose downstream system interfaces to upstream consumers. Let's say that you need to integrate with a Siebel application and insert new customer contacts. The WCF LOB Adapter for Siebel allows you to auto-generate the bits needed by a BizTalk orchestration to consume the target Siebel operations. When exposing that orchestration's port as a service interface, it would be a very bad decision to assign the Siebel-generated schema as our instantiating contract. There are two reasons I would avoid doing this at all costs:
- This tightly couples our service consumer to an implementation decision. Ignoring the fact that LOB system generated schemas are typically verbose and hard to digest, our service consumer should neither care nor know about how the orchestration processes the new customer. By sharing LOB system schemas as orchestration schemas, you've lost any opportunity to provide an abstract interface.
- A service typically offers a simplified interface to complex downstream activities. What if Siebel required three distinct operations to be called in order to insert a new customer? Should we expose three services from BizTalk Server and expect the service client to coordinate these calls? Absolutely not. As we discussed earlier, slapping SOAP interfaces onto existing APIs does not make an application service-oriented. Instead, we want to look for opportunities to offer services that aggregate downstream actions into a single coarse-grained exterior interface.
A good strategy for interfacing with LOB systems is to identify a single canonical schema that encapsulates all the data necessary to populate the downstream LOB systems regardless of how many individual LOB operations are needed. This strategy has two benefits. First, you obtain significant control over the structure of your service contract instead of being subjected to a data structure generated by an adapter. Secondly, we achieve a much more flexible interface that is no longer dependent on a particular implementation. What if the downstream LOB system changes its interface or the target LOB system changes completely? In theory, the service consumer can remain blissfully unaware of these circumstances as their interface is cleanly separated from the final data repository used by the service.
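To illustrate, here's a sketch in which one canonical contact message drives several LOB calls behind a single operation. The three Siebel steps are assumed for the sake of the example, not real Siebel operations:

```python
def insert_contact(canonical: dict) -> list:
    """Single coarse-grained operation; the consumer supplies one canonical
    contact message and never sees how many LOB calls it takes."""
    calls = []
    # Hypothetical three-step LOB choreography, invisible to the consumer:
    calls.append(("CreateAccount", canonical["company"]))
    calls.append(("CreateContact", canonical["name"]))
    calls.append(("LinkContactToAccount", canonical["name"], canonical["company"]))
    return calls

print(insert_contact({"name": "Jane Doe", "company": "ACME"}))
```

If the target system later needs two calls, or five, or is replaced entirely, only the body of `insert_contact` changes; the canonical contract the consumer sees stays stable.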
SOA-compatible services need to support cross-platform invocation and the service itself will often access a heterogeneous set of data and functions. Interoperability is all about making diverse systems work together and it is a critical component of a long-term SOA.
Similar to all of the core SOA principles, service interoperability needs to be designed early in the project lifecycle instead of being an afterthought addressed only moments before production deployment. Now, interoperability doesn't mean that the service has to accommodate a mix of runtime host environments. The fact that a service fails to run in both the Microsoft IIS and BEA WebLogic hosting platforms doesn't mean that I've written a closed, poorly-designed service. When we talk about interoperability, we are concentrating on how a wide variety of disparate clients can access a single service. Is the fact that my service was written in .NET 3.5 and hosted by IIS 7.0 completely transparent to my Java, .NET, and Ruby users? If your service was written well, then the answer to that question should be "yes".
As for service implementation, you ideally want to empower your service to yank data from any available source. To do so, the service needs a means to access diverse sets of resources that may not natively expose simple interfaces. This is where a service/integration bus can truly shine. Some applications just won't naturally play nice with each other. But a service bus with built-in adapter technology can bridge those gaps and enable unfriendly systems to share and consume data from the outside. For example, the BizTalk Adapters for Host Systems produce no-code integration solutions for IBM mainframe technology. I can write a snazzy WCF service that chews on and returns data that dwells in a VSAM host file while the service client remains blissfully unaware. The adapters in BizTalk Server enable us to build rich services that penetrate existing non-service-oriented applications and seamlessly weave their data into the service result.
So how do you achieve interoperability in your service environment? First and foremost, you want to adhere to the standard entities that typically describe services and their behavior. The "big four" technologies to keep in mind are WSDL (for service contract description), XSD (for message structure definition), SOAP (the protocol for sending service messages) and UDDI (for service registration and discovery). All of these technologies are considered "cross-platform" and are readily supported by both major and minor software vendors. Do you need to use each of these technologies in order to provide an interoperable service? Definitely not. Some find WSDLs to be obtuse and unnecessary and still others find XSD to be a lousy way to organize data. However, given BizTalk's embrace of these artifact types, I'll work within these confines.
I'd be remiss if I didn't mention that service interoperability is also quite possible through services written in a RESTful manner. That is, services that don't use the more verbose SOAP interface and instead rally around HTTP URI significance, distinct "resources", transferring resource representations, and the well-defined HTTP verbs. RESTful services typically offer a looser concept of a "message contract" and don't provide a standard way to share the representations that the service expects or returns. Although WCF now has full support for RESTful services, the BizTalk WCF adapter does not readily expose or consume such endpoints.
When building a service for interoperability, what do you need to consider? From my perspective, interoperability design comes in at four major points:
- Endpoint choice: If you truly intend for your service to be available to the widest range of consumers, then you need to pick an endpoint that is accessible to the masses. Simply put, pick a protocol like HTTP that everyone can support. Now, there's no shame in exposing WCF's netTcpBinding endpoint for targeted consumers, but be aware that you've instantly settled on a .NET-to-.NET only solution.
- Data structure: Properly selecting friendly XML data types and node behaviors is a vital part of building an interoperable service message. How are decimals handled? What's the precision of a floating point number in .NET versus Java? For intricate calculations, those answers have a significant impact on the accuracy of data used by the service. Don't forget about date/time handling either, as XSD has a very rigid dateTime data type (CCYY-MM-DDThh:mm:ss), but either source or destination systems may enforce an alternate format.
- Security scheme: Cross-platform security can be a challenge, but without it, one cannot truly put forward an interoperable service. Even with the WS-Security standard itself, you are bound to come across existing service clients who support different versions or flavors of these standards, thus making pure interoperability impossible.
- Transaction support: The naturally stateless nature of most services makes the idea of a two-phase commit problematic to implement. When either exposing a service that must accept a transaction, or when the internal functions of a service require the assistance of a transaction, you want to lean heavily on standard mechanisms that can ensure the widest range of compatibility across platforms and technologies.
Completely non-technical interoperability example
For me, a true test of a quality ethnic restaurant is if the people working there are of the same ethnicity that the restaurant touts as its speciality. However, what if the chef doesn't speak the same language as the waiter? In this case, they rely on multiple means of interoperability. First, they can use a taxonomy consisting of letter codes or numbers to represent the meals requested by patrons. Secondly, they can employ a single translator who proxies communication between the personnel who don't natively speak the same language.
How does this apply to BizTalk Server solutions?
BizTalk Server is the most vendor-neutral product that Microsoft has ever manufactured. Its 25+ built-in adapters allow it to readily access an impressive set of industry-standard and vendor-specific technologies. The question is, how do we make BizTalk Server's external interface as interoperable as possible in order to support the widest range of client types? Let's evaluate BizTalk's interoperability support in the four areas outlined previously.
Deciding upon an on-ramp technology for the service bus is a critical task. Do we expose a FILE-based interface that supports legacy applications? How about a very simple HTTP interface that is sure to please basic web service clients? Each choice has tradeoffs. Fortunately for BizTalk architects, this needn't be such a gut-wrenching decision. BizTalk Server walls off the interface from the implementation logic in a very loosely-coupled fashion making it possible to support a mix of inbound channel technologies. Remember that the logical ports in an orchestration are not associated with a specific technology during design time. Also recall that even when an orchestration is bound to a physical messaging artifact at runtime, it is not bound to an individual receive location, but rather to the more encompassing receive port. A single receive port can contain countless receive locations which all accept data via different channels. As a result, we should carefully consider our service audience, and based on that assessment, configure the acceptable number of endpoints that accommodate our primary consumers. If we plan on building a very accessible service which also provides advanced capabilities for modern users, then a receive port filled with receive locations for both WCF-BasicHttp and WCF-WSHttp adapters makes sense. This way, our simple clients can still access the service using classic SOAP capabilities, while our forward-thinking clients can engage in a more feature-rich service conversation with us. If we later discover that we have service consumers who cannot speak HTTP at all, then BizTalk Server still affords us the opportunity to reveal more traditional endpoints such as FILE or FTP.
One place that interoperability between systems can subtly fail is when the data itself is transferred between endpoints. How one platform serializes a particular data type may be fundamentally different on an alternate platform. For instance, be sure that if you've defined a field as nullable that a standard mix of consumers can indeed accept a null value in that data type. Note that the float and decimal data type may have different levels of precision based on the platform so you could encounter unexpected rounding of numerical values. Also consider the handling of datetime values across environments. While the XSD datetime data type is quite rigid in format, you may choose to use an alternate date format embedded in a string data type instead. If you do so, you must ensure that your target service consumers know how to handle a datetime in that format. In general, a reliance on simpler data types is going to go a long way towards support for the widest variety of platforms. You can stay focused on this concept by building your XSD schema first (and complying with known types) prior to building a service that adheres to the types in the schema. Fortunately for us BizTalk developers, we're used to building the contract first.
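These serialization pitfalls are easy to demonstrate. Python is used here for brevity, but the same issues arise on any platform:

```python
from decimal import Decimal
from datetime import datetime, timezone

# Binary floats accumulate representation error, and different platforms may
# round or render it differently; an exact decimal type avoids the problem.
print(0.1 + 0.2)                        # not exactly 0.3
print(Decimal("0.1") + Decimal("0.2"))  # exactly 0.3

# The XSD dateTime format (CCYY-MM-DDThh:mm:ss) as an explicit, agreed-upon
# wire format, rather than relying on each platform's default date rendering.
ts = datetime(2009, 4, 1, 13, 30, 0, tzinfo=timezone.utc)
print(ts.strftime("%Y-%m-%dT%H:%M:%S"))
```

Agreeing up front on exact numeric types and an explicit date format in the contract removes two of the most common cross-platform surprises.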
Alongside the data structure itself, a service is more interoperable when the service contract is not needlessly complicated. A complicated WSDL definition would describe an XSD contract that possessed numerous nested, imported schemas with a distinct set of namespaces. You may find that some SOAP toolkits do not properly read WSDL files with these types of characteristics. While it can initially be seen as a huge timesaver that application platforms will auto-magically generate a WSDL from a service, you are often better off creating your own WSDL file that simplifies the portrayal of the service. Fortunately for us, both WCF and BizTalk Server support the usage of externally defined WSDL files as replacements for framework-generated ones.
Service security is a tricky concept due to the fact that support for cross-platform security technology has yet to extend into all major software platforms. WCF (and thus BizTalk Server) exploits the WS-Security set of standards, which offer platform-neutral security schemes, but few vendors have offerings that fully support them. So, when architecting service security, you can either implement modern security schemes supported through WS-I standards, or go the more traditional route of securing the transmission channel with Secure Sockets Layer (SSL) and/or securing the data throughout its journey by applying X.509 certificates and encrypting the payload.
The embrace of the service transaction standard is also slow in coming. WCF incorporates the WS-AtomicTransaction and WS-ReliableMessaging standards, but note that BizTalk Server only explicitly supports WS-AtomicTransaction. Be aware that you can make a BizTalk WCF adapter use WS-ReliableMessaging by manually constructing the binding in the WCF-Custom adapter. Also, BizTalk's support for service transactions only extends to the point of publication to the MessageBox, and the distribution of messages from the MessageBox.
To design a BizTalk service that is interoperable with both security and transaction concepts in mind, you may be forced to implement the security specifications published by the WS-I organization and to educate service clients about the frameworks and libraries they need in order to properly engage these advanced service capabilities.
In my humble opinion, the principle of reusability is the most important aspect of a service-oriented architecture. I consider reusability a design-time objective, while reuse on its own is an inadequate runtime measure of success. In essence, reusability is all about effectively segmenting functionality into services which are capable of being used by others outside the scope of your immediate effort. Note the word "capable" in the previous sentence. Unless you can predict the future, it's hard to guarantee that a service built today will satisfy all future needs for similar capabilities. Even if no additional consumers decide that a service is of use to them, this doesn't mean that the service is a failure. By itself, the forethought and the decisions involved in making a service reusable make its construction a worthwhile effort.
Why does reusability matter? The answers may seem obvious, but I'll call out three explicit benefits:
- Future applications can harvest the functionality of the original service and accelerate their solution development while encouraging the adoption of composite applications. Some SOA advocates foresee a world where many applications consist of very little original functionality but rather, are simply aggregations of existing services exposed in the enterprise.
- A heavily reused service affords an organization the opportunity to make solitary changes that cascade to all consumers of that functionality. Let's say we have a service, which aggregates data from multiple underlying systems and returns a single, unified view of a customer entity. Assuming that most major applications in our enterprise use this service to get information about our customers, we can change the implementation (swap out data sources, add new sources, change logic) of this service and each consumer instantly gets the benefits.
- The architectural choices made in designing a reusable service will inevitably encourage the implementation of the other mentioned SOA principles such as loose-coupling, abstraction, and interoperability in addition to other core principles such as composability, encapsulation, and discoverability.
A reusable service can be of many diverse shapes and sizes. First of all, such a service could exhibit a coarse-grained interface that employs a static contract while supplying a distinct business function. For instance, a service with an operation named PublishAdverseEvent (which takes reports of patients experiencing negative effects from a medication) can be used by every system or business process that might produce this sort of data. This service takes a very specific payload, but it can be reused by the multiple systems that encounter this category of input data. Conversely, we might define a utilitarian service that archives information to a database through a loose contract that accepts any structured data as a parameter. This service also offers a reusable interface that can be applied to a varied set of use cases. Reusable services may have very generic logic or very specific logic, flexible contracts or rigid ones, and may be business-oriented or cross-cutting functional services. A key aspect of reuse is to define the service in such a way that it can be useful to those outside of your immediate project scope.
Completely non-technical reusability example
An intelligent restaurant owner doesn't hire a chef who is only capable of preparing grilled cheese sandwiches. Instead, they seek out chefs who are adept at not only repeatedly assembling the same meal, but also skilled at delivering a wide variety of different meals. The service offered by the chef, "preparing food", is a reusable service that accepts multiple inputs and produces an output based on the request made.
How does this apply to BizTalk Server solutions?
Virtually every component that comprises a BizTalk solution can be constructed in a reusable fashion. Take schemas for example. A single schema may be aggregated into other schemas, or simply applied to multiple different projects. For instance, a schema describing a standard Address node might be deemed an enterprise standard. Every subsequent schema that must contain an address can import that standard Address element. That's an example of an incomplete "part" that can only be useful as a component of another schema. You may also define an inclusive schema that depicts a standard enterprise entity such as a Product. Any ensuing project that requires processing on a Product would reference and reuse this pre-defined schema. Look for opportunities in your schemas to harvest enterprise entities and elements that may prove useful to those that follow you. When doing so, consider establishing and applying a project-neutral namespace that highlights those artifacts as multipurpose instead of project-specific.
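A sketch of that import pattern follows (the namespaces and file name are illustrative): the project schema imports the shared Address schema and references its element rather than redefining it locally.

```xml
<!-- Illustrative project schema reusing a shared Address element -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:common="http://contoso.com/schemas/common"
           targetNamespace="http://contoso.com/schemas/order"
           elementFormDefault="qualified">
  <!-- Pull in the enterprise-standard Address definition
       instead of redeclaring it in every project -->
  <xs:import namespace="http://contoso.com/schemas/common"
             schemaLocation="Common.Address.xsd" />
  <xs:element name="Order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="OrderId" type="xs:string" />
        <!-- Reference, not copy: changes to the shared schema
             flow to every schema that imports it -->
        <xs:element ref="common:Address" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Keeping the shared schema under its own project-neutral namespace, as suggested above, is what signals to later teams that the artifact is an enterprise asset rather than project plumbing.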
Consider your experience when building BizTalk maps. In the development palette, you get access to 80+ functoids that provide a repeatable, consistent way to perform small fine-grained activities. When you encounter a situation where an out-of-the-box functoid won't suffice, BizTalk permits you to either build your own custom functoids, or, simply reference an external (reusable) component that holds the functionality you crave.
While the BizTalk Scripting functoid does allow you to embed isolated code directly into the map, the window for doing so is quite small and devoid of familiar code-writing comforts such as IntelliSense and debugging. This is a polite way of telling you that you should only embed simplistic code snippets in the map directly, and leave complex or weighty logic to be written in externally maintained (and hopefully reusable) assemblies.
What about BizTalk pipelines and pipeline components? By nature, most pipeline components are built to serve a universal purpose well beyond the demands of a single consumer. Surely, you could choose to write an archive receive pipeline component that acted in a very specific way for a very specific message, but that would be bad form. Instead, a well-written archive component would accept any content and use configuration attributes to decide where to publish the archive log. When designing custom pipeline components, consider first writing all the code necessary to perform the desired function, and then scan your project for hard-coded references that are project-specific (such as XPath statements or file paths). Take those references and turn them into configuration properties that can be substituted by other applications at a later time.
WCF behaviors are now an asset to be reckoned with in a BizTalk environment. They serve a similar function to pipeline components in that they process the raw message as it travels in and out of the BizTalk bus. Reusable WCF behaviors can be written for message logging, caching, error handling, authorization, and more. What's more, WCF behaviors can be shared between BizTalk applications and standalone WCF services. This means that a well-written enterprise service behavior does not need to be duplicated just to be used in BizTalk Server.
When should you use WCF behaviors versus BizTalk pipelines? They can both perform similar actions on the stream of data passing through BizTalk. However, BizTalk pipelines offer the advantage of knowing about the BizTalk message type and thus have clearly defined ways to deal with batching/de-batching and possess full control over creating or changing the full BizTalk message context including promoted properties. That said, the continued focus by Microsoft on WCF technology, and the ability to share WCF behaviors between BizTalk applications and standard WCF services means that where possible, you should strongly consider putting generic data processing logic into WCF behaviors instead of pipelines.
How about orchestration? On the surface, it might appear that orchestrations only serve distinct purposes and are lousy candidates for reuse. While it's true that many workflow processes are targeted to specific projects, there are clear ways to enjoy the benefits of reuse here. To begin with, consider the means by which a message enters the orchestration. It's very convenient to define a "specify later" orchestration port on the orchestration that is inevitably bound to a physical receive port. However, this type of port tightly couples itself to the receive port and thus reduces its potential for reuse. Wherever possible, look at the Direct Binding option and move your tight coupling to the MessageBox instead of a specific receive port. With direct binding, the orchestration simply subscribes directly on the MessageBox, so any publisher, whether a receive port or another orchestration, can flow messages into this orchestration.
We can also choose to perform orchestration decomposition and seek out reusable aspects of our orchestration that may serve other functions. For example, you may decide that every exception encountered across orchestrations should be handled in the same fashion. Why build that same processing logic into each and every orchestration? Instead, you can define a single orchestration which accepts messages from any orchestration, logs the pertinent details to an exception log, and optionally sends exception notifications to administrators. Our communal orchestration might accept any content and merely append the data blob to a common registry. Alternatively, the orchestration could accept a pre-defined OrchestrationException schema which all upstream orchestrations inflate prior to publishing their exception to the MessageBox. Seek out common processing logic and universal functionality that can be refactored into a shared assembly and used across organizational projects.
Finally, let's talk about reuse in the BizTalk messaging layer. On the message receipt side, receive locations are quite multipurpose and compel no specific data format on the messages they absorb. If I define a FILE receive location, there is absolutely no reason that such a location couldn't be used to take in a broad mix of message types. However, let's be realistic and consider a case where a particular receive port is bound to a specific orchestration. This orchestration processes adverse events that have occurred with our medical products. The orchestration expects a very specific format which, fortunately, the initial service consumer adheres to. Inevitably, the next consumer isn't so accommodating and can only publish a message shaped differently than what the orchestration expects. Do we need to start over with a new orchestration? Absolutely not. Instead, we can reuse the exact same receive port, and even add a new receive location if the existing service endpoint is inaccessible to the new client. To support the incompatible data structure, a new map which converts the client format to the orchestration format can be added to the receive port. In this scenario, the orchestration was completely reusable, the receive port was reused, and optionally, the single receive location may have been reused.
On the message transmission side, BizTalk send ports also offer opportunities for reuse. First off, send port maps allow a mismatched collection of messages to funnel through a single endpoint to a destination system. Let's say I have a solitary send port that updates a company's social events calendar through a service interface. Even though party notices come from varied upstream systems, we can flow all of them through this sole send port by continually affixing new maps to the send port. We don't need a new send port for each slightly different message containing the same underlying data, but rather can aggressively reuse existing ports by simply reshaping the message into an acceptable structure. Secondly, BizTalk allows us to define dynamic ports, which rely on upstream processes to dictate the adapter and endpoint address for the port. A single dynamic port might be used by countless consumers who rely on runtime business logic to determine where to transmit the data at hand. Instead of creating dozens of static send ports which are solely used to relay information (i.e., no mapping), we can repeatedly reuse a single dynamic send port.
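As a sketch of how an upstream process feeds a dynamic send port, an Expression shape in the orchestration sets the transport and address at runtime (the port and message names here are hypothetical):

```
// XLANG/s Expression shape: route the message at runtime.
// The URI prefix (FILE://) tells BizTalk which adapter to use.
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = "FILE://C:\\Drops\\%MessageID%.xml";
DynamicSendPort(Microsoft.XLANGs.BaseTypes.TransportType) = "FILE";
```

Because the address is computed by business logic rather than fixed in a binding, the same physical port can serve any number of destinations.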
In this article we covered the core principles of a service-oriented architecture. In the next article we will learn about standard message exchange patterns for services and which types of services can be exposed.
If you have read this article, you may also be interested in:
- BizTalk Server: Standard Message Exchange Patterns and Types of Service
- New SOA Capabilities in BizTalk Server 2009: WCF SQL Server Adapter
- Consuming the Adapter from outside BizTalk Server
- New SOA Capabilities in BizTalk Server 2009: UDDI Services