The re-architecture approach reduces mainframe costs and legacy risks by migrating the application off the mainframe and re-structuring it using the full range of modern software tools and capabilities at our disposal. However, this very process of re-structuring the application, essentially re-building it using knowledge and business rules mined from existing code, introduces certain risks. How can we ensure that the new application maintains the functional equivalence and operational characteristics of the original? Can we meet the performance and scalability requirements not only of the current environment, but of future growth as well? Can we deliver the new application within the time and budget constraints agreed to at the beginning of the project? The older the application, the larger its scope and volume of code, and the fewer original developers available, the higher these risks may be.
This article, by Jason Williamson, Tom Laszewski, and Mark Rakhmilevich, takes a look at an alternative approach that balances these risks in a different way. The re-hosting-based modernization approach focuses on migrating the application off the mainframe to a compatible software stack on an open-systems platform, preserving the language and middleware services on which the application has been built. It protects legacy investment by relying on a mainframe-compatible software stack to minimize changes to the core application and keep the application's business logic intact, while running it on an open-systems platform with more flexible and less expensive system infrastructure. It keeps the customer's options for SOA enablement and re-architecture open, by using an SOA-ready middleware stack to support Web services and ESB interfaces for re-hosted components. And using an extensible platform with transparent integration to J2EE components, BPM-based processes, and other key tools of the re-architecture approach means you can start to re-architect selected components at will, without requiring changes to the re-hosted services running the remainder of the business logic.
SOA enablement wraps key application interfaces in services and integrates them into the SOA. This largely leaves the existing application logic intact, minimizing changes and adding risk only for those components that need restructuring to become SOA-ready. But while the interfaces are modernized without subjecting the core application components to much change, the high costs and the various legacy risks associated with the mainframe platform remain. In addition, the performance and scalability of the new interfaces need to be well specified and tested, and the additional load they place on the system should be included in any planned capacity upgrades, potentially increasing the overall costs.
Reducing or eliminating legacy mainframe costs and risks via re-hosting-based modernization also helps customers fund the SOA enablement and re-architecture phases of legacy modernization, and lays the groundwork for these steps. SOA-enabling a re-hosted application is a much easier process on an open-systems-based, SOA-ready software stack, and a more efficient one in terms of system resource utilization and cost. Re-architecting selected components of a re-hosted application based on specific business needs is a lower-risk approach than re-architecting the entire application en masse, and the risk can be further reduced by ensuring that the target re-hosting stack provides rugged and transparent integration between re-hosted services and new components.
Keeping It Real: Selective re-architecture is all about maximizing ROI by focusing re-architecture investment in the areas with the best pay-off. A change from one language or development paradigm to another shouldn't be undertaken lightly: the investment and risks need to be well understood and justified. It is the right investment for components that require frequent maintenance changes but are difficult to maintain because of poor structure and layered changes. The payback on re-architecture investment will come from reducing the cost of future maintenance. Similarly, components that need significant functional changes to meet new business requirements can benefit from a substantial productivity increase after re-architecture to a more modern development framework with richer tools to support future changes. Here the payback comes from greater business agility and time-to-market improvements. On the other hand, well-structured and maintainable COBOL components that do not need extensive changes to meet business needs will have very little return to show for the significant re-architecture investment. Leaving them in COBOL on a modern, extensible platform saves significant re-architecture costs that can be invested elsewhere, reduces risk, and shortens payback time. These considerations can help to optimize ROI for medium to large modernization projects, where components number in the hundreds or thousands and contain millions or tens of millions of lines of code.
Re-Hosting Based Modernization
For many organizations, mainframe modernization has become a matter of 'how', not 'if'. Numerous enterprises and public sector organizations choose re-hosting as the first tangible step in their legacy modernization program precisely because it delivers the best ROI in the fastest possible manner, and accelerates the move to SOA enablement and selective re-architecture. Oracle, together with our services partners, provides a comprehensive re-hosting-based modernization solution that many customers have leveraged for a successful migration of selected applications or complete mainframe environments ranging from a few hundred MIPS to well over 10,000 MIPS.
Two key pillars support successful re-hosting projects:
- Optimal target environment that lowers the Total Cost of Ownership (TCO) by 50–80 percent and maintains mainframe-class Quality of Service (QoS) using open, extensible, SOA-ready, future-proof architecture
- Predictable, efficient projects delivered by our SI partners with proven methodologies and automated tools
The optimal target environment provided by Oracle is powered by a proven open-systems software stack, leveraging Oracle Database and Oracle Tuxedo for a rock-solid, mainframe-class transaction processing (TP) infrastructure closely matching mainframe requirements for online applications.
Mainframe-compatible Transaction Processing: Support for IBM CICS or IMS TM applications in native COBOL or C/C++ language containers with mainframe-compatible TP features.
RASP: Mainframe-class performance, reliability, and scalability provided by Oracle Real Application Clusters (RAC) and Tuxedo multi-node and multi-domain clustering for load-balancing and high availability despite failure of individual nodes or network links.
Workload and System Management: End-to-end transaction and service monitoring to support 24X7 operations management provided by Oracle's Enterprise Manager Grid Control and Tuxedo System and Application Monitor.
SOA Enablement and Integration: Extensibility with Web services using Oracle Services Architecture Leveraging Tuxedo (SALT), J2EE integration using the WebLogic-Tuxedo Connector (WTC), Enterprise Service Bus (ESB), Portal, and BPM technologies to enable easy integration of re-hosted applications into modern Service-Oriented Architectures (SOAs).
Scalable Platforms and Commodity Hardware: Scalable, Linux/UNIX-based open systems from HP, Dell, Sun, and IBM, providing:
- Performance on a par with mainframe systems for most workloads at significantly reduced TCO
- Reliability and workload management similar to mainframe installations, including physical and logical partitioning
- Robust clustering technologies for high availability and fail-over capabilities within a data center or across the world
The diagram below shows conceptual mapping of mainframe environment to compatible open systems infrastructure:
Predictable, efficient projects delivered by leading SIs and key modernization specialists use risk-mitigation methodologies and automated tools honed over numerous projects to address a complete range of Online, Batch, and Data architectures, and the various technologies used in them. These project methodologies, and the automated tools that support them, encompass all phases of a migration project:
- Preliminary Assessment Study
- Application Asset Discovery and Analysis
- Application and Data Conversion (pilot or entire application portfolio)
- System and Application Integration
- Test Engineering
- Regression and Performance Testing
- Education and Training
- Operations Migration
Combining a proven target architecture stack that is well matched to the needs of mainframe applications with mature methodologies supported by automated tools has led to a large and growing number of successful re-hosting projects. There is rising interest in leveraging the re-hosting approach to mainframe application modernization as a way to get off a mainframe fast, with minimal risk, and in a more predictable manner for large, business-critical applications that have evolved over many years and multiple development teams. The re-hosting-based modernization approach preserves an organization's long-term investment in critical business logic and data without risking business operations or sacrificing QoS, while enabling customers to:
- Reduce or eliminate mainframe maintenance costs, and/or defer upgrade costs, saving customers 50–80 percent of their annual maintenance and operations budget
- Increase productivity and flexibility in IT development and operations, protecting long-term investment through application modernization
- Speed up and simplify application integration via SOA, without losing transactional integrity and the high performance expected by the users
The rest of this article explores the critical success factors and proven transformation architecture for re-hosting legacy applications and data, describes SOA integration options and considerations when SOA-enabling re-hosted applications, highlights key risk-mitigation methodologies, and provides a foundation for the financial analysis and ROI model derived from over a hundred mainframe re-hosting projects.
Critical Success Factors in Mainframe Re-Hosting
Companies considering a re-hosting-based modernization strategy that involves migrating some applications off the mainframe have to address a range of concerns, which can be summarized by the following questions:
- How to preserve the business logic of these applications and their valuable data?
- How to ensure that migrated applications continue to meet performance requirements?
- How to maintain scalability, reliability, transactional integrity, and other QoS attributes in an open system environment?
- How to migrate in phases, maintaining robust integration links between migrated and mainframe applications?
- How to achieve predictable, cost-effective results and ensure a low-risk project?
Meeting these challenges requires a versatile and powerful application infrastructure—one that natively supports key mainframe languages and services, enables automated adaptation of application code, and delivers proven, mainframe-like QoS on open system platforms. For re-hosting to enable broader aspects of the modernization strategy, this infrastructure must also provide native Web services and ESB capabilities to rapidly integrate re-hosted applications as first-class services in an SOA.
Equally important are a proven risk-mitigation methodology, automated tools, and project services specifically honed to address automated conversion and adaptation of application code and data, supported by a cross-platform test engineering and execution methodology, strong system and application integration expertise, and deep experience with operations migration and switch-over.
Preserving Application Logic and Data
The re-hosting approach depends on a mainframe-compatible transaction processing and application services platform supporting common mainframe languages such as COBOL and C, which preserves the original business logic and data for the majority of mainframe applications and avoids the risks and uncertainties of a re-write. A complete re-hosting solution provides native support for TP and Batch programs, leveraging an application server-based platform that provides container-based support for COBOL and C/C++ application services, and TP APIs similar to IBM CICS, IMS TM, or other mainframe TP monitors.
Online Transaction Processing Environment
Oracle Tuxedo is the most popular TP platform for open systems, as well as the leading re-hosting platform. It can run most mainframe COBOL and C applications unchanged in a container-based framework that combines common application server features (health monitoring, fail-over, service virtualization, and the dynamic load balancing critical to large-scale OLTP applications) with standard TP features, including transaction management and reliable coordination of distributed transactions (also known as Two-Phase Commit, standardized as XA). It provides the highest possible performance and scalability, and has recently been benchmarked against a mainframe at over 100,000 transactions per second with sub-second response time.
Oracle Tuxedo supports common mainframe programming languages, that is, COBOL and C, and provides comprehensive TP features compatible with CICS and IMS TM, which makes it a preferred application platform choice for re-hosting CICS or IMS TM applications with minimal changes and risks. In the Tuxedo environment, COBOL or C business logic remains unchanged. The only adaptation required is automated mapping of CICS APIs (CICS EXEC calls) to equivalent Tuxedo API functions.
This mapping typically leverages a pre-processor and a mapping library implemented on the Tuxedo platform, using the full range of Tuxedo APIs. The automated nature of the pre-processing and the comprehensive coverage provided by the library ensure that most CICS COBOL or C programs are easily transformed into Tuxedo services. Unlike other solutions that embed this transformation in their compiler, coupled with a proprietary emulation run-time, the Tuxedo-based solution provides this mapping as a compiler-independent source module, which can be easily extended as needed. The resultant code uses the Tuxedo API at native speed, allowing it to reach tens of thousands of transactions per second while taking advantage of all Tuxedo facilities. In a re-hosted application, CICS transactions become Tuxedo services, registered for processing by Tuxedo server processes. These services can be deployed on a single machine or across multiple machines in a Tuxedo domain (a SYSPLEX-like cluster). The services are called by front-end Java, .Net, or Tuxedo/WS clients, by UI components (tn3270 or web-based converted 3270/BMS screens), or by other services in the case of transaction linking. Deferred transactions are handled by Tuxedo's /Q component, which provides in-memory and persistent queuing services.
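To make the idea of the pre-processor concrete, the sketch below is a deliberately simplified, hypothetical illustration in Python of a mechanical source rewrite: it maps two common CICS EXEC verbs onto the equivalent Tuxedo ATMI calls (`tpcall`, `tpreturn`). Real migration toolchains cover the full verb set and emit COBOL or C, not strings; the buffer names (`sendbuf`, `rcvbuf`) are placeholders.

```python
import re

# Hypothetical, minimal rewrite rules: each CICS EXEC verb maps to a
# Tuxedo ATMI call. A production pre-processor handles the complete
# CICS API and generates compilable COBOL/C source.
RULES = [
    # EXEC CICS LINK PROGRAM('NAME') ... END-EXEC -> synchronous tpcall
    (re.compile(r"EXEC CICS LINK PROGRAM\('(\w+)'\).*?END-EXEC", re.S),
     r'tpcall("\1", sendbuf, sendlen, &rcvbuf, &rcvlen, 0)'),
    # EXEC CICS RETURN END-EXEC -> tpreturn ends the service routine
    (re.compile(r"EXEC CICS RETURN\s+END-EXEC"),
     r"tpreturn(TPSUCCESS, 0, rcvbuf, rcvlen, 0)"),
]

def rewrite(source: str) -> str:
    """Apply each CICS-to-Tuxedo rule over the program text."""
    for pattern, replacement in RULES:
        source = pattern.sub(replacement, source)
    return source

cics_fragment = "EXEC CICS LINK PROGRAM('ACCTUPD') END-EXEC"
print(rewrite(cics_fragment))
# tpcall("ACCTUPD", sendbuf, sendlen, &rcvbuf, &rcvlen, 0)
```

The point is that the business logic around these calls is untouched; only the TP API surface is swapped.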
The diagram below shows Oracle Tuxedo and its surrounding ecosystem of SOA, J2EE, ESB, CORBA, MQ, and Mainframe integration components:
User Interface Migration
The diagram on the next page depicts a target re-hosting architecture for a typical mainframe OLTP application. The architecture uses Tuxedo services to run re-hosted CICS programs and a web application server to run the re-hosted BMS UI. The servlets or JSPs containing the HTML that defines the screens connect with Tuxedo services via Oracle Jolt, WTC, or SALT.
Customers using mainframe 4GLs or languages such as PL/I or Assembler frequently choose to convert these applications to COBOL or C/C++. The adaptation of CICS or IMS TM API calls is automated through a mapping layer, which minimizes overall changes for the development team and allows them to keep maintaining a familiar application. For more significant extensions and new capabilities, customers can incrementally leverage Tuxedo's own APIs and facilities, leverage the tightly-linked J2EE environment provided by WebLogic Server, or even transparently make Web services calls. The optimal extensibility options depend on application needs, the availability of Java or C/COBOL skills, and other factors.
| Feature or Action | CICS | Tuxedo |
| --- | --- | --- |
| Read/write queues (TD, TS) | READQ / WRITEQ TD,TS | /Q: tpenqueue / tpdequeue |
| Begin new transaction | | /Q and TMQFORWARD |
| Commit or Rollback | SYNCPOINT / SYNCPOINT ROLLBACK | tpcommit / tpabort |
Keeping it Real: For those familiar with CICS, this is a very short sample of the CICS verbs. CICS has many functions, most of which either map natively to a similar Tuxedo API or are provided by migration specialists based on their extensive experience with such migrations.
In summary, Tuxedo provides a popular platform for deploying, executing, and managing COBOL and C re-hosted transactional applications requiring any of the following OLTP and infrastructure services:
- Native, compiler-independent support for COBOL, C, or C++
- Rich set of infrastructure services for managing and scaling diverse workloads
- Feature-set compatibility and inter-operability with IBM CICS and IMS/TM
- Two-Phase Commit (2PC) for managing transactions across multiple application domains and XA-compliant resource managers (databases, message queues)
- Guaranteed inter-application messaging and transactional queuing
- Transactional data access (using XA-compliant resource managers) with ACID qualities
- Services virtualization and dynamic load balancing
- Centralized management of multiple nodes in a domain, and across multiple domains
- Communications gateways for multiple traditional and modern communication protocols
- SOA Enablement through native Web services and ESB integration
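The Two-Phase Commit capability listed above is worth unpacking, since it is what lets a re-hosted transaction update an Oracle database and a /Q queue atomically. The following is a toy coordinator sketch, illustrative only: Tuxedo's transaction manager implements the XA protocol with logging, recovery, and timeouts that this simplification omits, and the participant names are hypothetical.

```python
# Minimal two-phase-commit coordinator sketch (illustrative only).
class Participant:
    """Stands in for an XA resource manager (database, queue)."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: the resource manager votes yes/no.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

def two_phase_commit(participants):
    # Phase 1: ask every resource manager to prepare (vote).
    if all(p.prepare() for p in participants):
        # Phase 2: every vote was yes, so commit everywhere.
        for p in participants:
            p.commit()
        return "committed"
    # Any "no" vote aborts the global transaction everywhere.
    for p in participants:
        p.rollback()
    return "rolled_back"

db = Participant("oracle_db")
queue = Participant("tuxedo_q")
print(two_phase_commit([db, queue]))  # committed
```

The ACID guarantee comes from the rule that no participant commits until all have prepared.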
Workload Monitoring and Management
An important aspect of the mainframe environment is workload monitoring and management, which provides information for effective performance analysis, along with capabilities that enable mainframe systems to achieve better throughput and responsiveness. Oracle's Tuxedo System and Application Monitor (TSAM) provides similar capabilities:
- Define monitoring policies and patterns based on application requests, services, system servers such as gateways, bridges, and XA-defined stages of a distributed transaction
- Define SLA thresholds that can trigger a variety of events within Tuxedo event services including notifications, and instantiation of additional servers
- Monitor transactions on an end-to-end basis from a client call through all services across all domains involved in a client request
- Collect service statistics for all infrastructure components such as servers and gateways
- Detail time spent on IPC queues, waiting on network links, and time spent on subordinate services
TSAM provides a built-in, central, web-based management and monitoring console, and an open framework for integration with third-party performance management tools.
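The threshold-triggered behavior described above can be pictured with a small sketch. This is a hypothetical toy, not TSAM's actual policy engine or event API: it records per-service response times and raises an event whenever a call breaches the configured SLA, which in Tuxedo could in turn trigger the instantiation of additional servers.

```python
from collections import defaultdict

# Toy SLA monitor in the spirit of TSAM's threshold-triggered events
# (hypothetical sketch; service names and thresholds are made up).
class SlaMonitor:
    def __init__(self, threshold_ms):
        self.threshold_ms = threshold_ms
        self.samples = defaultdict(list)
        self.events = []

    def record(self, service, elapsed_ms):
        """Collect a response-time sample; raise an event on SLA breach."""
        self.samples[service].append(elapsed_ms)
        if elapsed_ms > self.threshold_ms:
            # In a real deployment this would post a Tuxedo event
            # (notification, server spawn) rather than append to a list.
            self.events.append((service, elapsed_ms))

    def average(self, service):
        times = self.samples[service]
        return sum(times) / len(times)

mon = SlaMonitor(threshold_ms=200)
for ms in (120, 95, 240):
    mon.record("ACCT_INQUIRY", ms)
print(mon.events)  # [('ACCT_INQUIRY', 240)]
```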
Batch Environment
Mainframe batch jobs are a response to the 24-hour clock on which many businesses run. Batch processing includes beginning-of-period and end-of-period (day, week, month, quarter) runs for batched updates, reconciliation, reporting, statement generation, and similar applications. In some industries, external events tied to a fixed schedule, such as intra-day processing or the opening and closing of trading on a stock exchange, drive specific processing needs. Batch applications are an equally important asset, and often need to be preserved and migrated as well. The batch environment uses Job Control Language (JCL) jobs managed and monitored by JES2 or JES3 (Job Entry Subsystem), which invoke one or more programs, access and manipulate large datasets and databases using sort and other specialized utilities, and often run under the control of a job scheduler such as CA-7/CA-11.
JCL defines a series of job steps (a sequence of programs and utilities), specifies input and output files, and provides exception handling. Automated parsing and translation of JCL jobs into UNIX scripts such as Korn shell (ksh) or Perl enables the overall structure of the job, including job steps, classes, and exception handling, to remain the same. Standard shell processing is supplemented with required utilities such as SyncSort, and with support for Generation Data Group (GDG) files. REXX/CLIST/PROC scripting environments on the mainframe are similarly converted to ksh or other scripting languages.
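The JCL-to-script translation can be sketched as follows. This is a deliberately tiny, hypothetical translator in Python that only recognizes `EXEC PGM=` statements and emits a ksh skeleton; production tools also handle DD statements, condition codes, GDGs, PROCs, and utility substitution, and the program names used here are invented.

```python
import re

# Toy JCL-to-shell translator (hypothetical sketch).
def jcl_to_shell(jcl: str) -> str:
    lines = ["#!/bin/ksh",
             "set -e   # stop on the first failing step, like a JCL abend"]
    for stmt in jcl.splitlines():
        # Recognize a job step of the form //STEPNAME EXEC PGM=PROGRAM
        m = re.match(r"//(\w+)\s+EXEC\s+PGM=(\w+)", stmt)
        if m:
            step, program = m.groups()
            lines.append(f"echo 'Step {step}'")
            lines.append(f"./{program.lower()}")
    return "\n".join(lines)

job = """\
//STEP010 EXEC PGM=ACCTEXTR
//STEP020 EXEC PGM=ACCTSORT
"""
print(jcl_to_shell(job))
```

Note how the step structure of the job survives the translation, which is exactly the property the text describes.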
Integration with Oracle Scheduler, or other job schedulers running in UNIX/Linux or Windows provides a rich set of calendar and event-based scheduling capabilities as well as dependency management similar to mainframe schedulers. In some cases, reporting done via batch jobs can be replaced using standard reporting packages such as Oracle BI Publisher.
The diagram below shows a typical target re-hosting architecture for batch. It includes a scheduler to control and trigger batch jobs, scripting framework to support individual job scripts, and an application server execution framework for the batch COBOL or C programs. Unlike other solutions that run these programs directly as OS processes without the benefit of application server middleware, Oracle recommends using container-based middleware to provide higher reliability, availability, and monitoring to the batch programs.
The target batch programs invoked by the scripts can also run directly as OS processes, but if mainframe-class management and monitoring similar to JES2 or JES3 environment is a requirement, these programs can run as services under Tuxedo, benefiting from the health monitoring, fail-over, load balancing, and other application server-like features it provides.
Files and Databases
When moving platforms (mainframe to open systems), the application and data have to be moved together. Data schemas and data stores need to be migrated in a re-hosting-based mainframe modernization project just as in a re-architecture. The approach taken depends on the source data store. DB2 is the most straightforward, since DB2 and Oracle are both relational databases. In addition to migrating the data, customers sometimes choose to perform data cleansing, extend fields, merge columns, or apply other data maintenance practices, leveraging automated tooling that synchronizes all data changes with changes to the application's data access code.
DB2 is the predominant relational database on IBM mainframes. When migrating to Oracle Database, the migration approach is highly automated, and resolves the discrepancies between the two RDBMSs in terms of field formats as well as error codes returned to applications, so that application behavior remains unchanged, including that of any stored procedures.
IMS/DB (also known as DL/1) is a popular hierarchical database for older applications. Creating an appropriate relational data schema for this data requires an understanding of the application's access patterns, so that the schema can be optimized for best performance based on the most frequent access paths. To minimize code impact, a translation layer can be used at run-time to support IMS DB-style data access from the application and map it to the appropriate SQL calls. This allows the applications to interface with the segments, now translated into DB2 UDB or Oracle tables, without impacting application code and maintenance.
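The translation layer idea can be illustrated with a small sketch: a hierarchical "Get Unique" (GU) call on a segment path becomes a SELECT joining the parent and child tables. This is a hypothetical toy; real translators also deal with PCBs, SSAs, and positioning calls (GN/GNP), and the segment and column names below are invented.

```python
# Hypothetical mapping of DL/1 segments to relational tables.
SEGMENT_TABLE = {"CUSTOMER": "customer", "ORDER": "cust_order"}
# For each child segment: (child foreign-key column, parent key column).
PARENT_KEY = {"ORDER": ("cust_order.cust_id", "customer.cust_id")}

def gu_to_sql(path, key_field, key_value):
    """Translate a GU call along a segment path into a join query."""
    target = path[-1]
    tables = [SEGMENT_TABLE[s] for s in path]
    sql = f"SELECT {SEGMENT_TABLE[target]}.* FROM {', '.join(tables)}"
    # Join each child segment back to its parent along the hierarchy.
    where = [f"{a} = {b}" for a, b in (PARENT_KEY[s] for s in path[1:])]
    # Qualify on the root segment's key, as the SSA would.
    where.append(f"{SEGMENT_TABLE[path[0]]}.{key_field} = '{key_value}'")
    return sql + " WHERE " + " AND ".join(where)

print(gu_to_sql(["CUSTOMER", "ORDER"], "cust_id", "C042"))
```

Because the layer produces ordinary SQL, the relational schema stays open to new distributed applications while the COBOL code keeps its hierarchical view.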
VSAM files are used for keyed-sequential data access, and can be readily migrated to ISAM files or to Oracle Database tables wherever transactional integrity is required (XA features). Some customers also choose to migrate VSAM files to Oracle Database to provide accessibility from other distributed applications, or to simplify the re-engineering required to extend certain data fields or merge multiple data sources.
Meeting Performance and Other QoS Requirements
The mainframe's performance, reliability, scalability, manageability, and other QoS attributes have earned it pre-eminence for business-critical applications. How well do re-hosting solutions measure up against these characteristics? Earlier solutions based on IBM CICS emulators derived from development tools often did not measure up to the demands of mainframe workloads, since they were never intended for a true production environment and had not been exposed to large-scale applications. As a result, they have only been used for re-hosting small systems under 300 MIPS that did not require clustering or distributed workload handling.
Oracle Tuxedo was built from the ground up to scale, supporting high-performance telecommunications operations. It has the distinction of being the only non-mainframe TP solution recognized for its mainframe-like performance, reliability, and QoS characteristics. Most large enterprise customers requiring such capabilities in distributed systems have traditionally relied on Tuxedo. Consistently rated by IDC and Gartner as the market leader, and predominant in non-mainframe OLTP applications, it has also become the preferred COBOL/C application platform and transaction engine for re-hosted mainframe applications requiring high performance and/or mission-critical availability and reliability.
The broad recognition of Tuxedo as the only mainframe-class application platform and transaction engine for distributed systems rests on mainframe-class performance, scalability, reliability, availability, and other QoS attributes proven in multiple customer deployments. The following table highlights some of these capabilities:
Reliability and Availability:
- Guaranteed messaging and transactional queuing
- Hardened code from 25 years of use in the world's largest transaction applications
- Transaction integrity across systems and domains through a two-phase commit (XA) for all resources such as databases, queues, and so on
- Proven in mainframe-to-mainframe transactions and messaging
- No single point of failure; 99.999% uptime, with application services upgradeable in operation
- Self-monitoring, automated fail-over, and data-driven routing for super high availability
- Centralized monitoring and management with clustered domains; automated, lights-out operations

Performance and Scalability:
- Resource management and prioritization across Tuxedo services
- Dynamic load balancing across domains based on load conditions
- Data-driven routing enables horizontally distributed database grids
- End-to-end monitoring of Tuxedo system and application services enables SLA management
- Virtualization support enables spawning of Tuxedo servers on demand
- Parallel processing to maximize resource utilization, with low-latency code paths that provide sub-second response at any load
- Horizontal and vertical scaling of system resources yields linear performance increases
- Request multiplexing (synchronous and asynchronous) maximizes CPU utilization
- Proven in credit card authorizations at over 13.5K tps, and in telco billing at over 56K tps
- Middleware of choice in HP, Fujitsu, Sun, IBM, and NEC TPC-C benchmarks
Because it delivers mainframe-like performance, reliability, and scalability in the most demanding environments, Tuxedo is used by some of the largest TP applications worldwide, delivering tens of thousands of transactions per second in real-world production applications such as funds transfer, credit card authorizations, mobile billing, and the reservations systems of major transportation vendors. It's no surprise that most of the large re-hosted mainframe applications in the 500 to 10,000-plus MIPS range are running on Tuxedo as well.
In larger deployments, Oracle Real Application Clusters (RAC) is used to support the deployment of a single database across a cluster of servers, providing strong fault tolerance, performance, and scalability with no need for application changes. Oracle RAC supports the use of multiple individual systems as one clustered, virtual database server. It provides transparent synchronization of read and write access to databases shared by all nodes in the cluster, dynamic distribution of database workload, and transparent protection against system failures.
Oracle's key innovation in RAC is a technology called Cache Fusion. Cache Fusion enables the nodes in a cluster to synchronize their memory caches efficiently over a high-speed cluster interconnect, so that disk I/O is minimized. The key, though, is that Cache Fusion enables shared access to all the data on disk by all the nodes in the cluster; data does not need to be partitioned among the nodes. This provides high levels of availability and scalability. Oracle RAC dramatically reduces operational costs and provides new levels of flexibility, so that systems become more adaptive, proactive, and agile. Dynamic provisioning of nodes, storage, CPUs, and memory allows service levels to be maintained easily and efficiently while lowering costs further through improved utilization. Customers today run clusters that range from a few dual-CPU commodity servers, to enterprise grids with dozens of small servers, to clusters where each server is a large SMP system with 32 or 64 CPUs.
A further example of the benefit from integrated re-hosting architecture using Oracle Database, RAC, and Tuxedo is the Fast Application Notification (FAN) feature. FAN provides integration between the RAC database and Tuxedo. It allows Tuxedo to be aware of the current configuration of the cluster at any given time so that application connections are made only to instances that are currently able to respond to the application requests. The Oracle RAC HA framework posts a FAN event immediately when a state change occurs within the cluster. For example, for down events, Tuxedo can initiate transaction recovery.
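The effect of FAN on connection routing can be sketched as follows. This is an illustrative toy with a hypothetical event shape (instance names and the dictionary format are invented; the real FAN payload and Tuxedo's handling are internal to the Oracle integration): up/down events keep a live-instance set current, and a down event also flags the instance for transaction recovery.

```python
# Illustrative sketch of reacting to FAN-style cluster state events.
def route_connections(instances, events):
    """Keep a live-instance set in sync with up/down events."""
    live = set(instances)
    recoveries = []
    for event in events:
        if event["status"] == "down":
            # Stop routing new connections to the failed instance and
            # flag it so transaction recovery can be initiated.
            live.discard(event["instance"])
            recoveries.append(event["instance"])
        elif event["status"] == "up":
            # A new or restarted instance becomes eligible immediately.
            live.add(event["instance"])
    return live, recoveries

live, recoveries = route_connections(
    ["rac1", "rac2"],
    [{"instance": "rac2", "status": "down"},
     {"instance": "rac3", "status": "up"}],
)
print(sorted(live), recoveries)  # ['rac1', 'rac3'] ['rac2']
```

The benefit over polling is latency: the HA framework pushes the state change the moment it occurs, so connections are never handed to a dead instance.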
In addition to delivering innovative and proven functionality in its own products, Oracle works closely with leading open-systems platform vendors to take full advantage of their highly scalable systems, with massive processor, I/O, and memory scalability, a complete resource-partitioning continuum, granular workload management, and sophisticated operating system and system management capabilities. Combining these systems with the proven RAC capabilities of Oracle Database and the built-in grid capabilities of Oracle Tuxedo and WebLogic Server enables robust clustering with high availability, fast failover, dynamic load balancing, and virtually unlimited scalability at much lower TCO than the mainframe. Oracle customers running mission-critical applications in the open-systems environment experience QoS as high as, and in many scenarios better than, the same applications provided on the mainframe. Customer benchmarks comparing re-hosted and original applications have repeatedly demonstrated equal or greater transaction throughput on Tuxedo running on leading UNIX systems as compared to the original application running on CICS or similar environments. In recent customer benchmarks of eXtreme Transaction Processing (XTP) applications comparing Tuxedo performance against the mainframe, Tuxedo has pushed the envelope beyond 100,000 transactions per second with application transactions that include computation and database I/O.
Phased Migration and Mainframe Integration
Some mainframe migrations are partial, often done to free up needed mainframe capacity for other applications and avoid an expensive upgrade. And many full migrations are done in multiple phases. In both cases, integration with remaining mainframe applications and mainframe-resident data is a critical consideration. Tuxedo Mainframe Adapters (TMA) provide this capability in mainframe re-hosting projects as well as in native Tuxedo applications, when Tuxedo is used to run distributed services and coordinate access to mainframe applications and data for multiple front-end applications. TMA is available for TCP/IP, SNA, and OSI/TP networks (the latter used with Unisys mainframes) to deliver high-performance, bidirectional interoperability between applications running on Oracle Tuxedo and mainframe TP platforms such as IBM CICS or IMS TM.
CICS uses a set of InterSystem Communications (ISC) protocols for distributed transaction execution across multiple CICS regions. Tuxedo supports CICS ISC, and supplies equivalent capabilities:
- Dynamic transaction routing that is data-driven, or based on load management policies
- Asynchronous processing to allow transaction execution to be started asynchronously from an invoking transaction, leveraging in-memory and persistent queuing functions
- CICS Distributed Program Link (DPL)/Distributed Transaction Processing (DTP) functions provided by TMA, which support transparent, bidirectional integration and global transaction coordination between mainframe CICS and IMS TM applications and re-hosted applications on Tuxedo. This allows mainframe transactions to view Tuxedo as a remote CICS region virtually connected via APPC/LU6.2
- Event-driven services infrastructure supporting a Publish/Subscribe model
- DOMAINS functionality providing full bi-directional connectivity and programming model across multiple Tuxedo application domains (similar to CICS regions)
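The data-driven routing capability in the first bullet can be illustrated with a small conceptual sketch. Everything below is invented for illustration (the field names, group names, and load table are not Tuxedo APIs; in a real deployment, such rules live in Tuxedo's configuration as ROUTING criteria and load-management policies):

```python
# Conceptual sketch of dynamic transaction routing: route by a data field
# first, and fall back to the least-loaded server group. All names here
# are hypothetical, not Tuxedo APIs.

# Branch-number ranges mapped to server groups, similar in spirit to
# Tuxedo ROUTING criteria.
ROUTING_RANGES = [
    (1, 499, "GROUP_EAST"),
    (500, 999, "GROUP_WEST"),
]

# Simulated current load per group, for the load-based fallback.
GROUP_LOAD = {"GROUP_EAST": 12, "GROUP_WEST": 3, "GROUP_DEFAULT": 7}

def route(request: dict) -> str:
    """Pick a server group for a request: by data first, by load second."""
    branch = request.get("BRANCH_ID")
    if branch is not None:
        for low, high, group in ROUTING_RANGES:
            if low <= branch <= high:
                return group
    # No data-driven match: pick the least loaded group.
    return min(GROUP_LOAD, key=GROUP_LOAD.get)

print(route({"BRANCH_ID": 42}))    # data-driven match: GROUP_EAST
print(route({"BRANCH_ID": 700}))   # data-driven match: GROUP_WEST
print(route({}))                   # load-based fallback: GROUP_WEST
```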
Tuxedo Mainframe Adapters provide bidirectional connections with full buffer mapping, and propagate transaction context, including the user ID for security. Support for CICS DPL (CICS EXEC LINK) and DTP (CICS EXEC verbs for LU6.2/APPC commands) facilities makes Tuxedo domains appear as just another CICS region. This integration can run over the TCP/IP stack, allowing TCP/IP network connections to the mainframe, while SNA LU6.2 connections are made directly from Tuxedo's Communications Resource Manager (CRM) to CICS or IMS TM. Additionally, support for SNA connections allows you to use TMA without installing any new components on the mainframe.
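To make the wiring concrete, the following is a hypothetical sketch of the kind of DMCONFIG entries involved: a remote CICS program imported as an ordinary Tuxedo service, and a re-hosted Tuxedo service exported so CICS can link to it. All group, domain, and service names are invented, and the exact keywords vary by TMA transport and release, so consult the TMA configuration reference for the real syntax.

```
# Hypothetical DMCONFIG sketch; all names and values are invented.
*DM_LOCAL_DOMAINS
TUXDOM    GWGRP=GWGRP1  TYPE=SNAX  DOMAINID="TUXDOM"

*DM_REMOTE_DOMAINS
CICSREG   TYPE=SNAX  DOMAINID="CICSREG"

*DM_REMOTE_SERVICES
# CICS program imported as an ordinary Tuxedo service name
ACCT_INQ  RDOM=CICSREG

*DM_LOCAL_SERVICES
# Re-hosted Tuxedo service exported so CICS can DPL to it
PAYMENT
```

With entries like these in place, a Tuxedo client simply calls ACCT_INQ with tpcall, unaware that the service is actually a CICS program on the mainframe.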
In certain cases of partial migrations, customers want to rely on a centralized security model, using a mainframe security solution even for components re-hosted to Tuxedo. This can be supported with mainframe security systems such as RACF by using Tuxedo's LDAP-based authentication via a z/OS LDAP server configured for native authentication against RACF or another security solution.
SOA Enabling Re-Hosted Applications
To leverage and extend the value inherent in mainframe applications, re-hosting is often followed by service enablement for integration into an SOA framework. Integrating re-hosted applications into the SOA framework provides key benefits:
- Improves productivity, agility, and speed for both business and IT
- Allows IT to deliver services faster, and aligns closer with business
- Allows the business to respond quicker and deliver optimal user experience
- Masks the underlying technical complexity of the IT environment
When the business value of re-hosted applications motivates integration into a corporate SOA, the integration approach must maintain the applications' QoS attributes. Key considerations for integrating re-hosted mainframe applications into an SOA include:
- Defining expected response time, throughput, and scalability
- Understanding requirements for transactional integrity and reliability
- Ensuring end-to-end messaging security, including security policies and AAA
- Providing support for heterogeneous client connectivity
- Achieving appropriate services granularity
- Leveraging service orchestration and BPM integration
- Enabling SLA management and SOA governance
Oracle offers strong SOA integration capabilities through Oracle SALT, Oracle Service Bus, and Oracle Enterprise Repository:

Oracle SALT (Web services integration):
- Inbound and outbound Web services
- Extensible XML data mapping
- WSDL creation and publishing
- Benefits: lower integration cost; avoids the need to re-write in Java or .Net

Oracle Service Bus (heterogeneous service messaging):
- Adaptive, heterogeneous messaging for Web services, EJB/RMI, JMS, Oracle Tuxedo, IBM MQ, SAP, SWIFT, FTP, etc.
- Extensible message brokering
- Dynamic routing with multiple transports and transformations
- Transactional support (XA)
- Embedded management with monitoring and reporting
- Runtime policy enforcement
- Benefits: faster deployment and simpler management of shared services

Oracle Enterprise Repository (SOA governance):
- Service life cycle management
- Metrics and analytics
- Visibility and traceability of services
- Benefits: greater re-use of services; life cycle governance
The capabilities provided to re-hosted applications through Oracle SALT, Oracle Service Bus, and Oracle Enterprise Repository can be leveraged in phases, or as part of a single integrated initiative. Extending the re-hosted applications through SALT for Web services integration, or through Oracle Service Bus for heterogeneous service messaging, is a simple initial step on the road to SOA enablement. Customers use SALT for its complete open-standards Web services capabilities, which integrate easily with any Web services environment. Oracle Service Bus is the better fit when you need heterogeneous messaging beyond SOAP/HTTP, global transaction coordination with other XA-enabled components, or the strong transformation, orchestration, and management capabilities it provides.
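As a concrete illustration of the SALT path, a Web-service client can reach a SALT-exposed Tuxedo service with nothing more than an HTTP POST of a SOAP envelope. The endpoint URL, namespace, and service name below are hypothetical; in practice, SALT generates the real WSDL from the deployed Tuxedo service definition, and a client would be built from that WSDL:

```python
# Sketch of calling a SALT-exposed Tuxedo service as a plain SOAP/HTTP
# Web service. The endpoint, namespace, and service name are hypothetical.
import urllib.request

def build_envelope(service: str, payload_xml: str) -> str:
    """Wrap a request payload in a minimal SOAP 1.1 envelope."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soapenv:Body>"
        f'<{service} xmlns="urn:example:tuxedo">{payload_xml}</{service}>'
        "</soapenv:Body></soapenv:Envelope>"
    )

def call_service(endpoint: str, service: str, payload_xml: str) -> bytes:
    """POST the envelope to the (hypothetical) SALT HTTP endpoint."""
    req = urllib.request.Request(
        endpoint,
        data=build_envelope(service, payload_xml).encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": service},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Build (but do not send) a request for a hypothetical ACCT_INQ service.
envelope = build_envelope("ACCT_INQ", "<account>12345</account>")
print(envelope)
```

The point of the sketch is that nothing Tuxedo-specific leaks into the client: once SALT fronts the service, any SOAP-capable environment can call it.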
Following the initial integration steps, or in parallel with them, customers can leverage Oracle Enterprise Repository to provide a single meta-data repository populated with services information for governing the discovery, deployment, and full life cycle of these re-hosted application services, enabling greater leverage of these key assets throughout the enterprise.
A further step in SOA enablement and modernization of legacy applications is to begin leveraging re-hosted services in BPM-driven dynamic business processes. Re-using re-hosted legacy application services via a BPM engine such as Oracle BPEL Process Manager unlocks the siloed logic and puts it to use as a strategic enterprise asset. With services exposed via Oracle SALT or Oracle Service Bus, and services metadata published in Oracle Enterprise Repository, BPM design-time tools can easily discover the available services. Connecting to the service via a Web service interface (provided by SALT) or an ESB proxy service (provided by Oracle Service Bus) provides run-time binding and access, as depicted in the following diagram, which shows how re-hosted legacy services can be leveraged by a BPM framework such as Oracle BPEL Process Manager.
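Once a re-hosted service is exposed this way, the BPM process invokes it like any other partner Web service. A hypothetical BPEL fragment might look like the following (the partner link, port type, and variable names are invented for illustration):

```xml
<!-- Hypothetical BPEL fragment: all names are illustrative only -->
<invoke name="InvokeAcctInq"
        partnerLink="AcctInqService"
        portType="tns:AcctInqPortType"
        operation="ACCT_INQ"
        inputVariable="acctInqRequest"
        outputVariable="acctInqResponse"/>
```

The process itself never sees COBOL, Tuxedo buffers, or the mainframe heritage of the service; it binds to the WSDL published by SALT or the Oracle Service Bus proxy.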
Further Re-Architecture of Re-Hosted Applications
Re-hosting application logic intact helps to preserve the investment in the code, and simplifies ongoing maintenance by current resources. But what if the original code doesn't meet business needs, or requires significant re-work because of poor maintainability? It's worthwhile to address these questions at the beginning of the project, and to determine which components and modules have maintainability issues (leading application portfolio management (APM) tools can scan the code and derive various maintainability-related metrics) and/or face significant re-work driven by changing business requirements or external drivers such as compliance regulations.
In an SOA environment of Tuxedo application services, isolating individual components and replacing them with calls to re-architected Java services is straightforward and transparent. This transparent integration between Tuxedo's COBOL, C, and C++ services and J2EE/Java services is provided through the WebLogic Tuxedo Connector (WTC), a Tuxedo Domain-based gateway that provides bidirectional transaction and security propagation. Additionally, Oracle SALT's Web services interfaces provide an equally transparent interface to and from re-architected components, enabling transparent linkage to re-architected components running outside of Tuxedo. Transparent integration between re-hosted and re-architected components further extends the benefits of leveraging existing application assets.
Making the move off the mainframe via mainframe re-hosting has enabled numerous Oracle customers to lower costs, and helped IT deliver greater business impact due to flexibility of the open systems environment and greater agility of the application services re-hosted to an extensible, SOA-ready platform. Time and again, their success has been described in the following terms:
Re-hosting: Most-Often-Heard Customer Feedback
- Distributed computing TCO is a fraction of what customers are used to
- IT organizations have significantly lowered the cost of operations and annual IBM maintenance
- Many customers have already migrated to open systems using Tuxedo
- Risk is low: the application remains in its native language
- Tuxedo is a CICS-compatible, mainframe-class application platform
- Oracle and its SI partners have developed a proven migration strategy used in over 120 projects
- Oracle's many references in this area were compelling, and relevant to the customer's business
- Re-hosted applications have become highly extensible to SOA
- IT organizations are now faster, more flexible, and more involved in the business
- Enterprises no longer have to remain locked in to IBM
Oracle's Re-hosting-based Modernization solution provides a proven, low-risk, and highly efficient way for enterprises to migrate critical online and batch applications from the mainframe to a lower-cost open-systems environment powered by Oracle Database and Oracle Tuxedo. The key to a successful re-hosting project is an application platform that combines a proven ability to run mainframe applications without invasive changes with the required performance, availability, and reliability, while making it simpler to SOA-enable and/or re-architect selected components following re-hosting. This approach can accelerate cost savings, shorten the time to meaningful ROI, and substantially reduce legacy modernization risks. It balances the benefits of a powerful open-systems platform and software stack with a highly automated, low-risk, noninvasive migration of invaluable application logic and data. For many customers, this represented an optimal modernization approach: one that generated positive returns within two years and put them on a solid financial and technological footing for further modernization investments.
About the Authors:
As Product Manager of Modernization Solutions at Oracle, Jason Williamson is part of the team responsible for developing and implementing Oracle's Modernization strategy, cultivating a partner ecosystem for implementing solutions that modernize to Oracle products. Mr. Williamson works with product management within Oracle and its partners to drive integration, innovation and adoption for modernization to open systems.
Mr. Williamson has over 15 years' experience in the software industry and extensive knowledge of legacy modernization techniques and commercial software development. Prior to joining Oracle, Mr. Williamson was Global Product Manager for BluePhoenix Solutions, where he was responsible for providing technical leadership and new product development within the legacy modernization space. Mr. Williamson also worked for Relativity Technologies, assisting the modernization efforts of companies around the world. He has also successfully founded and launched a commercial software company, leveraging emerging technologies as well as creating and managing strategic partnerships key to the company's success. In addition to his work within the technology sector, Mr. Williamson served in the United States Marine Corps.
Mr. Williamson has a BSc in MIS from Virginia Commonwealth University. He is the father of four children and has been happily married for 13 years.
Tom Laszewski has over 20 years' experience in databases, middleware, software development, and building strong technical partnerships. He is currently the Technical Director of the Oracle Modernization Solutions team. He established the initial business and technical relationships with Oracle's modernization SIs and tools partners (the Oracle Modernization Ecosystem). His main responsibility is the successful completion of all modernization projects initiated through the partner ecosystem. Tom works on a daily basis with EDS and HP alliances, technical architects, and account managers to ensure the success of joint modernization projects. He is also responsible for Oracle Modernization customer assessments and workshops, modernization reference architectures, and modernization best practices.
Before Oracle, Tom held technical and project management positions at Sybase and EDS. Tom holds a Master of Science in Computer Information Systems from Boston University.
Mark Rakhmilevich is a Director of Product Management for Mainframe Re-hosting and Modernization at Oracle. In this role he focuses on bringing together Oracle and partner solutions that help customers cut costs and modernize legacy applications by migrating mainframe applications to Oracle Database and Oracle's Tuxedo and WebLogic middleware platforms, and extending them to SOA and XTP with ESB, BPM, and Business Rules solutions. Mark works with Oracle engineering to address the needs of extreme transaction processing for mainframe-class applications, and to provide mainframe extension and integration capabilities as part of Oracle Fusion Middleware. He supports marketing and sales, delivers customer seminars on Mainframe Modernization, and works closely with customers in the US, Europe, and Asia. Mark has worked with platform vendors, global and regional SIs, banking and insurance ISVs, and key technology vendors to provide a complete solution that helps mainframe customers cut loose from the mainframe and/or extend their mainframe applications to SOA.
Prior to Oracle, Mark held senior engineering, management, product management, and marketing roles at IBM, Compuware, Tandem, Valicert, Tumbleweed, Chordiant, and BEA. He has extensive experience in system design, software and systems architecture, program and product management, marketing, and business development. Mark worked extensively with banks, insurers, payment networks, healthcare, and government organizations, focusing on enterprise middleware and SOA platforms, mainframe application modernization, Web services security, financial messaging, PKI, B2B integration, and BPM-driven CRM solutions for customer servicing and selling across multiple channels. His early work at IBM on mainframe operating systems architecture for VM and MVS led to a patent award for "Logical Resource Partitioning of a Data Processing System", the basis of the IBM LPAR technology for partitioning mainframes and UNIX servers into multiple virtual machines. He also led development and product management for several versions of IBM mainframe UNIX OSes and their clustering capabilities.
Mark has an MS in Computer Science from the State University of New York at Albany, and a BS in Computer Science from Lehigh University, where he was elected to Phi Beta Kappa.