2014-04-29

The result cache is a very cool functionality introduced in Oracle Service Bus to allow ESB developers to automatically cache responses from an external service in OSB's built-in in-memory data grid caching system, which is Oracle Coherence. No matter which type of external service you are dealing with, whether a web service, a REST API, a directory in the file system or a CICS transaction, if the result cache functionality is activated for that external service, the response payload of a specific request message will be put in the caching system for future reuse if the same request message is received again. The result cache functionality also allows you to define expiration criteria, so the cached response payloads can eventually expire.



ESB developers will activate this functionality in OSB either to protect critical back-end external services, to offload them, or to shorten their response time. In the protection scenario, those back-end services may have some cost associated with every message you send to them. This cost can take various forms: per-request charges (a paid external service that lets you query a customer's credit history), IT budget (a CICS transaction service in which each call consumes MIPS) or even performance. Performance costs are where offloading starts to make sense. When services are originally designed, we measure an approximate throughput and average latency, and we provision enough hardware resources to sustain those numbers. When an ESB is placed in front of those services, you are enabling more channels to interact with them, and the new volume of traffic may be too high for the existing hardware resources. Finally, you can enable this functionality to shorten the response time of some services: if a service is sensitive to response time latency, the result cache is a must-have.

A common practice used by customers around the world is to have replicas of their system architecture in different data centers, allowing them to survive catastrophes. But just having a replica of the system architecture in another data center is not enough. There is a need to provide business continuity, which means that every single detail of the system architecture should be constantly synchronized between the data centers, so that when a backup data center takes over in a catastrophe scenario, the downtime is minimal. There are also scenarios where even small periods of downtime are not acceptable: all the data centers should be in standby/active mode, ready to take over the entire processing at any moment. The challenge here is to keep two types of things synchronized: system architecture artifacts and system transactions. System architecture artifacts are any piece of data that the run-time system architecture needs to work properly. Common examples of artifacts are XML configuration files, applications, log files, data files and storage. System transactions are units of work of a business transaction. A business transaction represents a single business process or multiple business processes of the organization, and most of the time a business transaction is associated with a monetary need. E-commerce sites, for instance, are good examples of business transactions associated with a monetary need. If the site loses a single transaction, that loss represents less incoming money. And that is a situation that no CFO/CEO likes to tolerate.

Back to the result cache functionality: imagine that you have OSB deployed in two or more data centers operating in active-active mode. A corporate load balancer distributes load across the data centers through their exposed services. When a request arrives in one data center, OSB takes that request and starts processing it, causing one or more entries to be stored in the result cache for future reuse. If the same request arrives in another data center, the desire is that OSB picks the already processed result from the result cache instead of processing it again. This makes sense because, from the customer/client point of view, it is the same service and the same invocation request. But what will really happen is that the request will be processed again, since the result cache by default does not replicate entries across data centers, only across the cluster in the same local network. So the challenge here is to find a way to replicate entries from one local network (a.k.a. "LAN") to a remote network (a.k.a. "WAN"), even if this remote network is geographically distant.

In this article, I will show step by step how to enable result cache data replication across different data centers connected through a WAN. Thanks to OSB's great product architecture, this configuration is very straightforward and you will not have to change anything in your SOA services, nor even in the OSB deployment. Everything is done out-of-the-box by Oracle Coherence. This article will help you even if WAN replication is not your primary objective: if you have different OSB domains (in the same or in different networks) that expose exactly the same services, the same technique should apply. All the examples in this article were based on the Oracle Service Bus 11gR1 default installation, which comprises WebLogic 10.3.6, Coherence 3.7.1.1 and Service Bus 11.1.1.7.

Patching Oracle Coherence from Middleware's Home

Before we start using the Push Replication Pattern feature available in Coherence Incubator (it will be explained in the next topic), we need to patch the Coherence installation that comes with WebLogic. When you install the WebLogic prerequisite for OSB, which is the WebLogic 11gR1 + Coherence package installer, Coherence 3.7.1.1 is installed in the middleware home location. We need to patch this Coherence installation so we can take advantage of the latest features of the Push Replication Pattern.

Update Coherence to the 3.7.1.11 version. You can get access to this version on the Oracle Support website. After logging in to the Oracle Support Self-Service portal, go to the "Patches and Updates" tab and search for the following patch number: 17897749. Download this patch and update the Coherence installation according to the instructions available inside the patch file.

Installing the Oracle Coherence Push Replication Pattern

The Push Replication Pattern is an extension to the Oracle Coherence product that allows remote clusters to exchange data across WAN networks. It is part of the Coherence Incubator project, a very cool initiative to enhance the Coherence product through community-based feedback. It hosts a collection of projects with implementations of real-world needs, in the form of design patterns. Even though it is open in terms of source code access, it is the responsibility of Oracle engineers to provide new features, bug fixes and documentation.

You need to download a version of Coherence Incubator compatible with the Coherence 3.7.1.11 release. Use the following link to get instructions on how to download the source code. After downloading the source code, you need to compile and build the run-time packages. To accomplish that, you will need the Apache Maven project management tool. With Apache Maven properly installed, follow the instructions of this link to compile and build the Coherence Incubator run-time packages.

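Just as a reference, the build usually boils down to something like the sketch below. The paths and the groupId/artifactId/version used for the patched coherence.jar are assumptions that you should adapt to the linked instructions and to your own installation.

# Sketch of a typical Coherence Incubator build (adapt paths and coordinates).
# Assumption: the sources were downloaded following the linked instructions and
# Apache Maven 3 plus a JDK are available on the PATH.
cd coherence-incubator

# Make the patched coherence.jar visible to the build by installing it into the
# local Maven repository (the groupId/artifactId/version below are illustrative).
mvn install:install-file -Dfile=$MW_HOME/coherence_3.7/lib/coherence.jar \
    -DgroupId=com.oracle.coherence -DartifactId=coherence \
    -Dversion=3.7.1.11 -Dpackaging=jar

# Build the Incubator run-time packages, skipping the test suite to save time.
mvn clean install -DskipTests
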
Setting Up a Coherence Cluster with WAN Replication Support

Let's set up a Coherence cluster that allows data replication across a WAN network. The first thing to do is to define the cache configuration files for both sites. The idea is that each cache configuration file contains definitions for publishing and receiving endpoints. That means that each site exposes one or more endpoints to receive events from the other site, and also defines a remote invocation service to connect to the other site to publish events. It is a bi-directional communication between the sites in which the Push Replication Pattern takes care of when to publish/receive events using those endpoints. The listing code below shows the cache configuration for site-01:

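Since hosts, ports and thread counts depend on your topology, treat the following as a sketch rather than a copy-and-paste configuration. In particular, the namespace handler class and the exact syntax inside event:distributor come from the Event Distribution Pattern and should be checked against the documentation of the Incubator release you built.

<?xml version="1.0"?>
<!-- coherence-cache-config-site-01.xml (sketch: hosts, ports and thread counts are placeholders) -->
<cache-config xmlns:event="class://com.oracle.coherence.patterns.eventdistribution.configuration.EventDistributionNamespaceContentHandler">

   <caching-scheme-mapping>
      <cache-mapping>
         <!-- The cache name OSB uses internally for the result cache -->
         <cache-name>/osb/services/ResultCache</cache-name>
         <scheme-name>result-cache-distributed-scheme</scheme-name>
         <event:distributor>
            <!-- Declares that create/update/remove/expire events of this cache are
                 published to site-02 through the remote invocation service named
                 site-02-sync-proxy-service, and that conflicting entries are settled
                 by the BruteForceConflictResolver via event:conflict-resolver-scheme.
                 See the Event Distribution Pattern documentation for the exact
                 distribution channel syntax. -->
         </event:distributor>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <!-- The distributed cache that actually stores the result cache entries;
           the real configuration plugs the Event Distribution publishing cache
           store into this backing map so changes are captured and forwarded. -->
      <distributed-scheme>
         <scheme-name>result-cache-distributed-scheme</scheme-name>
         <backing-map-scheme>
            <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
      </distributed-scheme>

      <!-- Endpoint that receives local traffic, e.g. from OSB on this site -->
      <proxy-scheme>
         <scheme-name>site-01-trans-proxy-service</scheme-name>
         <service-name>site-01-trans-proxy-service</service-name>
         <thread-count>10</thread-count>
         <acceptor-config>
            <tcp-acceptor>
               <local-address>
                  <address>site01-host</address>
                  <port>20001</port>
               </local-address>
            </tcp-acceptor>
         </acceptor-config>
         <autostart>true</autostart>
      </proxy-scheme>

      <!-- Endpoint that receives synchronization events published by site-02 -->
      <proxy-scheme>
         <scheme-name>site-01-sync-proxy-service</scheme-name>
         <service-name>site-01-sync-proxy-service</service-name>
         <thread-count>10</thread-count>
         <acceptor-config>
            <tcp-acceptor>
               <local-address>
                  <address>site01-host</address>
                  <port>20011</port>
               </local-address>
            </tcp-acceptor>
         </acceptor-config>
         <autostart>true</autostart>
      </proxy-scheme>

      <!-- Remote invocation service used to publish events to site-02 over the WAN -->
      <remote-invocation-scheme>
         <scheme-name>site-02-sync-proxy-service</scheme-name>
         <service-name>site-02-sync-proxy-service</service-name>
         <initiator-config>
            <tcp-initiator>
               <remote-addresses>
                  <socket-address>
                     <address>site02-host</address>
                     <port>20012</port>
                  </socket-address>
               </remote-addresses>
            </tcp-initiator>
         </initiator-config>
      </remote-invocation-scheme>
   </caching-schemes>
</cache-config>
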
Save this cache configuration file as coherence-cache-config-site-01.xml. Before we continue, let's spend some time understanding the code. If you look at the top of the configuration file you will see the mapping for the cache /osb/services/ResultCache. This cache name matches the one that comes bundled with OSB. Also in the cache mapping, you will see a section that starts with the tag event:distributor. This XML tag is part of the Coherence Incubator implementation, as you have probably noticed in the namespace declaration section. The event:distributor section basically declares which remote sites should receive events when entries of the local cache are created, modified, removed or expired. In this declaration, it is defined that site-02 will be updated through a remote invocation service declared as site-02-sync-proxy-service later in the configuration file.

Pay special attention to the event:conflict-resolver-scheme section. It should be used when you expect entries from one site to conflict with entries from another site, most of the time because of synchronization failures due to unstable network links. Using this section, you can plug in custom implementations that decide which entry should be kept. The BruteForceConflictResolver class used in this example is an out-of-the-box implementation that comes with the Event Distribution Pattern, another pattern that is part of the Coherence Incubator project.

Finally, you also have two proxy-scheme declarations in the configuration file. The purpose of site-01-trans-proxy-service is to receive local events from the same site. As for site-01-sync-proxy-service, it is used to receive remote events from the foreign sites. Using two different proxies, one for transactions and another for synchronization, gives you the ability to fine-tune each proxy's throughput independently, for instance configuring a different pool of threads for each one. In theory, you should balance the same number of threads between both proxies to ensure a well-synchronized cluster. The Push Replication Pattern executes its synchronization job between sites completely asynchronously, meaning that the thread that updates the local cache does not have to wait for the thread that replicates the entry to a remote site. That is the reason why it is so important to have different proxies.

Now let's create the cache configuration file for site-02. The listing code below is almost identical to the previous listing, except that this time we are defining how site-02 will synchronize with site-01:

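To illustrate, only the site-specific pieces change; the sketch below shows the parts that differ (again with placeholder hosts and ports), while the rest of the file mirrors the site-01 configuration.

<!-- coherence-cache-config-site-02.xml (sketch of the parts that differ from site-01):
     the event:distributor now publishes through site-01-sync-proxy-service, and the
     local proxies become site-02-trans-proxy-service (port 20002, used by the local OSB)
     and site-02-sync-proxy-service (port 20012, receiving events from site-01). -->

<!-- Remote invocation service used to publish events to site-01 over the WAN -->
<remote-invocation-scheme>
   <scheme-name>site-01-sync-proxy-service</scheme-name>
   <service-name>site-01-sync-proxy-service</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <socket-address>
               <address>site01-host</address>
               <port>20011</port>
            </socket-address>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-invocation-scheme>
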
Save this cache configuration file as coherence-cache-config-site-02.xml. Now that we have cache configuration files from both sites in place, we can set up the Coherence cluster that will hold the WAN replication enabled caches. For the site-01, create one shell script file named coherence-cache-server-site-01.sh and write the following code:

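The script below is a minimal sketch of what that file can look like; the installation paths, heap sizes and the Incubator jar names are assumptions that you should adapt to your environment and to the artifacts produced by your build.

#!/bin/sh
# coherence-cache-server-site-01.sh: starts one storage-enabled node of the
# site-01 Coherence cluster (sketch; adjust paths, heap sizes and host names).

MW_HOME=/oracle/middleware
COHERENCE_HOME=$MW_HOME/coherence_3.7
INCUBATOR_HOME=/opt/coherence-incubator
CONFIG_DIR=/opt/osb-wan-replication

# The patched coherence.jar plus the Incubator jars that provide the
# Push Replication / Event Distribution patterns (jar names depend on the build).
CLASSPATH=$COHERENCE_HOME/lib/coherence.jar
CLASSPATH=$CLASSPATH:$INCUBATOR_HOME/coherence-common.jar
CLASSPATH=$CLASSPATH:$INCUBATOR_HOME/coherence-eventdistributionpattern.jar
CLASSPATH=$CLASSPATH:$INCUBATOR_HOME/coherence-pushreplicationpattern.jar

JAVA_OPTS="-Xms512m -Xmx512m"
JAVA_OPTS="$JAVA_OPTS -Dtangosol.coherence.cacheconfig=$CONFIG_DIR/coherence-cache-config-site-01.xml"
JAVA_OPTS="$JAVA_OPTS -Dtangosol.coherence.cluster=site-01-cluster"
JAVA_OPTS="$JAVA_OPTS -Dtangosol.coherence.site=site-01"
JAVA_OPTS="$JAVA_OPTS -Dtangosol.coherence.distributed.localstorage=true"

# Start a storage-enabled cache server node.
java $JAVA_OPTS -cp $CLASSPATH com.tangosol.net.DefaultCacheServer
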
The given shell script code is self-explanatory, so I will not go into too much detail. Just keep in mind that this type of cluster was designed to scale out, so if you need more storage capacity in the Coherence layer, just bring up more JVM nodes with the same configuration; each JVM node that comes up will join the cluster automatically. Also, adjust the minimum and maximum heap sizes to suit your needs, and remember to adjust the global variables to match your own paths.

For the site-02, create one shell script file named coherence-cache-server-site-02.sh and write the following code:

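It is the same script; only the site-specific values change, for example:

# coherence-cache-server-site-02.sh: identical to the site-01 script except for
# the cache configuration file and the cluster/site identifiers.
JAVA_OPTS="$JAVA_OPTS -Dtangosol.coherence.cacheconfig=$CONFIG_DIR/coherence-cache-config-site-02.xml"
JAVA_OPTS="$JAVA_OPTS -Dtangosol.coherence.cluster=site-02-cluster"
JAVA_OPTS="$JAVA_OPTS -Dtangosol.coherence.site=site-02"
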
Execute each script on its respective site. Keep them up and running while we configure how each local OSB will connect to those clusters to delegate its caching needs.

Changing Oracle Service Bus Default Caching Configuration

The last part of the configuration is both the simplest and the most important one. We need to teach OSB how to connect to an external cluster (created and configured in the previous topic) instead of using its built-in Coherence cluster. Let's start with site-01. Edit the internal Coherence cache configuration file used by OSB, located in the following folder: <DOMAIN_HOME>/config/osb/coherence/osb-coherence-cache-config.xml. You will need to replace the contents of the original file with the listing below:

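The sketch below illustrates the shape of that replacement; site01-host is a placeholder for the machine running the site-01 Coherence cluster, and 20001 is the port where its site-01-trans-proxy-service is listening.

<?xml version="1.0"?>
<!-- osb-coherence-cache-config.xml for site-01 (sketch; adjust host and port) -->
<cache-config>
   <caching-scheme-mapping>
      <cache-mapping>
         <!-- The cache name OSB invokes for the result cache functionality -->
         <cache-name>/osb/services/ResultCache</cache-name>
         <scheme-name>osb-result-cache-near-scheme</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <near-scheme>
         <scheme-name>osb-result-cache-near-scheme</scheme-name>
         <!-- Front tier: a small local cache kept on the OSB heap -->
         <front-scheme>
            <local-scheme>
               <high-units>1000</high-units>
            </local-scheme>
         </front-scheme>
         <!-- Back tier: Coherence*Extend connection to the site-01 cluster -->
         <back-scheme>
            <remote-cache-scheme>
               <scheme-name>osb-result-cache-remote-scheme</scheme-name>
               <service-name>site-01-trans-proxy-service</service-name>
               <initiator-config>
                  <tcp-initiator>
                     <remote-addresses>
                        <socket-address>
                           <address>site01-host</address>
                           <port>20001</port>
                        </socket-address>
                     </remote-addresses>
                  </tcp-initiator>
               </initiator-config>
            </remote-cache-scheme>
         </back-scheme>
         <invalidation-strategy>all</invalidation-strategy>
      </near-scheme>
   </caching-schemes>
</cache-config>
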
Let's understand what is being done here. Internally, OSB was built to invoke a cache named /osb/services/ResultCache when the result cache functionality is activated for a business service. Since we have changed its caching scheme, whenever the cache is accessed it will now trigger remote invocations over TCP to the distributed cache available on port 20001. With the use of a near-scheme type of cache, OSB gets the best of both worlds: the most recently used data is stored on its own heap for rapid retrieval, and the rest is stored in a remote distributed cache. This configuration provides both high performance and scalability, with the bonus of easy administration, since all the data is stored in a cluster separate from OSB.

Here is the OSB cache configuration file for site-02:

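Concretely, only the Coherence*Extend endpoint in the back tier changes, for example:

<!-- Back tier for site-02: point to the local site-02 cluster instead -->
<service-name>site-02-trans-proxy-service</service-name>
<initiator-config>
   <tcp-initiator>
      <remote-addresses>
         <socket-address>
            <address>site02-host</address>
            <port>20002</port>
         </socket-address>
      </remote-addresses>
   </tcp-initiator>
</initiator-config>
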
As you can see, it is the same code with the same techniques. The only difference is that instead of pointing to the Coherence cluster of site-01 on port 20001, it points to the Coherence cluster of site-02 on port 20002. That's all we need to have OSB delegating its caching needs to a remote cluster. Start OSB on both sites and let's do some tests.

Testing the WAN Replication Behavior in Oracle Service Bus

In order to test the WAN replication behavior, I have developed a simple web service that takes ten seconds to complete each request. The idea is to expose this web service as an OSB business service with result cache activated. Then, you need to create a proxy service whose only job is to route requests to the business service. Both the proxy service and the business service should be deployed at all sites, along with the web service deployment. Here is the snippet code from the web service implementation:

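If you want to reproduce the test without the download links at the end of the article, a minimal JAX-WS sketch of that service could look like the following; the class, method and parameter names are illustrative, the only relevant behavior being the SSN parameter and the artificial ten-second delay.

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

/**
 * Minimal JAX-WS service used to exercise the result cache: every request
 * takes ~10 seconds, so a cached response is easy to spot. Class, method and
 * parameter names are illustrative.
 */
@WebService(serviceName = "CreditHistoryService")
public class CreditHistoryService {

    @WebMethod
    public String queryCreditHistory(@WebParam(name = "ssn") String ssn) {
        try {
            // Simulate an expensive back-end call.
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        // Return a deterministic payload so cached and fresh responses match.
        return "Credit history for SSN " + ssn;
    }
}
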
A simple battery of tests to validate that everything is working would be:

1. Using the proxy service from site-01, make a request with "123456789" as the value of the SSN parameter. That request should take ~10 seconds to complete.

2. Using the proxy service from site-02, make a request with "123456789" as the value of the SSN parameter. That request should take ~1 second or less to complete.

3. Using the proxy service from site-02, make a request with "987654321" as the value of the SSN parameter. That request should take ~10 seconds to complete.

4. Using the proxy service from site-01, make a request with "987654321" as the value of the SSN parameter. That request should take ~1 second or less to complete.

5. Using the proxy service from site-01, make a request with "111111111" as the value of the SSN parameter. Wait for the expiration of that entry in site-01. When it expires, check in site-02 whether the entry also expired.

To make things easier for you, I have made all the project artifacts and OSB projects available. Click on the links below to download them.

OEPE project with the implementation of the JAX-WS Web Service

WAR file with the compiled and ready to run JAX-WS Web Service

Export from site-01 OSB containing Schemas, WSDL, Business and Proxy Service

Export from site-02 OSB containing Schemas, WSDL, Business and Proxy Service
