Installing Coherence 3.5 and Accessing the Data Grid: Part 2

by Aleksandar Seovic, Mark Falco, and Patrick Peralta | March 2010 | Oracle

Read Part One of Installing Coherence 3.5 and Accessing the Data Grid here.

Using the Coherence API

One of the great things about Coherence is that it has a very simple and intuitive API that hides most of the complexity that is happening behind the scenes to distribute your objects. If you know how to use a standard Map interface in Java, you already know how to perform basic tasks with Coherence.

In this section, we will first cover the basics by looking at some of the foundational interfaces and classes in Coherence. We will then proceed to do something more interesting by implementing a simple tool that allows us to load data into Coherence from CSV files, which will become very useful during testing.

The basics: NamedCache and CacheFactory

As I have briefly mentioned earlier, Coherence revolves around the concept of named caches. Each named cache can be configured differently, and it will typically be used to store objects of a particular type. For example, if you need to store employees, trade orders, portfolio positions, or shopping carts in the grid, each of those types will likely map to a separate named cache.

The first thing you need to do in your code when working with Coherence is to obtain a reference to a named cache you want to work with. In order to do this, you need to use the CacheFactory class, which exposes the getCache method as one of its public members. For example, if you wanted to get a reference to the countries cache that we created and used in the console example, you would do the following:

NamedCache countries = CacheFactory.getCache("countries");

Once you have a reference to a named cache, you can use it to put data into that cache or to retrieve data from it. Doing so is as simple as doing gets and puts on a standard Java Map:

countries.put("SRB", "Serbia");
String countryName = (String) countries.get("SRB");

As a matter of fact, NamedCache is an interface that extends Java's Map interface, so you will be immediately familiar not only with get and put methods, but also with other methods from the Map interface, such as clear, remove, putAll, size, and so on.

The nicest thing about the Coherence API is that it works in exactly the same way, regardless of the cache topology you use. For now let's just say that you can configure Coherence to replicate or partition your data across the grid. The difference between the two is that in the former case all of your data exists on each node in the grid, while in the latter only 1/n of the data exists on each individual node, where n is the number of nodes in the grid.
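To make the replicated-versus-partitioned trade-off concrete, here is a small back-of-the-envelope sketch. The numbers are purely illustrative, and it ignores backup copies and per-entry overhead:

```java
public class TopologyFootprint {
    // bytes each node must hold under replication:
    // every node stores a full copy of the data set
    static long replicatedPerNode(long totalBytes, int nodes) {
        return totalBytes;
    }

    // bytes per node under partitioning: each node stores roughly 1/n
    // of the data (backup copies ignored for simplicity)
    static long partitionedPerNode(long totalBytes, int nodes) {
        return totalBytes / nodes;
    }

    public static void main(String[] args) {
        long total = 8L * 1024 * 1024 * 1024;   // 8 GB of cache data
        int nodes = 4;
        System.out.println("replicated : "
                + replicatedPerNode(total, nodes) + " bytes/node");
        System.out.println("partitioned: "
                + partitionedPerNode(total, nodes) + " bytes/node");
    }
}
```

The practical consequence: replicated caches stop scaling as soon as the data set outgrows a single node's heap, while partitioned caches gain capacity with every node added.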

Regardless of how your data is stored physically within the grid, the NamedCache interface provides a standard API that allows you to access it. This makes it very simple to change cache topology during development if you realize that a different topology would be a better fit, without having to modify a single line in your code.

In addition to the Map interface, NamedCache extends a number of lower-level Coherence interfaces: CacheMap, which adds cache-specific operations such as bulk retrieval and puts with an expiration; ObservableMap, which lets you register listeners for cache events; ConcurrentMap, which provides explicit locking; QueryMap, which supports filtering, querying, and indexing of cache entries; and InvocableMap, which allows you to process and aggregate cache entries in place.

The "Hello World" example

In this section we will implement a complete example that achieves programmatically what we did earlier using the Coherence console: we'll put a few countries in the cache, list the cache contents, remove items, and so on.

To make things more interesting, instead of using country names as cache values, we will use proper objects this time. That means that we need a class to represent a country, so let's start there:

public class Country implements Serializable, Comparable {
    private String code;
    private String name;
    private String capital;
    private String currencySymbol;
    private String currencyName;

    public Country() {
    }

    public Country(String code, String name, String capital,
                   String currencySymbol, String currencyName) {
        this.code = code;
        this.name = name;
        this.capital = capital;
        this.currencySymbol = currencySymbol;
        this.currencyName = currencyName;
    }

    public String getCode() {
        return code;
    }

    public void setCode(String code) {
        this.code = code;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getCapital() {
        return capital;
    }

    public void setCapital(String capital) {
        this.capital = capital;
    }

    public String getCurrencySymbol() {
        return currencySymbol;
    }

    public void setCurrencySymbol(String currencySymbol) {
        this.currencySymbol = currencySymbol;
    }

    public String getCurrencyName() {
        return currencyName;
    }

    public void setCurrencyName(String currencyName) {
        this.currencyName = currencyName;
    }

    public String toString() {
        return "Country(" +
               "Code = " + code + ", " +
               "Name = " + name + ", " +
               "Capital = " + capital + ", " +
               "CurrencySymbol = " + currencySymbol + ", " +
               "CurrencyName = " + currencyName + ")";
    }

    public int compareTo(Object o) {
        Country other = (Country) o;
        return name.compareTo(other.name);
    }
}

There are several things to note about the Country class, which also apply to other classes that you want to store in Coherence:

  1. Because objects need to be moved across the network, classes that are stored within the data grid need to be serializable. In this case we have opted for the simplest solution and made the class implement the java.io.Serializable interface. This is not optimal, from both a performance and a memory utilization perspective, and Coherence provides several more suitable approaches to serialization.
  2. We have implemented the toString method that prints out an object's state in a friendly format. While this is not a Coherence requirement, implementing toString properly for both keys and values that you put into the cache will help a lot when debugging, so you should get into a habit of implementing it for your own classes.
  3. Finally, we have also implemented the Comparable interface. This is also not a requirement, but it will come in handy in a moment to allow us to print out a list of countries sorted by name.

Now that we have the class that represents the values we want to cache, it is time to write an example that uses it:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import ch02.Country;

import java.util.Map;
import java.util.Set;

public class CoherenceHelloWorld {
    public static void main(String[] args) {
        NamedCache countries = CacheFactory.getCache("countries");

        // first, we need to put some countries into the cache
        countries.put("USA", new Country("USA", "United States",
                                         "Washington", "USD", "Dollar"));
        countries.put("GBR", new Country("GBR", "United Kingdom",
                                         "London", "GBP", "Pound"));
        countries.put("RUS", new Country("RUS", "Russia", "Moscow",
                                         "RUB", "Ruble"));
        countries.put("CHN", new Country("CHN", "China", "Beijing",
                                         "CNY", "Yuan"));
        countries.put("JPN", new Country("JPN", "Japan", "Tokyo",
                                         "JPY", "Yen"));
        countries.put("DEU", new Country("DEU", "Germany", "Berlin",
                                         "EUR", "Euro"));
        countries.put("FRA", new Country("FRA", "France", "Paris",
                                         "EUR", "Euro"));
        countries.put("ITA", new Country("ITA", "Italy", "Rome",
                                         "EUR", "Euro"));
        countries.put("SRB", new Country("SRB", "Serbia", "Belgrade",
                                         "RSD", "Dinar"));

        assert countries.containsKey("JPN")
                : "Japan is not in the cache";

        // get and print a single country
        System.out.println("get(SRB) = " + countries.get("SRB"));

        // remove Italy from the cache
        int size = countries.size();
        System.out.println("remove(ITA) = " + countries.remove("ITA"));
        assert countries.size() == size - 1
                : "Italy was not removed";

        // list all cache entries
        Set<Map.Entry> entries = countries.entrySet(null, null);
        for (Map.Entry entry : entries) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}

Let's go through this code section by section.

At the very top, you can see import statements for NamedCache and CacheFactory, which are the only Coherence classes we need for this simple example. We have also imported our Country class, as well as Java's standard Map and Set interfaces.

The first thing we need to do within the main method is to obtain a reference to the countries cache using the CacheFactory.getCache method. Once we have the cache reference, we can add some countries to it using the same old Map.put method you are familiar with.

We then proceed to get a single object from the cache using the Map.get method, and to remove one using Map.remove. Notice that the NamedCache implementation fully complies with the Map.remove contract and returns the removed object.

Finally, we list all the countries by iterating over the set returned by the entrySet method. Notice that Coherence cache entries implement the standard Map.Entry interface.

Overall, if it wasn't for a few minor differences, it would be impossible to tell whether the preceding code uses Coherence or any of the standard Map implementations. The first telltale sign is the call to the CacheFactory.getCache at the very beginning, and the second one is the call to entrySet method with two null arguments. We have already discussed the former, but where did the latter come from?

The answer is that Coherence QueryMap interface extends Java Map by adding methods that allow you to filter and sort the entry set. The first argument in our example is an instance of Coherence Filter interface. In this case, we want all the entries, so we simply pass null as a filter.

The second argument, however, is more interesting in this particular example. It represents the java.util.Comparator that should be used to sort the results. If the values stored in the cache implement the Comparable interface, you can pass null instead of the actual Comparator instance as this argument, in which case the results will be sorted using their natural ordering (as defined by Comparable.compareTo implementation).
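This mirrors a convention you may already know from the JDK: java.util.List.sort, for example, also accepts a null comparator and falls back to the elements' natural ordering. A quick stdlib-only illustration (no Coherence required):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NaturalOrderingDemo {
    // passing null as the Comparator means "use natural ordering",
    // the same convention QueryMap.entrySet(filter, comparator) follows
    static List<String> sortNaturally(List<String> names) {
        names.sort(null);
        return names;
    }

    public static void main(String[] args) {
        List<String> names =
                new ArrayList<>(Arrays.asList("Serbia", "China", "Japan"));
        System.out.println(sortNaturally(names)); // [China, Japan, Serbia]
    }
}
```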

That means that when you run the previous example, you should see the following output:

get(SRB) = Country(Code = SRB, Name = Serbia, Capital = Belgrade,
CurrencySymbol = RSD, CurrencyName = Dinar)
remove(ITA) = Country(Code = ITA, Name = Italy, Capital = Rome,
CurrencySymbol = EUR, CurrencyName = Euro)
CHN = Country(Code = CHN, Name = China, Capital = Beijing, CurrencySymbol
= CNY, CurrencyName = Yuan)
FRA = Country(Code = FRA, Name = France, Capital = Paris, CurrencySymbol
= EUR, CurrencyName = Euro)
DEU = Country(Code = DEU, Name = Germany, Capital = Berlin,
CurrencySymbol = EUR, CurrencyName = Euro)
JPN = Country(Code = JPN, Name = Japan, Capital = Tokyo, CurrencySymbol =
JPY, CurrencyName = Yen)
RUS = Country(Code = RUS, Name = Russia, Capital = Moscow, CurrencySymbol
= RUB, CurrencyName = Ruble)
SRB = Country(Code = SRB, Name = Serbia, Capital = Belgrade,
CurrencySymbol = RSD, CurrencyName = Dinar)
GBR = Country(Code = GBR, Name = United Kingdom, Capital = London,
CurrencySymbol = GBP, CurrencyName = Pound)
USA = Country(Code = USA, Name = United States, Capital = Washington,
CurrencySymbol = USD, CurrencyName = Dollar)

As you can see, the countries in the list are sorted by name, as defined by our Country.compareTo implementation. Feel free to experiment by passing a custom Comparator as the second argument to the entrySet method, or by removing both arguments, and see how that affects result ordering.

If you are feeling really adventurous and can't wait to learn about Coherence queries, take a sneak peek by changing the line that returns the entry set to:

Set<Map.Entry> entries = countries.entrySet(
        new LikeFilter("getName", "United%"), null);

As a final note, you might have also noticed that I used Java assertions in the previous example to check that the reality matches my expectations (well, more to demonstrate a few other methods in the API, but that's beside the point). Make sure that you specify the -ea JVM argument when running the example if you want the assertions to be enabled, or use the run-helloworld target in the included Ant build file, which configures everything properly for you.

That concludes the implementation of our first Coherence application. One thing you might notice is that the CoherenceHelloWorld application will run just fine even if you don't have any Coherence nodes started, and you might be wondering how that is possible.

The truth is that there is one Coherence node—the CoherenceHelloWorld application. As soon as the CacheFactory.getCache method gets invoked, Coherence services will start within the application's JVM and it will either join the existing cluster or create a new one, if there are no other nodes on the network. If you don't believe me, look at the log messages printed by the application and you will see that this is indeed the case.

Now that you know the basics, let's move on and build something slightly more exciting, and much more useful.


Coherence API in action: Implementing the cache loader

The need to load the data into the Coherence cache from an external data source is ever-present. Many Coherence applications warm up the caches by pre-loading data from a relational database or other external data sources.

In this section, we will focus on a somewhat simpler scenario and write a utility that allows us to load objects into the cache from a comma-separated (CSV) file. This type of utility is very useful during development and testing, as it allows us to easily load test data into Coherence.

Loader design

If we forget for a moment about the technologies we are using and think about moving data from one data store to another at a higher level of abstraction, the solution is quite simple, as the following pseudo-code demonstrates:

for each item in source
add item to target
end

That's really all there is to it—we need to be able to iterate over the source data store, retrieve items from it, and import them into the target data store.

One thing we need to decide is how the individual items are going to be represented. While in a general case an item can be any object, in order to simplify things a bit for this particular example we will use a Java Map to represent an item. This map will contain property values for an item, keyed by property name.
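For instance, a single country read from the CSV file would be represented as a map like the following sketch. The keys match the bean property names of the Country class defined earlier, which matters later when the target converts the map into an object:

```java
import java.util.HashMap;
import java.util.Map;

public class ItemRepresentation {
    // one "item" from the source: property name -> property value
    static Map<String, Object> countryItem() {
        Map<String, Object> item = new HashMap<>();
        item.put("code", "SRB");
        item.put("name", "Serbia");
        item.put("capital", "Belgrade");
        item.put("currencySymbol", "RSD");
        item.put("currencyName", "Dinar");
        return item;
    }

    public static void main(String[] args) {
        System.out.println(ItemRepresentation.countryItem().get("name"));
    }
}
```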

Based on the given information, we can define the interfaces for source and target:

public interface Source extends Iterable<Map<String, ?>> {
    void beginExport();
    void endExport();
}

The Target interface is just as simple:

public interface Target {
    void beginImport();
    void endImport();
    void importItem(Map<String, ?> item);
}

One thing you will notice in the previous interfaces is that there are matching pairs of begin/end methods. These are lifecycle methods that are used to initialize source and target and to perform any necessary cleanup.

Now that we have Source and Target interfaces defined, we can use them in the implementation of our Loader class:

public class Loader {
    private Source source;
    private Target target;

    public Loader(Source source, Target target) {
        this.source = source;
        this.target = target;
    }

    public void load() {
        source.beginExport();
        target.beginImport();
        for (Map<String, ?> sourceItem : source) {
            target.importItem(sourceItem);
        }
        source.endExport();
        target.endImport();
    }
}

As you can see, the actual Java implementation is almost as simple as the pseudo-code on the previous page, which is a good thing.

However, that does imply that all the complexity and the actual heavy lifting are pushed down into our Source and Target implementations, so let's look at those.

Implementing CsvSource

On the surface, implementing a class that reads a text file line by line, splits each line into fields and creates a property map based on the header row and corresponding field values couldn't be any simpler. However, as with any other problem, there are subtle nuances that complicate the task.

For example, even though a comma is used to separate the fields in each row, it could also appear within the content of individual fields, in which case the field as a whole needs to be enclosed in quotation marks.

This complicates the parsing quite a bit, as we cannot simply use String.split to convert a single row from a file into an array of individual fields. While writing a parser by hand wouldn't be too difficult, writing code that someone else has already written is not one of my favorite pastimes.
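To see why a plain String.split(",") is not enough, consider a minimal hand-rolled splitter that honors quotation marks. This is only a sketch of the idea; it does not handle escaped quotes or fields spanning multiple lines, which is exactly why a proper library is the better choice:

```java
import java.util.ArrayList;
import java.util.List;

public class NaiveCsvSplitter {
    // splits one CSV row on commas, except commas inside double-quoted fields
    static List<String> split(String row) {
        List<String> fields = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuotes = false;
        for (char c : row.toCharArray()) {
            if (c == '"') {
                inQuotes = !inQuotes;           // toggle quoted state, drop the quote
            } else if (c == ',' && !inQuotes) {
                fields.add(current.toString()); // field boundary
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        fields.add(current.toString());         // last field
        return fields;
    }

    public static void main(String[] args) {
        // the quoted field contains a comma, so split(",") alone would break it
        System.out.println(split("AND,Andorra,\"Andorra la Vella, Principality\",EUR,Euro"));
    }
}
```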

Super CSV (http://supercsv.sourceforge.net), written by Kasper B. Graversen, is an open source library licensed under the Apache 2.0 license that does everything we need and much more, and I strongly suggest that you take a look at it before writing any custom code that reads or writes CSV files.

Among other things, Super CSV provides the CsvMapReader class, which does exactly what we need—it returns a map of header names to field values for each line read from the CSV file. That makes the implementation of CsvSource quite simple:

public class CsvSource implements Source {
    private ICsvMapReader reader;
    private String[] header;

    public CsvSource(String name) {
        this(new InputStreamReader(
                CsvSource.class.getClassLoader().getResourceAsStream(name)));
    }

    public CsvSource(Reader reader) {
        this.reader =
                new CsvMapReader(reader, CsvPreference.STANDARD_PREFERENCE);
    }

    public void beginExport() {
        try {
            this.header = reader.getCSVHeader(false);
        }
        catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public void endExport() {
        try {
            reader.close();
        }
        catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public Iterator<Map<String, ?>> iterator() {
        return new CsvIterator();
    }
}

As you can see, CsvSource accepts a java.io.Reader instance as a constructor argument and wraps it with a CsvMapReader. There is also a convenience constructor that creates a CsvSource for any CSV file on the classpath, which is the most likely scenario for testing.

We use the beginExport lifecycle method to read the header row and initialize the header field, which will later be used by the CsvMapReader when reading individual data rows from the file and converting them to a map. In a similar fashion, we use the endExport method to close the reader properly and free the resources associated with it.

Finally, we implement the Iterable interface by returning an instance of the inner CsvIterator class from the iterator method. The CsvIterator inner class implements the necessary iteration logic for our source:

private class CsvIterator implements Iterator<Map<String, ?>> {
    private Map<String, String> item;

    public boolean hasNext() {
        try {
            item = reader.read(header);
        }
        catch (IOException e) {
            throw new RuntimeException(e);
        }
        return item != null;
    }

    public Map<String, ?> next() {
        return item;
    }

    public void remove() {
        throw new UnsupportedOperationException(
                "CsvIterator does not support remove operation");
    }
}

Thanks to the CsvMapReader, the implementation is quite simple. We read the next line from the file whenever the hasNext method is called, and store the result in the item field. The next method simply returns the item read by the previous call to hasNext. Note that this implementation assumes hasNext and next are called in strict alternation, as they are in a for-each loop; two consecutive calls to hasNext would silently skip a row.

That completes the implementation of CsvSource, and allows us to shift our focus back to Coherence.

Implementing CoherenceTarget

The last thing we need to do to complete the example is to create a Target interface implementation that will import items read from the CSV file into Coherence.

One of the things we will need to do is to convert the generic item representation from a Map into an instance of a class that represents the value we want to put into the cache. The naïve approach is to use Java reflection to create an instance of a class and set property values, but just as with the CSV parsing, the devil is in the details.

The property values read from the CSV files are all strings, but the properties of the target object might not be. That means that we need to perform type conversion as appropriate when setting property values on a target object.

Fortunately, we don't need to reinvent the wheel to do this. If you are familiar with the Spring Framework, you already know that property values, specified as strings within the XML configuration file, are automatically converted to appropriate type before they are injected into your objects. What you might not know is that this feature is easily accessible outside of Spring as well, in the form of the BeanWrapper interface and BeanWrapperImpl class.
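If you would rather not pull in Spring, the core idea behind BeanWrapper is easy to sketch with plain reflection: find a setter for each property, convert the string value to the setter's parameter type, and invoke it. The following toy populator handles only String and int properties, and the City bean is a made-up example; BeanWrapperImpl supports far more, including nested properties and custom PropertyEditors:

```java
import java.lang.reflect.Method;
import java.util.Map;

public class NaiveBeanPopulator {
    // sets String/int properties on a new instance of itemClass
    // from a map of string values keyed by property name
    public static Object populate(Class<?> itemClass, Map<String, String> values) {
        try {
            Object target = itemClass.getDeclaredConstructor().newInstance();
            for (Map.Entry<String, String> e : values.entrySet()) {
                String setter = "set"
                        + Character.toUpperCase(e.getKey().charAt(0))
                        + e.getKey().substring(1);
                for (Method m : itemClass.getMethods()) {
                    if (m.getName().equals(setter) && m.getParameterCount() == 1) {
                        Class<?> type = m.getParameterTypes()[0];
                        // the "type conversion" step: strings stay strings,
                        // int properties are parsed from their string form
                        Object converted = (type == int.class || type == Integer.class)
                                ? Integer.valueOf(e.getValue())
                                : e.getValue();
                        m.invoke(target, converted);
                    }
                }
            }
            return target;
        }
        catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // hypothetical bean used only to demonstrate the populator
    public static class City {
        private String name;
        private int population;
        public void setName(String name) { this.name = name; }
        public void setPopulation(int population) { this.population = population; }
        public String getName() { return name; }
        public int getPopulation() { return population; }
    }

    public static void main(String[] args) {
        City c = (City) populate(City.class,
                Map.of("name", "Belgrade", "population", "1000000"));
        System.out.println(c.getName() + " " + c.getPopulation());
    }
}
```

This is essentially what BeanWrapperImpl does for us, with robust conversion logic that we do not have to write or maintain ourselves.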

Another problem we need to solve is key generation—when we put objects into a Coherence cache, we need to specify both the key and the value. The simplest option, and the one we will use for this example, is to extract the key from the target object itself. This will often be all we need, as most entities already have a field representing their identity, which is the ideal candidate for a cache key. In the case of our Country class, we will use the value of the code property as a cache key.

Finally, while we could insert every item into the cache using individual put calls, this is not the most efficient way to perform bulk loading of the data. Each call to the put method is potentially a network call, and as such introduces some latency. A significantly better approach from a performance perspective is to batch multiple items and insert them into the cache all at once by calling the putAll method.

So, with the design considerations out of the way, let's look at the implementation of the CoherenceTarget class:

public class CoherenceTarget implements Target {
    public static final int DEFAULT_BATCH_SIZE = 1000;

    private NamedCache cache;
    private Class itemClass;
    private String idProperty;
    private Map batch;
    private int batchSize = DEFAULT_BATCH_SIZE;

    public CoherenceTarget(String cacheName, Class itemClass,
                           String idProperty) {
        this.cache = CacheFactory.getCache(cacheName);
        this.itemClass = itemClass;
        this.idProperty = idProperty;
    }

    public void setBatchSize(int batchSize) {
        this.batchSize = batchSize;
    }

    public void beginImport() {
        batch = new HashMap();
    }

    public void importItem(Map<String, ?> sourceItem) {
        BeanWrapper targetItem = new BeanWrapperImpl(itemClass);
        for (Map.Entry<String, ?> property : sourceItem.entrySet()) {
            targetItem.setPropertyValue(property.getKey(),
                                        property.getValue());
        }

        Object id = targetItem.getPropertyValue(idProperty);
        batch.put(id, targetItem.getWrappedInstance());
        if (batch.size() % batchSize == 0) {
            cache.putAll(batch);
            batch.clear();
        }
    }

    public void endImport() {
        if (!batch.isEmpty()) {
            cache.putAll(batch);
        }
    }
}

The constructor accepts three arguments: the name of the cache to import objects into, the class of cache items, and the name of the property of that class that should be used as a cache key. We initialize batch size to 1000 items by default, but the value can be easily overridden by calling the setBatchSize method.

The beginImport lifecycle method initializes the map representing a batch of items that need to be inserted, while the endImport method ensures that the last, potentially incomplete, batch is also inserted into the cache.

The real meat is in the importItem method, which creates a Spring BeanWrapper instance for the specified item class and sets its properties based on the entries in the sourceItem map. Once the target item is fully initialized, we use BeanWrapper again to extract the cache key from it and add the item to the batch.

Finally, whenever the batch size reaches the specified limit, we insert all items from the batch into the cache and clear the batch.

With this last piece in place, we are ready to test the loader.

Cache loader on steroids
The implementation of the cache loader presented in this article is a simplified version of the loader my colleague, Ivan Cikic, and I have implemented as part of the Coherence Tools project, which is available at http://code.google.com/p/coherence-tools/.
The full version has many additional features and removes some limitations of the implementation. For example, it provides flexible mapping and transformation features, as well as multiple key generation strategies.
It also contains both Source and Target implementations for CSV files, XML files, and Coherence, allowing you to import data from the existing CSV and XML files into Coherence, and to export contents of a Coherence cache into one of these formats.
In any case, if you like the idea behind the Cache Loader but need more capabilities than presented in this article, check out the latest Coherence Tools release.

Testing the Cache loader

In order to test the loader, we will use a countries.csv file containing a list of most of the countries in the world (I'd say all, but being from Serbia I know firsthand that the list of countries in the world is anything but static).

The file header matches exactly the property names in the Country class defined earlier, which is the requirement imposed by our loader implementation:

code,name,capital,currencySymbol,currencyName
AFG,Afghanistan,Kabul,AFN,Afghani
ALB,Albania,Tirana,ALL,Lek
DZA,Algeria,Algiers,DZD,Dinar
AND,Andorra,Andorra la Vella,EUR,Euro
AGO,Angola,Luanda,AOA,Kwanza

With the test data in place, let's write a JUnit test that will load all of the countries defined in the countries.csv file into the Coherence cache:

public class LoaderTests {
    public static final NamedCache countries =
            CacheFactory.getCache("countries");

    @Before
    public void clearCache() {
        countries.clear();
    }

    @Test
    public void testCsvToCoherenceLoader() {
        Source source = new CsvSource("countries.csv");
        CoherenceTarget target =
                new CoherenceTarget("countries", Country.class, "code");
        target.setBatchSize(50);

        Loader loader = new Loader(source, target);
        loader.load();
        assertEquals(193, countries.size());

        Country srb = (Country) countries.get("SRB");
        assertEquals("Serbia", srb.getName());
        assertEquals("Belgrade", srb.getCapital());
        assertEquals("RSD", srb.getCurrencySymbol());
        assertEquals("Dinar", srb.getCurrencyName());
    }
}

As you can see, using the cache loader is quite simple—we initialize the source and target, create a Loader instance for them, and invoke the load method. We then assert that all the data from the test CSV file has been loaded, and that the properties of a single object retrieved from the cache are set correctly.

However, the real purpose of the previous test is neither to teach you how to use the loader (you could've easily figured that out yourself) nor to prove that the code in the preceding sections works (of course it does). It is to lead us into the very interesting subject that follows.


Testing and debugging Coherence applications

From the application's perspective, Coherence is a data store. That means that you will need to use it while testing the data access code.

If you have ever had to test data access code that uses a relational database as a data store, you know that the whole process can be quite cumbersome. You need to set up test data, perform the test, and then clean up after each test, in order to ensure that the side effects of one test do not influence the outcome of another.

Tools such as DbUnit (http://www.dbunit.org) or Spring's transactional testing support can make the process somewhat easier, but unfortunately, nothing of that nature exists for Coherence.

The good news is that no such tools are necessary when working with Coherence. Setting up test data can be as simple as creating a bunch of CSV files and using the loader we implemented in the previous section to import them into the cache. Clearing data from the cache between the tests is even simpler—just call the clear method on the cache you are testing. In most cases, testing Coherence-related code will be very similar to the LoaderTests test case from the previous section and much simpler than testing database-related code.

However, Coherence is not only a data store, but a distributed processing engine as well, and that is where things can get interesting. When you execute a query or an aggregator, it executes across all the nodes in the cluster. When you execute an entry processor, it might execute on one or several, possibly even all nodes in parallel, depending on how it was invoked.

For the most part, you can test the code that executes within the cluster the same way you test more conventional data access code, by having each test case start a one-node cluster. This makes debugging quite simple, as it ensures that all the code, from test, to Coherence, to your custom processing code executing within Coherence runs in the same process.

However, you should bear in mind that by doing so you are not testing your code in the same environment it will eventually run in. All kinds of things change internally when you move from a one-node cluster to a two-node cluster. For example, Coherence will not create backups in a single-node scenario, but it will as soon as the second node is added to the cluster.

In some cases you might have to test and debug your code in a true distributed environment, because it depends on things that are only true if you move beyond a single node.

Debugging in a distributed environment can get quite tricky. First of all, you will need to enable remote debugging on a cache server node by adding the following JVM arguments to DefaultCacheServer run configuration:

-Xdebug
-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005

Second, you will need to ensure that all the data is stored on a cache server node where remote debugging is enabled. That will ensure that the code you need to debug within the cluster, such as the entry processor mentioned earlier, is guaranteed to execute on that node.

Even though you will only be able to start one DefaultCacheServer now (because of the remote debug port it is bound to), as soon as your test invokes any CacheFactory method it will join the cluster as a second node. By default, the data is balanced automatically across all the nodes in the cluster, which will make it impossible to ensure that your processor executes where you want.

Fortunately, Coherence provides a feature that allows you to control which nodes are used to store cached data and which aren't. This feature is normally used to disable cache storage on application servers while allowing them to be full-blown members of the cluster in all other respects, but it can also be used to ensure that no data is cached on the test runner node when using remote debugging.

In order to activate it, you need to add the following JVM argument to your test runner configuration:

-Dtangosol.coherence.distributed.localstorage=false
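Putting the two halves together, the launch commands look roughly like this. This is only a sketch: the classpath entries and the YourTestRunner class name are placeholders for your own setup, and only com.tangosol.net.DefaultCacheServer and the JVM flags come from the text above.

```shell
# Cache server JVM: stores all the data and accepts a remote debugger on port 5005
java -cp coherence.jar:classes \
     -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005 \
     com.tangosol.net.DefaultCacheServer

# Test runner JVM: joins the cluster but stores no data, so the entries
# (and the code that processes them) stay on the debuggable node above
java -cp coherence.jar:classes \
     -Dtangosol.coherence.distributed.localstorage=false \
     -ea YourTestRunner
```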

Once everything is configured properly, you should be able to set a breakpoint, start the cache server, attach the debugger to it, and run the test. When the breakpoint is hit, the debugger will allow you to step through the code and inspect the variables in the remote process just as you normally would.

The bottom line is that even though debugging a distributed system poses some new challenges, you can overcome them using a combination of features found in Java, modern IDEs, and Coherence.

Also keep in mind that you won't have to do this very often—more likely than not you will be able to test and debug most of your code using a simple, single-node-in-a-single-process approach. However, it is good to know how to do it in a distributed environment as well, because sooner or later you will need to do it.

Summary

In this article, you have learned how to install Coherence and how to start one or more grid nodes. You experimented with the Coherence Console, and learned that it is a development and debugging tool that should never be used in production (just in case you forgot the earlier warning).

We also discussed how to configure your development environment for maximum productivity when working with Coherence, and how to ensure that each developer has their own private cluster for development and testing.

We then discussed Coherence operational configuration, and you learned how to configure Coherence to use Log4J for logging and how to set up private developer clusters using the operational descriptor.

A large part of the article was devoted to the Coherence API. You learned how to perform basic cache operations programmatically, and along the way created a useful utility that allows you to import data from CSV files into Coherence. Finally, I briefly talked about the testing and debugging of Coherence applications and provided some guidelines and suggestions on how to use remote debugging with Coherence.


About the Authors


Aleksandar Seovic

Aleksandar Seovic is a founder and Managing Director at S4HC, Inc., where he has worked as an architect on both .NET and Java projects and has led the development effort on a number of engagements for Fortune 500 clients, mostly in the pharmaceutical and financial services industries.

Aleksandar led the implementation of Oracle Coherence for .NET, a client library that allows applications written in any .NET language to access data and services provided by the Oracle Coherence data grid, and was one of the key people involved in the design and implementation of Portable Object Format (POF), a platform-independent object serialization format that allows seamless interoperability of Coherence-based Java, .NET, and C++ applications.

Aleksandar frequently speaks about and evangelizes Coherence at industry conferences, Java and .NET user group events, and Coherence SIGs. He can be reached at aleks@s4hc.com.

Mark Falco

Mark Falco is a Consulting Member of Technical Staff at Oracle. He has been part of the Coherence development team since 2005 where he has specialized in the areas of clustered communication protocols as well as the Coherence for C++ object model. Mark holds a B.S. in computer science from Stevens Institute of Technology.

Patrick Peralta

Patrick Peralta is a Senior Software Engineer for Oracle (formerly Tangosol) specializing in Coherence and Java middleware. He wears many hats in Coherence engineering, including development, training, documentation, and support. He has extensive experience in supporting and consulting for customers in fields such as retail, hospitality, and finance.

As an active member of the Java developer community, he has spoken at user groups and conferences across the US, including SpringOne and Oracle OpenWorld. Prior to joining Oracle, Patrick was a senior developer at Symantec, working on Java/J2EE-based services, web applications, system integrations, and Swing desktop clients. Patrick has a B.S. in computer science from Stetson University in Florida.

He currently maintains a blog on Coherence and other software development topics at http://blackbeanbag.net.
