Distributed Caching with JBoss Cache: Q&A with Manik Surtani

JBoss Cache is an enterprise-grade clustering solution for Java-based applications that aims to provide high availability and dramatically improve performance by caching frequently accessed Java objects. In this post, InfoQ presents an interview with project lead Manik Surtani.

Manik, to start off could you mention the most common ways you see your clients using JBoss Cache and the benefits of using caching, especially with respect to high availability?

Data is very expensive to fetch from a persistent store, especially a database.  Databases also notoriously don't scale well (or cheaply) when you scale out your front-ends or add more clients.  On the flip side, CPU cores and memory are becoming cheaper and cheaper, meaning more people can afford to set up highly available systems.  The whole "site is down for maintenance" approach should be a thing of the past.

A distributed cache like JBoss Cache can act as a layer between your database and your front end, providing fast in-memory access to persistent state.  JBoss Cache does a number of things to ensure in-memory state is consistent, up-to-date and does not exceed JVM heap sizes.

What about the integration of JBoss Cache with other popular open source projects like Hibernate and JBoss Seam?

Several open source projects use JBoss Cache.  Hibernate (and, as a result, JBoss Application Server's EJB3 implementation) uses JBoss Cache to store entities retrieved from a database backend, reducing the cost of hitting the database every time entities are retrieved.  This is just a broad overview - there is much more detail and sophistication to Hibernate's use of distributed caching than just this.

Seam too uses a distributed cache to cache generated JSF page fragments, again improving the scalability of sites where fragment or page generation can be slow.

Several other projects - Lucene, Hibernate Search, GridGain, JBoss App Server's HTTP session clustering, and its clustered single sign-on code - use JBoss Cache as well.

JBoss Cache comes in two distinct flavors: the core cache and POJO Cache. Could you outline for us the main differences between the two?

The core cache simply stores what you give it, in a tree-like data structure.  Key/value pairs are stored in tree nodes, and are serialized for replication or persistence.
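Conceptually, that tree-structured storage can be sketched in a few lines of plain Java. This is only a toy illustration of the idea (path-addressed nodes, each holding its own key/value map), not the actual JBoss Cache API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of a tree-structured cache: each node holds its own key/value
// map plus named children, addressed by a path such as "/org/acme".
class TreeCacheSketch {
    static final class Node {
        final Map<String, Object> data = new HashMap<>();
        final Map<String, Node> children = new HashMap<>();
    }

    private final Node root = new Node();

    // Navigate to the node at the given path, creating nodes as needed.
    private Node node(String fqn) {
        Node current = root;
        for (String part : fqn.split("/")) {
            if (part.isEmpty()) continue;
            current = current.children.computeIfAbsent(part, p -> new Node());
        }
        return current;
    }

    void put(String fqn, String key, Object value) {
        node(fqn).data.put(key, value);
    }

    Object get(String fqn, String key) {
        return node(fqn).data.get(key);
    }
}
```

In the real library the path is represented by a fully qualified name (Fqn), and the node contents are what get serialized for replication or persistence.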

POJO Cache, on the other hand, uses a more sophisticated mechanism of introspecting user classes by bytecode weaving, adding listeners to fields such that the cache is notified when fields are modified.  For example, storing a large, complex object in POJO Cache will result in POJO Cache introspecting the object bytecode, and only storing the object's field primitives in the tree structure.  And whenever the object's fields are changed, only those fields are replicated, allowing us to achieve very efficient fine-grained replication.
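The effect of fine-grained replication can be mimicked with a dynamic proxy that intercepts setter calls and "replicates" only the field that changed, rather than the whole object. POJO Cache actually works at the field level via bytecode weaving, so the interface-based proxy below is purely a hypothetical illustration of the idea:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Hypothetical example type; any interface with setters would do.
interface Person {
    void setName(String name);
    void setAge(int age);
}

// Intercepts setters and records only the modified field, mimicking
// fine-grained replication (a real cache would ship (field, value) to peers).
class FineGrainedReplicator implements InvocationHandler {
    final List<String> replicationLog = new ArrayList<>();

    public Object invoke(Object proxy, Method m, Object[] args) {
        if (m.getName().startsWith("set")) {
            String field = m.getName().substring(3).toLowerCase();
            replicationLog.add(field + "=" + args[0]);
        }
        return null; // setters are void
    }

    static Person attach(FineGrainedReplicator handler) {
        return (Person) Proxy.newProxyInstance(
            Person.class.getClassLoader(),
            new Class<?>[]{Person.class}, handler);
    }
}
```

A single `setName("Joe")` call would then replicate just `name=Joe` instead of the entire object graph, which is the efficiency win described above.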

Again, there are more differences, but the primary one is detailed above.

The issue of fine-grained replication must have a significant impact on the performance difference between POJO Cache and the core cache. Do you have an estimate or a benchmark for this?

Such benchmarks are very system-dependent and, as a generic benchmark, would be of little value.  Fine-grained replication can really help if you have large and complex objects in your cache.  But if you just use it to store Strings, it is of little value.  Similarly for simple custom objects: for a Person class that has just two String fields, for example, POJO Cache would be more of an overhead (field interception, etc.) than a benefit.

This is why I would always recommend writing use-case-dependent benchmarks to compare such things.  We have developed a framework to benchmark different caches and configurations - mainly for internal use, to benchmark different versions of JBoss Cache against one another - but it is available for download and is extensible enough that custom tests can be written using custom object types and access patterns.

How do you manage referential integrity, especially in the POJO cache?

If you are referring to object references, this is where the bytecode weaving comes in.  We attach interceptors to POJOs and insert referenced fields from what is in the cache.

Why would I choose a local cache over, let's say, a HashMap?

Many would say that Maps are the starting point of a cache (an argument the JSR-107 JCACHE expert group used, in fact, to make javax.cache.Cache extend Map).  But while Maps are great for storing simple key/value pairs, they fall short on a lot of other features that may be necessary in a cache, such as memory management (eviction), passivation and persistence, and a finer-grained locking model.  A HashMap, for one thing, is not thread-safe at all, and ConcurrentHashMaps use a coarse level of locking that will not allow non-blocking readers or even multiple readers.  And then there are the "enterprise" features of a proper cache, including JTA compatibility and the ability to attach listeners.
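Memory management is a concrete example of what a plain Map lacks: even basic LRU eviction has to be hand-rolled, as in this stdlib-only sketch (capacity and policy are arbitrary), while a cache provides it, plus persistence, transactions and listeners, behind one API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal hand-rolled LRU eviction on top of LinkedHashMap; a cache
// gives you this (and passivation, persistence, JTA, listeners) for free.
class LruMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruMap(int maxEntries) {
        super(16, 0.75f, true); // access-order: iteration is LRU-first
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least recently used entry
    }
}
```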

So, while a Map is a good starting point, if anyone feels they need to implement or manage any of the above features themselves then they probably should be using a cache and not a Map.

What kind of locking schemes do you employ? Are they the same as those traditionally used in databases?

JBoss Cache traditionally used a pessimistic locking approach, with a single lock per node in the tree structure.  Isolation levels - analogous to their database counterparts - are applied to these locks, which allow concurrent readers, etc.
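In the XML service configuration, the isolation level is set with an attribute along these lines (2.x-era attribute names, quoted approximately; check your version's documentation for the exact form):

```xml
<!-- Illustrative JBoss Cache configuration fragment: the isolation level
     applied to per-node locks, plus a lock acquisition timeout. -->
<attribute name="IsolationLevel">REPEATABLE_READ</attribute>
<attribute name="LockAcquisitionTimeout">15000</attribute>
```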

We also offered an optimistically locked approach, which involved versioning data and maintaining copies for each transaction, then validating those copies against the main tree structure upon transaction commit.  This approach led to a very highly concurrent setup for read-heavy systems, where readers are never blocked by concurrent writers, and it also overcame the potential deadlocks that can occur in pessimistically locked systems.
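The commit-time validation described here can be sketched in plain Java: a transaction works on a snapshot and, at commit, its changes are installed only if the version it read is still current. This is a hypothetical illustration of the scheme, not JBoss Cache code:

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy optimistic locking: readers take a snapshot and are never blocked;
// a commit succeeds only if no other writer has committed in between.
class VersionedEntry {
    static final class Versioned {
        final long version;
        final String value;
        Versioned(long version, String value) {
            this.version = version;
            this.value = value;
        }
    }

    private final AtomicReference<Versioned> current =
        new AtomicReference<>(new Versioned(0, null));

    Versioned read() {
        return current.get(); // never blocks
    }

    // Returns false (forcing a retry or rollback) if the snapshot is stale.
    boolean commit(Versioned snapshot, String newValue) {
        return current.compareAndSet(snapshot,
            new Versioned(snapshot.version + 1, newValue));
    }
}
```

Two transactions that read the same snapshot will race at commit; the first wins and the second is rejected, which is exactly the validation step described above.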

We are in the process of releasing Multi-Version Concurrency Control (MVCC) with JBoss Cache 3.0.0, which is currently under heavy development.  This is the locking approach used in most popular database systems, and it will give us the best of optimistic and pessimistic locking.  Also, since our implementation will be lock-free for readers, it will be significantly faster than previous locking schemes.  Once this stabilizes, we hope to make MVCC the default locking mechanism in JBoss Cache.

Would you like to talk a little bit about the JGroups integration?

JBoss Cache uses JGroups as a group communication library to detect members in a group and form clusters.  We also use JGroups as a channel over which we implemented an RPC mechanism to communicate with other caches in the group.  JBoss Cache benefits from JGroups' extreme flexibility and extensibility in network protocols and tuning, letting the cache run out of the box on LAN clusters, tunnel through firewalls, set up WAN clusters, and so on.
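That flexibility comes from JGroups' composable protocol stack; a stack is declared roughly as below (element names are illustrative of the real JGroups protocols, but attributes are omitted or simplified). Swapping the UDP/multicast transport and discovery for TCP-based equivalents is how a cache crosses firewalls or WAN links without code changes:

```xml
<!-- Illustrative JGroups protocol stack for a LAN cluster. -->
<config>
    <UDP mcast_port="45588"/>   <!-- IP multicast transport -->
    <PING/>                     <!-- member discovery -->
    <FD/>                       <!-- failure detection -->
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK/>            <!-- reliable, ordered delivery -->
    <pbcast.STABLE/>
    <pbcast.GMS/>               <!-- group membership -->
</config>
```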

Can I have the cache stand alone, without the JBoss AS?

Yes!  It is a hugely common misconception that you NEED JBoss App Server to use JBoss Cache.  This is simply NOT true.  People use it in standalone Java programs, in GUI front-ends, and in other app servers.  It just happens to ship with JBoss App Server as well.

Replication of data to more than one node is essential for failover and there are many different strategies to implement this. What replication models does JBoss Cache support?

Currently we support two forms, total replication (TR) and buddy replication (BR).  TR involves copying state to everyone else in the group.  While this is a great way to share state and ensure that you can fail over to anyone else in the group, it hampers scalability.  BR, on the other hand, selects specific members to use as backup members and only replicates state to the backup members.  This means that failover is most efficient when failing over to a backup node - but will still work when failing over to anyone else since the state is migrated across to follow the request.  BR is best used with session affinity since state migration can be expensive, and should hence be minimized to only happen when a failover event occurs.
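Buddy replication is enabled in the cache's XML configuration with a fragment along these lines (element names quoted approximately from the 2.x-era documentation; verify against your version):

```xml
<!-- Illustrative buddy replication fragment: one backup peer per node. -->
<attribute name="BuddyReplicationConfig">
    <config>
        <buddyReplicationEnabled>true</buddyReplicationEnabled>
        <numBuddies>1</numBuddies>
    </config>
</attribute>
```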

The peer-to-peer model for node replication has been associated with scalability issues in specific architectures. Do you have these kinds of problems with JBoss Cache?

Again, no.  Peer-to-peer networking and group communication are extremely efficient and scalable when using a LAN and IP Multicast to broadcast traffic, and most modern network hardware readily supports IP Multicast.  However, peer-to-peer data replication - where everyone has all the state of the system - can have scalability issues; see my earlier comment on total replication.  As such, we recommend buddy replication where session affinity can be attained.

We are also working on Partitioning, which will help us distribute state across the group in a truly scalable fashion, and this won't require session affinity.  We hope this will supersede both buddy replication and total replication.

What trends are you expecting to see emerge in the immediate future, for caching and clustering? How will JBoss Cache evolve to satisfy those new needs?

Distributed caching will become more and more important as hardware gets cheaper and as CPU manufacturers put more cores per chip.  This inevitably means more "virtual" machines and an even greater strain on databases to manage such a high degree of concurrency, and distributed caching will be one of the most important solutions to this data bottleneck.  The rising popularity of data grids and compute clouds adds to this as well, since every node in a cloud or grid needs to access and share data.

Partitioning and MVCC will give JBoss Cache what it needs to scale to the next order of magnitude in cluster size.
