
JBoss Cache 1.4 Released; Adds Buddy Replication

by Floyd Marinescu on Jul 17, 2006
JBoss has released JBoss Cache 1.4 final, its distributed caching product, which also includes PojoCache (formerly TreeCacheAOP). The release adds Buddy Replication and Data Gravitation features and also optimizes the RPC marshalling algorithm, resulting in 20-50% improved performance and throughput. JBoss Cache is the technology behind JBoss Application Server's HTTP and EJB session replication. InfoQ spoke to project leads Ben Wang and Manik Surtani to get more information.

Previously, JBoss Cache multicast cache changes across all nodes in the cluster, which made it inefficient for cases where cache data is partitioned (such as when you have session affinity). With the new buddy replication feature, each server instance instead picks one or more 'buddies' in the cluster and replicates only to those specific buddies. When asked why one needs distributed caching at all in a cluster with session affinity, the JBoss Cache team responded:
when you have a local cache+session affinity, you'd still want some replication for failover/high availability, one example being HTTP sessions.  This is where buddy replication comes in, improving efficiency so you don't end up replicating your entire state across the cluster - just to nominated backup nodes. 
Buddy Replication is an important feature for the project to gain in order for it to become suitable for larger deployments where inter-node communications overhead must be minimized. Other commercial solutions such as Tangosol or IBM's ObjectGrid have had this for some time. Terracotta doesn't use/need a buddy system since data is replicated to a central cache server.
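
To illustrate the idea, here is a minimal Java sketch of buddy replication and data gravitation. The class and method names (ClusterNode, put, gravitate) are invented for this example and are not the JBoss Cache API; it simply shows the shape of the technique.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch only -- not the JBoss Cache API.
    class ClusterNode {
        final String name;
        final Map<String, Object> ownedState  = new ConcurrentHashMap<String, Object>();
        final Map<String, Object> backupState = new ConcurrentHashMap<String, Object>();
        final List<ClusterNode> buddies = new ArrayList<ClusterNode>();   // nominated backup nodes

        ClusterNode(String name) { this.name = name; }

        // With full replication, every update would be multicast to all nodes.
        // With buddy replication, the update is sent only to the chosen buddies.
        void put(String key, Object value) {
            ownedState.put(key, value);
            for (ClusterNode buddy : buddies) {
                buddy.backupState.put(key, value);     // replicate to backups only
            }
        }

        // "Data gravitation": after a failover, a node asked for state it does not
        // own pulls it from whichever surviving node holds the backup copy.
        Object gravitate(String key, List<ClusterNode> cluster) {
            for (ClusterNode node : cluster) {
                Object value = node.backupState.get(key);
                if (value != null) {
                    ownedState.put(key, value);        // take over ownership locally
                    return value;
                }
            }
            return null;
        }
    }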

JBoss Cache uses JGroups, a group communication toolkit, to manage network-level communications between instances. Manik Surtani told InfoQ:
we've used this to build an RPC layer, which we then use to replicate data across a cluster.  Using JGroups, we get a number of things for free, including group membership and organisation, guaranteed message delivery, and network stack tuning (switching between TCP and UDP, tunnelling through firewalls if necessary, etc).
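
As a rough illustration of what sitting on top of JGroups looks like, the sketch below opens a channel, joins a group and multicasts a message. The calls follow the general JGroups API, but signatures have changed across JGroups releases (JBoss Cache 1.4 shipped with a 2.x stack), so treat this as an approximation rather than the actual JBoss Cache code.

    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;

    // Approximate JGroups usage; exact constructors/signatures vary by release.
    public class ReplicationChannel {
        public static void main(String[] args) throws Exception {
            JChannel channel = new JChannel();           // default protocol stack (UDP multicast)
            channel.setReceiver(new ReceiverAdapter() {
                public void receive(Message msg) {
                    // In JBoss Cache this callback would apply a replicated update locally.
                    System.out.println("received: " + msg.getObject());
                }
            });
            channel.connect("demo-cluster");             // join (or create) the group

            // Send to all members; membership, delivery guarantees and transport
            // tuning (UDP vs TCP, firewall tunnelling) are handled by the stack.
            channel.send(new Message(null, "put /a/b key=value"));
            channel.close();
        }
    }
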
PojoCache is a component of JBoss Cache that lets you avoid interacting with the cache through a Map interface on every update; instead, an object instance is added to the cache with one initial PUT, and subsequent field-level changes to the object are intercepted transparently with aspects and then distributed across the cluster.
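
A short sketch of that usage model is below. The putObject call follows the PojoCache documentation of the time, but the lifecycle calls and the AOP instrumentation setup are assumptions here rather than verified 1.4 API details.

    import org.jboss.cache.aop.PojoCache;

    // Sketch of the PojoCache model; details hedged as noted above.
    public class PojoCacheExample {
        // The POJO must be AOP-instrumented (e.g. declared "prepared" in jboss-aop.xml)
        // so that field writes can be intercepted.
        static class Person {
            String name;
            String city;
        }

        public static void main(String[] args) throws Exception {
            PojoCache cache = new PojoCache();
            cache.startService();                        // lifecycle call name is an assumption

            Person p = new Person();
            p.name = "Alice";
            cache.putObject("/people/alice", p);         // the one initial PUT

            // No further cache calls are needed: this field write is intercepted by
            // the aspect and the change is replicated across the cluster.
            p.city = "Paris";

            cache.stopService();
        }
    }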

Ben Wang positioned JBoss Cache as a competitor to Tangosol (from the perspective of both being distributed Maps) and PojoCache as more like Terracotta (from the AOP-driven, API-less perspective).

GigaSpaces cache by Guy Korland

GigaSpaces provides similar capabilities with a richer API based on the JavaSpaces API (GigaSpaces also provides a simple Map API). It can also provide a fully partitioned solution even when session affinity is not preserved.

Guy Korland
GigaSpaces: Write Once Scale Anywhere

Re: GigaSpaces cache by Brian Ehmann

It would be nice if InfoQ could charge for inserting advertisements in comments.

Brian E

Clarifications by Cameron Purdy

Buddy Replication is an important feature for the project to gain in order for it to become suitable for larger deployments where inter-node communications overhead must be minimized. Other commercial solutions such as Tangosol [..] have had this for some time.


Tangosol Coherence does not use buddy replication. With "n" servers, it dynamically organizes a server mesh (partitioned as n x n, and dynamically load-balanced as n increases) with a configurable depth (e.g. 3 for n-plus-2 failover scenarios) that can be specified on a data-set-by-data-set basis.

To compare:

- With a buddy system and 100 servers, the death of a server will double the load on its buddy.

- With a partitioned n x n mesh, the death of a server will increase the load on other servers by roughly 1 percent.


This technology was introduced by Tangosol in June of 2002, and serves as the fundamental basis for much of the state-of-the-art in data grids today.

    Peace,

    Cameron Purdy
    Tangosol Coherence: The Java Data Grid

Re: GigaSpaces cache by Ari Zilka

I participated in a panel discussion with Floyd and Rod Johnson. At some point during the discussion Rod felt it necessary to interrupt someone and interjected, "Spring, Spring, Spring, Spring, Spring." He was clearly satirizing all the vendor spiel. I agree that when we as vendors feel embarrassed, it has gone too far. I can't get "Spring, Spring, Spring" out of my head every time I see something like this. Thanks for raising a voice of sanity, Brian!

    --Ari

    Re: GigaSpaces cache by Nati Shalom

    Brian

This post mentions some of the caching solutions out there that solve similar problems.

    Other commercial solutions such as..


For some reason the author missed GigaSpaces, which is widely used in the industry, hence this comment. I would also mention GemStone as another well-known caching product that should have been part of this list.
Just an FYI: one of the biggest phone launches in Europe this weekend (can't be too specific at this stage) is based on a combination of JBoss/GigaSpaces. The main reason users chose this combo is scalability.


    Nati S.
    GigaSpaces
