Java Clustering Framework Shoal Provides Fault Tolerance and Distributed State Cache
Shoal is a Java-based dynamic clustering framework that provides the infrastructure to build fault tolerance, reliability, and availability into Java EE application servers. It can also be plugged into any Java application that needs clustering and distributed-systems capabilities. Shoal is the clustering engine for the GlassFish (v2 and later) and JOnAS application servers.
The Shoal framework provides client APIs for signaling events such as cluster members joining or leaving the cluster at runtime. A member can participate in a group either as a Core member (whose failure is notified to all other members in the cluster) or as a Spectator member (whose failure is not notified to the other members, although the spectator receives all notifications pertaining to Core members). Shoal's core service is the Group Management Service (GMS), which gives clients (JVMs) a group messaging handle for sending messages to the whole group or to particular members in the cluster. The framework also provides other high-availability features such as recovery-oriented signals and support, automatic recovery member selection (Shoal Automated Delegated Recovery Initiation), and protective failure fencing operations.
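The Core/Spectator distinction can be illustrated with a small, self-contained sketch. All class and method names below are invented for this example and are not Shoal's actual GMS API; the sketch only mirrors the rule described above that Core failures are broadcast while Spectator failures are not (join notifications to existing members are a simplification):

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical GMS-style group: names are illustrative, not Shoal's real API.
enum MemberType { CORE, SPECTATOR }

class MiniGroup {
    private final Map<String, MemberType> members = new LinkedHashMap<>();
    private final Map<String, Consumer<String>> listeners = new LinkedHashMap<>();

    // A member joins; existing members are notified (a simplification).
    void join(String name, MemberType type, Consumer<String> listener) {
        broadcast("JOIN:" + name);
        members.put(name, type);
        listeners.put(name, listener);
    }

    // Only CORE member failures are broadcast to the remaining members.
    void fail(String name) {
        MemberType type = members.remove(name);
        listeners.remove(name);
        if (type == MemberType.CORE) broadcast("FAILURE:" + name);
    }

    private void broadcast(String event) {
        listeners.values().forEach(l -> l.accept(event));
    }
}

class GmsSketch {
    static List<String> demo() {
        List<String> seenByA = new ArrayList<>();
        MiniGroup group = new MiniGroup();
        group.join("a", MemberType.CORE, seenByA::add);
        group.join("b", MemberType.CORE, e -> {});
        group.join("s", MemberType.SPECTATOR, e -> {});
        group.fail("s"); // spectator failure: no notification goes out
        group.fail("b"); // core failure: broadcast to survivors
        return seenByA;  // [JOIN:b, JOIN:s, FAILURE:b]
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Member "a" sees both joins and the Core failure, but never hears about the spectator's departure.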
A recent article on the java.net website discussed the architecture of the Shoal framework and how to integrate it into Java applications. InfoQ spoke with the co-authors of the Shoal framework, Shreedhar Ganapathy and Mohamed Abdelaziz, about the current features and future roadmap of the product. InfoQ asked how Shoal compares to other open source JVM clustering frameworks like Terracotta. They said that Terracotta uses a non-API-based bytecode enhancement approach that works very well for solutions aimed at the data replication problem space. Shoal's approach to clustering is from a fault tolerance infrastructure perspective: it provides a group event notification model that allows consuming applications to build solutions for a variety of problems in the distributed-systems area, including data replication. The authors explained how their development team used Shoal features to address specific clustering requirements of the GlassFish server.
For instance, in our own case, GlassFish needed a clustering solution that could address the needs of several GF components such as the IIOP load balancer, Session Replication module, Transaction Service module, etc.
In the case of the IIOP load balancer, the requirement was that the ORB should be able to give its remote clients the ability to fail over to other cluster members when a failure occurs. The clients, being remote and connected to a specific instance's ORB, would get dynamic cluster shape change information out of band, along with the requisite IIOP endpoint addresses, through Shoal's event notification mechanism.
In the case of the transaction service module, for example, the use case was to perform automatic transaction recovery operations from a remote cluster member on a failed member's transaction logs, so that transactions incomplete at the time of failure could be completed. To support this, we provided a recovery server selection notification along with failure fencing support.
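The fencing idea in the quote above can be sketched in a few lines: before a surviving member touches the failed member's logs, it must win a fence, and exactly one member can win it. This is a hand-rolled illustration of the concept, not Shoal's fencing API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hand-rolled illustration of failure fencing; not Shoal's actual API.
class FencingSketch {
    // Shared view of active fences: failed member -> recovering member.
    private final Map<String, String> fences = new ConcurrentHashMap<>();

    // Returns true only for the single member that wins the fence.
    boolean raiseFence(String failedMember, String recoverer) {
        return fences.putIfAbsent(failedMember, recoverer) == null;
    }

    // Lowered once recovery of the failed member's logs is complete.
    void lowerFence(String failedMember) {
        fences.remove(failedMember);
    }

    public static void main(String[] args) {
        FencingSketch fencing = new FencingSketch();
        // instance1 wins the fence and performs recovery; instance2 backs off.
        System.out.println(fencing.raiseFence("instance3", "instance1")); // true
        System.out.println(fencing.raiseFence("instance3", "instance2")); // false
        fencing.lowerFence("instance3");
    }
}
```

The atomic `putIfAbsent` guarantees a single winner even when several survivors race to recover the same logs.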
Responding to a question on how the "Group Communication" module in Shoal compares with other group communication frameworks such as JGroups, they said:
The group communication service provider exposes a set of service provider APIs, providing support for various group communication provider implementations. By default, Shoal uses the JXTA framework. Alternate communication providers can be implemented provided they adhere to the service provider APIs; therefore, JGroups-, Appia-, or JINI-based group communication providers may be implemented.
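To make the pluggable-provider idea concrete, here is a rough sketch of what such a service provider interface might look like, with a trivial in-process provider standing in for a JXTA or JGroups transport. The interface and all names are hypothetical, not Shoal's actual SPI:

```java
import java.util.function.BiConsumer;

// Hypothetical provider SPI; Shoal's real service provider interfaces differ.
interface GroupCommunicationProvider {
    void join(String memberName, String groupName);
    void sendMessage(String targetMember, byte[] payload); // null target = whole group
    void setMessageListener(BiConsumer<String, byte[]> listener);
}

// Trivial in-process provider standing in for a JXTA/JGroups-style transport.
class LoopbackProvider implements GroupCommunicationProvider {
    private BiConsumer<String, byte[]> listener;
    private String self;

    public void join(String memberName, String groupName) {
        this.self = memberName;
    }

    public void sendMessage(String targetMember, byte[] payload) {
        if (listener != null) listener.accept(self, payload); // loop straight back
    }

    public void setMessageListener(BiConsumer<String, byte[]> l) {
        this.listener = l;
    }
}

class SpiSketch {
    static String demo() {
        StringBuilder received = new StringBuilder();
        GroupCommunicationProvider provider = new LoopbackProvider();
        provider.setMessageListener(
                (from, msg) -> received.append(from).append(":").append(new String(msg)));
        provider.join("node1", "cluster1");
        provider.sendMessage(null, "hello".getBytes()); // broadcast to the group
        return received.toString(); // node1:hello
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Application code sees only the interface, so swapping the transport is a configuration change rather than a code change.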
JXTA provides communication without the need to specify separate TCP and UDP transports, and it is not limited to IP (RF, BT, etc. can be supported). When multicasting within the cluster, JXTA dynamically determines the best method to send a message to the cluster members, and it does so through the use of IP multicast and virtual multicast.
The JXTA-based provider in Shoal takes advantage of the location transparency for peer addressing provided by JXTA, in that a group member is addressed by its name, which is hashed and mapped into a 128-bit identifier. This identifier is linked to a network-resolvable advertisement listing all physical and virtual addresses of a member, thus enabling fault tolerance, mobility, and non-IP transport support. This simplifies connectivity within the cluster: socket and message channels are simply addressed by the group or instance name. In addition, the naming scheme is extended to communication channels, thus virtualizing addressing between instances (e.g. instead of using the ports for a specific application service, a name can be used). TCP transport communication on JXTA is fully NIO enabled, providing scalable message handling and throughput. JXTA also has support in the authentication and authorization areas, providing end-to-end application security.

The authors also commented on the reliable multicast feature in the JGroups framework:
JGroups is a well-known group communication provider. One of the many things we like about JGroups is that it provides a good reliable multicast stack. The current JXTA provider does not expose reliable multicast yet, and we are in the process of evaluating open-source mechanisms to provide efficient and lightweight reliability. Other differentiators in JGroups include the RPCDispatcher, which is a very useful facility for simulating blocking messages over UDP.
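The name-to-identifier mapping described above can be approximated in a few lines. MD5 is used here purely because it produces 128 bits; JXTA's actual peer ID construction is more involved, so treat this as a sketch of the principle only:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Approximation of name-based peer addressing: hash a member name into a
// 128-bit identifier. MD5 is chosen only because it yields 128 bits; JXTA's
// real ID construction differs.
class PeerIdSketch {
    static String memberId(String memberName) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(memberName.getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest)); // 32 hex digits = 128 bits
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present in the JDK
        }
    }

    public static void main(String[] args) {
        // The same name always maps to the same identifier, independent of the
        // member's physical addresses - the basis of location transparency.
        System.out.println(memberId("instance1"));
    }
}
```

Because the identifier depends only on the name, a member that moves to a new host keeps the same address from the cluster's point of view.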
InfoQ asked about the "virtual multicast channels" and how they work in a typical Java EE clustering application.
Virtual multicast channels are an abstraction that provides for communication across the normal boundaries of multicast traffic. Typically, when one has to have a cluster that spans several subnets, or when multicast traffic is disabled altogether, one has to work with network administrators to allow multicast traffic on specific routers on specific sockets. The virtual multicast facility from JXTA allows for seamlessly performing group communication without requiring network- or application-level changes. One simply specifies a set of members as redundant routing hosts, which then act as software routers to route traffic to all group members. It works pretty much like IGMP: nodes simply join a virtual multicast group through one of the designated router nodes, and messages are then propagated within the cluster by the router nodes, which mimic the behavior of network switches. In addition, wherever possible, IP multicast is utilized. So in the case where you have multicast only within a subnet, configuring one or more group members as routing hosts allows group communication to happen over TCP for members outside the subnet, while members inside the subnet use multicast. This provides the basis for WAN-based deployments to geographically distributed clusters. (Support for WAN-based deployments, including cross-firewall support, will be introduced in a future release.)
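As a toy model of the delivery decision described above: members in the sender's subnet receive via native multicast, while a designated routing host relays to members in other subnets. All types and names here are invented for this illustration:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of virtual multicast delivery; all names are invented.
class VirtualMulticastSketch {
    record Member(String name, String subnet) {}

    // For each recipient, record how the message would reach it.
    static Map<String, String> broadcast(Member sender, List<Member> cluster, String routerName) {
        Map<String, String> delivered = new LinkedHashMap<>();
        for (Member m : cluster) {
            if (m.name().equals(sender.name())) continue;
            String via = m.subnet().equals(sender.subnet())
                    ? "multicast"                    // native IP multicast within the subnet
                    : "tcp-relay-via-" + routerName; // routing host forwards over TCP
            delivered.put(m.name(), via);
        }
        return delivered;
    }

    public static void main(String[] args) {
        List<Member> cluster = List.of(
                new Member("a", "10.0.1"), new Member("b", "10.0.1"),
                new Member("c", "10.0.2"));
        System.out.println(broadcast(cluster.get(0), cluster, "r"));
        // {b=multicast, c=tcp-relay-via-r}
    }
}
```

The per-recipient choice is what lets intra-subnet traffic stay on cheap IP multicast while only cross-subnet traffic pays the TCP relay cost.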
Regarding session replication in a clustered application, InfoQ asked if the Shoal API can be used for HTTP session replication requirements in a web application as well as for clustering EJB and JMS components. They said:
The EJB container, Transaction Service, Timer Service, ORB, Session Replication module, and other components in the GlassFish application server use Shoal for accessing and interacting with cluster members. One can use Shoal for clustering almost any product: JMS (MQ clusters), databases (say, Postgres or MySQL clusters), even CVS and Subversion.
The Shoal framework also has a shared distributed storage feature called the Distributed State Cache (DSC), which can be used for in-memory distributed caching of application state. A default implementation is provided by GMS for lightweight replicated caching. Shreedhar explained how DSC compares with other Java object caching frameworks like JBossCache, OSCache, and EHCache.
The Distributed State Cache in Shoal is an interface for which there can be multiple implementations. The default implementation is a simple shared cache that caters to lightweight messaging use cases such as configuration data, group-wide state machines, etc. It is not suited for high-throughput caching and does not provide any LRU-based cache invalidation or distributed locking semantics.
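A naive version of such a replicated cache fits in a few lines: a put on one member is pushed to every peer's local map, and reads are always served locally. As the quote notes of Shoal's default implementation, there is no eviction and no locking. The interface and class names below are illustrative, not Shoal's DSC API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative replicated-cache sketch; not Shoal's DSC interface.
interface StateCache {
    void put(String key, String value);
    String get(String key);
}

class ReplicatedCacheNode implements StateCache {
    private final Map<String, String> local = new ConcurrentHashMap<>();
    private final List<ReplicatedCacheNode> peers = new ArrayList<>();

    void addPeer(ReplicatedCacheNode peer) {
        peers.add(peer);
    }

    public void put(String key, String value) {
        local.put(key, value);
        for (ReplicatedCacheNode p : peers) {
            p.local.put(key, value); // push the update to every peer
        }
    }

    public String get(String key) {
        return local.get(key); // reads are always served locally
    }
}

class DscSketch {
    static String demo() {
        ReplicatedCacheNode n1 = new ReplicatedCacheNode();
        ReplicatedCacheNode n2 = new ReplicatedCacheNode();
        n1.addPeer(n2);
        n1.put("config/maxSessions", "500"); // written on one member...
        return n2.get("config/maxSessions"); // ...readable on another
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 500
    }
}
```

Full replication on every write is exactly why this design suits small, slowly changing state like configuration data and not high-throughput caching.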
Grid and cloud computing have been gaining more attention lately in the design and implementation of scalable application architectures. InfoQ asked what role the Shoal framework can play in emerging grid computing architectures. The authors explained how Shoal is a good fit for building such solutions:
Shoal does provide the notion of a group leader (which is dependent on the underlying group communication provider). The group leader has a succession plan in the case of a failure (this, again, is dependent on the group communication service provider). Through the use of instance name mapping, the leader and successor are elected dynamically and independently, thus reducing network traffic and expediting group formation, in addition to virtually eliminating split-brain syndrome. Once these two basic requirements are met, it is easy to see the group leader as a very good abstraction for a compute grid's task manager. Coupled with JXTA's support for virtual multicast and cross-subnet communication, one can extend the idea to islands of concentrated resources which can perform tasks and report back to a mainline group leader. Service location transparency provides for mobility of such resources as well, which means that the grid can expand or contract based on availability and move when necessary. Imagine several Project Blackbox data centers hosting a grid.
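The coordination-free election described above can be sketched as follows: every member sorts an identical view of the group and independently picks the same leader, so no election messages are exchanged and a deterministic successor exists when the leader fails. The sort-by-name rule here is a stand-in for the provider's actual instance-name mapping:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Sketch of deterministic, message-free leader election; the sort-by-name
// rule stands in for the provider's actual instance-name mapping.
class LeaderSketch {
    // Every member runs this locally over an identical group view and
    // therefore arrives at the same answer without exchanging messages.
    static String leader(Collection<String> view) {
        return view.stream().sorted().findFirst().orElseThrow();
    }

    public static void main(String[] args) {
        List<String> view = new ArrayList<>(List.of("instance2", "instance1", "instance3"));
        System.out.println(leader(view)); // instance1
        view.remove("instance1");         // the leader fails...
        System.out.println(leader(view)); // ...successor is instance2, chosen the same way
    }
}
```

Because agreement follows from the shared view rather than from voting, the split-brain risk is confined to the membership layer that maintains that view.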
Another use case for Shoal is as the underlying engine for a compute grid application. Project FishFarm is currently using Shoal for this very use case.
Finally, responding to a question on the future roadmap of the Shoal project, in terms of new releases as well as upcoming features, they said:
We are currently working to support Project SailFin's telecommunications application server for a variety of use cases, including as a load balancer component. Among our future plans is a data grid solution based on a distributed cache.
Shoal can also be used for dynamic provisioning of servers via the GMS service, thereby taking appropriate action to provide more resources when failures occur, depending on load conditions. Shoal is dual-licensed under CDDL version 1.0 and GPL v2 with the Classpath Exception.
Terracotta vs. Shoal
They said that Terracotta uses a non-API-based bytecode enhancement approach that works very well for solutions aimed at the data replication problem space. Shoal's approach to clustering is from a fault tolerance infrastructure perspective. It provides a group event notification model that allows consuming applications to build solutions for a variety of problems in the distributed-systems area, including data replication.
The Terracotta characterization is slightly off.
1. Terracotta does not do data replication. Each JVM keeps only what it needs in memory, and Terracotta keeps the rest in our server. It is neither centralized nor replicated. It is a hybrid heap-level solution that allows reads and writes to perform better as follows: read from the local heap as much as possible, so zero latency, and write only deltas to Terracotta, for low overhead.
2. Our data is stored on disk at wire speed, so the data cannot be lost. It is hard to beat this level of fault tolerance (a copy of all data on n different servers, on disk).
3. Cluster membership gives a very robust cluster lifecycle eventing model for applications to leverage. MapReduce-style apps use this to reroute work on worker failure; monitoring frameworks use this to rebalance load, etc.
Shoal sounds kewl. Just wanted to make sure that readers get an accurate picture of Terracotta. Terracotta is capable of highly scalable data sharing and makes clustered data totally fault tolerant at the same time.
For details, we'd welcome your questions on the Shoal dev or user mailing lists, located here: shoal.dev.java.net/servlets/ProjectMailingListList.
We'd be glad to answer any questions and respond to any enhancement/fix requests.