Distributed JBI


Abstract

The Java Business Integration (JBI) specification, known as JSR 208, describes an Enterprise Service Bus (ESB) and its interactions. These interactions are described in terms of components plugging into a software bus to exchange information. This construction, at first glance, appears to create a system where all pieces reside in the same process or virtual machine. OpenESB, an open source implementation of JBI, includes a homogeneous topology option using clustered JBI instances when run in a GlassFish Application Server. This allows for scalability and load balancing by increasing the number of clustered instances. Recently, a prototype implementation of a component called the Proxy Binding has made it possible to transparently link OpenESB instances together in a heterogeneous topology, much like extending the bus over a network. This article describes and contrasts the strengths and weaknesses of the two styles of distributed access as applied to the OpenESB environment and shows how, in the end, they complement each other.

Introduction

The JBI specification is often perceived as being constrained to a single Java Virtual Machine (JVM) instance. The specification doesn't explicitly state how a distributed version would function, but this shouldn't stop the interested from exploring new territory. In fact, two different styles of solution, which we will call topologies for this discussion, have appeared since the specification was approved. These solutions required only a handful of minor enhancements to the high-level management functionality and some low-level informational hooks. With these changes, the two topologies have been added in a mostly transparent fashion.

The goals of the following sections are to introduce the two topologies and a combined topology via illustrations, to discuss specific details of each topology, and to discuss the role that the implementation details of JBI components play in topology decisions. The overriding goal is to show that the two topologies are useful on their own, but provide greater benefits when combined. A secondary aim is to highlight how the components built on top of this infrastructure also contribute substantially to the overall functionality.

Topology Overview

We will start with an example of each of the two topologies and then show a combined example. The two topologies, for historical reasons, are called homogeneous and heterogeneous. These examples come from OpenESB running in a GlassFish environment.

Homogeneous Topology

The homogeneous topology, as its name implies, creates OpenESB instances that have identical JBI content.

Illustration 1: Homogeneous Topology

Heterogeneous Topology

The heterogeneous topology, as its name implies, creates OpenESB instances that can differ in their JBI contents. The basic application flow is the same: an HTTP request that results in a BPEL process invocation and response. This topology is supported by a special component called the Proxy binding (PB in the illustration).

Illustration 2: Heterogeneous Topology

Combined Topology

This example is a blending of the first two examples with an additional JBI instance containing a non-replicated service.

Illustration 3: Combined Topology

OpenESB Topology Specifics

Many factors come into play when choosing a topology. In the case of OpenESB, the homogeneous topology was implemented first because it matched the capabilities provided by GlassFish, which serves as the primary hosting environment. The heterogeneous topology actually had the first prototype, and that prototype is now also available. The implementation order doesn't really matter; both are useful in their own right, and complementary when used together.

Homogeneous

The homogeneous topology creates OpenESB instances that have identical JBI content. The JBI content includes the set of components, service assemblies and configuration information that comprise one or more composite applications. These JBI instances, which map onto the underlying GlassFish cluster instances, can be created, destroyed, started and stopped as demand requires. Cluster instances can be created with one or more instances on each machine managed by a single operating system image. This flexibility allows for better use of CPU resources and greater availability on the multi-threaded processors that commonly serve the web-facing side. Newly created instances, and instances that were not running while changes to the cluster were made, have their configuration synchronized during startup from the central administration server (CAS), which holds the overall cluster configuration. This ensures that all cluster instances function identically.

Scalability

The most common use of this topology is to horizontally scale web-facing services that have a large degree of fan-in. The ease of adapting to changes in load is the primary benefit. Additionally, GlassFish includes a software HTTP load balancer that can front the farm of clustered instances, which complements this usage.

Management

This topology is supported directly by the GlassFish Application Server. OpenESB includes management enhancements that reside on the CAS to locally record configuration changes and then propagate them to active instances. These enhancements allow a JBI management operation to target a specific instance, a set of instances, or all instances, so strict homogeneity is a typical usage rather than a hard requirement.

Example Details

The previous homogeneous illustration depicts a simple two-instance cluster. Each cluster instance contains two components, an HTTP web-facing binding component and a BPEL service engine. A typical application would likely have many other components (database access, e-mail protocols, or even very specialized protocols like HL7 or SAP). The combination of HTTP and BPEL is a minimal configuration that can perform a useful amount of work. The cluster is fronted by a software HTTP load balancer. The cluster administration server is also depicted.

Some important points to note:

  • Each cluster instance is typically identical. It's possible to configure each HTTP instance to use a different port number in cases where the cluster instances are deployed on the same operating system image and network interface.
  • Service endpoints are duplicated within the cluster, but their scope is local to each instance.
  • The number of cluster instances can vary over time; this is what gives the topology its horizontal scaling. Instances can all be on the same physical machine, all on different machines, or a mixture. Typically, cluster instances are physically co-located.

Heterogeneous

The heterogeneous topology creates OpenESB instances that can differ in their JBI contents. This topology is supported by a special component called the Proxy binding. The Proxy binding allows services hosted by components on one instance to be used by components on a different instance. The Proxy binding runs on each instance that wants to be part of this loosely-coupled association. Components simply use the same service endpoint names they were configured with. Each Proxy binding instance builds and maintains a directory that maps services to the instances that provide them. If multiple instances export the same service, the Proxy binding, by default, load balances requests among the providing instances. Heterogeneous systems are currently managed as individual systems.
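
As an illustration, such a directory can be as small as a map from service names to advertising instances, with a round-robin pick when several instances export the same service. The following is a minimal sketch; the class and method names are hypothetical and not the Proxy binding's actual code.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.xml.namespace.QName;

    // Hypothetical sketch of a Proxy binding service directory.
    public class ServiceDirectory {

        private final Map<QName, List<String>> providers = new ConcurrentHashMap<>();
        private final AtomicInteger next = new AtomicInteger();

        // A remote instance has advertised that it provides the given service.
        public void addProvider(QName service, String instanceId) {
            providers.computeIfAbsent(service, s -> new CopyOnWriteArrayList<>())
                     .add(instanceId);
        }

        // An instance left the group: withdraw all of its advertisements.
        public void removeInstance(String instanceId) {
            for (List<String> list : providers.values()) {
                list.remove(instanceId);
            }
        }

        // Pick a providing instance, rotating when several export the same service.
        public String selectProvider(QName service) {
            List<String> candidates = providers.get(service);
            if (candidates == null || candidates.isEmpty()) {
                return null; // no remote provider currently known for this endpoint
            }
            return candidates.get(Math.floorMod(next.getAndIncrement(), candidates.size()));
        }
    }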

The main issues the Proxy binding deals with are awareness and transports. Awareness is the ability of the Proxy binding to advertise its willingness to share its services and to locate other Proxy bindings with a similar desire. Once awareness is resolved, the next issue is how JBI MessageExchanges (MEs), which contain NormalizedMessages (NMs), are transported between instances.

Awareness

Awareness is a long-standing and well-studied problem. JXTA and JGroups are projects that have evolved into good low-level solutions. Shoal/GMS is a more recent project that provides a high-level solution layered over a range of low-level technologies, including JXTA and JGroups. Shoal is a powerful yet very easy to use project that can be used in a standalone fashion or as a service provided by the GlassFish Application Server. GlassFish itself uses Shoal to manage cluster awareness, to provide the basis for automatic XA transaction recovery when cluster instances fail, and to provide distributed state caching for EJBs.

The Proxy binding uses Shoal as its default awareness mechanism. The awareness mechanism is designed with pluggability in mind, allowing different awareness mechanisms to be created and used in the future.
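
To make the plug-point concrete, the following sketch shows what an awareness SPI of this kind might look like; the interface and its names are hypothetical illustrations, not OpenESB's actual API. The Shoal-based default would be one implementation of it.

    import java.util.Collection;

    // Hypothetical awareness SPI; the Shoal-based default would implement it.
    public interface AwarenessProvider {

        // Callback invoked as members join or leave the group.
        interface MembershipListener {
            void memberJoined(String instanceId);
            void memberLeft(String instanceId);
        }

        // Join the named group and begin advertising this instance.
        void join(String groupName, String instanceId, MembershipListener listener);

        // Leave the group and stop advertising.
        void leave();

        // The currently known members of the group.
        Collection<String> members();
    }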

Transports

There are plenty of transport protocols to choose from. By default, the Proxy binding uses Shoal as its transport. Shoal provides messaging primitives that allow for point-to-point, one-to-many, and one-to-all addressing. The Proxy binding uses one-to-all for messages related to service endpoint changes, and point-to-point for messages carrying ME+NM content between components. Various other transport protocols are under investigation, exploring two themes: performance and reliability.
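
A transport plug-point could mirror these addressing modes directly. The sketch below is hypothetical; it simply names the operations described above.

    // Hypothetical transport SPI naming the addressing modes described above.
    public interface Transport {

        // Deliver a serialized ME+NM payload to one specific instance.
        void sendPointToPoint(String instanceId, byte[] payload);

        // One-to-all broadcast, e.g. for service endpoint change notifications.
        void broadcast(byte[] payload);

        // Register the handler for inbound payloads from other instances.
        void setReceiver(Receiver receiver);

        interface Receiver {
            void onMessage(String fromInstanceId, byte[] payload);
        }
    }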

Transport performance is measured in terms of messages per second and total byte throughput. HTTP is an obvious candidate. Ultimate performance would likely come from some form of low-level TCP or UDP transport. A transport layered with SSL/TLS for security and integrity would satisfy most requirements. The basic thrust is to provide a high-volume transport for use when reliability is not required or is maintained in a different manner.

Transport reliability has similar performance concerns, but the interesting benefit is once-and-only-once delivery. This is typically accomplished using some form of persistent storage, most commonly a message queuing system. A message queuing system can also hide availability problems by storing messages until the receiving system becomes available, which greatly enhances the overall capabilities of applications deployed on this infrastructure. The JMS message queuing system is bundled with GlassFish, making JMS a natural transport target. Because JMS is XA transaction aware, the Proxy binding could take part in transactions, which would greatly enhance its capability. Additionally, the effect of having the JMS transport is similar to splicing a JMS binding component between two components; doing it as part of the transport saves some overhead.
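
As a sketch of the idea, a JMS-backed transport could give each instance its own inbound queue and implement point-to-point delivery as a send to the target instance's queue. The code below uses the standard javax.jms API; the queue-naming convention and payload serialization are assumptions for illustration.

    import javax.jms.BytesMessage;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // Sketch of a JMS-backed transport; each instance owns one inbound queue.
    public class JmsTransportSketch {

        private final ConnectionFactory factory;
        private Connection connection;

        public JmsTransportSketch(ConnectionFactory factory) {
            this.factory = factory;
        }

        public void start() throws JMSException {
            connection = factory.createConnection();
            connection.start();
        }

        // Send a serialized ME+NM payload to the queue owned by one instance.
        public void sendPointToPoint(String instanceId, byte[] payload) throws JMSException {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            try {
                Queue queue = session.createQueue("esb.pb." + instanceId); // assumed naming
                MessageProducer producer = session.createProducer(queue);
                BytesMessage message = session.createBytesMessage();
                message.writeBytes(payload);
                producer.send(message);
            } finally {
                session.close();
            }
        }
    }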

Having choices at the transport level helps map application requirements to platform capabilities. It should also be possible to select a transport type based on quality of service annotations in the ME. From the Proxy binding's point of view, this just means supporting multiple simultaneous transports between instances and selecting the one to use based on whatever information is available.

Example Details

The previous heterogeneous illustration depicts three separate OpenESB instances. It could implement the same application as the homogeneous example. The differences in this case are: there is only a single web-facing HTTP binding component, and there are two different BPEL service engine components that each implement the same service (epB).

Some important points to note:

  • Each instance contains a Proxy binding component that maintains the distributed awareness and performs any messaging transparently between the instances.
  • When the NMR in Instance-1 is presented with a message addressed to endpoint epB, it has a choice to route to Instance-2 or Instance-3. By default, the NMR currently load-balances between available instances, basing the decision on the instance with the lowest count of active MEs (see the sketch after this list).
  • Heterogeneous instances have a range of placement options: they could be on the same operating system image, on opposite ends of the world, or anywhere in between.
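
The following sketch illustrates that routing decision: among the instances providing epB, pick the one with the fewest active MEs. The bookkeeping shown is hypothetical, not the NMR's actual implementation.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical sketch of least-loaded routing across providing instances.
    public class LeastLoadedRouter {

        private final Map<String, AtomicInteger> activeExchanges = new ConcurrentHashMap<>();

        private AtomicInteger countFor(String instanceId) {
            return activeExchanges.computeIfAbsent(instanceId, k -> new AtomicInteger());
        }

        // Choose the provider with the fewest MEs currently in flight.
        public String selectInstance(List<String> providers) {
            String best = null;
            int bestCount = Integer.MAX_VALUE;
            for (String id : providers) {
                int count = countFor(id).get();
                if (count < bestCount) {
                    bestCount = count;
                    best = id;
                }
            }
            return best;
        }

        public void exchangeStarted(String instanceId) {
            countFor(instanceId).incrementAndGet();
        }

        public void exchangeCompleted(String instanceId) {
            countFor(instanceId).decrementAndGet();
        }
    }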

Combination

The previous combination illustration is a blending of the first two examples. We start with clustered web-facing instances fronted with a load balancer. Each cluster instance is also a member of a heterogeneous distributed system that connects with a third instance.

Some important points to note:

  • Homogeneous and heterogeneous topologies can be mixed. The Proxy binding component controls the grouping. In theory, an instance could belong to multiple groups (this hasn't been formally tested).
  • There are many reasons for choosing a mixed topology: licensing restrictions of third-party software, legacy system access, light loading, component architectural constraints (e.g. a component that doesn't function well in a cluster), or geographic separation.
  • When the NMR is presented with a request from HTTP for a BPEL resource, there is now a choice between the local BPEL instance and the remote BPEL instance. By default, a local instance is picked over a remote instance, and this decision can be made without involving the Proxy binding component. If a local BPEL engine terminates for whatever reason, requests will be routed to the surviving instance.

Component Qualities

The Normalized Message Router (NMR) is the part of the ESB that all components use to send and receive inter-component messages. The NMR concentrates on quickly switching messages between components. An ME either executes to completion or terminates; the mechanism is not designed to be persistent or recoverable. An ME can carry XA transaction information that can be used to make its actions transactional. Many of the components available for use in OpenESB are constructed using an intermediate layer that adds various systemic qualities on top of this simple primitive. The reliability qualities are the most important for this discussion; the most interesting include unique identifiers, retries, duplicate detection, and transactional exchanges.
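
For concreteness, this is roughly how a component hands a message to the NMR using the JSR 208 API; error handling is trimmed and the service QName is a placeholder.

    import javax.jbi.component.ComponentContext;
    import javax.jbi.messaging.DeliveryChannel;
    import javax.jbi.messaging.InOut;
    import javax.jbi.messaging.MessageExchangeFactory;
    import javax.jbi.messaging.MessagingException;
    import javax.jbi.messaging.NormalizedMessage;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;

    // Sending an InOut exchange through the NMR, per the JSR 208 API.
    public class NmrSendExample {

        public void invoke(ComponentContext context, Source content) throws MessagingException {
            DeliveryChannel channel = context.getDeliveryChannel();
            MessageExchangeFactory factory = channel.createExchangeFactory();

            InOut exchange = factory.createInOutExchange();
            exchange.setService(new QName("http://example.org/services", "epB")); // placeholder

            NormalizedMessage in = exchange.createMessage();
            in.setContent(content);
            exchange.setInMessage(in);

            // Blocks until the provider returns the out message, a fault, or an error.
            channel.sendSync(exchange);
        }
    }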

Most components support at-least-once messaging using a blend of the first three qualities. Failed messages are retried until a response is returned or a threshold is reached. Unique identifiers attached to the ME help the receiving component detect and reject replays of a duplicate request. Together, these actions provide an at-least-once level of service.
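
A minimal sketch of this pattern, with hypothetical names: the sender retries under the same unique identifier, and the receiver uses that identifier to reject replays.

    import java.util.Set;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of at-least-once delivery with duplicate detection.
    public class AtLeastOnceSketch {

        public interface Sender {
            // Attempt one delivery; return true when a response was received.
            boolean deliver(String messageId, byte[] payload);
        }

        // Sender side: retry with the same id until success or a threshold.
        public static boolean sendWithRetries(Sender sender, byte[] payload, int maxAttempts) {
            String messageId = UUID.randomUUID().toString(); // unique id attached to the ME
            for (int attempt = 0; attempt < maxAttempts; attempt++) {
                if (sender.deliver(messageId, payload)) {
                    return true;
                }
            }
            return false;
        }

        // Receiver side: duplicate detection keyed on the unique identifier.
        private final Set<String> seenIds = ConcurrentHashMap.newKeySet();

        public boolean accept(String messageId) {
            // Returns false for a replayed id, so each request is processed once.
            return seenIds.add(messageId);
        }
    }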

Some components support transactional actions. This is typically a by-product of having a persistent store available for component implementation reasons, combined with XA transaction support. This allows a transaction to encompass the action of an ME that spans two components. Using multiple MEs, a single transaction could extend to multiple components, but things tend to get unwieldy when transactions span too much breadth or too much time. MEs using transactions can have a once-and-only-once level of service.
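
The sketch below shows one way a sender might attach a JTA transaction to an exchange, using the property name defined by JSR 208. Obtaining the TransactionManager is left to the hosting environment, and error handling is simplified; the spec's full suspend/resume protocol has more edge cases than shown here.

    import javax.jbi.messaging.DeliveryChannel;
    import javax.jbi.messaging.MessageExchange;
    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;

    // Sketch of attaching a JTA transaction to an exchange; error paths simplified.
    public class TransactionalExchangeSketch {

        public void sendTransacted(DeliveryChannel channel, MessageExchange exchange,
                                   TransactionManager tm) throws Exception {
            tm.begin();
            Transaction tx = tm.getTransaction();
            try {
                // Attach the transaction so the providing component can enlist resources.
                exchange.setProperty(MessageExchange.JTA_TRANSACTION_PROPERTY_NAME, tx);
                tm.suspend(); // the sender suspends before handing the exchange to the NMR
                channel.sendSync(exchange);
                tm.resume(tx);
                tm.commit();
            } catch (Exception e) {
                tm.rollback(); // simplified: assumes the transaction is still active
                throw e;
            }
        }
    }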

A few components support simultaneous execution of the same component on different cluster instances using persistent shared state. This shared state can also be used to fail over work orphaned by a failed instance to an instance that is still running. This type of component is the most advanced in terms of flexibility from a reliability point of view. The Java EE Engine and the BPEL Engine are prime examples in this category, along with the JMS Binding to a lesser extent.

Example

The following illustration shows a two-instance cluster and a set of components that are either oblivious to, or aware of, running in a cluster.

Illustration 4: Component Qualities

Some important points to note:

  • XSLT is a stateless component that just applies an XSLT transform to an XML document. This operation can be retried without any material effect.
  • Enterprise Java Beans (EJB) persist their state in a database and also cache results in a distributed in-memory cache. Access by multiple instances is supported.
  • JMS maintains its message queues in some sort of persistent store, typically file based or in a database. A clustered instance is managed by a separate process to handle access from multiple instances.
  • BPEL can be configured to persist its internal state of a flow to a database. A failed BPEL instance can be recovered on a different cluster instance and the processing work load will migrate to an available BPEL instance as actions that reference the flow are processed.
  • EJB, JMS and BPEL all support XA transactions. This allows atomic actions to be performed as part of a JBI exchange. For example, a transaction could wrap the fetch from a JMS queue that is then sent to a BPEL engine and persisted as a variable in a flow (a sketch follows this list).
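
As a sketch of that last point, the following uses the standard XA flavor of the javax.jms API to make a queue fetch part of a JTA transaction before handing the message onward; forwardToBpel is a placeholder for normalizing the message and sending a transacted ME to the BPEL engine, and error handling is simplified.

    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.XAConnection;
    import javax.jms.XAConnectionFactory;
    import javax.jms.XASession;
    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;

    // Sketch: an XA transaction spanning a JMS dequeue and a hand-off toward BPEL.
    public class XaJmsFetchSketch {

        public void fetchAndForward(XAConnectionFactory xacf, Queue queue,
                                    TransactionManager tm) throws Exception {
            XAConnection connection = xacf.createXAConnection();
            try {
                connection.start();
                XASession session = connection.createXASession();

                tm.begin();
                Transaction tx = tm.getTransaction();
                // Enlist the session's XAResource so the receive joins the transaction.
                tx.enlistResource(session.getXAResource());

                MessageConsumer consumer = session.createConsumer(queue);
                Message message = consumer.receive(5000); // wait up to five seconds
                try {
                    if (message != null) {
                        forwardToBpel(message); // placeholder: transacted ME to BPEL
                    }
                    tm.commit(); // a failure before this point rolls the dequeue back
                } catch (Exception e) {
                    tm.rollback();
                    throw e;
                }
            } finally {
                connection.close();
            }
        }

        private void forwardToBpel(Message message) {
            // placeholder for normalizing the message and sending it over the NMR
        }
    }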

Summary

The goal of this article was to show how OpenESB has graduated from a single-JVM view of the world to include two different topologies that support distributed functionality. These two topologies are complementary.

Messaging qualities are even more important with this wider scope. The base NMR uses an in-memory messaging structure; components layered on the NMR add external protocols. Some of these external protocols, like JMS, are useful for connecting components more reliably than the components could manage on their own. The use of JMS as a distributed transport makes for a cohesive story, and the additional capability to make messages part of transactions adds to the power of the available solutions.

Additionally, we showed that components play a key part in the overall functionality of the end result. Having components implement a common, configurable set of systemic qualities gives the composite application designer the freedom to explore a large solution space. Larger isn't always better, but it is especially helpful when dealing with the integration of existing systems. Integration was the initial idea behind the spawning of JBI, but JBI will prove useful in other situations as well.

About the author

Derek Frankforth is a SOA/BI Senior Engineer at Sun Microsystems and a member of the team that helped define JBI, implemented the reference implementation, and developed OpenESB and GlassFish ESB; he is now working on Project Fuji. He joined Sun through the Forte acquisition, where he worked on the core and distributed runtimes. Before that, he helped bring the Ingres database system into the commercial world.
