
Evolutionary integration with ESBs

If we take a closer look at the majority of applications in a given organization, integration is done on an ad-hoc basis. As time goes by and the application portfolio grows, the interconnections between the systems and applications start looking like a coral reef: hard and stiff inside, with some life on the outside. Fumbling with tightly coupled integration points can produce a cascade of side effects, so nobody dares to correct the root cause of the problems. Often, people just fix the symptoms by adding a new layer on top of the coral reef of highly coupled systems and applications.

To avoid this pitfall, we need to shift the way we think. We need to accept that IT systems evolve, and that the way these systems are used in the organisation also changes. This can only be accomplished by choosing an integration strategy that is complementary to the evolution of the organisation's IT facilities.

This series of articles will show you how this can be achieved by using well-known integration patterns and the Open Source integration platform, Mule.

The Case

Kjetil and Rune are fed up with being IT professionals, and have decided to quit their jobs and become ski bums at Trysil, an alpine resort in Norway.

Somehow they have to make a living, so they decided to create PowderAlert, an online application that provides useful skiing information to other ski freaks.

PowderAlert #1

The first version of PowderAlert is quite simple. The main goal of the application is to test out the main principles of Kjetil and Rune's business idea: providing skiing information on request, with email as the communication channel. How to actually get paid for the information is still an unsolved question.

Figure 1 below describes the flow of the application.

Figure 1: Usage and main components of PowderAlert

  1. The end user sends an email containing the keyword "powder" to a specific email address.
  2. The PowderAlert application polls the email account at regular intervals, pops off the mails in the inbox, and stores the email address of the user.
  3. PowderAlert collects useful skiing information from a public site. The skiing info is sent as an email to the PowderAlert application.
  4. PowderAlert pulls off the emails containing skiing information at regular intervals.
  5. An email with the information is sent back to the user.
  6. The user eventually reads it.

Current implementation

After a long night of partying at 'Laaven', the well-known after-ski place in Trysil, the snowdudes decided to write a quick and dirty PowderAlert application. Since they are mostly focused on cruising the powder, not reporting it, they had to find some way to provide this information automatically. They searched the net for sites providing this kind of information and ended up with the Norwegian site Skiinfo. Skiinfo provides both powder alarm emails and SMS messages, as well as daily snow depth information.

The snowdudes started hacking out a real simple application using the Spring framework. Spring has nice support for sending emails through its JavaMailSender and SimpleMailMessage. They also needed a lightweight database for storing users and decided to start out with Hypersonic SQL. SQL is surely not one of the snowdudes' favourite languages, so they threw Hibernate and annotations into the mix. That said, no Java program developed in 2006 can be done without the new language features of Java 5.

Of course they also wanted to dive into Maven 2 and decided to set up their project according to best practices and standards. One of these practices is of course Test Driven Development, something that is kind of hard to do off-line in a camper when mail servers are such a central point in the application. Dumbster to the rescue! Dumbster is a very simple fake SMTP server designed for unit testing.

The application consists of two modules, core and web. The core contains the domain model and all the services for polling the mail server, querying the database and sending out emails to powder addicts. The web part mainly consists of the bootstrapping servlet that handles the polling and the user interface.

The figures below show the main functionality of PowderAlert v.1.

Figure 2: Main Use Cases PowderAlert #1

As shown in figure 2, the actors involved are the user that subscribes for Powder Alerts, a mail server, and finally the Ski Info site providing PowderAlert with skiing information.

To keep things simple, PowderAlert contains four Use Cases:

  • Register Powder Alert User
  • Send Powder Alerts
  • Get Powder Alert Users at Location
  • Receive Powder Alarm

The Powder Alert Use Cases

The Powder Alert Use Cases listed above deserve some detailing to describe how the system actually works. We have used a kind of System Sequence Diagram to document this. The actual UML notation is not quite correct, so please, Mr Craig Larman, don't punish us with more than twenty lashes this time for breaking the guidelines in your excellent book Applying UML and Patterns.

Figure 3: Registration Procedure

Figure 3 above shows a sequence diagram elaborating the registration process. Nothing very exciting here, but worth mentioning is that the user registers with the skiing locations of interest. This information is used later when powder alarms are received from the Ski Info site.

Receiving the powder alarms is detailed in Figure 4 below.

Figure 4: Sending Powder Alerts

The Powder Alert system receives powder information from the external system per location. As shown in Figure 4, the users subscribing to powder info for the given location are retrieved, and the skiing information is forwarded to them as a Powder Alert.

The two remaining Use Cases, "Get Powder Alert Users at Location" and "Receive Powder Alarm", are very simple. We add the sequence diagrams (Figure 5 and Figure 6) more or less to complete the picture.

Figure 5: Retrieving Powder Alarms from Ski Info

Figure 6: Getting Users subscribing to a Powder Location

Coding It

Domain classes

To start out, the system needs some domain classes; the only obvious ones are User and PowderPlace. However, some helper classes to transport data are also needed: one for the subscription email from the user, and one for the Skiinfo alarm email message. For now the info.powderalert.domain package only contains the real domain classes, while the others are left in the infrastructure package; more on that later.

@Entity
public class User implements Serializable {
    @Id @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String email;
    private String firstName;
    private String lastName;
    @ManyToMany(fetch = FetchType.EAGER)
    private List<PowderPlace> powderPlaces;
    // ...
}

@Entity
public class PowderPlace {
    @Id @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "powderPlaceId")
    private Long id;
    private String name;
    @ManyToMany(mappedBy = "powderPlaces")
    private List<User> users;
    // ...
}


Having analyzed the diagrams, four services are identified.


MailService is responsible for sending and receiving email to and from the mail server and converting the messages to readable POJOs for the other services.

public interface MailService {
    void sendMail(User user, MailMessage message);
    List<MailMessage> getMail();
}


AdminService adds administration capabilities to the system; it is responsible for adding, deleting and listing Users and PowderPlaces.

public interface AdminService {
    User addUser(User user);
    User modifyUser(User user);
    User getUser(Long id);
    User getUser(String email);
    boolean removeUser(User user);
    List<User> getUsers();
    List<PowderPlace> listPowderPlaces();
    void addPowderPlace(PowderPlace place);
    PowderPlace getPowderPlace(Long id);
    PowderPlace getPowderPlace(String name);
    void removePowderPlace(PowderPlace place);
    void modifyPowderPlace(PowderPlace place);
}


PowderAlertService is responsible for getting the powder alert to the registered users.

public interface PowderAlertService {
    void powderAlert(PowderPlace place);
}


UserService takes care of alert subscriptions in the system.

public interface UserService {
    void subscribe(User user);
    void unsubscribe(User user);
}

Data access

Well, data has to be stored; unfortunately, that's always a hassle. Luckily Spring, Hibernate and annotations make it easier; you probably noticed the annotated domain classes already ;). Currently there are two DAO classes, one for handling the users and one for the powder places.

public interface UserDAO {
    void addUser(User user);
    void updateUser(User user);
    void removeUser(User user);
    User getUser(Long id);
    User getUser(String email);
    List<User> getUsers();
    List<User> getUsers(String location);
}

public interface PowderPlaceDAO {
    void addPowderPlace(PowderPlace place);
    void updatePowderPlace(PowderPlace place);
    void removePowderPlace(PowderPlace place);
    PowderPlace getPowderPlace(Long id);
    PowderPlace getPowderPlace(String name);
    List<PowderPlace> getPowderPlaces();
}

Infrastructure components

Spring helps us along with some of the infrastructure stuff, like wiring, lookups and easier access to the Java Mail API. However, we still need a lot of code to make things work. For example, we need a Servlet bootstrapping the whole thing, some timers to poll the mail server, as mentioned before some data helper classes for converting email messages to POJOs, and finally some property classes to store miscellaneous stuff like usernames and passwords. For now these components are left out, and provided for you to download (see the resources section). The same goes for the implementations of all the interfaces listed above.
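To give a feel for what that timer infrastructure boils down to, here is a minimal, self-contained sketch of a polling loop in plain Java. The class name and the counter standing in for the mail server are ours, invented for this sketch; the real code polls a POP3 account instead.

```java
import java.util.Timer;
import java.util.TimerTask;

// A timer wakes up at a fixed interval and "checks the inbox".
// A counter stands in for the mail server (hypothetical stand-in).
public class PollingSketch {
    public static void main(String[] args) throws InterruptedException {
        final int[] polls = {0};
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            public void run() {
                polls[0]++;                 // the real code would fetch mail here
                System.out.println("poll #" + polls[0]);
            }
        }, 0, 100);                          // poll every 100 ms (demo interval)
        Thread.sleep(350);
        timer.cancel();                      // shut the poller down cleanly
    }
}
```

This is roughly what the bootstrapping servlet sets in motion; Mule will later make this hand-rolled plumbing unnecessary.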

The first Grub of the Coral Reef

Nothing is really wrong with PowderAlert #1. It is well organised, it uses Spring to do the wiring, it uses a handful of design patterns, etc. When we look at the layering of PowderAlert #1, it is a decent application. But when it comes to being flexible regarding its integration with other systems, evolving new services, and scaling, it looks like the beginning of a coral reef.

This is a very typical situation which we have identified on numerous projects doing some kind of integration with other systems. The competence covering the individual system is very good, but the skills required for distributed computing and integration are either not present or ignored.

To be able to create applications that cope with changing requirements and evolving portfolios of services, we need a platform or tool that can act as a foundation able to face these challenges.

To be more precise, a collection of tools would be preferable, more like a Leatherman™. Let's see what our two heroes can learn from the after-ski.

Meeting Ross at the Afterski

Ladies and Gentlemen, let us proudly present ... Mule!

We were seeking an IT version of a Leatherman™ tool, and we got it! We got hold of an open source Enterprise Service Bus incarnated by the Mule project.

Mule is a messaging platform based on ideas from ESB architectures. In short, it is a lightweight messaging framework that uses your existing technology infrastructure (JMS, Web Services, email, etc.) to build composite service applications.

A key characteristic of the Mule design is to be as flexible and extensible as possible; it can be thought of as the Spring framework for integration.

The Mule framework provides a highly scalable service environment in which you can deploy your business components. It manages all the interactions between components transparently, whether they exist in the same VM or over the internet, and regardless of the underlying transport used. It is non-intrusive: any object can be managed by the container, and there is no need to extend any classes or implement any interfaces. Your services and logic are not infected, and will not suffer framework lock-in. This means that it is well suited for testing and downscaling during development.

This is what we call environment transparency, and it is one of the main features of Mule. It enables the snowdudes to develop their entire application on a laptop on their way up the mountains; when they finally get online, they just drop the newly developed code onto a server, change the deployment configuration, and run it in a distributed environment if needed.

Other core features of the framework are JBI integration, Web Service integration (using Axis, Glue or XFire), Spring framework integration, a SEDA-based processing model, REST support, declarative and programmatic transaction support (including XA), and end-to-end support for routing, transport and transformation of events.

A complete feature list and lots of good introductory material can be found at the Mule site.

Leveraging Mule

Okay, we got hold of a powerful collection of tools, but what we need now is some kind of user manual that actually tells us which tool is best for the job. We need to know that we are not literally using a sledgehammer to hammer in nails.

It should be no big surprise that, as in other computer science disciplines, somebody has faced the same challenges before and has collected the resulting best practices in the form of patterns. So to help us out with these integration problems we have (hold your breath...) Integration Patterns.

If we compare the content of Gregor Hohpe's book "Enterprise Integration Patterns" with the Mule architecture overview shown in Figure 7, we can actually recognize many of the patterns described. It is highly recommended to read this book. It is really good. We feel obligated to raise one little warning regarding the book: it is not suitable for bed-time reading, because it doesn't put you to sleep...

Another very useful resource is Gregor's Enterprise Integration Patterns (EIP) site. It lists all patterns from the book, along with other useful information like blogs and articles.

Figure 7: Architecture Overview

Anyway, back to the Mule architecture and its use of Enterprise Integration Patterns. We have the integration patterns lined up between those two "Application" bubbles at each end of the Architecture Overview figure.

First we have something called "Channel". The main purpose of this component is to communicate data between two endpoints, and on Gregor's site we can find the corresponding pattern in the incarnation of Message Channel.
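In plain Java, and under no particular Mule API, a Message Channel can be pictured as a queue that decouples the sender from the receiver. The class and variable names below are ours, invented for this sketch:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal in-VM message channel: sender and receiver only share the
// queue, not each other's interfaces. (Illustrative only -- Mule's real
// channels are provided by its transports.)
public class ChannelSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new LinkedBlockingQueue<String>();

        // One endpoint puts an event onto the channel...
        channel.put("powder@trysil");

        // ...and the other endpoint consumes it, unaware of the sender.
        String event = channel.take();
        System.out.println("received: " + event);
    }
}
```

The important point is that neither side holds a reference to the other; all coupling goes through the channel.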

The next pattern coming up, for those who can read sideways, is hidden in the "Message Receiver" name. A Message Receiver is used to read or receive data from the application. In Mule, a Receiver is just one element of a transport provider, and Mule provides many transports, such as JMS, SOAP, HTTP, TCP, XMPP, SMTP, file, etc.

The relevant EIP in the case of Message Receiver is Message Endpoint. The main purpose of a Message Endpoint is to connect an application to a messaging channel. Thus the Message Endpoint pattern encapsulates the messaging system from the application, and customizes a general messaging API towards a specific application's interface. By doing this encapsulation we achieve messaging transport transparency. A Message Endpoint is a specialized Channel Adapter that has been custom developed for and integrated into its application.

The next thing coming up is a "Connector". The Message Receiver is coupled to this thingy to be able to communicate in the Mule way of walking and talking. The Connector takes channel-specific requests at one end, and connects to Mule components at the other end, talking in UMOEvents.

Again, for those who can read sideways, we have the "Transformers" box. The relevant EIP in this case is Message Translator. The main purpose of the Message Translator is to enable systems using different data formats to communicate with each other using messaging. If you are looking into translating message payloads, have a look at "Introduction to Message Transformation" on Gregor's site. The knowledge gained by reading this intro serves as a good primer for the problem area.

Back to Mule: Transformers are used to transform message or event payloads to and from different types. Mule does not define a standard message format (though Mule can support standard business process definition message types). So the transformation provided out of the box consists of 'type transformations' such as JMS Message to Object, standard XML transformers and standard protocol transformers. Data transformation is very subjective to the application, and Mule provides a simple yet powerful transformation framework.
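As a rough, self-contained illustration of the Message Translator idea: the mail-body format, class name and method below are invented for this sketch, not taken from PowderAlert's real transformers.

```java
// Translates a raw subscription mail body such as
// "powder;kjetil@example.com;Trysil" into typed data the service layer
// understands. The semicolon format is hypothetical.
public class SubscriptionMailTranslator {

    public static String[] translate(String mailBody) {
        String[] parts = mailBody.split(";");
        if (parts.length != 3 || !"powder".equals(parts[0])) {
            throw new IllegalArgumentException("not a subscription mail: " + mailBody);
        }
        return new String[] { parts[1], parts[2] }; // email, location
    }

    public static void main(String[] args) {
        String[] user = translate("powder;kjetil@example.com;Trysil");
        System.out.println(user[0] + " -> " + user[1]);
    }
}
```

A real Mule transformer would extend Mule's transformer base class and return a domain object, but the job is the same: messaging format in, application type out.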

Next in line is the "Inbound Router". Here we have another instance of the Enterprise Integration Patterns: the Message Router. A Message Router's main purpose is to consume messages from one Message Channel and republish them to different Message Channels depending on a set of conditions.

When it comes to the concrete implementation of the Message Router, the Inbound Router, it can be used to control and manipulate events received by a component. Typically, an inbound router can be used to filter incoming events, aggregate a set of incoming events, or re-sequence events as they are received.
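The filtering case can be sketched in a few lines of plain Java. Again, the class name and the keyword rule are ours; Mule's real inbound routers are declared on the component's descriptor in the configuration.

```java
import java.util.ArrayList;
import java.util.List;

// A toy inbound router: messages whose body contains the keyword are
// forwarded to the component, everything else is dropped.
public class FilteringRouterSketch {

    public static List<String> route(List<String> incoming) {
        List<String> accepted = new ArrayList<String>();
        for (String msg : incoming) {
            if (msg.contains("powder")) {   // the filter condition
                accepted.add(msg);
            }
        }
        return accepted;
    }

    public static void main(String[] args) {
        List<String> in = new ArrayList<String>();
        in.add("powder at Trysil");
        in.add("spam offer");
        System.out.println(route(in));
    }
}
```

The component behind the router never sees the dropped messages, which keeps filtering logic out of the business code.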

Now we have done more or less all the secret handshakes needed to reach the soul of the Mule: the UMO Component.

To be sure that the UMO Component gets the proper respect (hey, we're talking about someone's soul...), it gets its own headline. Here it comes:

The UMO Component

Central to the architecture of Mule are single autonomous components that can interact without being bound by the source, transport or delivery of data. These components are called UMO Components and can be arranged to work with one another in various ways. They can be configured to accept data from a number of different sources, and can send data back to these sources.

Figure 8: The UMO component

The 'UMO Impl' specified above actually refers to an object. This object can be anything: a JavaBean, an EJB, or a component from another framework. It is your client code that actually does something with the events received. Mule places no restrictions on your object, except that if it is configured by Mule directly, it must have a default constructor (if configured via Spring or Pico, Mule imposes no conventions on the objects it manages).

It is pretty obvious that UMO components are powerful and flexible stuff. Much like kids, really. Full of vitality, creativity, and speed... Imagine that while you are at work you have to put your kids in a store filled with Venetian glass from the 17th century. With no supervision. Guess you would be kind of nervous then... So, with the UMO components as the kids, a kindergarten could be a good idea. The Mule container is the kindergarten for the UMO components, and the nannies are the Mule Manager and the Model. They are shown in Figure 9 below. The Mule Manager sees to it that all UMO components get their brotherly share of the resources, and ensures that there is no fighting amongst them.

Figure 9: The Mule Server Components

Of these two nannies, the Mule Manager acts as the chief nanny. It manages how the kindergarten is run: how many kids in each department, the opening hours, etc. In Mule terms, the Mule Manager manages the configuration of all core services for the Model and the components it in turn manages. Wow, that was a lot of management, folks!

We must not forget the other nanny in the Mule kindergarten, the Model. The Model deals with our UMO kids on a day-to-day basis. It comforts them, feeds them, protects them from the hostile environment surrounding the kindergarten, and ensures that each of the UMO kids plays with equals.

The Model has three control mechanisms that determine how Mule interacts with its UMO kids. The first is the Entry Point Resolver. An Entry Point Resolver is used to determine which method to invoke on a UMO component when an event is received for its consumption.
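The core of that idea can be sketched with reflection: given a component and an event payload, find a public method that accepts the payload's type and invoke it. Mule's real resolvers are considerably more sophisticated; the class names here are ours.

```java
import java.lang.reflect.Method;

// Sketch of an entry point resolver: match the payload type against the
// component's public methods and invoke the first fit.
public class EntryPointResolverSketch {

    static Object invokeEntryPoint(Object component, Object payload) throws Exception {
        for (Method m : component.getClass().getMethods()) {
            if (m.getDeclaringClass() == Object.class) continue; // skip equals(), wait(), etc.
            Class<?>[] params = m.getParameterTypes();
            if (params.length == 1 && params[0].isInstance(payload)) {
                return m.invoke(component, payload);
            }
        }
        throw new NoSuchMethodException("no entry point for " + payload.getClass());
    }

    // A component with a single candidate entry point (hypothetical).
    public static class EchoComponent {
        public String onEvent(String body) { return "handled: " + body; }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invokeEntryPoint(new EchoComponent(), "powder!"));
    }
}
```

This is why a plain JavaBean can be a UMO: the resolver, not an interface, decides how events reach it.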

The Lifecycle Adapter is the second control mechanism used by the Mule Model. It is responsible for mapping the Mule component lifecycle onto the underlying component.

The last control mechanism is the Component Pool Factory. It is responsible for creating proper component pools for a given UMO component; thus, departments for our UMO kids...

Okay! Back to our two lazy heroes. How and where should they apply Mule to their PowderAlert application?

The Shift towards Mule

Figure 10: Mule-fied PowderAlert, first version

Figure 10 above shows a simplified sketch of how the first version of PowderAlert looks in a Mule costume. In this version, we use the VM transport in Mule as a simple communication strategy. No messaging is involved... yet. Later articles in this series will introduce messaging as well.

So, to explain how Mule is leveraged in PowderAlert, we need to have a closer look at the architecture and the components that make up the application.

Initially, you need to determine the services your application provides and consumes, and whether each service can be classified as external or internal. The rationale for doing this exercise is to determine the options available for integrating the particular service into Mule.

In this context, the classification of the service determines to what extent it is possible to alter services in your architecture. At one end of the scale we have internal services, developed by yourself, that can be altered with no impact on other services. At the other extreme are external services with given interfaces that must be accepted as-is, with absolutely no possibility of being altered.

Applying this definition to Mule, it can be stated in general terms that there are more options for integrating an internal service with Mule than an external one. Thus an internal service can be very tightly integrated with Mule, and if you fill out the right application form it can potentially become a UMO kid in the Mule kindergarten!

Okay! If we take a step back and have another look at Figure 1 above, we can, generally speaking, divide the PowderAlert application into three coarse-grained services:

  • The Mail server
  • The SkiInfo site
  • The PowderAlert core service(s)

We don't have any influence on the mail server or the SkiInfo site. The way they talk and walk is fixed. Thus they can be classified as external services; as a result, very tight integration with Mule would not really give us what we want. We need something that can connect to the services' interfaces, and is able to transform the payload from component-specific information elements to UMO information objects.

If we take a closer look at Figure 7 above, the Endpoint component stands out as a candidate for integrating the mail server and the SkiInfo site with Mule. As the figure shows, the SkiInfo site communicates with PowderAlert by sending emails, so only one transport is needed to handle both the communication with the PowderAlert users and the SkiInfo site. We have in this case one technical integration point (email), but the semantics of the payload are different for SkiInfo and PowderAlert messages. This will be handled by two different email endpoints, each with a Transformer responsible for transforming the given SMTP message into the corresponding UMOEvent object.

That leaves the set of PowderAlert core services (PowderAlertService and UserService). This is the actual business logic in the application, and it is possible to alter these services since they are developed by the ski bums themselves. Thus PowderAlert can be classified as a set of internal services, and we can be sure it is possible to train the components to actually be UMO kids. You know, wipe their noses, keep them out of trouble, teach them the Mule song, etc. Then we can send them to the Mule kindergarten to be watched by the Mule nannies.

Then we are left with the following tasks:

  • Create Mule Endpoints interfacing the mail server.
  • Convert the PowderAlert core services to true UMO components.
  • Create Transformers to convert email messages to proper UMOEvent objects.

Creating the Endpoints

Creating? Configuring would be the right term. Since Mule has all these wonderful out-of-the-box endpoints, we just dive into the documentation and start copying from there.

In PowderAlert 1.0 we already created two mail accounts: one for users who wish to register for alerts at a given powder place, and one for receiving info from the Skiinfo site. Easy as it was in the first version, polling the mail server using the commons-net library still required quite a lot of code. Enter Mule's POP3 endpoint!

Now we can throw all of that code away and replace it with this configuration:

<endpoint address="pop3://"/>

Since we have two email accounts, we need two instances of this endpoint; the only difference is the username in the URI.
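Assuming hypothetical account and host names, the two instances could look something like this; treat it as a sketch of the URI shape, not the project's actual configuration (the exact attributes depend on the Mule version):

```xml
<!-- one endpoint per account; only the user in the URI differs -->
<endpoint address="pop3://subscribe:password@mail.example.com"/>
<endpoint address="pop3://skiinfo:password@mail.example.com"/>
```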

As for sending email, we got off easy using the functionality provided by Spring; however, it still required some code and some configuration. We'll throw that out too, and replace it with this:

<endpoint address="smtp://"/>

That's it for the endpoints! Now we start looking at the services we already coded in the first version. As mentioned before, we have some strong candidates running for the UMO title.

Sending the PowderAlert core services to kindergarten

When looking at the first version we see two services that stand out as the real workers in the application:

  • UserService, which handles the registration of users
  • PowderAlertService, which sends out alerts to the users

These will be our UMO's. This sounds like a long process, converting these services to real Mule UMO's. Hang on! What was a UMO again? It could be a regular JavaBean, couldn't it? Let's have a look at the implementations:


public class PowderAlertServiceImpl implements PowderAlertService {
    public void powderAlert(PowderPlace place) {
        List<User> users = userDAO.getUsers(place.getName());
        for (User u : users) {
            MailMessage msg = new PowderAlertMessage();
            msg.setBody("Powder alert! " + place.getName()
                + " has " + place.getCurrentSnowDepth()
                + " of FRESH snow!!!");
            mailService.sendMail(u, msg);
        }
    }
}

public class UserServiceImpl implements UserService {
    public void subscribe(User user) {
        // lookup powderplace
        String place = user.getPowderPlaces().get(0).getName();
        PowderPlace pp = powderPlaceDAO.getPowderPlace(place);
        // check if user exists, in that case update user
        User u = userDAO.getUser(user.getEmail());
        if (null != u) {
            // ... update the existing user's subscriptions
        } else {
            List<PowderPlace> pps = new ArrayList<PowderPlace>();
            // ... register the new user (rest of the method elided)
        }
    }
}

These services are PERFECT UMO's; we can actually use them as-is. The only thing we need to do is add some transformers that transform email messages into domain objects understandable by the services. First we'll have a look at the whole configuration to understand how this works.

<mule-configuration id="PowderAlert" version="1.0">
    <transformers>
        <transformer name="PowderAlarmToPowderAlert" .../>
        <transformer name="SubscriptionMailToUser" .../>
    </transformers>
    <!-- The Mule model initialises and manages your UMO components -->
    <model name="PowderUMO">
        <mule-descriptor name="PowderAlertUMO" ...>
            <endpoint address="pop3://" .../>
            <endpoint address="smtp://" .../>
        </mule-descriptor>
        <mule-descriptor name="UserUMO" ...>
            <endpoint address="pop3://" .../>
        </mule-descriptor>
    </model>
</mule-configuration>

Here is a graphical representation of the configuration. The graph was generated with the "Config Grapher" provided with the Mule distribution.

The transformers do not show up in the graph, but if you look at the <transformers> section in the XML you will see that we have defined two transformers:

  • PowderAlarmToPowderAlert
  • SubscriptionMailToUser

The names are quite self-explanatory. Both transformers receive mail messages from the POP3 endpoints and convert them into domain objects, which the defined UMO's then receive.

As for the SMTP endpoint, we use the provided ObjectToMimeMessage transformer to convert our message so it can be sent by the endpoint.

Wrapping up

In this article we have argued that doing point-to-point integrations will eventually create a computer incarnation of a coral reef: hard and stiff inside, with some life on the outside. Altering the different integration points will eventually be impossible without risking unforeseeable side effects.

To illustrate this we introduced a simple application, PowderAlert, the first version of which is fairly well designed. But still, as this article showed, the application architecture was not flexible enough to respond to future requirements related to new integration points and scaling.

To face these challenges, we introduced Mule, a messaging platform based on ideas from ESB architectures. In short, it is a lightweight messaging framework, designed to be as flexible and extensible as possible. Mule can be thought of as the Spring framework for integration.

Mule is used in this article to make integration with the other systems transparent to the main application, PowderAlert. By doing this, it is possible to integrate with new systems with minimal impact on the PowderAlert application itself.

In the next article we will have a closer look at how we can use Mule to scale the application to handle more requests from powder geeks wanting updated snow info, and, as demand requires, to support other distribution channels such as SMS.

The next article will also introduce the Mule IDE as a pleasant spice in the soupy world of ESBs.

Code disclaimer

As you probably noticed, there are a few holes in the code described here. Try to focus on the concepts and not the actual code; it's meant as a reference only. You will soon enough get a chance to download the code and tear it apart.



About the Authors

Rune Schumann ( is a Senior Architect at Bouvet with experience with large distributed Java systems since 1999. He has worked for several years as an architect, developer and project manager on SOA systems in both the telecom and retail businesses. He has also given SOA talks at international conferences as well as at the Norwegian JUG.

Kjetil H. Paulsen ( is currently a Senior Software Architect at Bouvet, with experience since 1996. He has worked on several large distributed Java systems as an architect and lead developer, mainly in the finance business. Kjetil is also a board member of javaBin (the Norwegian JUG), which among other things hosts the annual, well-known JavaZone conference. Kjetil has also contributed to several open source projects.

