Apollo 1.0 Released, Next Generation ActiveMQ

Apache Apollo 1.0, an ActiveMQ subproject, has just been released. Apollo's new threading model, geared to multi-core processors, makes it faster, more scalable, and more reliable than ActiveMQ and perhaps many other messaging projects.

Apollo 1.0 features:

  • STOMP 1.0 wire protocol,
  • STOMP 1.1 wire protocol (a raw-frame sketch follows this list),
  • topics and queues,
  • queue browsers,
  • durable subscriptions for topics,
  • reliable messaging,
  • JMS API,
  • and much more
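
For readers who haven't seen the wire protocol, STOMP is just NUL-terminated text frames over a socket, which is what makes it easy to implement in any language. Below is a minimal, illustrative Java sketch of connecting and sending one message; the host, port (61613 is the conventional STOMP port), destination name, and lack of authentication headers are assumptions made for the example, not Apollo defaults.

    // Illustrative only: connect to a STOMP broker and send one message using raw frames.
    // A real client would also read the broker's CONNECTED/RECEIPT frames and, if the
    // broker requires authentication, add login/passcode headers to CONNECT.
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class StompSendSketch {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 61613)) {   // assumed host/port
                OutputStream out = socket.getOutputStream();

                // CONNECT frame: command, headers, blank line, then a NUL terminator.
                String connect = "CONNECT\naccept-version:1.1\nhost:localhost\n\n\0";
                out.write(connect.getBytes(StandardCharsets.UTF_8));

                // SEND frame: publish a small text body to an assumed queue name.
                String send = "SEND\ndestination:/queue/example\n\nhello apollo\0";
                out.write(send.getBytes(StandardCharsets.UTF_8));

                out.flush();
            }
        }
    }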

To demonstrate the raw performance of Apollo, Hiram Chirino, lead developer on Apollo, created a set of STOMP benchmarks to provide an apples-to-apples comparison of the major STOMP vendors (JBoss HornetQ, ActiveMQ, and Apollo). Some questioned the validity of those results, since most users will use the JMS API.

Thus, Hiram created a new set of JMS benchmarks to quell any doubts about Apollo's speed. The benchmarks speak for themselves: Apollo does quite well and in most scenarios appears to be the clear winner against JBoss HornetQ and Apache ActiveMQ.

In the Apache Apollo 1.0 release announcement, Hiram explained that Apollo was developed as a subproject because it is a radical departure from ActiveMQ, built to address the processor market's shift to multi-core. The current version of ActiveMQ employs complex thread locking that becomes a bottleneck when you try to take advantage of multi-core machines. Since the Apollo subproject has shown itself to be a reliable speed demon, it will likely be integrated into ActiveMQ 6.

InfoQ caught up with Hiram to cover the Apache Apollo 1.0 release:

InfoQ: Can you briefly describe what types of optimization were done in Apollo to address the multi-core microprocessors?

Apollo has switched to a fully asynchronous processing model based on the ideas behind libdispatch. This model avoids excessive context switching since it uses a small, fixed number of threads to process all operations.

Libdispatch is the library behind Grand Central Dispatch on OS X. As part of the Apollo effort, Hiram led another project, HawtDispatch, to build a libdispatch/Grand Central-style library for Java. The HawtDispatch library endeavors to follow the same principles as libdispatch and Grand Central, and Apollo uses it to take advantage of multiple cores.
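
As a rough illustration of what that model looks like to application code, the sketch below funnels all work for a single connection through a HawtDispatch serial queue, so the connection's state needs no locks while a small, shared thread pool drives every queue. The class and field names are invented for the example, and the HawtDispatch calls (Dispatch.createQueue, DispatchQueue.execute) should be checked against the library's documentation.

    // Sketch: a per-connection serial dispatch queue in the HawtDispatch style.
    import org.fusesource.hawtdispatch.Dispatch;
    import org.fusesource.hawtdispatch.DispatchQueue;

    public class ConnectionActor {
        // Tasks submitted to this queue run one at a time, but different
        // connections' queues run in parallel on a small, fixed thread pool.
        private final DispatchQueue queue = Dispatch.createQueue("connection");

        private long framesReceived;  // only touched from `queue`, so no locking needed

        public void onFrame(final byte[] frame) {
            // Hand the work to the queue instead of synchronizing on shared state.
            queue.execute(new Runnable() {
                public void run() {
                    framesReceived++;
                    // ... decode the frame and route it to the destination's queue ...
                }
            });
        }
    }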

Hiram went on to mention that using HawtDispatch avoids the problems and performance bottlenecks associated with simultaneously locking multiple critical sections. With this multi-core-friendly approach to processing, Apache Apollo doesn't use waits; instead it makes heavy use of atomic compare-and-swap (CAS) processor operations.
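
The compare-and-swap style mentioned here can be shown in a few lines of plain Java: rather than blocking on a lock, a thread computes a new value and publishes it with an atomic CAS, retrying if another core got there first. This is a generic illustration of the technique, not code from Apollo.

    // Generic lock-free counter using compare-and-swap instead of synchronization.
    import java.util.concurrent.atomic.AtomicLong;

    public class CasCounter {
        private final AtomicLong value = new AtomicLong();

        public long add(long delta) {
            while (true) {
                long current = value.get();
                long updated = current + delta;
                // compareAndSet succeeds only if no other thread changed the value
                // in the meantime; on failure we simply re-read and retry.
                if (value.compareAndSet(current, updated)) {
                    return updated;
                }
            }
        }
    }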

InfoQ: What did you learn about the JMS market by doing the benchmarks?

It's great to see that Java enjoys so many high-quality open source messaging options. They all implement the same API (and some the same wire protocol), which makes it easier for end users to switch from vendor to vendor without impacting their applications.

InfoQ: It appears that Apollo over STOMP does better overall than Apollo over OpenWire. Why is that? Many might assume that since STOMP is a simpler, string-oriented protocol and OpenWire is a binary protocol, OpenWire would be the clear winner.

Well, under different conditions, like in a bandwidth-constrained network, OpenWire might still be better--I would have to benchmark that case. But it shows that with today's processors there's not a huge difference between parsing a few text headers or a few binary headers. Your time is better spent optimizing to keep processor caches hot and to avoid processor stalls due to too much synchronization.

InfoQ: Apollo seems to do very well against HornetQ and ActiveMQ. Is it because Apache Apollo takes better advantage of multi-core processors?

Making efficient use of multi-core processors helps, but that can't be the whole story. Perhaps it's also because Apollo does a really good job at managing flow control and the size of the memory buffers used as messages move from producers to consumers. Apollo's memory management strategy is to use the smallest possible amount of memory buffering which still provides good throughput. If a message is not going to be consumed right away, it is stored to disk and evicted from memory. Apollo actively monitors consumer consumption rates to know if it should flow control producers to match the consumer rates or if it should actively swap out the new producer messages. It also actively prefetches persisted messages from storage so that they're ready in memory by the time they're needed for delivery to a consumer. Furthermore, Apollo sends messages to consumers while concurrently attempting to store persistent messages. If the consumer acks the message before the storage operation completes, then the message is never stored and the subsequent message delete operation is not required either. This in turn reduces the disk IO workload.
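
That last optimization, delivering a message while its store operation is still pending and skipping the write entirely if the ack wins the race, can be sketched roughly as below. This is an editorial illustration of the idea only; every name in it is hypothetical, and none of it is Apollo's implementation.

    // Hypothetical sketch: deliver a persistent message while its disk write is still
    // pending; if the consumer acks before the write is flushed, the write (and the
    // later delete) are skipped entirely.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class EagerDispatchSketch {
        public interface Consumer { void deliver(String id, byte[] payload); }
        public interface Store    { void write(String id, byte[] payload); }

        // Writes that have been requested but not yet flushed to disk.
        private final Map<String, byte[]> pendingWrites = new ConcurrentHashMap<>();

        // A persistent message arrives: queue the disk write and deliver concurrently.
        public void onMessage(String id, byte[] payload, Consumer consumer) {
            pendingWrites.put(id, payload);
            consumer.deliver(id, payload);
        }

        // The consumer acks: if the write is still pending, drop it so the message
        // never hits disk and no delete is needed later.
        public void onAck(String id) {
            pendingWrites.remove(id);
        }

        // The storage thread periodically flushes whatever is still pending.
        public void flush(Store store) {
            for (Map.Entry<String, byte[]> e : pendingWrites.entrySet()) {
                store.write(e.getKey(), e.getValue());
                pendingWrites.remove(e.getKey());
            }
        }
    }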

InfoQ: What are your options for Apollo journaling? It appears from looking at the code that you are using LevelDB or BerkeleyDB for journaling (persistent queues and durable topics). Previously, Apache ActiveMQ used KahaDB or the AMQ Message Store (which was based on Howl, which is used by JOTM, the open source JTA implementation). Why the different direction, and how are you using a key-value database as a journal? Also, since Apollo depends on LevelDB and/or BerkeleyDB, it is not pure Java, right? Must it have JNI libs compiled for each supported OS? Is it the case that Apollo will primarily support Linux (at least initially)? Will there be a pure Java Apollo version, or does that even make sense anymore?

In the case of the BerkeleyDB (BDB) store implementation, we just use the BDB APIs to implement the store. BerkeleyDB itself employs journaling to provide durability for the persistent operations executed against it.

LevelDB indexing is more efficient if you keep the keys/values small, since the subsequent index compactions will also be small. So, in this store's case, we use the same type of journaling strategy that is in place for ActiveMQ's KahaDB store. We first sync all persistent operations to log files. The LevelDB index entries can then remain small, as they typically are just pointers back into the journal files. This allows us to do async updates to the LevelDB index to further improve index performance. Furthermore, the nice thing about LevelDB is that it performs much better at sequential reads and writes than BTree-based indexes, and that is the main usage pattern of messaging queues.

(Regarding the need for JNI) Apollo also ships with a pure Java implementation of LevelDB and uses the pure Java version when the native version is not available on your platform, for example, Windows. But you can also use the BerkeleyDB-based message store, which is a pure Java implementation too. So while Linux might enjoy some optimizations, the other platforms are not being left out in the cold.
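
The journaling strategy Hiram describes, syncing message payloads to an append-only log and keeping only small pointers in the index, can be illustrated with a short sketch. Here a ConcurrentSkipListMap stands in for the LevelDB index; the class is an editorial example, not Apollo's actual store code.

    // Sketch: append payloads to a journal file for durability; the index stores
    // only a small (position, length) pointer per message, the way Apollo keeps
    // its LevelDB entries small.
    import java.io.RandomAccessFile;
    import java.util.concurrent.ConcurrentSkipListMap;

    public class JournalSketch implements AutoCloseable {
        record Location(long position, int length) {}   // the tiny value kept in the index

        private final RandomAccessFile journal;
        private final ConcurrentSkipListMap<Long, Location> index = new ConcurrentSkipListMap<>();

        public JournalSketch(String path) throws Exception {
            this.journal = new RandomAccessFile(path, "rw");
        }

        // Durability comes from the journal: append, sync, then record a pointer.
        public synchronized void store(long messageId, byte[] payload) throws Exception {
            long position = journal.length();
            journal.seek(position);
            journal.write(payload);
            journal.getFD().sync();
            index.put(messageId, new Location(position, payload.length));
        }

        // Reads follow the pointer back into the journal file.
        public synchronized byte[] load(long messageId) throws Exception {
            Location loc = index.get(messageId);
            byte[] payload = new byte[loc.length()];
            journal.seek(loc.position());
            journal.readFully(payload);
            return payload;
        }

        @Override
        public void close() throws Exception { journal.close(); }
    }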

 

A good benchmark would be Apache Apollo on Linux versus Windows (native journaling versus pure Java).  

LevelDB is a fast key-value storage library by Google that provides an ordered mapping from string keys to string values.

A warm welcome to the Apache Apollo project, and congratulations on the 1.0 release.
