Paul Fremantle, Sanjiva Weerawarana and several other WSO2 employees recently blogged about a performance benchmark which showed the open source Axis2 Web services stack's performance to be roughly on par with, and in some cases superior to, that of XFire. The language used in the article is rather timid:
This article shows the latest performance results of Apache Axis2 vs. Codehaus XFire, both Java implementations. The results demonstrate that modern Web Services engines can perform at very high transaction rates.
The blog entries have stressed the claimed improvement over XFire more clearly — for example, Sanjiva wrote,
Axis2 is consistently faster than XFire and is 50-70% faster as the data sizes get larger. Move over XFire, Axis2 is now the new hotrod.
XFire committer Dan Diephouse, when asked for a statement, pointed out that it’s more a benchmark of data bindings:
First, congratulations to the Axis team and their recent performance improvements in their last release. And I commend them for open sourcing their test suite so we can all run it ourselves. Second, this is really a benchmark of two different databindings - ADB vs JAXB, not XFire vs. Axis. If you look at the benchmarks they do seem to imply that XFire has less overhead when using the same databinding. But, the question many are probably wondering is - does XFire provide a higher performance databinding than JAXB? The answer is yes, JiBX. JiBX can provide significant performance improvements over JAXB as it does not use reflection, but instead does byte code generation to optimize databinding. I would encourage performance conscious users to try JiBX and to also run benchmarks for themselves.
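Dan's distinction between the two binding styles can be illustrated with a small sketch. The code below is purely hypothetical — it is not XFire, Axis2, JAXB, or JiBX code — but it shows the general shape of the difference: a reflection-driven marshaller discovers fields at runtime on every call (as JAXB-style binding does), while a JiBX-style binding effectively pre-generates the equivalent straight-line code, avoiding reflection entirely.

```java
import java.lang.reflect.Field;

// Hypothetical illustration of reflection-based vs. generated-code
// databinding. Class and method names are invented for this sketch.
public class BindingSketch {
    public static class Person {
        public String name = "Ada";
        public int age = 36;
    }

    // JAXB-style: walk the bean's fields via reflection on each call.
    public static String marshalReflective(Object bean) {
        StringBuilder sb = new StringBuilder("<person>");
        try {
            for (Field f : bean.getClass().getFields()) {
                sb.append('<').append(f.getName()).append('>')
                  .append(f.get(bean))
                  .append("</").append(f.getName()).append('>');
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return sb.append("</person>").toString();
    }

    // JiBX-style: what generated binding code boils down to —
    // direct field access, no runtime introspection.
    public static String marshalGenerated(Person p) {
        return "<person><name>" + p.name + "</name><age>" + p.age
             + "</age></person>";
    }

    public static void main(String[] args) {
        Person p = new Person();
        System.out.println(marshalReflective(p));
        System.out.println(marshalGenerated(p));
    }
}
```

Both methods produce equivalent XML for this bean; the generated-code path simply does less work per call, which is the source of the performance edge Dan describes.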
A rather nasty exchange of blog entries was triggered by “Bileblogger” and XFire committer Hani Suleiman’s post, which prompted follow-ups by both Sanjiva Weerawarana and Paul Fremantle. Paul’s post contains results of benchmarking Axis2/ADB against XFire/JiBX:
The bottom line? Axis2 with JiBX is 2.5% faster than XFire with JiBX, and Axis2/ADB is 5% faster. These are averages - on some tests XFire was a little faster. I don’t want to make out that these are huge performance leads. They aren’t. Most users wouldn’t see any real difference between the stacks in terms of performance. In the article we were trying to point out that both stacks are fast and that SOAP is not slow. However, these numbers clearly show that there is no “sharp pointy plunger” in the vicinity of Axis2.
Steve Loughran linked to benchmark results for Sun’s JAX-WS 2.1 reference implementation, and questioned the general usefulness of benchmarks:
When you build a service, what is your goal? Is it maximum throughput for a single node, minimum latency per call, or ability to scale well across multiple machines or multi-CPU systems? Similarly, when you choose a data representation for the incoming XML content, is it based purely on how well the SOAP stack parses it, or on how easy it is to work with?
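Steve's first two goals are different measurements of the same workload, and conflating them is a common benchmarking mistake. The hypothetical Java sketch below (the "call" is a trivial stand-in, not a real SOAP invocation) shows how the same loop yields both a mean per-call latency and a single-node throughput figure — a stack could be tuned to win on one and lose on the other:

```java
// Hypothetical micro-benchmark sketch: measure one workload two ways.
// Not taken from any of the benchmarks discussed above.
public class BenchSketch {
    // Stand-in for a single service call, e.g. parsing a small payload.
    public static int doCall(String payload) {
        int sum = 0;
        for (int i = 0; i < payload.length(); i++) {
            sum += payload.charAt(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        String payload = "<echo>hello</echo>".repeat(100);
        int warmup = 10_000;
        int runs = 100_000;

        // Warm-up phase so JIT compilation doesn't distort the timing.
        for (int i = 0; i < warmup; i++) {
            doCall(payload);
        }

        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            doCall(payload);
        }
        long elapsedNs = System.nanoTime() - start;

        double meanLatencyUs = elapsedNs / 1_000.0 / runs; // per-call latency
        double throughput = runs / (elapsedNs / 1e9);      // calls per second
        System.out.printf("mean latency = %.2f us, throughput = %.0f calls/s%n",
                meanLatencyUs, throughput);
    }
}
```

Even this toy harness makes the point: a single-threaded loop like this says nothing about Steve's third goal, scaling across machines or CPUs, which is exactly why the headline number of any benchmark answers only the question it was designed to ask.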