9 Fallacies of Java Performance


Java performance has the reputation of being something of a Dark Art. Partly this is due to the sophistication of the platform, which makes it hard to reason about in many cases. However, there has historically also been a trend for Java performance techniques to consist of a body of folk wisdom rather than applied statistics and empirical reasoning. In this article, I hope to address some of the most egregious of these technical fairytales.

1. Java is slow

Of all the most outdated Java Performance fallacies, this is probably the most glaringly obvious.

Sure, back in the 90s and very early 2000s, Java could be slow at times.

However, we have had over ten years of improvements in virtual machine and JIT technology since then, and Java's overall performance is now screamingly fast.

In six separate web performance benchmarks, Java frameworks took 22 out of the 24 top-four positions.

The JVM's strategy of using profiling to optimize only the commonly used code paths, but to optimize those heavily, has paid off. JIT-compiled Java code is now as fast as C++ in a large (and growing) number of cases.

Despite this, the perception of Java as a slow platform persists, perhaps due to a negative historical bias from people who had experiences with early versions of the Java platform.

We suggest remaining objective and assessing up-to-date performance results before jumping to conclusions.

2. A single line of Java means anything in isolation

Consider the following short line of code:

MyObject obj = new MyObject();

To a Java developer, it seems obvious that this code must allocate an object and run the appropriate constructor.

From that we might begin to reason about performance boundaries. We know that there is some finite amount of work that must be going on, and so we can attempt to calculate performance impact based on our presumptions.

This is a cognitive trap: it assumes that we know, a priori, that any of this work will need to be done at all.

In actuality, both javac and the JIT compiler can optimize away dead code. In the case of the JIT compiler, code can even be optimized away speculatively, based on profiling data. In such cases the line of code won't run at all, and so it will have zero performance impact.

Furthermore, in some JVMs, such as JRockit, the JIT compiler can even decompose object operations so that allocations can be avoided even if the code path is not completely dead.
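
To make this concrete, here is a minimal sketch. MyObject is just a trivial stand-in class for the snippet above, and whether the JIT really removes the allocation depends on inlining and profiling decisions on a particular JVM:

class MyObject {
    private final int value = 42;   // trivial state, never read
}

public class DeadAllocation {

    static int compute(int x) {
        // obj never escapes this method and its state is never read, so after
        // inlining the JIT may remove the allocation and constructor entirely.
        MyObject obj = new MyObject();
        return x * 2;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += compute(i);
        }
        System.out.println(total);
    }
}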

The moral of the story is that context is significant when dealing with Java performance, and premature optimization can produce counter-intuitive results. Don't attempt to optimize prematurely; instead, build your code first and then use performance-tuning techniques to locate and correct the actual hot spots.

3. A microbenchmark means what you think it does

As we saw above, reasoning about a small section of code is less accurate than analyzing overall application performance.

Nonetheless developers love to write microbenchmarks. The visceral pleasure that some people derive from tinkering with some low-level aspect of the platform seems to be endless.

Richard Feynman once said: "The first principle is that you must not fool yourself - and you are the easiest person to fool". Nowhere is this truer than when writing Java microbenchmarks.

Writing good microbenchmarks is profoundly difficult. The Java platform is sophisticated and complex, and many microbenchmarks only succeed in measuring transient effects, or other unintended aspects of the platform.

For example, a naively written microbenchmark will frequently end up measuring the timing subsystem or perhaps garbage collection rather than the effect it was trying to capture.
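
To illustrate the trap, consider the following naive benchmark (illustrative only). It appears to time an arithmetic loop, but because the result is never used, the JIT may eliminate the work as dead code, and with no warm-up or repetition the printed number reflects timer and loop overhead more than anything else:

public class NaiveBenchmark {
    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < 100_000_000; i++) {
            // The result is discarded, so the JIT is free to eliminate the
            // multiplication (and possibly the whole loop) as dead code.
            int unused = i * i;
        }
        long elapsed = System.nanoTime() - start;
        // No warm-up, a single run, and dead code: this number means very little.
        System.out.println("elapsed ns: " + elapsed);
    }
}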

Only developers and teams that have a real need for them should write microbenchmarks. These benchmarks should be published in their entirety (including source code), and should be reproducible and subject to peer review and deep scrutiny.

The Java platform's many optimizations imply that the statistics of individual runs matter. A single benchmark must be run many times and the results aggregated to get a really reliable answer.

If you feel you must write microbenchmarks, then a good place to start is by reading the paper "Statistically Rigorous Java Performance Evaluation" by Georges, Buytaert, Eeckhout. Without proper treatment of the statistics, it is very easy to be misled.

There are well-developed tools, and communities around them (for example, Google's Caliper). If you absolutely must write microbenchmarks, do not do so by yourself; you need the viewpoints and experience of your peers.
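
As an illustration only (JMH, the OpenJDK Microbenchmark Harness, is a newer tool in the same spirit as Caliper and is not discussed above), a harness-based benchmark pushes warm-up, forking, and dead-code concerns onto the framework. A minimal sketch might look like this:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class StringConcatBenchmark {

    private String a = "Hello, ";
    private String b = "world";

    @Benchmark
    public String concat() {
        // Returning the result prevents the JIT from treating it as dead code.
        return a + b;
    }
}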

4. Algorithmic slowness is the most common cause of performance problems

A very familiar cognitive fallacy among developers (and humans in general) is to assume that the parts of a system that they control are the important ones.

In Java performance, this manifests itself by Java developers believing that algorithmic quality is the dominant cause of performance problems. Developers think about code, so they have a natural bias towards thinking about their algorithms.

In practice, when dealing with a range of real-world performance problems, algorithm design was found to be the fundamental issue less than 10% of the time.

Instead, garbage collection, database access and misconfiguration were all much more likely to cause application slowness than algorithms.

Most applications deal with relatively small amounts of data, so that even major algorithmic inefficiencies don't often lead to severe performance problems. To be sure, the algorithms were suboptimal; nonetheless the amount of inefficiency they added was small relative to the much more dominant effects coming from other parts of the application stack.

So our best advice is to use empirical, production data to uncover the true causes of performance problems. Measure; don't guess!

5. Caching solves everything

"Every problem in Computer Science can be solved by adding another level of indirection"

This programmer's aphorism, attributed to David Wheeler (and, thanks to the Internet, to at least two other Computer Scientists), reflects a fallacy that is surprisingly common, especially among web developers.

Often this fallacy arises due to analysis paralysis when faced with an existing, poorly understood architecture.

Rather than deal with an intimidating extant system, a developer will frequently choose to hide from it by sticking a cache in front and hoping for the best. Of course, this approach just complicates the overall architecture and makes the situation worse for the next developer who seeks to understand the status quo of production.

Large, sprawling architectures are written one line, and one subsystem at a time. However, in many cases simpler, refactored architectures are more performant - and they are almost always easier to understand.

So when you are evaluating whether caching is really necessary, plan to collect basic usage statistics (miss rate, hit rate, etc.) to prove that the caching layer is actually adding value.
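
As a sketch of how such statistics might be gathered (the class and method names here are illustrative, not from any particular library), a thin wrapper around a map can count hits and misses. If the measured hit rate turns out to be low, the cache is adding complexity without adding value:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

// An instrumented cache: counts hits and misses so that the value of the
// caching layer can be demonstrated (or disproved) with real numbers.
public class CountingCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();
    private final Function<K, V> loader;

    public CountingCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value != null) {
            hits.incrementAndGet();
            return value;
        }
        misses.incrementAndGet();
        return cache.computeIfAbsent(key, loader);
    }

    public double hitRate() {
        long h = hits.get();
        long m = misses.get();
        return (h + m) == 0 ? 0.0 : (double) h / (h + m);
    }
}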

6. All apps need to be concerned about Stop-The-World

A fact of life of the Java platform is that all application threads must periodically stop to allow Garbage Collection to run. This is sometimes brandished as a serious weakness, even in the absence of any real evidence. 

Empirical studies have shown that human beings cannot normally perceive changes in numeric data (e.g. price movements) occurring more frequently than once every 200ms. 

Consequently, for applications that have a human as their primary user, a useful rule of thumb is that a Stop-The-World (STW) pause of 200ms or under is usually of no concern. Some applications (e.g. streaming video) need lower GC jitter than this, but many GUI applications will not.

There is a minority of applications (such as low-latency trading or mechanical control systems) for which a 200ms pause is unacceptable. Unless your application is in that minority, it is unlikely that your users will perceive any impact from the garbage collector.

It is also worth mentioning that in any system where there are more application threads than physical cores, the operating system scheduler will have to intervene to time-slice access to the CPUs. Stop-The-World sounds scary, but in practice, every application (whether JVM or not) has to deal with contended access to scarce compute resources.

Without measurement, it isn't clear that the JVM's approach has any meaningful additional impact on application performance.

In summary, determine whether pause times are actually affecting your application by turning on GC logging. Analyze the logs (either by hand, or with scripting or a tool) to determine the pause times. Then decide whether these really pose a problem for your application domain. Most importantly, ask yourself the most poignant question: have any users actually complained?

7. Hand-rolled Object Pooling is appropriate for a wide range of apps

One common response to the feeling that Stop-The-World pauses are somehow bad is for application groups to invent their own memory management techniques within the Java heap. Often this boils down to implementing an object pooling (or even full-blown reference-counting) approach and requiring any code using the domain objects to participate.

This technique is almost always misguided. It often has its roots in the distant past, where object allocation was expensive and mutability was deemed inconsequential. The world is very different now.

Modern hardware is incredibly efficient at allocation; allocation bandwidth of at least 2 to 3 GB per second is available on recent desktop or server hardware. This is a big number; outside of specialist use cases, it is not easy to make real applications saturate that much bandwidth.

Object pooling is generally difficult to implement correctly (especially when there are multiple threads at work) and has several negative requirements that render it a poor choice for general use:

  • All developers who touch the code must be aware of pooling and handle it correctly
  • The boundary between "pool-aware" and "non-pool-aware" code must be known and documented
  • All of this additional complexity must be kept up to date, and regularly reviewed
  • If any of this fails, the risk of silent corruption (similar to pointer re-use in C) is reintroduced

In summary, object pooling should only be used when GC pauses are unacceptable, and intelligent attempts at tuning and refactoring have been unable to reduce pauses to an acceptable level.
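
To make the "silent corruption" risk concrete, here is an illustrative sketch (the pool and class names are invented for this example) of what happens when code keeps using an object after returning it to a hand-rolled pool:

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative only: a naive, non-thread-safe pool of reusable message buffers.
class MessagePool {
    private final Deque<StringBuilder> free = new ArrayDeque<>();

    StringBuilder acquire() {
        StringBuilder sb = free.poll();
        return (sb != null) ? sb : new StringBuilder();
    }

    void release(StringBuilder sb) {
        sb.setLength(0);   // reset state before reuse
        free.push(sb);
    }
}

public class PoolHazard {
    public static void main(String[] args) {
        MessagePool pool = new MessagePool();

        StringBuilder msg = pool.acquire();
        msg.append("order-42");
        pool.release(msg);

        // Bug: 'msg' is still referenced after release. Another caller now holds
        // the same instance, and the two writers silently corrupt each other's
        // data, much like pointer re-use in C.
        StringBuilder other = pool.acquire();
        other.append("order-99");
        msg.append("!!!");

        System.out.println(other); // prints "order-99!!!" - silent corruption
    }
}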

8. CMS is always a better choice of GC than Parallel Old

By default, the Oracle JDK will use a parallel, stop-the-world collector for collecting the old generation.

An alternative choice is Concurrent-Mark-Sweep (CMS). This allows application threads to continue running throughout most of the GC cycle, but it comes at a price, and with quite a few caveats.

Allowing application threads to run alongside GC threads invariably results in application threads mutating the object graph in a way that would affect the liveness of objects. This has to be cleaned up after the fact, and so CMS actually has two (usually very short) STW phases.

This has several consequences:

  1. All application threads have to be brought to safe points and stopped twice per full collection;
  2. Whilst the collection is running concurrently, application throughput is reduced (usually by 50%);
  3. The overall amount of bookkeeping (and CPU cycles) in which the JVM engages to collect garbage via CMS is considerably higher than for parallel collection.

Depending on the application circumstances these prices may be worth paying or they may not. But there’s no such thing as a free lunch. The CMS collector is a remarkable piece of engineering, but it is not a panacea.

So before concluding that CMS is your correct GC strategy, you should first determine that STW pauses from Parallel Old are unacceptable and can't be tuned. Finally (and I can't stress this enough), be sure that all metrics are obtained on a production-equivalent system.
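
A reasonable way to run that comparison (shown here as a sketch, using the HotSpot flags of this era) is to drive the same workload on a production-equivalent machine under each collector with GC logging enabled, and then compare pause times and throughput from the logs:

-XX:+UseParallelOldGC -verbose:gc -XX:+PrintGCDetails -Xloggc:parallel.log (the default parallel old-generation collector)
-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -Xloggc:cms.log (the CMS collector)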

9. Increasing the heap size will solve your memory problem

When an application is in trouble and GC is suspected, many application groups will respond by just increasing the heap size. Under some circumstances, this can produce quick wins and allow time for a more considered fix. However, without a full understanding of the causes of the performance problem, this strategy can actually make matters worse.

Consider a badly coded application that is producing too many domain objects (with a typical lifespan of say two to three seconds). If the allocation rate is high enough, garbage collections could occur so rapidly that the domain objects are promoted into the tenured (old) generation. Once in tenured, the domain objects die almost immediately, but they would not be collected until the next full collection.

If this application has its heap size increased, then all we're really doing is adding space for relatively short-lived domain objects to be promoted into and die. This can make the length of Stop-The-World pauses worse for no benefit to the application.

Understanding the dynamics of object allocation and lifetime before changing heap size or tuning other parameters is essential. Acting without measuring can make matters worse. The tenuring distribution information from the garbage collector is especially important here.
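
For example (a sketch only; the sizes are illustrative and should come from your own measurements), a better-targeted experiment than simply raising -Xmx is often to give short-lived objects more room in the young generation and to watch the tenuring data:

-Xms4g -Xmx4g -Xmn1g (fixed overall heap, with an enlarged young generation)
-XX:+PrintTenuringDistribution -verbose:gc -XX:+PrintGCDetails (to see whether short-lived objects are still reaching the old generation)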

Conclusion

When it comes to Java performance tuning, intuition is often misleading. We require empirical data and tools to help us visualize and understand the platform's behavior.

Garbage Collection provides perhaps the best example of this. The GC subsystem has incredible potential for tuning and for producing data to guide tuning, but for production applications it is very hard to make sense of the data produced without resorting to tools.

The default should always be to run any Java process (in development or production) with at least these flags:

  • -verbose:gc (print the GC logs)
  • -Xloggc:<file> (direct more comprehensive GC logging to the named file)
  • -XX:+PrintGCDetails (for more detailed output)
  • -XX:+PrintTenuringDistribution (displays the tenuring thresholds assumed by the JVM)

and then to use a tool to analyze the logs - either handwritten scripts and some graph generation, or a visual tool such as the (open-source) GCViewer or jClarity Censum.
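
As an example of the "handwritten scripts" option (a sketch only, assuming the classic HotSpot log format where each GC event ends with a "[Times: user=... sys=..., real=... secs]" entry, and taking the log file path as its argument), a few lines of Java can summarize the wall-clock time of GC events:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the wall-clock ("real") time of each GC event from a -XX:+PrintGCDetails
// log and reports the number of events, the total, and the worst single event.
// For a stop-the-world collector these correspond to pauses; concurrent phases
// (e.g. under CMS) need more careful treatment.
public class GcPauseSummary {
    private static final Pattern REAL_TIME = Pattern.compile("real=(\\d+\\.\\d+) secs");

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        double total = 0.0;
        double worst = 0.0;
        int events = 0;
        for (String line : lines) {
            Matcher m = REAL_TIME.matcher(line);
            if (m.find()) {
                double secs = Double.parseDouble(m.group(1));
                total += secs;
                worst = Math.max(worst, secs);
                events++;
            }
        }
        System.out.printf("GC events: %d, total real time: %.3fs, worst: %.3fs%n",
                events, total, worst);
    }
}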

About the Author

Ben Evans is the CEO of jClarity, a startup which delivers performance tools to help development & ops teams. He is an organizer for the LJC (London JUG) and a member of the JCP Executive Committee, helping define standards for the Java ecosystem. He is a Java Champion; JavaOne Rockstar; co-author of “The Well-Grounded Java Developer” and a regular public speaker on the Java platform, performance, concurrency, and related topics.
