
Thomas Wuerthinger on GraalVM and Optimizing Java with Ahead-of-Time Compilation

The promise of Java has always been "write once, run anywhere." This is enabled by just-in-time compilation: bytecode is compiled to native code at runtime, so developers don't have to target a specific platform at build time. But that flexibility has given rise to comments like "Java is slow." What if you could compile Java to native code?

On this podcast, we're talking to Thomas Wuerthinger, a senior research director at Oracle Labs who leads programming language implementation teams for Java, JavaScript, Ruby, and R. He is the architect of the Graal compiler and the Truffle self-optimizing runtime.

For more discussion on GraalVM and Truffle, check out the podcast with Duncan McGregor or his talk at QCon London 2019.

Key Takeaways

  • The GraalVM project was initially just a replacement for the JVM C2 just-in-time compiler but has evolved to include support for multiple languages, as well as an ahead-of-time compiling mode.
  • Support for multiple languages can provide better performance for some languages, as well as making direct calls without inter-process communication.
  • With GraalVM's AOT compilation, you can statically link system libraries, which allows you to run a static binary on a bare-metal Docker image, without even a Linux distribution.
  • The major benefits of AOT are minimized startup time, memory footprint, and packaging size. This can come with a trade-off in reduced maximum throughput and higher latency.
  • The GraalVM roadmap includes supporting additional platforms, such as Windows and mobile, as well as performance improvements for both the JIT and AOT compilers.

Show Notes

History of GraalVM

  • 01:35 When GraalVM initially came out, it was thought of as a replacement for the JVM C2 just-in-time compiler, but the project scope has evolved. The initial plan was to simply write a compiler in Java, using a modern, object-oriented language. 
  • 02:07 We increased the number of languages we're targeting. In addition to Java, we target dynamic languages including JavaScript, Python, Ruby, and R. We also developed an ahead-of-time compiling mode.
  • 02:32 One advantage of writing in Java is a very modular system that allows targeting many different ways of using the core compiler.
  • 02:59 We're able to target multiple languages using a technique called partial evaluation of interpreters. You write an interpreter for your dynamic language, in Java. The compiler is able to evaluate that interpreter and produce optimal machine code.
  • 03:35 The performance results can vary among the languages. For a language with wide industry adoption, such as JavaScript, performance is comparable to the V8 engine. For languages that have seen less just-in-time compilation work, we actually achieve higher performance than what is currently available.
  • 04:12 One reason to run R on the JVM is performance. R developers are rewriting their code in C or other languages for better performance. Graal avoids that.
  • 04:49 Another reason is interoperability with the JVM. From your Java program, you can directly call R statistical functions, without doing inter-process communication.
  • 04:50 Rust or other native languages would typically interact with JVM-based languages via JNI (the Java Native Interface), which has some overhead. With GraalVM, we can completely remove the JNI boundary and compile the Rust, C, or C++ code and the Java code into the same machine code.
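The partial-evaluation approach described at 02:59 can be pictured with a toy AST interpreter written in plain Java. This is only an illustrative sketch, not the Truffle API: real Truffle interpreters extend Truffle's node classes and are specialized by the Graal compiler, but the shape is the same.

```java
// Minimal sketch of the kind of interpreter Truffle specializes.
// Plain Java for illustration; the real Truffle API differs.
interface Node {
    int execute();
}

final class Constant implements Node {
    private final int value;
    Constant(int value) { this.value = value; }
    public int execute() { return value; }
}

final class Add implements Node {
    private final Node left, right;
    Add(Node left, Node right) { this.left = left; this.right = right; }
    public int execute() { return left.execute() + right.execute(); }
}

public class ToyInterpreter {
    // Evaluates (1 + 2) + 4 by walking the tree. Partial evaluation of
    // this interpreter against a fixed tree would collapse the virtual
    // calls into straight-line machine code for the expression.
    public static int run() {
        Node program = new Add(new Add(new Constant(1), new Constant(2)),
                               new Constant(4));
        return program.execute();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

The point of the technique is that the language implementer only writes the interpreter; the compiler derives the optimized machine code from it.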
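The cross-language calls without JNI or inter-process communication are made through GraalVM's polyglot embedding API. A minimal sketch follows; it only runs on a GraalVM distribution with the R language installed, so treat it as illustrative rather than a drop-in program.

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotSketch {
    public static void main(String[] args) {
        // One process, no JNI boundary: the R code is compiled by the
        // same Graal compiler as the surrounding Java.
        try (Context context = Context.create()) {
            Value mean = context.eval("R", "mean(c(1, 2, 3, 4))");
            System.out.println(mean.asDouble());
        }
    }
}
```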

What is ahead-of-time compilation?

  • 05:27 Reisz: At QCon New York, you gave a talk about just-in-time compilation versus ahead-of-time compilation.
  • 05:38 For ahead-of-time compilation, GraalVM takes a JVM-based application and does closed-world analysis that figures out all the code that is reachable from the application entry point. We then create a binary from AOT-compiled machine code, that can run on its own without the need for a Java virtual machine.
  • 06:17 There are some forms of reflection that we automatically detect during the closed-world analysis. There are more complex forms that require manual configuration at image generation time. If we allow arbitrary reflection, then it is not possible to create a closed world around your application, or the closed world would become very large. One of the goals of AOT is to create a small binary and efficient machine code. That places a constraint on what is reachable.
  • 07:05 You need to select a target platform ahead of time, which includes the operating system and the ISA (Instruction Set Architecture).
  • 07:15 The binary has a garbage collector inside, so there is some amount of runtime included. But there is no JIT compiler or interpreter.
  • 07:36 At the moment, there is only a single, default garbage collector, but we're currently working on adding support for low-latency garbage collectors.
  • 08:05 The resulting binary can be between 6MB to 50MB, but you no longer need any Java libraries or classpath, only that single binary.
  • 08:41 You can statically link system libraries, with --static. This allows you to run the static binary on a bare-metal Docker image, without even a Linux distribution.
  • 09:04 As of version 19.0 (latest is 19.1), this is a production system, targeting OpenJDK 8. We're actively working on JDK 11 support. We're on 8 mainly because it was the last long-term-support release when we started the project.
  • 09:53 There are many different scenarios where people use Graal. The main adoption is for server-side applications that run in JIT mode and benefit from the performance of the GraalVM compiler.
  • 10:27 The Twitter workload has been running on GraalVM in production for some time. It is also a server-side, JVM-based workload, but the Graal compiler helps with additional throughput and reducing garbage collection pressure in the application.
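The manual reflection configuration mentioned at 06:17 is supplied to the image builder as a JSON file (for example via `-H:ReflectionConfigurationFiles=reflect-config.json`). The fragment below is hypothetical: the class name `com.example.Plugin` stands in for whatever your application loads reflectively.

```json
[
  {
    "name": "com.example.Plugin",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]
```

This keeps the closed world closed: only the listed classes and members stay reachable through reflection in the generated binary.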
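The `--static` workflow from 08:41 can be sketched as a two-stage Docker build. The base image name and the `Hello` entry-point class are assumptions for illustration; static linking also requires the static system libraries to be present in the build image.

```dockerfile
# Stage 1: build a statically linked native image
# (assumed GraalVM native-image base image).
FROM ghcr.io/graalvm/native-image-community:21 AS build
COPY Hello.java /app/
WORKDIR /app
RUN javac Hello.java && native-image --static Hello hello

# Stage 2: no Linux distribution at all, just the one binary.
FROM scratch
COPY --from=build /app/hello /hello
ENTRYPOINT ["/hello"]
```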

Just-in-time versus ahead-of-time compilation

  • 11:14 GraalVM brings ahead-of-time compilation to JVM-based applications.
  • 11:26 JIT has some drawbacks, notably startup time. There's a long sequence of steps in the JIT process that contribute to the long startup time.
  • 12:36 In AOT, you have the ability to do the full sequence of steps once, when you deploy your application and not every time you start it.
  • 12:49 Startup time is not the only benefit. Memory footprint is also reduced. In JIT, the whole program is loaded as a data structure in memory, as well as profiling data.
  • 13:27 There are smaller benefits of AOT, such as global optimizations.
  • 13:50 Does AOT give up on "write once, run anywhere"? The benefit of JIT is that when you deploy the code, you don't need to know which platform it will run on. With AOT, you can still distribute the platform-independent parts and leave the AOT step until the last possible minute, when you know your platform.
  • 14:48 In many scenarios, when you need to scale, you will know the platform on which you will scale, so being platform-independent is not a necessary feature.  
  • 16:05 In GraalVM JIT mode, we only replace the C2 compiler, so the startup sequence is exactly the same.
  • 16:28 In AOT mode, we do not use any other compiler, and there is no startup sequence.
  • 16:38 One disadvantage of AOT is that you do not have, by default, information about the behavior of the application, meaning you are missing profiling feedback. In JIT mode, the application can run for a few minutes and the compiler can make educated guesses about how it will behave, and perform optimizations. In AOT, the compiler needs to be more conservative in its optimization.
  • 17:25 This can be mitigated by gathering profiling feedback from a previous run of the application. This can lead to AOT having an advantage, because I can run tests and benchmarks that are focused on the part of the application that is most important to optimize. This is more deterministic than the JIT optimization could be.
  • 18:53 There are many ways to measure performance, such as memory, latency or throughput. In JIT mode, we maximize throughput over the long term. In AOT mode, we try to minimize startup time, memory footprint, and packaging size. Peak throughput with AOT can be 20-50% behind JIT, depending on profiling. AOT also has higher maximum latency because of simpler garbage collection algorithms.
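The profiling-feedback workflow described at 17:25 is GraalVM's profile-guided optimization, driven by the `--pgo-instrument` and `--pgo` flags (an Enterprise Edition feature); the application name below is a placeholder.

```
# 1. Build an instrumented binary that records a profile as it runs.
native-image --pgo-instrument -jar app.jar app-instrumented

# 2. Exercise the hot paths with representative tests or benchmarks;
#    this writes a default.iprof profile file.
./app-instrumented

# 3. Rebuild, feeding the recorded profile back into AOT compilation.
native-image --pgo=default.iprof -jar app.jar app-optimized
```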

Roadmap
  • 20:45 The top priority on our roadmap is adding support for Java 11.
  • 21:07 We're also looking at better platform support, including Windows, which is currently experimental, ARM 64-bit, and mobile platforms such as iOS.
  • 21:51 We're continuously improving the performance of our JIT compiler, in terms of peak throughput as well as startup. 
  • 22:29 For AOT, the roadmap is focused on mitigating the drawbacks we currently have, including performance.
  • 23:15 Our target is to release every three months. Version numbers correspond to the year, with the last release having long-term support.
  • 24:21 We always release versions for both Enterprise and Community Editions, and they are 100% compatible. This is an important aspect of the project, because we don't want users to feel locked into the Enterprise Edition, and they should be able to switch to the Community Edition if that makes more sense.

More about our podcasts

You can keep up to date with the podcasts via our RSS Feed, and they are available via SoundCloud, Apple Podcasts, Spotify, Overcast and Google Podcasts. From this page you also have access to our recorded show notes. They all have clickable links that will take you directly to that part of the audio.
