
Revolutionizing Java with GraalVM Native Image


Key Takeaways

  • GraalVM Native Image is an ahead-of-time compilation technology that generates native platform executables.
  • Native executables are ideal for containers and cloud deployments as they are small, start very fast, and require significantly less CPU and memory.
  • Deploy native executables on distroless and even Scratch container images for reduced size and improved security.
  • With profile-guided optimization and the G1 garbage collector, native executables built with GraalVM Native Image can achieve peak throughput on par with the JVM.
  • GraalVM Native Image enjoys significant adoption, with support from leading Java frameworks such as Spring Boot, Micronaut, Quarkus, and Gluon Substrate.

This article is part of the article series "Native Compilation Boosts Java". You can subscribe to receive notifications about new articles in this series via RSS.

Java dominates enterprise applications. But in the cloud, Java is more expensive than some competitors. Native compilation with GraalVM makes Java in the cloud cheaper: It creates applications that start much faster and use less memory.

So native compilation raises many questions for all Java users: How does native Java change development? When should we switch to native Java? When should we not? And what framework should we use for native Java? This series will provide answers to these questions.

GraalVM has caused a revolution in Java development since it launched three years ago. One of the most discussed features of GraalVM is Native Image, which is based on ahead-of-time (AOT) compilation. It unlocks the runtime performance profile of native applications while keeping the familiar developer productivity and tooling of the Java ecosystem.

Traditional Execution of Java Applications

One of the most powerful and interesting parts of the Java platform, enabling great peak performance, is the way the Java Virtual Machine (JVM) executes code.

When you first run your application, the VM interprets code and collects profiling information. The JVM interpreter is fast as interpreters go, but it's not as fast as running compiled code. That's why Oracle's JVM (HotSpot) also contains just-in-time (JIT) compilers, which compile your application code to machine code on the fly, as your program executes. So, if your code "warms up", that is, gets executed frequently, it gets compiled to machine code by the C1 JIT compiler. Then, if it's still executed often enough and reaches certain thresholds, it is compiled by the top-tier JIT compiler (C2 or the Graal compiler). The top-tier compiler performs optimizations based on the profiling information about which code branches are executed most often, how frequently loops are executed, and which types are used in polymorphic code.

Sometimes the compiler performs speculative optimizations. For example, the JVM can produce an optimized, compiled version of a method based on the profiling information it collects. However, because code execution on the JVM is dynamic, the assumptions behind such an optimization can become invalid later. When that happens, the JVM deoptimizes: it disregards the compiled code and reverts to interpreted mode. It is this flexibility that makes the JVM so powerful: it starts executing code fast, leverages optimizing compilers for frequently executed code, and speculates to apply even more aggressive optimizations.
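To make warmup concrete, here is a minimal sketch (the class and method names are mine, not from the article) of the kind of hot loop that HotSpot progressively compiles. Running it with `-XX:+PrintCompilation` lets you watch the C1 and C2/Graal tiers kick in:

```java
public class WarmupDemo {
    // A tiny method that becomes "hot" when called many times.
    static long square(long x) {
        return x * x;
    }

    static long sumOfSquares(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += square(i); // after thousands of calls, C1 and then C2/Graal compile this
        }
        return sum;
    }

    public static void main(String[] args) {
        // Run with: java -XX:+PrintCompilation WarmupDemo
        // to see the tiered-compilation log entries for square() and sumOfSquares().
        System.out.println(sumOfSquares(1_000_000));
    }
}
```

The exact invocation thresholds are JVM-internal and tunable; the point is only that the same bytecode runs interpreted first and as optimized machine code later.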

At first sight, this approach appears to be an ideal way to run an application. However, like most things, even this approach comes with costs and tradeoffs; so what are they here? When the JVM performs its operations (such as verifying code, loading classes, compiling dynamically, and collecting profiling information), it undertakes complex computations that require significant CPU time. In addition to that cost, the JVM requires considerable memory to store profiling information, and requires appreciable time and memory to start. As many companies deploy applications to the cloud, those costs become more significant because startup time and memory directly affect the cost of deploying an application. So, is there a way to reduce startup time and memory usage and still keep the Java productivity, libraries, and tooling that we all enjoy?

The answer is "Yes", and that is what GraalVM Native Image does.

GraalVM for the Win

GraalVM began as a research project at Oracle Labs 10 years ago. Oracle Labs is a research and development branch of Oracle that investigates programming languages and virtual machines, machine learning and security, graph processing, and other areas. GraalVM is a great example of Oracle Labs' work: it is based on years of research and more than 100 published academic papers.

At the very heart of the project is the Graal compiler: a modern, highly optimizing compiler created from scratch. Thanks to its many advanced optimizations, in many scenarios it generates better code than the C2 compiler. One such optimization is partial escape analysis: it removes unnecessary object allocations on the heap through scalar replacement in branches where the object does not escape the compilation unit, and ensures that the object exists on the heap only in branches where it does escape.

This approach reduces the memory footprint of the application because fewer objects live on the heap. It also reduces the CPU load as less garbage collection is necessary. Also, advanced speculations in GraalVM produce faster machine code by taking advantage of dynamic runtime feedback. By speculating that certain program parts will not run during execution, the GraalVM compiler can make the code even more efficient.
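As an illustration (the names here are hypothetical, not from the article), consider the kind of code these optimizations target. In `distanceSquared`, neither `Point` escapes the method, so the compiler can scalar-replace both allocations; in `maybeEscape`, partial escape analysis can confine the allocation to the single branch where the object actually escapes:

```java
public class EscapeAnalysisDemo {
    // A tiny value holder; in hot compiled code the compiler can often
    // "scalar replace" it, keeping x and y in registers with no heap allocation.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int distanceSquared(int ax, int ay, int bx, int by) {
        Point a = new Point(ax, ay); // never escapes this method
        Point b = new Point(bx, by); // never escapes either
        int dx = a.x - b.x, dy = a.y - b.y;
        return dx * dx + dy * dy;    // only the scalar fields are actually needed
    }

    static Point maybeEscape(int x, int y, boolean keep) {
        Point p = new Point(x, y);
        if (keep) {
            return p;  // escapes only on this branch: the allocation can be sunk here
        }
        return null;   // on this branch the allocation can be eliminated entirely
    }

    public static void main(String[] args) {
        System.out.println(distanceSquared(0, 0, 3, 4));
    }
}
```

Whether a given allocation is actually eliminated depends on inlining and compiler heuristics; this only sketches the shape of code the optimization applies to.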

You may be surprised to learn that the Graal compiler is mostly written in Java. If you take a look at GraalVM's core GitHub repository, you'll see that more than 90% of the code there is written in the Java programming language, which once again demonstrates just how powerful and versatile Java is.

How Native Image Works

The Graal compiler also works as an ahead-of-time (AOT) compiler, producing native executables. Given Java's dynamic nature, how does that work exactly?

Unlike JIT mode, where compilation and execution happen at the same time, in AOT mode the compiler performs all compilations during build time, before the execution. The main idea here is to move all the "heavy lifting" — expensive computations — to build time, so it can be done once, and then at runtime generated executables start fast and are ready from the get-go because everything is pre-computed and pre-compiled.

The GraalVM 'native-image' utility takes Java bytecode as input and outputs a native executable. To do so, the utility performs a static analysis of the bytecode under a closed world assumption. During the analysis, the utility looks for all the code that your application actually uses and eliminates everything that is unnecessary.

These three key concepts help you better understand the Native Image generation process:

  • Points-to analysis. GraalVM Native Image determines which Java classes, methods, and fields are reachable at runtime, and only those will be included in the native executable. The points-to analysis starts with all entry points, usually the main method of the application. The analysis iteratively processes all transitively reachable code paths until a fixed point is reached and the analysis ends. This applies not only to the application code but also to the libraries and JDK classes — everything that is needed for packaging an application into a self-contained binary.
  • Initializations at build time. GraalVM Native Image defaults to class initialization at runtime to ensure correct behavior. But if Native Image can prove that certain classes are safe to initialize, it will initialize them at build time instead. This makes runtime initialization and checks unnecessary and improves performance.
  • Heap snapshotting. Heap snapshotting in Native Image is a very interesting concept and deserves its own article. During the image build process, Java objects allocated by static initializers, and all the objects that are reachable, are written onto the image heap. This means that your application starts much faster with a pre-populated heap.

What's interesting is that points-to analysis makes objects reachable in the image heap, and the snapshotting that builds the image heap can make new methods reachable for the points-to analysis. Thus, points-to analysis and heap snapshotting are performed iteratively until a fixed point is reached:

Native Image Build Process
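To make the build-time initialization idea concrete, here is a hedged sketch (the class is illustrative; `--initialize-at-build-time` is a real `native-image` option). If the class is initialized at build time, the finished lookup table is snapshotted into the image heap, so no computation happens at startup:

```java
public class BuildTimeInitDemo {
    // A lookup table computed in a static initializer. If this class is
    // initialized at image build time (e.g. via
    // --initialize-at-build-time=BuildTimeInitDemo), the fully populated
    // array is written into the image heap during the build.
    static final int[] SQUARES = new int[256];
    static {
        for (int i = 0; i < SQUARES.length; i++) {
            SQUARES[i] = i * i;
        }
    }

    static int squareOf(int i) {
        return SQUARES[i];
    }

    public static void main(String[] args) {
        // At runtime this is a plain array read; the initializer already ran at build time.
        System.out.println(squareOf(12));
    }
}
```

The same program runs unchanged on the JVM; the flag only changes when the static initializer executes.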

After the analysis is complete, Graal compiles all the reachable code into a platform-specific native executable. That executable is fully functional on its own and doesn't need the JVM to run. As a result, you get a slim and fast native executable version of your Java application: one that performs the exact same functions but contains only the necessary code and its required dependencies.

But who takes care of features such as memory management and thread scheduling in the native executable? For that, Native Image includes Substrate VM — a slim VM implementation that provides runtime components, such as a garbage collector and a thread scheduler. Just like the Graal compiler, Substrate VM is written in the Java programming language and AOT-compiled by GraalVM Native Image into native code!

Thanks to AOT compilation and heap snapshotting, Native Image enables a completely new performance profile for your Java applications. Let's take a closer look at this next.

Taking Java Startup Performance to the Next Level

You might have heard that an executable generated by Native Image has great startup performance. But what does that mean exactly?

Instant startup. Unlike running on the JVM, where code is first verified, interpreted, and then (after warming up) eventually compiled, a native executable comes with optimized machine code from the very start. Another term that I like to use is instant performance — an application is ready to perform meaningful work in its first milliseconds of execution, without any profiling or compilation overhead.
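One simple way to observe this yourself, sketched below with standard JDK APIs (how fully this MX bean is populated inside a native executable may vary), is to measure the time from process start to reaching `main()`:

```java
import java.lang.management.ManagementFactory;

public class StartupTimer {
    // Milliseconds from process start (as reported by the runtime) until this is called.
    static long millisToMain() {
        long vmStart = ManagementFactory.getRuntimeMXBean().getStartTime();
        return System.currentTimeMillis() - vmStart;
    }

    public static void main(String[] args) {
        // On a JVM this typically reports tens to hundreds of milliseconds;
        // a native executable reaches main() in single-digit milliseconds.
        System.out.println("Reached main() after " + millisToMain() + " ms");
    }
}
```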

JIT:
  • Operating system loads the JVM executable
  • VM loads classes from the file system
  • Bytecode is verified
  • Bytecode interpretation starts
  • Static initializers run
  • First-tier compilation (C1)
  • Profiling metrics gathered
  • ... (after some time)
  • Second-tier compilation (C2/Graal compiler)
  • Finally running with optimized machine code

AOT:
  • Operating system loads executable with prepared heap
  • Application starts immediately with optimized machine code

Startup Time Effect of JIT and Native Image Modes

Memory efficiency. A native executable requires neither the JVM and its JIT compilation infrastructure nor memory for compiled code, profile data, and bytecode caches. All it needs is memory for the executable and the application data. Here's an example:

Memory and CPU Usage in JIT and Native Image Modes

The charts above show the runtime behavior of a web server on a JVM (left) and as a native executable (right). The teal line shows how much memory is used: 200 MB in JIT mode vs. 40 MB for the native executable. The red lines show CPU activity: the JVM uses the CPU heavily during the JIT warmup activities described previously, while the native executable barely uses the CPU, since all the expensive compilation operations happened at build time. Such fast and resource-efficient runtime behavior makes Native Image a great deployment model for microservices, serverless, and cloud workloads in general, where using fewer resources for less time significantly reduces costs.

Packaging size. A native executable contains only the required code. That's why it's much smaller than the combined size of the application code, libraries, and a JVM. In some scenarios, such as working in resource-constrained environments, the packaging size of your application can be important. Utilities such as UPX can compress native executables even further.

Peak Performance on Par with the JVM

What about peak performance, though? How does Native Image optimize for peak throughput at runtime when everything is compiled ahead-of-time?

We are working to ensure that Native Image provides great peak performance as well as fast startup. There are already a couple of ways to improve the peak performance of native executables:

  • Profile-guided optimizations. Since Native Image optimizes and compiles code ahead of time, by default it doesn't have access to the runtime profiling information to optimize code when the application runs. One way to address this is with profile-guided optimization (PGO). With PGO, developers can run an application, collect the profiling information, and then feed it back into the native image generation process. The 'native-image' utility uses this information to optimize the performance of the resulting executable based on your application's runtime behavior. PGO is available in GraalVM Enterprise, which is a commercial version of GraalVM, provided by Oracle.
  • Memory management in Native Image. The default garbage collector in an executable generated by Native Image is Serial GC, which is optimal for microservices with a small heap. There are also additional GC options available:
    • Serial GC now has a new policy enabling survivor spaces for the young generation, which reduces the application's runtime memory footprint. Since introducing this policy, we have measured peak throughput improvements of up to 23.22% for a typical microservices workload such as Spring Petclinic.
    • Alternatively, you can use the low-latency G1 garbage collector for even better throughput (available in GraalVM Enterprise). G1 is best suited for larger heaps.

With PGO and the G1 GC, native executables achieve peak performance on par with the JVM:

Geomean of Renaissance and DaCapo Benchmarks

With these options, you can maximize every performance dimension of your application with Native Image: startup time, memory efficiency, and peak throughput.

Busting Native Image Myths: Reflection, Configuration, and More

Since Native Image is a completely new way of executing Java applications, there are a few things to keep in mind.

You may have heard that GraalVM Native Image doesn't support reflection. This isn't true.

Native Image performs static analysis under a closed-world assumption. Therefore, dynamic Java features, such as reflection, require additional configuration for the build process to succeed. When it performs static analysis of your Java application, Native Image tries to detect and handle calls to the Reflection API. In general, however, this automatic analysis is not always enough, and the program elements accessed reflectively at runtime must be specified via configuration. You can create this configuration manually or leverage the Native Image tracing agent. The agent tracks the use of dynamic features during program execution on the JVM and produces a configuration file. The Native Image utility then uses that file to include the parts of the program accessed via reflection. Although the agent is useful for generating the initial configuration, we recommend that you manually inspect and complete it as necessary.
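The sketch below (class name is illustrative) shows the two cases: a constant class name like the one passed in `main` is typically detected by the static analysis automatically, whereas a name computed at runtime needs an entry in the reflection configuration, which the tracing agent (run on the JVM with `-agentlib:native-image-agent=config-output-dir=<dir>`) can generate for you:

```java
public class ReflectionDemo {
    // Looks up a class by name and instantiates it via its no-arg constructor.
    // Under Native Image, classes reached only through such calls must be
    // reachable in the analysis or listed in reflect-config.json.
    static Object createByName(String className) {
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Reflective instantiation failed", e);
        }
    }

    public static void main(String[] args) {
        // A constant name: the Native Image analysis can usually fold this itself.
        Object list = createByName("java.util.ArrayList");
        System.out.println(list.getClass().getName());
    }
}
```

If the class name came from, say, a configuration file read at runtime, the build could not see it, and only the agent-generated (or handwritten) configuration would keep the class in the image.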

A similar configuration may be required when using Java Native Interface (JNI), Dynamic Proxy objects, and classpath resources. You can also use the same tracing agent to configure usage of all of those features.

Finally, you can use the GraalVM Dashboard, a web-based application that visualizes Native Image compilation, to discover which packages, classes, and methods were included in the native executable, and to also identify which objects take up the most space in the heap.

Changing Java's Cloud Game

Native Image makes a huge difference for cloud deployments, where it can have a large impact on the resource consumption profile for your applications. We already learned that native executables produced by Native Image start fast and need less memory. What exactly does it mean for cloud deployments, and how can GraalVM help you minimize your Java container images?

As we have already established, applications generated by Native Image don't need the JVM to run: They can be self-contained and include everything that is needed for your application to execute. This means that you can put your application into a slim Docker image, and it will be fully functional on its own. The image size will depend on what your application does and which dependencies it includes. A basic "Hello, World!" application, built with a Java microservice framework, is around 20 MB.

With Native Image, you can also build static and mostly-static executables. A mostly-static native executable is statically linked against all libraries except 'libc', which is provided by the container image. You can use a so-called distroless container image for lightweight deployments. Distroless images include only the libraries needed to run the application, with no shells, package managers, or other programs. As an example, your Dockerfile might simply be:

```
FROM gcr.io/distroless/base
COPY build/native-image/application app
ENTRYPOINT ["/app"]
```

For a completely autonomous deployment that doesn't even require the container image to provide libc, you can statically link your application with 'musl-libc'. You can put it in a 'FROM scratch' Docker image because it is fully self-contained.

Using Native Image in Production

So far, we've talked about how to maximize the performance of the application that you have generated using Native Image and considered a few helpful hacks that you can apply during the build process. Now, is there anything else you can do to get the most out of your applications? Yes: lots.

To simplify building, testing, and running a Java application as a native executable, use the official Maven and Gradle plugins provided by the GraalVM team. Furthermore, those plugins support native JUnit 5 testing. They were developed in collaboration with the JUnit, Micronaut, and Spring teams and are a great example of collaboration in the JVM ecosystem.

To set up GraalVM Native Image in your GitHub Action workflows, use the GitHub action for GraalVM. The configurable action supports several GraalVM releases and developer builds and fully sets up GraalVM and specific components.

Let's talk a little about tooling. When developing a Java application that you want to distribute as a native executable, you can use the same tools that you would normally use. You can use any IDE and any JDK, including the GraalVM JDK, to build, test, and debug your application, and then use the GraalVM Native Image utility to perform the final native compilation step. Depending on the complexity of the application, Native Image compilation can take some time, so it makes sense to perform it as the last step. However, we are working on a quick development mode for Native Image that will significantly reduce compilation time by skipping many of the optimizations needed for production deployment.

Even though you can develop your application on the JVM and then build a native executable later in your development process, we received many requests from our community to improve build times and resource usage. We've done a lot of work on this issue over the last couple of releases. With the latest release of GraalVM (22.0), you can produce a native executable from a hello-world Java application in approximately 13.8 seconds, and the executable size will be around 5 MB. We also reduced memory usage by about 10%.

To debug an executable built using Native Image, you can either use 'gdb' from the command line (on Linux & macOS), or GraalVM's VS Code extensions. This tutorial provides step-by-step instructions.

To monitor the performance of your native executable, use JDK Flight Recorder. Complete support for Native Image is still a work in progress, but you can already use it to observe custom and system events.
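For example, you can already define and emit a custom event with the standard `jdk.jfr` API; the event name and field below are made up for illustration:

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

public class JfrDemo {
    // A custom JFR event; it appears in recordings as "demo.OrderProcessed".
    @Name("demo.OrderProcessed")
    @Label("Order Processed")
    static class OrderProcessed extends Event {
        @Label("Order id")
        long orderId;
    }

    static String emit() {
        OrderProcessed event = new OrderProcessed();
        event.orderId = 42L;
        event.begin();
        // ... the work being measured would go here ...
        event.commit(); // recorded only if a flight recording is active; otherwise a no-op
        return "committed";
    }

    public static void main(String[] args) {
        System.out.println(emit());
    }
}
```

The same event class works on the JVM and, as Native Image JFR support matures, in native executables built with JFR monitoring enabled.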

For additional performance monitoring, generate a heap dump of a native executable and then analyze it using a tool such as VisualVM. This is a GraalVM Enterprise feature.

Adopted by Java Frameworks

It would be very hard to write industry-grade applications without Java framework support. Luckily, you don't have to. All major frameworks support Native Image (listed in alphabetical order): Gluon Substrate, Helidon, Micronaut, Quarkus, and Spring Boot. All those frameworks leverage GraalVM Native Image to dramatically improve startup times and resource usage of applications, making them perfect for efficient cloud deployments. Future articles in this series will describe how frameworks use GraalVM Native Image.

The Future of Native Image

Since its first public release, Native Image has taken huge steps forward. It's widely adopted by Java frameworks, cloud vendors offer Native Image as a runtime, and many libraries work with Native Image out of the box. We've made several changes to the developer experience, and as our study from last year shows, 70% of developers who use GraalVM already use it to build and distribute native executables.

We have many ideas for new features and improvements in Native Image, including:

  • Supporting more platforms
  • Simplifying configuration and compatibility for Java libraries
  • Continuing with peak performance improvements
  • Continuing to work with Java framework teams to leverage all Native Image features, develop new ones, improve performance, and ensure a great developer experience
  • Introducing a faster development compilation mode
  • Supporting virtual threads from Project Loom
  • IDE support for Native Image configuration and agent-based configuration
  • Further improving GC performance and adding new GC implementations

We are grateful to the community and our partners for helping us move Native Image forward and making it more and more useful for every Java developer. If there are new features or improvements you want to see in Native Image, share your feedback with us via GraalVM's community platforms!

 



Community comments

  • What about Leyden?

    by Ed Burns,

    Thank you for taking the time to write an article with clear and terse style. I appreciate it.

    Can you please address the need to unify the extra JLS ways of denoting areas where reflection is used with the work in Project Leyden?

    Thanks

    Ed

  • Is GraalVM really a silver bullet?

    by Chris Richardson,

    The benefits are clear.
    But surely there are downsides, right?

  • Re: Is GraalVM really a silver bullet?

    by Thomas Wuerthinger,

    The two primary downsides are the increased build time and the required reflection configuration to make the reachability analysis work. The former is a one time cost in your CI/CD pipeline that you only need to pay at deployment time. The latter can be mitigated by using a framework with GraalVM native image support like Spring Native, Quarkus, Micronaut, or Helidon.

  • Re: Is GraalVM really a silver bullet?

    by Chris Richardson,

    Is there a need to test the native executable to verify the correctness of reachability analysis?

  • Re: What about Leyden?

    by Karsten Silz,

    Hi Ed,

    One article in the series will discuss Project Leyden. Hopefully, it'll answer your question!

  • Re: Is GraalVM really a silver bullet?

    by Karsten Silz,

    Hi Chris,

    Yes, there is a need to test the native executable. The recently published "Native Java in the Real World" article gives an example: JSON deserialization failed because GraalVM Native Image removed some of the classes needed there. Here's a quote: "Starting testing at the entry points for any given service, regardless of running in a JVM or a native executable, is a good way to validate functionality where it matters most."
