Native Java in the Real World

Key Takeaways

  • Microservices on Kubernetes are the business case sweet spot for native Java because they have the most significant framework and Java runtime overhead.
  • Native Java adoption can happen incrementally, one microservice at a time.
  • The application framework should fully support native Java in production.
  • Native Java requires more effort to build, debug, test, deploy, and profile.
  • Converting an application to native Java only works if all its application libraries support native Java.

This article is part of the article series "Native Compilation Boosts Java". You can subscribe to receive notifications about new articles in this series via RSS.

Java dominates enterprise applications. But in the cloud, Java is more expensive than some competitors. Native compilation with GraalVM makes Java in the cloud cheaper: It creates applications that start much faster and use less memory.

So native compilation raises many questions for all Java users: How does native Java change development? When should we switch to native Java? When should we not? And what framework should we use for native Java? This series will provide answers to these questions.

Introduction

The rising popularity of microservice architectures brings to mind the famous "Top Gun" movie quote, "I feel the need, the need for speed". Smaller containers, faster startup times, and better resource utilization have become increasingly crucial for running cloud services.

Java has long been criticized for its slow startup times, its numerous dependencies (pulled in wholesale, whether used or not), and its heavy resource requirements. Add the JVM and an application server to the mix, and the demands become even heavier.

Traditional, JVM-based Java services have been less than ideal for true microservice platforms, especially serverless APIs.

This is where native Java really shines ...

Finding the Sweet Spot

Native Java is the perfect fit for Kubernetes, microservices, and serverless components. Developing new services or breaking larger monolithic applications apart into smaller ones also presents an ideal opportunity to adopt it.

Adopting native Java does not have to be a "big bang" approach — it can be done one service at a time. This approach minimizes risk and will help build confidence as the technology further matures over time.

Making the move may seem overwhelming at first, but it is not that different from traditional Java development today.

Logicdrop develops an all-in-one platform for business automation and data intelligence that enables enterprises to design and deploy their own solutions into the cloud. Our platform, originally developed using Spring Boot and Drools, has been redesigned from the ground up to use only Quarkus and Kogito and deploy mostly native Java executables.

Before switching to native Java, running an increasing number of Spring Boot services in a cloud-native infrastructure was becoming challenging, not to mention costly, at scale. Regardless of functionality, containers were always ~1GB+ in size because they needed a JVM and included a full set of dependencies (used or not). Startup times averaged 15-30 seconds, and only a handful could be run per node due to already tight resource constraints.

After moving to Quarkus, the native executables produced were notably smaller, started significantly faster, and used fewer resources overall. Containers were less than 50MB (compressed) in size and ready to accept requests in less than 1 second. These gains made native Java an ideal fit for environments where size and startup times were crucial, both in cost and performance.

Throughput was less of a concern, and we found it roughly the same after making the jump. Since scaling was faster and more services could be packed into fewer nodes, horizontal scaling made up for any differences.

Here is an example of our cost savings: a single cluster in Amazon’s Kubernetes service EKS cost almost $5,000/year for five nodes running multiple Spring Boot services. Moving to native Java reduced that cost by almost 50% because only half the resources were needed. This translated to significant savings across all our clusters!

When to Use Native Java

Native Java is pretty impressive: GraalVM puts Java on par with other "lighter and faster" stacks while keeping with the familiar Java constructs we all know. And "lighter and faster" is critical in the cloud!

Native Java executables can also be more secure: GraalVM reduces the surface area for exploits by stripping unused classes, methods, and fields.

New microservices are the ideal starting point for native Java since they can be written from scratch to take advantage of established, proven native-compatible libraries.

When deciding what to move to native Java, these prerequisites served as a good starting point:

  1. Is the service standalone?
  2. Are startup times and scaling important?
  3. Are the external dependencies compatible with native Java?

As the GraalVM article in this series explained, extra configuration may be needed to properly handle dynamic Java features (such as reflection). Without this additional metadata, a library may fail when used as part of a native executable! So from our experience, a Java library is either compatible with native Java or it is not.
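
As a minimal sketch of what that metadata looks like in practice: Quarkus lets you register a class for reflection with an annotation, while plain GraalVM accepts the same information through a reflect-config.json file. The class below is purely illustrative.

    import io.quarkus.runtime.annotations.RegisterForReflection;

    // Keeps this class (and its members) in the native executable even though
    // it is only reached via reflection, e.g. by a JSON serializer.
    @RegisterForReflection
    public class InvoiceModel {
        public String id;
        public double amount;
    }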

Using a framework that provides a well-curated set of libraries makes it easier to know what works in native Java and what doesn’t. Unfortunately, things are harder for other Java libraries: Currently, the only way to tell whether a library is compatible with native Java is to run it in a native executable. Most of the time, if there are any failures, they will appear quickly.

Apache Ignite is one such library: it failed in native Java because it relies on low-level Java APIs. We still use it for caching in certain Spring Boot services but have replaced it with Redis in native executables.

Knowing which libraries are compatible with native Java can be a significant factor in deciding what makes a good candidate for native Java: For incompatible libraries, we either use a replacement or reimplement the functionality.

Luckily, most Java applications depend on similar types of functionality that already ships with frameworks: logging, REST APIs, JSON, and so on. As an example, these APIs already existed in Quarkus and are compatible with native Java:

  • Persistence (NoSQL and RDBMS)
  • Observability (Elastic, Prometheus, Jaeger, etc.)
  • AWS SDK
  • Security
  • SOAP (Apache CXF)
  • REST (RESTEasy, Jackson, etc.)
  • Support (Swagger, Logging, etc.)

The above list shows that a good number of commonly used libraries can and already do work with native Java. And the list continues to grow!
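
To give an idea of how such curated libraries are pulled in, Quarkus packages them as extensions that can be listed and added from the command line; the extension names below are only examples, and the exact set varies by version.

    # List the available extensions, then add a few native-ready ones
    ./mvnw quarkus:list-extensions
    ./mvnw quarkus:add-extension -Dextensions="resteasy-jackson,smallrye-openapi,jdbc-postgresql"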

However, not all services will be an ideal fit for native Java. There will be libraries and code that make the transition more trouble than it is worth. It is better to leave those services as-is and possibly re-evaluate them later.

From our experience, moving to native Java did not make sense when:

  • Startup times, scaling, and resource requirements were not critical
  • Specialized libraries did not have a native equivalent or were not native friendly yet
  • Dynamic Java, like reflection or dynamic proxies, was heavily used

A note regarding dynamic Java: GraalVM only supports dynamic proxies whose interfaces are registered at build time, because native executables need all of their classes available then. Reflection is supported, but where elements cannot be resolved at build time, an agent can be run on a regular JVM to trace usages of reflection and dynamic proxies and generate the required configuration.
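
To make this concrete, here is roughly how that tracing agent is invoked; the jar name and output directory are illustrative. The agent observes the application running on a normal JVM and writes the reflection, proxy, resource, and JNI configuration files that the native build later picks up.

    # Run the service on a regular JVM with the GraalVM tracing agent attached
    java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
         -jar target/my-service.jar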

Whenever the complexity, effort, and risk outweighed the immediate benefits of moving to native Java, we backlogged those services to revisit later. These were only a few services.

Choosing a Framework

Choosing a native framework is like choosing a starter Pokemon: Each has advantages and disadvantages. So picking one requires careful thought about the long-term usage.  

Native Java can be used for plain Java development. But most organizations should opt to build upon a framework because it will reduce boilerplate code and provide a curated set of APIs that will save time and effort. Additionally, each framework shields you from the process of building native executables, further reducing the complexity and learning curve.

The chosen framework should fully embrace GraalVM, offer a rich ecosystem supporting native Java, and simplify the building of native executables in a way that makes sense for your organization. With that in mind, only three Java frameworks do this today — Quarkus, Micronaut, and Helidon.

Some frameworks can even run "traditionally" in a JVM while still taking advantage of some GraalVM optimizations. This can be a good fallback when an application or service cannot run fully native.

We chose Quarkus after evaluating the available frameworks. It was the fastest framework to get up and running, it leveraged Java standards, the documentation was excellent, it provided all the functionality we needed out of the box, and the community was extremely helpful and supportive. That’s why in just a couple of months, our whole backend team switched from Spring Boot to Quarkus.
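
To show how ordinary the code stays, here is a minimal sketch of a REST endpoint as we would write it in Quarkus; the class and path are made up, and depending on the Quarkus version the JAX-RS imports come from the javax or jakarta namespace. Nothing in the class itself is specific to native compilation.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // A plain JAX-RS resource; the same source runs on the JVM in dev mode
    // and inside a native executable without modification.
    @Path("/status")
    public class StatusResource {

        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String status() {
            return "ok";
        }
    }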

Adopting Native Java

Making the jump to native Java is not as scary as one might expect — the development experience remains essentially the same. But some processes need to change slightly to deliver native executables reliably.

For day-to-day development, we develop Java services as usual: Write Java code and use the IDE or command-line tools to test and debug them. Building native executables will introduce additional steps and new considerations to this process.

A typical lifecycle targeting native executables looks like this:

  • Develop, debug, and test services normally on developer machines
  • Enforce stricter and more robust testing
    • Test the structures of API payloads to ensure they are complete
    • Test endpoints "as if running in production" to ensure all code is covered
  • Build, test, and deploy native executables for each environment and/or OS

Unlike traditional Java development, building native Java executables is resource-intensive — it can take anywhere from 2 to 10 minutes to build each service, even on a sizable workstation!

And unlike traditional Java development, creating a single WAR or JAR file isn’t enough: Each OS needs its own native executable. And because native executables inline their code and properties, each environment also needs its own native executable. For example, Swagger may be exposed in staging but not in production. So, the staging executable needs to be built with the Swagger dependencies included, whereas the production executable does not. The same goes for any properties or configurations that cannot be handled at runtime. If only Linux containers are targeted, the number of build variations is at least reduced.
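
As a sketch of why builds multiply per environment, assume the Quarkus Swagger UI extension and its build-time quarkus.swagger-ui.always-include switch: the value is baked into the executable, so a staging build and a production build cannot share one binary. The staging executable would then be produced with -Dquarkus.profile=staging at build time, and the production one with the default prod profile.

    # application.properties (illustrative) - settings fixed at build time
    %staging.quarkus.swagger-ui.always-include=true
    %prod.quarkus.swagger-ui.always-include=false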

Building

It is best to build native Java executables on developer machines only when needed: before a significant feature is about to be merged, or when a problem arises that requires debugging. For everything else, relying on CI/CD pipelines to offload the building and testing of the different targets makes the process less intrusive and reduces the strain on developers.
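
For reference, a pipeline step along these lines is what we mean by offloading the build; the image name is illustrative, the container-build flag lets the CI agent produce a Linux native executable without a local GraalVM installation, and older Quarkus versions use -Pnative instead of -Dnative.

    # Build the native executable inside a build container, then package it
    ./mvnw package -Dnative -Dquarkus.native.container-build=true
    docker build -f src/main/docker/Dockerfile.native -t registry.example.com/orders-service:1.2.3 .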

We previously mentioned that containers with native executables are much smaller and require far fewer resources. That allowed us to deploy multiple preview environments into the cluster instead of relying on a single shared environment. Developers could now test all of the services together, natively built for their specific configurations and in isolated environments, without "stepping on somebody’s toes." This is also possible with traditional Java but far more costly because of the demand on already constrained cloud resources.

For example, we started off with only the usual three environments: development, staging, and production. Using native executables, we can now have upwards of 20 preview environments, each built and configured with all the services needed (~20 services currently). So, instead of sharing a single development environment with the capacity for only 20 services, we can have 20 or more preview environments running in parallel, exposing a combined total of 400 services.

Debugging

When the problem lies in the native executable, then we need to debug the native executable. This requires some additional setup and tooling, as well as a GraalVM installation. Once set up, though, it is not much different from debugging Java with popular IDEs today.

Debugging starts by attaching to the running native process, linking the IDE to the Java source files, and finally stepping through the code. Once attached to the process, all the usual actions are possible: Setting breakpoints, creating watches, inspecting the state, etc.

Luckily, the tools have come a long way since we started our native Java journey. For example, Visual Studio Code has excellent extensions for Quarkus and GraalVM that provide full Java development and debugging capabilities; the GraalVM extension also includes the GraalVM runtime and integrates with VisualVM so that native executables can be analyzed.

According to the GraalVM FAQ, IntelliJ, Eclipse, and NetBeans also support GraalVM. As a last resort, you can always use the GNU Debugger to debug the native executable.
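
For anyone curious what that last resort looks like, here is a rough sketch, assuming a Quarkus build (the runner name is whatever your build produces): debug symbols are enabled through a build property, and gdb then understands the Java-style names that GraalVM emits.

    # Build with debug info, then step through the native executable with gdb
    ./mvnw package -Dnative -Dquarkus.native.debug.enabled=true
    gdb ./target/orders-service-1.2.3-runner
    (gdb) break com.example.StatusResource::status
    (gdb) run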

Testing

Testing native Java executables is similar to testing traditional Java services. But it is imperative to be aware of the nuances.

An immediate drawback of testing native executables is their static, closed-world nature. Tried-and-true testing approaches that rely on Java’s dynamic nature, such as mocking libraries, are impossible to use here. And any changes to the source code require a new build of the native executable first, which is also a much slower process than in traditional Java.

GraalVM also tries to inline and/or remove as much code as possible. This can cause subtle issues when code that GraalVM considers unreachable is actually needed at runtime.

An example of mistaken code removal was Jackson JSON serialization. Our JUnit tests reported serialization to be fine during development. But the native executable was missing specific nested models, with no exception thrown. The reason was that GraalVM removed some of the models from the executable because it thought they were unused. The fix was simple: Register any classes used in the JSON payloads with GraalVM. That prevented their exclusion from the native executable. We also expanded our tests, checked the payloads thoroughly, and added more smoke tests.

Dynamic functionality, such as reflection, is another area to watch closely. In some cases, exceptions may not be thrown, or certain problems with functionality do not surface until well after the executable has been deployed.

Beyond that, we found that tests that exercise endpoints are an excellent way to guarantee the expected functionality and correct payloads in native executables. Starting testing at the entry points of any given service, regardless of whether it runs in a JVM or as a native executable, is a good way to validate functionality where it matters most.
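
As an illustration of testing at the entry points, the sketch below reuses the hypothetical /status endpoint from earlier; Quarkus runs such an integration test against the packaged artifact, so when the build produced a native executable, the test exercises that executable over HTTP.

    import io.quarkus.test.junit.QuarkusIntegrationTest;
    import org.junit.jupiter.api.Test;

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.is;

    // Exercises the real endpoint of the packaged (possibly native) service,
    // catching missing classes or broken payloads that unit tests can miss.
    @QuarkusIntegrationTest
    public class StatusResourceIT {

        @Test
        public void statusEndpointResponds() {
            given()
              .when().get("/status")
              .then()
                 .statusCode(200)
                 .body(is("ok"));
        }
    }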

Summary

Moving to native Java was never one of our original objectives. We only wanted to re-architect our existing platform to be more cloud-native, prepare for upcoming features, and better utilize the Kubernetes clusters at scale.

We believe that choosing Quarkus was one of our best decisions ever. It made adopting native Java very easy. With some up-front planning and diligence, and after building a few prototypes, it quickly became apparent that jumping head-first into native Java was not only possible; it was happening organically with minimal effort!

There are challenges for sure. You can expect changes in traditional development and delivery. But they won’t be that different from how Java services are developed today. Moving to native Java was merely an addition to existing processes for us.

At the end of the day, any microservice will typically benefit from faster startup times and lower resource demands. The advantages of using native Java, especially in Kubernetes, coupled with the cost savings and measurable efficiencies, were why we moved to native Java.

Native Java executables take Java to the next level. If the opportunity does present itself and the conditions are right, it is well worth the added effort to make the jump and start using GraalVM!

 
