
Azul Launches Java Cloud Compiler


Azul has launched a new cloud-native compiler that offloads Java JIT compilation from a local system to an elastic resource, lowering the amount of resources needed to run the application and improving time to peak performance.

The Java runtime is a self-contained system, designed to run and optimize code on a single machine. It works through Just-In-Time (JIT) compilation, using local resources to convert Java bytecode (JAR and class files) into native machine code to improve speed and memory use. Simon Ritter, deputy CTO of Azul, explains this process in a blog post, Understanding Java Compilation, laying out how code is improved over time. JIT compilation is transparent to most users and requires no interaction from a developer or administrator; however, curious developers can observe it through tools like JITWatch. By offloading JIT compilation to a separate system, Azul's cloud compiler can perform it faster, share optimizations between systems running the same code, and return resources to the application to either improve throughput or decrease the total CPU/RAM required, lowering cost.
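To see this process in action, a minimal sketch (the class and method names here are illustrative): run a program with a hot loop under the standard HotSpot flag -XX:+PrintCompilation, and the JVM logs each method as it is promoted from interpreted bytecode to native code.

```java
// A hot loop that a JIT compiler will typically compile to native code
// after enough invocations. Run with:
//   java -XX:+PrintCompilation HotLoop
// to watch the compiler log methods (including HotLoop::square) as they
// are JIT-compiled. The class name and workload are illustrative.
public class HotLoop {
    // A small pure method; after many calls the JVM's tiered compiler
    // typically compiles and inlines it.
    static long square(long x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (long i = 0; i < 1_000_000; i++) {
            sum += square(i);  // hot call site, candidate for inlining
        }
        System.out.println(sum);
    }
}
```

The compilation log is the same information JITWatch visualizes; no code changes are needed to enable JIT itself.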

Red Hat has published a blog post, How the JIT compiler boosts performance in OpenJDK, that covers how JIT improves Java applications as they run. The post describes how Java applications start in the interpreter, profile the running code, and apply optimizations to improve performance. Because they run inside the same self-contained JRE, these JIT optimizations share resources with the application itself; the runtime makes improvements while the application runs and then switches execution over to the faster JIT-compiled code.
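The switch to compiled code shows up as warmup: early iterations of the same work run slower than later ones. A minimal sketch (class name and workload are illustrative; absolute timings vary by machine, the downward trend is the point):

```java
// Time repeated batches of identical work; later batches typically run
// faster once the JVM has JIT-compiled the hot method.
public class Warmup {
    // Deterministic numeric work that becomes hot quickly.
    static double work() {
        double acc = 0;
        for (int i = 1; i <= 100_000; i++) {
            acc += Math.sqrt(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        for (int batch = 0; batch < 5; batch++) {
            long start = System.nanoTime();
            double result = work();
            long micros = (System.nanoTime() - start) / 1_000;
            // Early batches run interpreted or lightly compiled;
            // later batches usually report smaller times.
            System.out.println("batch " + batch + ": " + micros + " us");
        }
    }
}
```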

An alternative to JIT is Ahead-Of-Time (AOT) compilation, which compiles a Java application to native machine code before it runs rather than at runtime. The aim of AOT is to decrease time to peak performance and reduce memory consumption by doing the work in advance. A Baeldung article, Deep Dive into the New Java JIT Compiler, describes the roles and differences of the standard OpenJDK JIT compilers, C1 and C2, against the GraalVM AOT compiler. Neither JIT nor AOT is inherently better: while AOT-compiled applications can often start faster, compiling ahead of time forgoes the profile information that can be learned by watching which parts of the code actually run, AOT compilation takes significantly longer than compilation to bytecode, and it carries other behavioral trade-offs.
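The contrast between the two delivery models can be sketched with an ordinary program (the class name is illustrative, and the commands in the comments assume a GraalVM installation with the native-image tool):

```java
// Compiled and run conventionally, this executes on the JVM and is
// JIT-compiled as it warms up:
//   javac Hello.java && java Hello
// With GraalVM installed, the same bytecode can instead be compiled
// ahead of time into a standalone native executable:
//   native-image Hello
// The native binary starts faster, but it cannot benefit from the
// runtime profiling a JIT performs on the live workload.
public class Hello {
    static String greeting() {
        return "Hello from Java";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```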

Azul Intelligence Cloud introduces a third option that bridges the benefits of JIT (peak performance) with the benefits of AOT (fast startup). Because teams often run the same code across many systems, each JRE communicates with an organization’s shared JIT server so that it does not need to repeat the same observation and optimization cycle. The cloud compiler can also perform deeper analysis, using CPU resources that are isolated from the running application. Applications can then reach peak performance in roughly the network time needed to transfer the compiled code. In tests described at QCon, time to peak performance was significantly shorter and throughput improved by 25% to 100%.

Azul’s Cloud Compiler is available in the commercial Azul Prime distribution. A free version of Prime is available for development, testing, and evaluation. Instructions to set up the Kubernetes-based cloud compiler service appear within the Intelligence Cloud documentation.


Community comments

  • Interesting approach

    by Diego Visentin,

    Interesting approach. It should be tested in the field with multiple instances of the same Java microservice, to measure the memory gain, startup time, and performance.
    PS: it seems to me very similar to the Eclipse OpenJ9 JITServer:
    www.eclipse.org/openj9/docs/jitserver/

  • Re: Interesting approach

    by Erik Costlow,

    There's a QCon talk that covered metrics; I'm doing a write-up on that with more detail. In short, it was about a 25-100% improvement in throughput, and time to peak performance was much shorter.
    They also cited the IBM JIT Server, but this one is more productized, whereas that JIT server is still more in the prototype phase.
