
Google Boosts ART Compile Times by 18% Without Compromising Code Quality

Google's Android Runtime (ART) team has achieved an 18% reduction in compile times for Android code without compromising code quality or increasing peak memory usage, delivering significant performance improvements for both just-in-time (JIT) and ahead-of-time (AOT) compilation.

As Google software engineers Santiago Aboy Solanes and Vladimír Marko explain, reduced compilation time for JIT-compiled code allows optimization to kick in sooner at runtime, directly improving overall device performance. For both JIT- and AOT-compiled code, faster builds reduce device workload and improve battery life and thermal performance, especially on lower-end hardware.

Aboy Solanes and Marko emphasize the importance of reducing compile times without sacrificing the performance of the generated code or increasing its memory requirements. Typically, they note, making a compiler faster means giving something else up. In this case, however:

The one resource we were willing to spend was our own development time to dig deep, investigate, and find clever solutions that met these strict criteria. Let’s take a closer look at how we work to find areas to improve, as well as finding the right solutions to the various problems.

They focused on three key efforts, starting with careful measurement of compile times using tools like pprof to establish a baseline for comparing performance before and after the change. They then selected a representative mix of first-party and third-party apps, along with the Android operating system itself, to profile typical workloads and prototype potential solutions.

With that set of hand-picked apks we would trigger a manual compile locally, get a profile of the compilation, and use pprof to visualize where we are spending our time. [...] The pprof tool is very powerful and allows us to slice, filter, and sort the data to see, for example, which compiler phases or methods are taking most of the time.

With that foundation in place, ART engineers reduced unnecessary work across internal compiler phases by skipping iterations that yielded no effect, using heuristics and additional caches to avoid expensive computations, lazily computing results to eliminate redundant cycles, cleaning up abstractions, and more.
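To illustrate one of the patterns described above, here is a minimal, hypothetical sketch (not ART's actual code) of an optimization pass run to a fixed point, where further iterations are skipped as soon as a pass reports that it changed nothing:

```cpp
#include <cstddef>
#include <vector>

// Stand-in "optimization": collapse adjacent duplicate values in a fake IR.
// Returns true only if it actually changed something.
static bool DedupAdjacent(std::vector<int>& ir) {
  bool changed = false;
  for (std::size_t i = 1; i < ir.size();) {
    if (ir[i] == ir[i - 1]) {
      ir.erase(ir.begin() + static_cast<std::ptrdiff_t>(i));
      changed = true;
    } else {
      ++i;
    }
  }
  return changed;
}

// Re-run the pass only while it keeps making progress. The first iteration
// that observes no change becomes a cheap early exit, instead of always
// running a fixed number of iterations regardless of effect.
static int RunToFixedPoint(std::vector<int>& ir) {
  int productive_iterations = 0;
  while (DedupAdjacent(ir)) {
    ++productive_iterations;
  }
  return productive_iterations;
}
```

The same shape applies to real compiler passes: each pass reports whether it modified the IR, and the driver stops re-running it once nothing changes.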

Identifying upfront which optimizations were worth pursuing required particular care to minimize unproductive efforts.

After detecting that an area is consuming a lot of compile time and after devoting development time to try to improve it, sometimes you can’t just find a solution. Maybe there’s nothing to do, it will take too long to implement, it will regress another metric significantly, increase code base complexity, etc.

To this end, the ART team aimed to estimate how much each metric could be improved with minimal effort. This involved using previously collected metrics or sometimes just a gut feeling, building a quick and dirty prototype, and finally implementing a proper solution.

Aboy Solanes and Marko also provide a list of some of the optimizations implemented, including reducing the lookup complexity of FindReferenceInfoOf from O(n) to O(1), passing data structures by reference to avoid unnecessary data creation and destruction, caching computed values, and many others that cannot all be covered here.
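The first two techniques can be sketched as follows. This is a hypothetical illustration, not ART's FindReferenceInfoOf implementation: a hash-map index turns a linear scan into an amortized O(1) lookup, and a large structure is passed by const reference to avoid copying it on every call.

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for the kind of per-reference data being looked up.
struct ReferenceInfo {
  std::string name;
  int id;
};

class ReferenceTable {
 public:
  void Add(ReferenceInfo info) {
    // Maintain the hash-map index as entries are added, so lookups never
    // need to scan the vector.
    index_[info.id] = infos_.size();
    infos_.push_back(std::move(info));
  }

  // O(1) amortized lookup via the index, instead of an O(n) linear scan.
  const ReferenceInfo* Find(int id) const {
    auto it = index_.find(id);
    return it == index_.end() ? nullptr : &infos_[it->second];
  }

 private:
  std::vector<ReferenceInfo> infos_;
  std::unordered_map<int, std::size_t> index_;
};

// Taking the table by const reference avoids copying its contents on each
// call, which is the pass-by-reference optimization mentioned in the post.
static bool Contains(const ReferenceTable& table, int id) {
  return table.Find(id) != nullptr;
}
```

The trade-off is a small amount of extra memory for the index, which is why the team's constraint of not increasing peak memory usage made such choices non-trivial.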

Some of these speed improvements were introduced in the June 2025 Android release, while the remaining optimizations are included in the end-of-year release. Additionally, Android version 12 and above can receive these improvements through mainline updates.
