How Facebook Redesigned the HHVM JIT Compiler for Performance

In the summer of 2013, Facebook engineers started a major redesign of the HHVM JIT compiler that eventually brought an overall 15% reduction in CPU usage on Facebook’s web servers. Facebook engineer Guilherme Ottoni has recently described how Facebook achieved that result by baking profile-guided optimizations (PGO) into its JIT compiler.

Profile-guided optimization is a technique that uses runtime profiling information, such as which parts of the code execute most frequently, to improve code generation. PGO is a particularly good fit for dynamic and JIT compilers, since the compiler and the runtime environment are already tightly integrated.
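
At a very high level, PGO amounts to letting execution counters collected at run time steer compilation decisions. The following C++ sketch is purely illustrative and does not reflect HHVM's actual data structures; the block identifiers and the hot-count threshold are invented for the example.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical profile store: the runtime counts how often each code
// block executes, and the JIT consults the counts later.
struct ProfileData {
    std::unordered_map<std::string, std::uint64_t> executionCounts;

    void recordExecution(const std::string& blockId) {
        ++executionCounts[blockId];
    }

    bool isHot(const std::string& blockId, std::uint64_t threshold = 10000) const {
        auto it = executionCounts.find(blockId);
        return it != executionCounts.end() && it->second >= threshold;
    }
};

enum class OptLevel { Baseline, Optimized };

// The compiler spends expensive, type-specialized optimization effort
// only on blocks the profile shows to be hot; cold blocks keep cheap
// baseline code.
OptLevel chooseOptLevel(const ProfileData& profile, const std::string& blockId) {
    return profile.isHot(blockId) ? OptLevel::Optimized : OptLevel::Baseline;
}
```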

Facebook engineers focused on two main goals: using profile information to optimize decisions made at compilation time, and using it to help the compiler identify larger type-specialized compilation regions, i.e., regions where the generated code can be optimized for a given known type, thus avoiding the cost of type checks. To make this possible, the HHVM JIT compiler had to learn how to translate arbitrary code regions instead of just tracelets, which are very basic type-specialized blocks that are independently translated to machine code. Tracelets cannot grow arbitrarily: by definition, a tracelet ends whenever the type of one of its inputs cannot be determined, or whenever the JIT compiler cannot determine the direction of a branch.
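
To make the tracelet concept more concrete, the sketch below (a simplification, not HHVM's generated code) shows the shape of a type-specialized translation: a guard validates the assumed type of an input on entry, the body is specialized for that type, and a failed guard side-exits to a fallback path such as the interpreter. The value representation and helper names are invented for the example.

```cpp
#include <cstdint>

// Hypothetical value representation; real HHVM values are much richer.
enum class Type { Int, Str, Unknown };
struct Value { Type type; std::int64_t intVal; };

// Placeholder for the fallback path, e.g. re-entering the interpreter.
std::int64_t sideExit(const Value& /*v*/) {
    return 0;  // a real side exit would transfer control, not return a value
}

// Shape of a single type-specialized tracelet for the expression "x + 1":
// a type guard on the input, then a body specialized for Int.
std::int64_t traceletAddOne(const Value& x) {
    if (x.type != Type::Int) {   // guard: verify the assumed input type
        return sideExit(x);      // exit the tracelet when the guard fails
    }
    return x.intVal + 1;         // specialized body: no further type checks
}
```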

The first step Facebook engineers took to generalize tracelets was to stitch several of them together based on profiling information. This reduced the overhead of entering and exiting individual tracelets and, in addition, enabled more advanced cross-tracelet optimizations, such as hoisting loop-invariant computations out of loops.
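
As an example of the kind of cross-tracelet optimization this enables, consider loop-invariant code motion. The C++ fragment below is purely illustrative and unrelated to HHVM's internals; it only shows the effect of hoisting a computation out of a loop once the compiler can see the whole loop as a single region rather than as isolated blocks.

```cpp
#include <cmath>
#include <vector>

// Before: if each iteration is compiled in isolation, the loop-invariant
// std::sqrt(scale) is recomputed on every pass through the loop body.
void scaleAllNaive(std::vector<double>& xs, double scale) {
    for (double& x : xs) {
        x *= std::sqrt(scale);   // invariant work repeated inside the loop
    }
}

// After: once the whole loop is visible as a single compilation region,
// the invariant computation can be hoisted out and performed only once.
void scaleAllHoisted(std::vector<double>& xs, double scale) {
    const double factor = std::sqrt(scale);  // hoisted loop-invariant
    for (double& x : xs) {
        x *= factor;
    }
}
```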

Building larger regions out of basic tracelets had the advantage of not violating any assumptions in the existing JIT optimizer and backend, which had been designed tightly around the tracelet concept. In a second phase, though, Facebook engineers started a major redesign of those components so they could handle regions with arbitrary control flow. This effort, completed by the spring of 2015, brought the overall reduction in CPU usage to the 15% mentioned above, trebling the improvement gained in the first phase.
