Our JIT generates code whose performance is roughly equivalent to that of unoptimized C code (gcc -O0).
He also explains how tracing works:
Traditional just-in-time compilers (like Sun’s Hotspot VM) are in their design and structure very similar to static compilers (like GCC). They observe which methods get executed frequently, and translate a method into native machine code once a certain threshold has been reached. While such methods often contain performance-critical parts (such as loops), they often also contain slow paths and non-loopy code, which barely if at all contributes to the runtime of the method. A whole-method compiler, however, always has to analyze and translate the entire method, even if parts of it are not particularly “compilation-worthy”.
Trace-based compilation takes a very different approach. We monitor the interpretation of bytecode instructions by the virtual machine and scan for frequently taken backward branches, which are an indicator of loops in the underlying program. Once we identify such a loop start point, we follow the interpreter as it executes the program and record the sequence of bytecode instructions that get executed along the way. Since we start at a loop header, the interpreter will eventually return to this entry point once it has completed an iteration through the loop. The resulting linear sequence of instructions is what we call a trace.
Traces represent a single iteration through a loop, and can span multiple methods and program modules. If a function is invoked from inside a loop, we follow the function call and inline the instructions executed inside the called method. Function calls themselves are never actually recorded. We merely verify at runtime that the same conditions that caused that function to be activated still hold.
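The mechanism described above can be sketched in a few lines. This is a toy model only: the three-instruction bytecode, the accumulator-style environment, and the hotness threshold of 2 are all invented for illustration, not taken from TraceMonkey. It shows the two key steps Gal describes: counting frequently taken backward branches to find a hot loop header, then recording the linear sequence of instructions executed during one iteration as the trace.

```python
# Toy sketch of hot-loop detection and trace recording. The bytecode
# format ("load"/"add"/"jump_if"), environment, and threshold are
# invented for illustration; a real tracing VM is far more involved.

HOT_THRESHOLD = 2  # real VMs use a carefully tuned value

def run(bytecode, env):
    """Interpret `bytecode`; once a backward branch target becomes hot,
    record the instructions of one loop iteration as its trace."""
    counters = {}      # loop-header pc -> times its backward branch was taken
    traces = {}        # loop-header pc -> recorded instruction sequence
    recording = None   # (header_pc, instructions recorded so far)
    pc = 0
    while pc < len(bytecode):
        op, arg = bytecode[pc]
        if recording is not None:
            header, trace = recording
            trace.append((op, arg))
            if op == "jump_if" and arg == header:
                traces[header] = trace   # back at the entry point: trace done
                recording = None
        if op == "load":
            env["acc"] = arg
        elif op == "add":
            env["acc"] += arg
        elif op == "jump_if":            # jump to `arg` while acc < limit
            if env["acc"] < env["limit"]:
                if arg < pc:             # backward branch => likely a loop
                    counters[arg] = counters.get(arg, 0) + 1
                    if (counters[arg] >= HOT_THRESHOLD
                            and arg not in traces and recording is None):
                        recording = (arg, [])   # start recording at the header
                pc = arg
                continue
        pc += 1
    return traces

# Count acc up from 0 to 5; the loop header is at pc 1.
program = [("load", 0), ("add", 1), ("jump_if", 1)]
traces = run(program, {"acc": 0, "limit": 5})
# traces[1] == [("add", 1), ("jump_if", 1)] -- one iteration through the loop
```

Note how the recorded trace is exactly a single, linear iteration ending with the branch back to the header; inlining across calls, as the quote describes, would simply mean continuing to record through a call rather than emitting the call itself.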
Brendan Eich blogs about what these developments mean:
Pulling back from the details, a few points deserve to be called out:
- We have, right now, x86, x86-64, and ARM support in TraceMonkey. This means we are ready for mobile and desktop target platforms out of the box.
- As the performance keeps going up, people will write and transport code that was "too slow" to run in the browser as JS. This means the web can accommodate workloads that right now require a proprietary plugin.
- As we trace more of the DOM and our other native code, we increase the memory-safe codebase that must be trusted not to have an exploitable bug.
- Tracing follows only the hot paths, and builds a trace-tree cache. Cold code never gets traced or JITted, avoiding the memory bloat that whole-method JITs incur. Tracing is mobile-friendly.
- JS-driven <canvas> rendering, with toolkits, scene graphs, game logic, etc. all in JS, is one wave of the future that is about to crest.
TraceMonkey advances us toward the Mozilla 2 future where even more Firefox code is written in JS. Firefox gets faster and safer as this process unfolds.
One area that I'm especially excited about is Canvas. The primary thing holding back more extensive Canvas development hasn't been rendering, but the performance limitations of the language (performing the challenging mathematical operations involved in vectors, matrices, or collision detection). I expect this area to absolutely explode after the release of Firefox 3.1 as we start to see this work take hold.
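To make the workload concrete, here is a sketch (in Python rather than JS, with an invented scene) of the kind of per-frame numeric loop meant here: pairwise circle-circle collision detection. It is exactly the sort of tight, branch-light arithmetic loop that a tracing JIT compiles well, since every iteration follows the same hot path.

```python
# Sketch of a math-heavy game-loop workload: naive pairwise
# circle-circle collision detection. Scene data is invented.
import math

def colliding_pairs(circles):
    """Return index pairs of overlapping circles; each circle is (x, y, r)."""
    hits = []
    for i in range(len(circles)):
        x1, y1, r1 = circles[i]
        for j in range(i + 1, len(circles)):
            x2, y2, r2 = circles[j]
            # Circles overlap when center distance < sum of radii.
            if math.hypot(x2 - x1, y2 - y1) < r1 + r2:
                hits.append((i, j))
    return hits

scene = [(0, 0, 1), (1.5, 0, 1), (10, 10, 1)]
# circles 0 and 1 overlap (distance 1.5 < combined radius 2)
```

Run once per frame over hundreds of sprites, the inner comparison dominates runtime, which is why moving such loops from interpretation to traced native code matters so much for Canvas-driven games.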