
Studying JavaScript Performance



Performance issues can be an unexpected gotcha when developing your latest and greatest Web 2.0 application, and performance problems often surface through the most benign of operations. Recently Coach Wei took on the task of doing a quick study of the cost of many different JavaScript operations, as well as the differences in performance across browsers. The results aren't all too surprising.

As expected, eval is still evil, performing very slowly on all browsers. Of particular interest, though, is that eval doesn't seem to fare as badly on Safari, taking only 9.4ns as opposed to 172ns and 546ns on IE7 and Firefox, respectively. The shift and join array operations are a drag on all browsers.
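A micro-benchmark of this sort can be sketched in a few lines. The iteration count, the timed expressions, and the use of Date.now() below are illustrative assumptions, not the methodology of the original study:

```javascript
// Hypothetical micro-benchmark harness: time an operation over many
// iterations and report an approximate per-operation cost.
function time(label, iterations, fn) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    fn();
  }
  var elapsed = Date.now() - start;
  return {
    label: label,
    msTotal: elapsed,
    nsPerOp: (elapsed * 1e6) / iterations // rough: ms granularity only
  };
}

// Compare a direct expression against the same expression via eval().
var direct = time("direct", 100000, function () { return 1 + 1; });
var evaled = time("eval",   100000, function () { return eval("1 + 1"); });
```

Real studies use higher-resolution timers and warm-up runs; this only illustrates why eval's per-call overhead dominates a trivial expression.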

One result that stuck out, as expected: IE tends to fare worse than other browsers when it comes to performance, especially in crucial areas such as computed box model calculations, string manipulation, and HTML DOM operations. Also of interest (and something we've seen before) is that DOM operations in general are costly across all browser versions, although newer releases have seen some improvement. Even so, using innerHTML still seems to outperform the equivalent DOM manipulation needed to accomplish the same task.
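The innerHTML-versus-DOM tradeoff looks roughly like this in practice (the data and markup below are made up for illustration): the string path hands the browser one string to parse, while the DOM path makes a method call per node.

```javascript
var items = ["alpha", "beta", "gamma"];

// innerHTML approach: build one markup string, assign it once.
function buildMarkup(values) {
  var parts = [];
  for (var i = 0; i < values.length; i++) {
    parts.push("<li>" + values[i] + "</li>");
  }
  return "<ul>" + parts.join("") + "</ul>";
}

// DOM approach: one createElement/appendChild pair per node.
function buildDom(doc, values) {
  var ul = doc.createElement("ul");
  for (var i = 0; i < values.length; i++) {
    var li = doc.createElement("li");
    li.appendChild(doc.createTextNode(values[i]));
    ul.appendChild(li);
  }
  return ul;
}

// In a browser you would then do either:
//   container.innerHTML = buildMarkup(items);
// or:
//   container.appendChild(buildDom(document, items));
```

The two produce equivalent trees; the studies cited here found the single innerHTML assignment generally cheaper, at the cost of string-building (which itself is slow on IE, as the comments below discuss).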

Other notable aspects of the study:

  • Safari's pop array operation is significantly slower than other browser implementations
  • calculating computed style and computed box model can be expensive... the guess is that this cost is due to properties on currentStyle being recalculated on each call
  • object creation and the “in” operation seem to run slower on Firefox than in other browsers

The table of his results is worth a look, and may provide some guidance in debugging performance bottlenecks of your own.


Community comments

  • Might be good to modularize JS benchmarking

    by Porter Woodward,


    It occurs to me as I read these JS benchmarks (there was another one just a couple of months ago) that there needs to be some modularization of the benchmarking.

    1> Core. Standard JS tasks like string and array manipulations. JS that can be run anywhere, inside a browser, via an external interpreter, etc. With scripting coming to both the CLR, and JVM - as well as external interpreters already out there - it might be handy to baseline browsers against external implementations.

    2> DOM. Depending on whether this can be broken into two approaches it'd probably be beneficial to disentangle Visual from non-visual manipulation of the DOM. Just because we add elements to a tree, or manipulate elements of the tree doesn't mean we want to see it rendered on the page right away. In any case that would allow the separation of the rendering performance from the back-end.

    3> DOM/Visual. Turn rendering on. Just because we want to know how fast the back end is - it won't matter much if the front end renderer is hideously slow. While not strictly a JS test - it is important for helping to determine overall user experience (also note it's irrelevant to JS implementations outside a browser).

    4> I/O. Again, while not strictly a JS performance issue I/O affects overall user experience. Being able to make XMLHTTPRequests - and other I/O calls and getting an idea of their performance will probably tell more about the HTTP and networking implementation of a browser and OS combo than anything else (and can we make XMLHTTPReqs from Rhino, etc.?). But as these are fairly critical to AJAX apps - making requests, caching results, etc - it might be nice to understand their performance characteristics.

    Just my $0.02

  • innerHTML -vs- appendChild

    by Peter Wagener,


    Virtually all the JS performance studies we've seen in the past few years have shown two things consistently:

    1. node.innerHTML = "Some Text" is faster than node.appendChild(someTextNode)
    2. IE is incredibly slow at string manipulation... especially large string concatenation.

    Taking the above into account, has anyone built a large-scale JS application where they have been able to use 'innerHTML' to build up the DOM tree, and been able to avoid long string concatenation? I'm curious how other folks solved this conundrum.

    In my experience, I haven't been able to do so. As a result, I've built most of my applications using DOM operations. The code usually comes out cleaner and more manageable as well.

    Any other experiences?
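For context on the concatenation question raised above: the widely used workaround for IE's slow string concatenation at the time was the "string buffer" idiom, which accumulates fragments in an array and joins them once at the end. A minimal sketch (the helper name is illustrative, not from any particular library):

```javascript
// Hypothetical string-buffer helper: push fragments, join once at the end,
// avoiding repeated "+=" concatenation (costly on IE of this era).
function StringBuffer() {
  this.parts = [];
}
StringBuffer.prototype.append = function (s) {
  this.parts.push(s);
  return this; // return this to allow chained appends
};
StringBuffer.prototype.toString = function () {
  return this.parts.join("");
};

var buf = new StringBuffer();
buf.append("<div>").append("hello").append("</div>");
// buf.toString() assembles the markup in a single join, ready for innerHTML.
```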
