
Improving Webapp Performance with Multi-Threading: a Study of Web Workers' Communication Overhead

Surma, Web Advocate at Google, recently published a study on the performance of postMessage, the method web workers use to communicate. Surma concludes that while postMessage comes with some overhead, moving non-UI tasks off the main thread may improve overall user-perceived performance, provided the payload stays below a given budget.

As more than half of worldwide internet access now happens on mobile phones or tablets, and with low-end multi-core mobile devices holding a large and continually increasing market share, developers must ensure that their web applications remain performant and usable even in low-spec environments (such as a slow CPU, no GPU, or little memory).

In that context, multi-threading with web workers allows developers to offload computations from the main execution thread. The net performance gain is the benefit of parallelizing tasks across the native multi-core architecture (when available) minus the cost of communicating between threads. For web applications, that communication cost is mostly dominated by the postMessage method used to exchange messages between the main UI thread and web workers.
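As a concrete illustration of the pattern, the following minimal sketch offloads a CPU-bound task to a worker and receives the result back through postMessage. The file names and the Fibonacci task are hypothetical and not taken from the study.

```typescript
// main.ts -- runs on the UI thread (file names are illustrative)
const worker = new Worker("worker.js");

// Hand the CPU-bound work to the worker; the call returns immediately,
// leaving the main thread free to handle input and rendering.
worker.postMessage({ task: "fibonacci", n: 40 });

// The result arrives asynchronously via a message event.
worker.onmessage = (event: MessageEvent<number>) => {
  console.log("Result from worker:", event.data);
};

// worker.ts (compiled to worker.js) -- runs on a separate thread
// (assumes TypeScript's "webworker" lib so that `self` is the worker scope)
self.onmessage = (event: MessageEvent<{ task: string; n: number }>) => {
  // Deliberately CPU-bound work that would otherwise block the UI thread.
  const fib = (x: number): number => (x < 2 ? x : fib(x - 1) + fib(x - 2));
  // postMessage serializes the payload (structured clone) back to the main thread.
  self.postMessage(fib(event.data.n));
};
```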

Prompted by the reluctance of some developers to use web workers, Surma sheds some light on postMessage performance. He explains the rationale behind his study:

The fact that people will not even consider adopting Web Workers because of their concerns about the performance of postMessage(), means that this is worth investigating.

The study consists of two benchmarks. The first measures the time it takes to send a message between threads and shows that it correlates positively with the complexity of the object being sent.

The first benchmark is run on Firefox, Safari, and Chrome on a 2018 MacBook Pro, on Chrome on a Pixel 3 XL, and on Chrome on a Nokia 2. Object complexity refers to the combination of depth (depth of the JSON tree) and breadth (number of properties) of the JSON representing the object.
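The sketch below shows one way such a measurement could be set up, using a worker that simply echoes every message back so the round-trip time can serve as a proxy for transfer cost. This is not Surma's benchmark code; the helper names, the echo worker, and the depth/breadth generator are illustrative.

```typescript
// Hypothetical sketch: measure postMessage latency for objects of varying
// depth and breadth. Not the study's actual benchmark code.

// Build a nested object with `breadth` properties per level, `depth` levels deep.
function makePayload(depth: number, breadth: number): unknown {
  if (depth === 0) return "leaf";
  const node: Record<string, unknown> = {};
  for (let i = 0; i < breadth; i++) {
    node[`key${i}`] = makePayload(depth - 1, breadth);
  }
  return node;
}

// echo-worker.js (assumed) posts every message it receives straight back.
const worker = new Worker("echo-worker.js");

// The round trip covers serialization in both directions, so it roughly
// doubles the one-way cost; measurements are taken sequentially.
function measureRoundTrip(depth: number, breadth: number): Promise<number> {
  return new Promise((resolve) => {
    const start = performance.now();
    worker.onmessage = () => resolve(performance.now() - start);
    worker.postMessage(makePayload(depth, breadth));
  });
}

// Example: time a payload six levels deep with six properties per level.
measureRoundTrip(6, 6).then((ms) => console.log(`round trip: ${ms.toFixed(2)}ms`));
```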

The second benchmark confirms a strong positive correlation between transfer time and the length of the string returned by JSON.stringify(), with caveats. Surma explains:

I think the correlation is strong enough to issue a rule of thumb: The stringified JSON representation of an object is roughly proportional to its transfer time. However, even more important to note is the fact that this correlation only becomes relevant for big objects, and by big I mean anything over 100KiB. While the correlation holds mathematically, the divergence is much more visible at smaller payloads.

Surma goes on to define "slow" by setting a budget inspired by the Response Animation Idle Load (RAIL) guidelines, distinguishing between contexts with JS-driven animations (a 16ms budget to display the next frame) and without (a 100ms budget to react to user interaction).

Surma concludes:

Even on the slowest devices, you can postMessage() objects up to 100KiB and stay within your 100ms response budget. If you have JS-driven animations, payloads up to 10KiB are risk-free. This should be sufficient for most apps. postMessage() does have a cost, but not to the extent that it makes off-main-thread architectures unviable.
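Taken together, the rule of thumb from the second benchmark and these budgets suggest a simple pre-flight check before posting a message. The helper below is an illustrative sketch, not code from the study, and uses the length of the JSON.stringify() output as a rough proxy for payload size.

```typescript
// Illustrative pre-flight check (not from the study): estimate payload size
// and compare it against the thresholds Surma derives from the RAIL budgets.
const KIB = 1024;

function fitsBudget(payload: unknown, hasJsDrivenAnimations: boolean): boolean {
  // ~10 KiB is quoted as risk-free under the 16ms animation budget,
  // ~100 KiB under the 100ms response budget.
  const limit = hasJsDrivenAnimations ? 10 * KIB : 100 * KIB;
  // String length is a rough stand-in for the byte size of the serialized payload.
  return JSON.stringify(payload).length <= limit;
}

// Example usage, assuming a worker created elsewhere.
declare const worker: Worker;
const payload = { rows: new Array(500).fill({ id: 1, label: "item" }) };
if (fitsBudget(payload, true)) {
  worker.postMessage(payload);
}
```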

Surma additionally provides alternatives for when the postMessage cost outweighs the benefit of running tasks across several threads. Performance may also be improved through judicious use of chunking techniques or WebAssembly.
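A chunking approach, for instance, can keep each individual message small by splitting a large payload across several postMessage calls. The sketch below is illustrative and not taken from the study.

```typescript
// Hypothetical sketch of chunking: instead of posting one large array in a
// single message, send it in slices so each payload stays under budget.
function postInChunks<T>(worker: Worker, items: T[], chunkSize = 1000): void {
  for (let i = 0; i < items.length; i += chunkSize) {
    worker.postMessage({
      chunk: items.slice(i, i + chunkSize),
      // Flag the last chunk so the worker knows when to start processing.
      done: i + chunkSize >= items.length,
    });
  }
}
```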

The full study, including the underlying data and visualizations, is available on Surma’s blog.
