Google's V8 Engine Adds Support for WebAssembly SIMD


The WebAssembly SIMD proposal has come to Google's JavaScript engine V8, albeit still as an experimental feature. By exploiting data parallelism, V8's support for single instruction, multiple data (SIMD) aims to accelerate compute-intensive tasks such as audio/video processing, machine learning, and more.

SIMD operations are supported on most modern CPU architectures, although each implements them differently. Therefore, the current WebAssembly SIMD proposal aims to define a reduced set of operations that is widely supported on current hardware. This includes operations on fixed-width 128-bit data, represented through a new v128 value type. All of this is exposed to the programmer through the wasm_simd128.h header file.

This is how you can multiply the elements of two arrays and store the results in a third one:

#include <wasm_simd128.h>

void multiply_arrays(int* out, int* in_a, int* in_b, int size) {
  // Process four 32-bit ints per iteration (assumes size is a multiple of 4)
  for (int i = 0; i < size; i += 4) {
    v128_t a = wasm_v128_load(&in_a[i]);
    v128_t b = wasm_v128_load(&in_b[i]);
    v128_t prod = wasm_i32x4_mul(a, b);
    wasm_v128_store(&out[i], prod);
  }
}
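For the intrinsics to be available, the code has to be compiled with WebAssembly SIMD enabled; with Emscripten this is done through LLVM's -msimd128 flag. A minimal build invocation might look like the following (the file names are illustrative, not from the article):

```shell
# -msimd128 enables the v128 type and the wasm_simd128.h intrinsics
# in Emscripten/LLVM; multiply.c and multiply.js are placeholder names.
emcc -O3 -msimd128 multiply.c -o multiply.js
```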

However, thanks to LLVM's autovectorization optimizations, you do not need to use those SIMD intrinsics directly. Instead, you can express the computation as an ordinary loop and rely on the compiler to transform it into SIMD operations:

void multiply_arrays(int* out, int* in_a, int* in_b, int size) {
  for (int i = 0; i < size; i++) {
    out[i] = in_a[i] * in_b[i];
  }
}
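Whether or not the compiler vectorizes it, this scalar version is an ordinary C function that behaves identically either way, so it can be sanity-checked with plain host tooling. A minimal sketch (the multiply_arrays signature comes from the article; the test data and the check helper are made up for illustration):

```c
#include <assert.h>

/* Scalar version from the article: with SIMD enabled, LLVM's
   autovectorizer can turn this loop into v128 multiplies. */
void multiply_arrays(int* out, int* in_a, int* in_b, int size) {
  for (int i = 0; i < size; i++) {
    out[i] = in_a[i] * in_b[i];
  }
}

/* Quick element-wise check on the host; returns 1 on success. */
int check(void) {
  int a[4]   = {1, 2, 3, 4};
  int b[4]   = {10, 20, 30, 40};
  int out[4] = {0, 0, 0, 0};
  multiply_arrays(out, a, b, 4);
  return out[0] == 10 && out[1] == 40 && out[2] == 90 && out[3] == 160;
}
```

Because the vectorized and scalar builds compute the same results, such a check carries over unchanged when the function is compiled to WebAssembly with SIMD enabled.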

As mentioned, SIMD aims to accelerate compute-intensive applications. Google Research showed a number of demos using V8's SIMD support for computer-vision tasks such as hand tracking, credit card recognition, and augmented reality. In the hand-tracking case, SIMD parallelization enables a 5x performance boost, yielding a peak of 15-16 FPS, up from 3 FPS without SIMD. Additionally, Google engineer Nikhil Thorat, who works on TensorFlow for JavaScript, tweeted that his team is seeing 3-fold speed-ups using WebAssembly SIMD with real-world models.

V8's WebAssembly SIMD support is available in Chrome Canary and can be enabled using the --enable-features=WebAssemblySimd flag. Support is still experimental and subject to change, as it tracks the evolution of the WebAssembly SIMD proposal.

V8's is not the only SIMD implementation available in a browser. The first such effort was carried out by John McCutchan for the Dart language; indeed, the WebAssembly SIMD proposal derives some of its elements from the Dart SIMD specification. Furthermore, support for SIMD operations in WebAssembly was already provided by WebAssembly runtimes such as Wasmer and Wasmtime.
