Jake Archibald recently announced the release of a detailed benchmark comparing web application bundlers. The first release tests the Browserify, Parcel, Rollup, and Webpack bundlers across six dimensions and 61 feature tests. The benchmark aims to give developers relevant, structured data from which to pick a bundler. Future releases should both extend the feature tests and add more bundlers to the benchmark.
The web.dev team behind the project summarized the motivation for the new benchmark page (bundlers.tooling.report) as follows:
web.dev is launching a new initiative called tooling.report.
It’s a website that gives web developers an overview of the features supported across a selection of popular build tools. We built this site to help you choose the right build tool for your next project, decide if migrating from one tool to another is worth it, or figure out how to incorporate best practices into your tooling configuration and code base.
[…]
We aim to explain […] tradeoffs and document how to follow best practices with any given build tool.
Four bundlers made the cut for the first benchmark release: Browserify, Parcel, Rollup, and Webpack. The benchmark methodology involved selecting user-centric test criteria that assess the ability of the bundler under test to provide a fast, responsive, and smooth user experience. While developer experience, learning curve, and other subjective criteria are not part of the benchmark, developers may form their own impression by reviewing the code for each scored test. For instance, the Splitting Modules Between Dynamic Imports criterion links to an explanation of the criterion together with sample code for each of the four evaluated bundlers.
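The criterion can be pictured with a minimal Rollup setup (file names are illustrative, not taken from the benchmark's samples): main.js dynamically imports a.js and b.js, both of which import from shared.js. A bundler that passes the test emits shared.js as a chunk of its own, downloaded once, rather than copying it into each dynamic chunk.

```javascript
// rollup.config.js — minimal code-splitting setup (illustrative).
// main.js contains `import('./a.js')` and `import('./b.js')`;
// a.js and b.js both `import { ... } from './shared.js'`.
export default {
  input: 'main.js',
  output: {
    dir: 'dist',  // multiple chunks require `dir` rather than `file`
    format: 'es', // ES module output keeps dynamic import() working
  },
};
// With this config, Rollup emits chunks for a.js and b.js plus a
// separate shared chunk, so the shared code is fetched only once.
```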
The first benchmark release features six dimensions of analysis and a grand total of 61 feature tests evaluated across the dimensions. The six dimensions are code splitting, hashing, importing modules, the handling of non-JavaScript resources, output module formats, and transformations.
Independently of the benchmark's raw scores, developers may find it interesting to read the didactic descriptions provided for each dimension and feature test. With an accurate picture of what each bundler does and does not support, developers may minimize future pain points. The release note observes:
Our team is focused on providing the best web experience to users. […] For example, if a main thread script and web worker script have shared dependencies, we would like to download the dependencies once instead of bundling it twice for each script. Some tools support this out of the box, some need significant customization effort to change default behaviors, and for others it’s outright impossible.
[…]
Our hope was to create a checklist for features so that next time we start a new project, we can evaluate and choose which tool is best suited for our project.
If every criterion mattered equally to all developers, Rollup would be the all-round benchmark winner, with 65% of the feature tests passed. However, Archibald explained on Twitter that each bundler has its strengths:
Overall build tool strengths:
- Parcel: Being able to go HTML-first is the best design for a bundler targeting “the web”.
- Rollup: Simpler API and design makes writing plugins easy. Well documented. Small output.
- Webpack: Community plugins are great. Good CSS support.
Archibald and Surma explained in a talk at JSConf Budapest last year how they came to choose the Rollup build tool for the Proxx app. In that case, Rollup provided the best experience when it came time to write the custom plugins needed to meet the extra performance requirements of running Proxx on feature phones.
One enthusiastic developer on Twitter noted the omission of a build-speed criterion from the benchmark:
The project is a great idea, congrats! I’m just missing out a bundling speed comparison as that’s the selling points of some tools and something to take into account when choosing your bundling tool. Do you have planned adding it in the future?
Archibald answered:
We tried to stick to what “works” and “doesn’t work” for the initial release.
The FAQ section of the site further explains the methodology and provides additional details. The benchmark site, together with its code artifacts, is open-sourced and available on GitHub. Developers are encouraged to contribute extra tests for existing or additional bundlers.