
Svelte at the Edge - Luke Edwards at Svelte Summit


Luke Edwards recently gave a talk at Svelte Summit 2020 in which he discussed running Svelte applications at the edge. Edwards demoed building and running a simple Svelte application with Cloudflare Workers and Google Cloud.

Edwards first explained web workers and service workers. The Web Workers API allows developers to run a script in a background thread that is independent of the main thread. Web workers are often used to run expensive computations without blocking the main thread, which runs the event loop in which user interactions are processed; blocking it would thus degrade responsiveness to user input. Web workers are also used to run computations concurrently, and possibly in parallel on multi-core architectures.

Workers may themselves spawn new workers, as long as those workers are hosted at the same origin as the parent page. Workers are, however, limited by design in the operations they may perform. Workers cannot update the DOM, which protects against the dangers of concurrent access to the DOM state, and they cannot use some of the methods and properties of the window object for similar reasons.

Workers, running in a separate thread and scope, rely on messaging to communicate with the main thread. Both communicating parties send messages with the postMessage() method and respond to messages via the onmessage event handler. Message data is copied rather than shared. A web worker that runs the code contained in a worker.js file can be created from the main thread as follows:

/* main.js */

const myWorker = new Worker('worker.js');
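The two sides then exchange data through postMessage() and onmessage. The following is a minimal sketch; the sumOfSquares computation and the message payloads are illustrative assumptions, not code from the talk:

```javascript
/* worker.js — a hypothetical worker script */

// An expensive computation to keep off the main thread
function sumOfSquares(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i * i;
  return total;
}

// importScripts() only exists inside a worker scope
if (typeof importScripts === 'function') {
  // Respond to messages from the main thread; event.data is
  // a copy of the posted value, not a shared reference
  self.onmessage = function (event) {
    self.postMessage(sumOfSquares(event.data));
  };
}

/* main.js — creating the worker and exchanging messages */
if (typeof window !== 'undefined' && typeof Worker === 'function') {
  const myWorker = new Worker('worker.js');
  myWorker.onmessage = function (event) {
    console.log('Result from worker:', event.data);
  };
  myWorker.postMessage(1000); // the value 1000 is copied to the worker
}
```

Because the message data is copied, the main thread and the worker never mutate the same object, which is what makes this model safe without locks.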

Service workers are a type of worker that serve the explicit purpose of being a proxy between the browser and the network and/or cache.

[Illustration: service worker architecture]

The Service Worker API provides service workers with the functionality they need to fulfill their proxy purpose. Once they are installed and activated, service workers are able to intercept any network requests made from the main document. They also have the ability to control a separate storage cache. An example of service worker implementation that intercepts fetch requests is as follows:

/* service-worker.js */

self.addEventListener('fetch', function(event) {
    // Return data from cache, falling back to the network
    event.respondWith(
        caches.match(event.request).then(function(response) {
            return response || fetch(event.request);
        })
    );
});
Edwards then explained that Cloudflare workers are service workers in disguise:

Cloudflare workers are exactly the same thing, literally the exact same thing. They use the service worker interface. They are equally positioned on the Cloudflare network, so the difference is really just physical location. They still proxy between the client and the network and the controller cache.

Unlike service workers, which reside on the user's machine, Cloudflare workers do not have access to a local file system. A Cloudflare worker is deployed on Cloudflare's machines, which are distributed around the world across more than 200 locations. Application frameworks such as Svelte's Sapper or Next.js, however, use file-based routing to map application routes to the files implementing the pages.

Edwards worked around this limitation by hosting the client bundle with a third party. In the demoed example, the Svelte application is built locally and uploaded to buckets handled by Google Cloud Storage. Edwards showed an implementation of a Cloudflare worker that relies on the URLs provided by the cloud storage service rather than local filenames.
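The idea can be sketched as follows; the bucket name and the toBucketUrl routing helper are hypothetical placeholders, not the code Edwards demoed:

```javascript
/* cloudflare-worker.js — hypothetical sketch of proxying to a bucket */

// Assumption: the built client assets live in a public
// Google Cloud Storage bucket named "my-svelte-app"
const BUCKET = 'https://storage.googleapis.com/my-svelte-app';

// Map an incoming request path to the bucket URL for that asset
function toBucketUrl(pathname) {
  // Serve index.html for the root route; other paths map one-to-one
  const asset = pathname === '/' ? '/index.html' : pathname;
  return BUCKET + asset;
}

// Cloudflare workers use the service worker interface:
// intercept the fetch and proxy it to the bucket
if (typeof addEventListener === 'function' && typeof caches !== 'undefined') {
  addEventListener('fetch', function (event) {
    const url = new URL(event.request.url);
    event.respondWith(fetch(toBucketUrl(url.pathname)));
  });
}
```

Because the worker only knows bucket URLs, not files, uploading new assets to the bucket changes what is served without touching the worker itself.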

Edwards explained the advantages of the demoed approach. It is flexible and gives the developer more control than Cloudflare Workers Sites, which is only appropriate for static sites. Because the bucket is a separate component in the demoed architecture, assets can be uploaded without redeploying the worker.

Edwards mentioned that costs should also remain controlled:

Buckets are practically free. […] Egress which is, you know, data leaving the bucket, is super super cheap and with this approach, you only have to pay for that once.

As a possible downside, Edwards cited the extra administrative effort (e.g., defining the router logic and the application build, managing two separate deployment units, synchronizing them when needed, and purging caches).

Edwards gave two demos and plenty of additional technical explanations. The source code for the demos can be found online. The reader is encouraged to review the full talk.

Svelte Summit is a virtual conference about Svelte. The 2020 edition took place online in October. The full list of talks is available on the YouTube channel of the Svelte Society.
