
Experimenting with WASM for Future Audience Experiences in BBC iPlayer



Tim Pearce discusses how the BBC used WebAssembly to deploy an experimental version of iPlayer across various web browsers, what advantages this approach had, and how they intend to use WebAssembly outside the browser.


Tim Pearce works as a software engineer and WebAssembly advocate in the Research & Development department at the BBC. He is part of a team exploring how the BBC might deliver future audience experiences that are immersive, personalised, interactive, and object-based.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.


Pearce: I'm Tim Pearce. I'm a software engineer working in BBC R&D's North Lab in Salford, where we are looking at what our audience experiences might look like in the future. For the past few years, I've been part of a large team looking at the challenge of how we allow our audiences to access exciting new, immersive, interactive experiences, regardless of what device they might have at home. I'm excited to have been invited to the WebAssembly track to talk to you about how we have used WebAssembly to address some of the unique technical challenges that we face. I'll discuss how we've used it to build an experimental version of iPlayer, which can play back future experiences, but also introduce our plans to use WebAssembly outside the browser for our technology stack for delivering universal access to audiences.

The Role of the BBC

The BBC is the world's leading public service broadcaster. We're impartial and independent. Every day, we create distinctive world-class programs and content, which inform, educate, and entertain millions of people around the UK and throughout the world. We do this through our traditional broadcast television and radio services, and through our digital services, such as the BBC website, and our native applications, such as our flagship video on-demand application, BBC iPlayer. As a public service broadcaster, one of the unique technical challenges we have as software developers, is ensuring universal access to our services, that every audience member can access every experience regardless of what device they have at home.

Object-Based Media

In BBC R&D, we are exploring what future experiences might look like, experiences which are more personalized, interactive, and immersive. We call this new range of experiences object-based media. Object-based media allows the content of programs to change according to the requirements of each individual audience member. The objects are the different assets used to make a piece of content. They could be large objects, such as the audio or video that's used to make a drama, or small objects, like an individual frame of video, a caption, or other on-screen graphics. By breaking a piece of media down into separate objects, attaching meaning to them, and describing how they can be rearranged, a program can change to reflect the context of an individual viewer. We think this approach has the potential to transform the way content is created and consumed, bringing efficiencies and creative flexibility to production teams, enabling them to deliver a personalized BBC to every member of our audience. We have created a variety of content experiences which demonstrate the advantages of object-based media, including augmented and virtual reality, branching narrative stories, and multi-device immersive audio and video. Many of these future experiences can be viewed on the BBC Taster website, but not yet via iPlayer.


However, this range of future-facing content leaves us with a number of challenges. Our first challenge is, how do we give every audience member access to these experiences when they might not have the most up-to-date hardware at home? They may have an old smartphone or smart TV, so how do we give every audience member access to an interactive, immersive, console-like experience without them having to buy any new hardware? Our second challenge is, how can we distribute new experience formats that might be authored in a variety of tools? Can all of these format types be published and made available on iPlayer in the same way that video is today? Our third challenge is, how can we get the best performance from devices in a secure, sandboxed way? We think that WebAssembly provides the perfect solution for the challenges we face. In this presentation, I will talk about some of the reasons why it's a great fit for our requirements.

BBC iPlayer Code Bases

Currently, the BBC's flagship on-demand product, BBC iPlayer, has multiple code bases for different platforms, iOS, Android, responsive web, smart TVs, and there are many more legacy code bases to support our audiences on older devices. While sharing a common backend, the client applications are written in completely different platform-specific technologies and programming languages, which poses a challenge for introducing new features. As a research and development team, we are looking at how iPlayer might need to adapt to fit in new content experiences, which presents a massive integration challenge. We want to allow our audience to select an advanced immersive, interactive, object-based program from the carousel in iPlayer, like they would the latest episode of "Doctor Who," and dive straight in without having to install a new application. These different types of object-based media programs require different capabilities, and we want to make these available across as many platforms as possible. When a program is selected from the carousel, the application no longer just has to initialize the video player; it has to have the ability to load up new capabilities, such as rendering an interactive scene.

Single Service Player

We wanted to demonstrate a way that we could build a version of iPlayer which could move between traditional video playback and beyond video experiences. We'll call this prototype the Single Service Player, or SSP. We've implemented the SSP in C++ to get the most game-like performance out of devices. Through using cross-compilation, we were able to target mobile, desktop, and Smart TV platforms from a single code base. We are using SDL to help us do that.

Let us look at some of the features of the SSP in action. We have written a multimedia playback engine, which allows playback of DASH streams available from the BBC's backend media services. We've implemented an object-based media storage engine, which can play back experiences such as branching narratives produced by our creative content teams. Previously, playing back interactive experiences like this was something that was only available as a web application. We've also started experimenting with remote streaming: running high-end graphical experiences in the cloud, sending user inputs across the network, and streaming the resulting scene as video to be displayed on the client device. This might be how we solve our challenge of delivering high-end immersive experiences to audiences who don't have devices with powerful GPUs in their homes.

What about the Web?

Now that we've built a cross-platform native version of iPlayer, how do we get this to work on the web, where many of our audience use iPlayer today? That's where we were first introduced to WebAssembly. WebAssembly allows us to take native applications written in languages such as C++ and port them for use on the web. For this, we added an additional target to our cross-compilation tool chain, and made use of the excellent Emscripten project to help us do that.

SSP for the Web, Using WebAssembly

How does this work in our SSP that we've designed for the web? Within the browser, we are relying on a number of underlying web APIs. Our Single Service Player supports the ability to render on the web in multiple ways: WebGL, Canvas2D, and even HTML elements. For video, we are using Media Source Extensions to access hardware-accelerated decoding in the browser. For networking, we use the Fetch API to perform HTTP requests to download content. We also use WebSockets as part of our remote streaming solution. In order to access these underlying browser capabilities from our WebAssembly module, right now we have to go via the JavaScript engine. The Emscripten tool chain generates JavaScript glue code to bootstrap our WebAssembly module and give us access to these web APIs. It also generates some code to polyfill various system C++ libraries for use on the web. In our case, we're making use of Emscripten's drop-in support for the SDL framework, which generates code to assist with rendering, the event loop, input handling, and many other things. We've added in some extra glue code to help with multimedia playback and accessing our backend BBC media services.

Using the Emscripten tool chain has allowed us to port our Single Service Player to the web. One of the advantages of using WebAssembly is that the SSP looks identical across all browsers, and is identical to the native application, because all of the user interface components, layout, and business logic are in the WebAssembly module. As a result, we sidestep limitations and differences between browsers' CSS engine implementations. This approach has delivered a smooth, consistent user experience across different web browsers, and it's very close to the performance of the native application.

Universal Binaries

We realized that we could use WebAssembly in more ways than just porting our monolithic Single Service Player to work in web browsers. Because WebAssembly is supported as a compilation target from multiple languages, it could allow developers inside and outside the BBC to author individual experiences in the language and tools of their choice, such as a games engine, and produce a universal binary which is both performant and sandboxed from other experiences running on the platform. Applying these ideas turns our Single Service Player into something a bit more like a launcher application on an operating system. As programs gain more interactivity, they move from simple video to applications in their own right. WebAssembly allows us to deliver applications to a future iPlayer in a way that's performant, sandboxed, and secure. Morphing into a WebAssembly runtime, our future iPlayer application can also lazily load the capabilities required to run various experiences when they are needed, reducing the amount of download that happens up front when the page loads.

WASM and BBC R&D - Our Journey

Let's have a quick recap of our journey in developing a Single Service Player. We started by taking the user experience from iPlayer, which has multiple code bases generating multiple binaries for the various platforms that we support. In order to integrate features more easily and get the best performance out of devices, we rewrote the application in C++ and used cross-compilation to target the different platforms. Then we used Emscripten and its tool chain to port the SSP to the web. After we realized the benefits of using WebAssembly to package individual content experiences on iPlayer, we thought, could we package our launcher application and individual content experiences as WebAssembly modules that could work not just on the web, but in native applications too? Fortunately, the WebAssembly community had already started building runtimes which allow WebAssembly to run natively, outside the browser, on different platforms.

SSP for Native, Using WebAssembly

In order to compile our SSP to WebAssembly and run it outside of the browser, we embed a WebAssembly runtime in a new C++ application. Specifically, we're using the Wasmtime runtime from the Bytecode Alliance, which has support for WASI, giving our WebAssembly module access to some of the underlying system APIs that we need. We adopted the in-progress Wasm C API for embedding, which allows us to easily switch between WebAssembly engines in the future. As WASI is still an emerging standard, it doesn't yet have all the APIs that we need to support playback of multimedia or immersive experiences. To fill this gap, we have taken some of the code from our native SSP and used it to export these capabilities to the portable WebAssembly module.

The Future: Flexible Compute?

Now we have experimented with running our Single Service Player in a WebAssembly runtime, where do we go next? Going back to our original challenge, what if we wanted to give our audiences access to a photorealistic, real-time rendered experience that's personalized to them? How can we use WebAssembly to help us do that? For example, what about an episode of "His Dark Materials," where the onscreen daemon could change based on the viewer's personality? Even running natively outside of the browser, a WebAssembly runtime would still be limited by the capabilities of the hardware it runs on. How might we do this?

WASM and BBC R&D - Flexible, Migratable Compute

As WebAssembly can run on multiple platforms outside of the browser, what if we broke the universal binary down and ran it across multiple machines, distributing the computation and rendering? That way, the rendering and other computational tasks that make up the experience could be performed on more capable devices, such as a machine with a high-end GPU that supports ray tracing, or a machine with a neural processing unit to perform real-time machine learning. This could allow audience members to leverage other devices that they have in their homes to deliver the experience. Where these devices are not available, we would provide them access to machines in our scalable cloud infrastructure. We're also seeing a trend towards edge computing, where the rollout of superfast, low-latency 5G networks is introducing access to compute that's closer to our audiences. We're calling this new way of delivering programs flexible compute. There are a number of new projects within the WebAssembly community to support WebAssembly running as microservices throughout the technology stack, and we're currently in the process of evaluating these. We're particularly interested in how the WASI standard might develop around our use cases for distributed rendering. Our next task is to work out how WebAssembly might fit into the content production pipeline, and what experiences we want to enable. What tools would content producers inside and outside the BBC be able to use to create experiences for our new object-based iPlayer? For example, how can we offer experiences built in Unity and Unreal, and run these on our future flexible compute platform?

WebAssembly: An Integral Part of the BBC's Broadcasting Chain of the Future

What I've talked about is very early stage research in applying WebAssembly to solve the challenges that we face. However, we believe WebAssembly will be an integral part of the BBC's broadcast chain of the future, and that we can use it to deliver immersive, interactive experiences to every audience member, regardless of what devices they might have at home.

Questions and Answers

Eberhardt: The first question that's popped up is just some love and nostalgia for the BBC. It's from someone who lived in the UK 10 years ago, and they miss the BBC. It still is that great British institution, isn't it?

Pearce: Yes. I think what a lot of people don't realize as well is the BBC has international presence. A lot of our TV shows are shown across all the different video on-demand platforms, a lot of our shows are on Netflix, and Amazon Prime, and all of the other ones. Also, we've got offshores, we do lots of co-productions with HBO and stuff, whilst the B in the BBC is British, we're all over the world, and we do have quite a few video on-demand products in other countries as well.

Eberhardt: One of the things that I found interesting about your talk is the other talks in this track are mostly talking about specifications, tools and tooling, the platforms that they've built, that then other people can use. What I liked about this talk is you're talking about building directly something for the end user. You're building some experience for your next generation iPlayer. How close is this to reality? Is this very early stage R&D, or is it relatively mature? Are there parts of it that you might split off and actually embed into the iPlayer? Just wondering, when do we get to play with it?

Pearce: I think that's a common question asked of all R&D engineers after they've demonstrated something exciting. Yes, it's still very early stage research. For us, as an R&D team, we've worked on a number of experiences like the ones I've shown: branching narratives, and interactive, immersive things. Some of those things we put out on the web and in apps. We've got our platform, which is Taster. Taster is only available on the web at the moment. A lot of our R&D research does end up in that space. What we wanted to do with this project is say, across R&D we're building lots of things, but we've not had a project yet which explores how we take those projects and deploy them across lots of different platforms, and what they might look like if iPlayer changed to do that. We've designed this whole thing in order to encourage people in the wider BBC, and we've demonstrated it to teams in iPlayer, and architects in other parts of the BBC, to show that potential future. iPlayer is the most relevant way of doing that.

Eberhardt: We've got a question about cross-platform, and there being multiple different ways of doing it. I did want to ask a slightly more technical question. You described your SSP. You described your runtime, it was based on Wasmtime, and some of the WASI tooling. How does it run on mobile devices? Is it the same stack? Does Wasmtime run on mobile as well?

Pearce: It's quite a long journey. I've got a feeling we've not quite done mobile yet. We have explored it. What we've got is our native Single Service Player, and then we ported that to the web using Emscripten. When we've been doing these experiments with using WebAssembly outside of the browser, we are using some of the same APIs that we used in the native applications, so things to do the media playback and rendering, everything that Emscripten is doing in JavaScript. The way we use Wasmtime is using the C API, which is an embedding API that is provided. It's a standardized header, and that allows you actually to swap between different WebAssembly runtime implementations. There's loads of WebAssembly runtimes out there, there's Wasmtime, Wasmer, WAMR. There's a big long list.

Pearce: They're all at different stages of development, and have different backers. Some are company projects. Some of them are academic. It's great that there's this C API, which is supposed to abstract over all of those. Yes, in terms of Wasmtime and running it on mobile, I don't think we've done it yet. I don't think it would be too difficult to do, because our tooling is set up to produce Android applications. I don't think we've quite done iOS yet. We've got the tooling to create an Android application. What Wasmtime produces is a C++ library that you can use to easily embed it into your application. That's quite exciting, because Wasmtime is written in Rust, but that doesn't matter.

Eberhardt: There are many ways to do cross-platform or multi-platform code, what's the unique benefits of WASM?

I'm not sure there is one, because for cross-platform and multi-platform, typically when you're talking about mobile, there is no end of different solutions, whether it's Xamarin, React Native, Flutter, or various HTML-based solutions. From my experience, it's the UI layer which is the challenge. The basic act of running your code on multiple mobile devices, that's easy; they all support C and C++. It's the UI layer that's the challenge. To my mind, WASM doesn't provide any additional benefits on the UI layer. It just doesn't feel like the niche for WebAssembly.

Pearce: With our UI stuff, we're doing it all ourselves. I think the Qt framework has been ported to WebAssembly, so you can use that to build apps. In terms of cross-platform and multi-platform, we're interested in the fact that if all of our programs on iPlayer become interactive, we want a way for experience authors to be able to write their experiences, whether they're games or if they're branching narratives, if they're just maybe a new video player or something. I don't want to make too many comparisons there. Java and the JVM is a similar example of something in this space. We want to be able to target WebAssembly as a universal binary. With WebAssembly, the key benefit is the sandboxing that it provides.

Eberhardt: What you're trying to achieve is some plugin model where people can create their own experiences, which you can then safely host within your player due to the security and the isolation model of WebAssembly. Is that correct?

Pearce: Yes, correct. In the same way, at the moment, we commission content from third party video producers. There are standards for broadcast, like the frame rate and what codec it's in, and things like that. We're seeing WebAssembly as a way of achieving that for interactive content, so we can host these kinds of experiences and only give them access to certain APIs, and make that also available to users, so users can decide, I don't want that experience to track or store my personal data, and things like that. And there's the fact that these runtimes allow us to run that same bit of code in the browser, but also in services at the edge and in the cloud as well.

Eberhardt: The use of WebAssembly as a plugin model or a plugin runtime, that's certainly starting to take off. I think one of the first companies that really embraced it was Figma, who've got a design tool. There was a brilliant blog post they published that explored the various different browser-based plugin models, with the start being just eval all the code, no security, versus running it in IFrames, versus running it within WebAssembly. The initial design of WebAssembly does mean that it lends itself very nicely to being that plugin model.

Since WASI is not an official standard at this time, has it caused any problems regarding multiple changes in your projects?

Pearce: I have to admit, yes. We don't keep a close track on WASI. We saw the initial announcement, and I think we occasionally update the software development kit that you can get. WASI has a whole release process. Because we have a WebAssembly module, and if you compile it outside of the browser, you basically get access to nothing, so you don't even get a standard C API, or access to any of the POSIX APIs at all. WASI gives you access to that. There's lots of pull requests and discussions that are going on integrating things like neural networks, and using GPUs and sockets and stuff. What we've done is use the first release of WASI, and then use the embedding API that Wasmtime provides, to guess what future APIs we might need. I might be completely out of date on this. WASI doesn't quite support networking yet.

Eberhardt: I think it's the most frequently asked question when people talk about WASI, for the majority of people using it outside of the browser, whether it's on edge networks, whether it's on blockchain or whatever else, you need network. It's a hot topic.

Pearce: Yes, that's it. Our big thing is rendering, so we want to be able to have access to the GPU for rendering content, and being able to make microservices. We've done experiments with Krustlet, which I believe is the next tool. We're tracking it. I think we're going to try and get more involved as well.

Eberhardt: I think the whole WebAssembly ecosystem is very early stages, which probably lends itself to R&D development. I'm not sure it lends itself to really using it in anger, in products at the moment.

Pearce: I do occasionally look around the BBC, and there are a few teams actually outside of R&D, which have started to explore WebAssembly. I think that's still very early stage experimentation. It's in things like Envoy proxies and things like that.

Eberhardt: I think WebAssembly is certainly ready to use in production, and it's stable. If you're using outside of the browser, you have to build a lot of the surrounding, a lot of the hosting yourself at the moment. WebAssembly within the browser is mature and it's supported. Outside of the browser, you've got to create a lot of the glue yourself.

Pearce: Google Earth is my favorite.

Eberhardt: Will you provide an SDK to achieve a similar look and feel for your content providers in generating interactive WebAssembly modules? Are you ready to commit to that yet? Someone's thinking ahead about if they're a content provider that we're writing a module that sits within your runtime, how do they ensure that they maintain the same look and feel?

Pearce: In the BBC we have something called GEL. I think it's all open; there's a website about it. It stands for Global Experience Language. It's our version of material design. It's how we design our components, cards, and things like that. That's something that's come up before, because at the moment, everything's designed for JavaScript on the web. We do have mobile apps as well, but everything is done separately. When we're talking multi-platform, our games team, who predominantly do our children's apps but have done other things as well, look at what more game-like UIs look like. That's something we need to think about. We're also thinking about, you have the design of your launcher, and once you leap into an experience, the look and feel of "Doctor Who" is going to look a lot different from the look and feel of "His Dark Materials." When we make experiences more interactive, in terms of designing a software development kit, we want to be able to allow people to author experiences that maximize their creativity, and then drop them onto our platform. Then we'll handle where that runs, and allow people to access it.




Recorded at:

Mar 06, 2022