
Facilitating the Spread of Knowledge and Innovation in Professional Software Development


Build Features Faster with WebAssembly Components


Summary

Bailey Hayes discusses what has until now been impossible: the ability to write an application that combines libraries written in different languages, and runs in the web, on the server, and at the edge.

Bio

Bailey Hayes is a director at Cosmonic. She believes the future is in distributed systems and WebAssembly (Wasm). Her daily activities include wrangling distributed apps, finding new tools for better devx, and discovering the best food for any given location.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Hayes: I want to start with a story. It's a story about a Gopher. She is a backend engineer and prefers coding in Golang. Her goal is to build an app that she can share with her friends and coworkers at her company. Her best friend is Crab. Crab knows what Gopher is working on, and Gopher comes and says, "I've got this one specific problem. It's not really related to the thing that I'm building but I know you worked on something similar." Crab says, "I've totally solved this problem already.

Just please take my library and run it and you don't even have to worry about it." Here's the thing. Crab is a Rustacean, and Rustaceans code in Rust. That means this library that Crab sent to Gopher is a crate. If you know anything about Golang, then you probably know what Gopher's reaction to this was: "Oh no, cgo." Because you want to be able to use that awesome library but, usually, if you have to enable things like C FFI bindings to use it, that can be really complicated.

It complicates your build. It complicates which platforms you're able to support. You might need a larger matrix for building things. I'm not picking on Go here. I am myself a Gopher, but every single language has something like this. Java has JNI bindings. Rust has its own way of binding to C. Basically every language goes down to this common denominator of C FFI bindings. That alone isn't necessarily a problem. The problem is that there's no standard way to work with all of these different things and use different languages within the same build process.
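To make the C FFI point concrete, here is a small illustrative sketch (mine, not from the talk) of what binding to a C library looks like from Rust; the function and library names are made up for illustration.

```rust
// Hypothetical C library exposing `int crab_solve(int)`; the name is made up
// for illustration. Every language ends up writing some flavor of this glue.
#[link(name = "crab")]
extern "C" {
    fn crab_solve(input: i32) -> i32;
}

fn main() {
    // Crossing the FFI boundary is `unsafe`: the compiler can no longer check
    // memory safety, calling conventions, or what the C side actually does.
    let answer = unsafe { crab_solve(42) };
    println!("crab says: {answer}");
}
```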

Language Selection

We start with our language selection. As Golang kicked us off already, that first choice starts us down a path of silos. I want to highlight that it's that first decision that you make, the language, but it takes you down this giant decision tree matrix of: what frameworks am I going to use? What libraries am I going to use? For example, just within Rust, there's async-std or Tokio, and you have the same paradigms where folks only want to work in one specific framework or one specific library.

Angular and React would be another example. That limits the number of people that are willing to come use your project or contribute to it. I see that as a really large problem that's worth solving. Not only that, but a few others that I have listed here: some libraries and some projects only build for certain operating systems. That limits the number of people that you can pull in.

Then, even down to our protocols. If you think about it, at a single company, if there isn't a company-wide mandate saying, thou must use Protobuf IDL for defining your interfaces, then just using other people's microservices can be a bit of a friction point, because your team might have a process for defining everything in OpenAPI and using REST. The other team says, we use this gRPC thing, connect to it this way.

Now they've got to write maybe some wrapper libraries to be able to consume it. This set of silos goes on and on for as far as the eye can see. It's not just your languages; once you start looking within the languages, there are so many other silos that are worth breaking down.

Background, and Outline

I'm going to talk about the component model, which is part of WebAssembly. I am a WebAssembly enthusiast. I started playing around with WebAssembly, and then shipping it to production, as early as 2012 if you count asm.js, which is really a precursor to WebAssembly. While I was working at SAS, we were one of the first folks that shipped WebAssembly, taking our C++ code and shipping it to the browser so that we could share code, with JavaScript bindings in there.

I do lots of different things within the WebAssembly space. First, I'll introduce what WebAssembly modules are. We're going to focus on the parts that are important for the component model. By the end, I hope you know what the component model is and why it was worthwhile to design and propose a new standard that builds on top of the existing WebAssembly modules standard.

Wasm Modules

First up, WebAssembly modules. They're basically a compilation target. It's weird to have all these different talks about WebAssembly because, how many talks have you seen about .so's and .dylib's? It's strange to be talking about a compilation target in these terms, but it was designed in a way so that I can take high-level languages and compile them down into something that is sandboxable. It's fast. It supports streaming compilation. It has a lot of really amazing properties.

What it really is, is a bunch of numbers in a trench coat. What I mean by that is that it's pure compute. When you crack open what's inside a WebAssembly binary, it can't do things like create threads, do networking, or make HTTP calls. It requires a host runtime to give it those hooks to be able to run those types of capabilities. A lot of people talk about WebAssembly as being deny-by-default. Really, at the root of it, it's no function calls by default.

You have no ability to compile a Wasm module that's able to reach outside of its memory or make network calls. There's nothing to lock down because it is locked down. Some other aspects of Wasm modules that will be more relevant later when we talk about the component model: they're usually built with one .wasm for the entire application, and they're usually built from one target language. I might write things in C++ or Rust today, and compile that into Wasm.
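As a concrete illustration (not from the talk), here is a minimal Rust module compiled to core Wasm: it exports pure compute and imports nothing, so there is nothing for a host to lock down.

```rust
// lib.rs -- build with: cargo build --target wasm32-unknown-unknown --release
// (the crate needs crate-type = ["cdylib"] in Cargo.toml)
//
// The resulting .wasm exports one function and imports nothing: no threads,
// no sockets, no clock. Any capability beyond arithmetic has to be handed to
// it by the host as an imported function.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```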

With WebAssembly and compiling that down to Wasm, I was talking about a bunch of numbers in a trench coat, and the host has to provide those capabilities. One of the standardized ways of providing those capabilities is with WASI. WASI, when we started working on the standard in 2019, stood for the WebAssembly System Interface. That maybe evokes POSIX. Really, what we're designing with WebAssembly is something that is security first, so that what we pass into the sandbox, and what the sandbox is allowed to execute, isn't in any way poking a hole in our sandbox.

Really, when I talk about WASI, what I'm talking about is WebAssembly Standard Interfaces. These can be high level. They can be low level. A few examples: wasi-clocks, so that I can get access to a system clock. The host could fuzz that so that I'm not able to do certain system-time attacks. Other examples include wasi-http. With WebAssembly, I said it can't access the network or make HTTP calls. But if I'm given a function call, and the host wires that together for me and links it to my Wasm module, my Wasm module can now make that function call.

That function call might say, send this HTTP request for me, please. It is up to the host to provide that sandbox. That's why I've got basically a box around this whole thing: the host might have multiple instances of WebAssembly modules, and each one of those is sandboxed within the runtime.
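To illustrate the host side of that wiring (my sketch, not code from the talk), here is roughly what embedding Wasmtime and granting a module a WASI Preview 1 context looks like; the wasmtime and wasmtime-wasi APIs have shifted between releases, so treat the exact calls as an approximation.

```rust
use wasmtime::{Engine, Linker, Module, Store};
use wasmtime_wasi::sync::WasiCtxBuilder;
use wasmtime_wasi::WasiCtx;

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::from_file(&engine, "guest.wasm")?;

    // WASI functions only exist for the guest because we add them here.
    let mut linker: Linker<WasiCtx> = Linker::new(&engine);
    wasmtime_wasi::add_to_linker(&mut linker, |ctx| ctx)?;

    // The guest gets exactly what we grant: here, just stdio -- no file
    // system, no network, no environment variables.
    let wasi = WasiCtxBuilder::new().inherit_stdio().build();
    let mut store = Store::new(&engine, wasi);

    let instance = linker.instantiate(&mut store, &module)?;
    let start = instance.get_typed_func::<(), ()>(&mut store, "_start")?;
    start.call(&mut store, ())?;
    Ok(())
}
```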

Now, once you have a .wasm, because this is a compilation target, how do you run it? The answer is lots of different ways: tons of different ways that people are running Wasm today. I'm only going to focus on one, which is basically server side. Wasmtime is the runtime that I'm going to be using. Obviously, you can run it in the browser. That's one of the original use cases for WebAssembly, but not the only one: the original design documents included non-web embedding use cases.

Some others that I wanted to give a shout out to: in a lot of ways, a WebAssembly module is really the last plugin model you'll ever need. Tools like Extism and Atmo build app frameworks around that to make it really easy to take a Wasm module and run it as a plugin within a greater system. On that note, I need an application runtime to wire together those different capabilities.

That's why you need an application framework that embeds a Wasm runtime. The application framework that I'm going to talk about and demo with is called wasmCloud. I'm a maintainer on wasmCloud. This isn't specifically about it; it's just that you need an application runtime to be able to wire together these capabilities. wasmCloud itself is a CNCF sandbox project, soon to be incubating. It's built on many other CNCF projects; the one I'll specifically highlight here is NATS.

NATS is the connectivity layer that gives me a way to distribute my Wasm modules. That's a really powerful thing, to be able to have a self-healing network. Like I said earlier, Wasm is one binary, and it's super small. It supports streaming compilation, so that by the time I download a Wasm module, I can instantiate it. Being able to do that over something that orchestrates Wasm over a lattice is really powerful. That's why we're going to show a little bit of wasmCloud.

Demo

I have an app, and it's called kvcounter. I'm going to run it really quickly, just so you can see what we're talking about. It's pretty simple. Every time I hit this button, a counter increments. I'm posting here in a log, and it's saying, I incremented to 106. This is running over in Google Cloud, and this happens to be an x86 machine. Right now, what I was running was spread, round robin, across several different places.

This one here is x86 Linux. It took a Wasm module that I built on my Mac, so I built it here locally on an M1 Mac and pushed it up as an OCI ref. Then I also put it out on Kubernetes, because I could. I have it running on a couple of different Linux operating systems and different architectures, so Arm Linux too; it's a different matrix. The thing that I'm trying to highlight here is that I'm able to take the exact same .wasm that's built in one place, and distribute it anywhere I want, on any architecture.

Each one of these boxes here basically represents a WebAssembly runtime. This is my WebAssembly host, and inside it is embedded Wasmtime. Now if I look at the app, you can see that I have basically seven instances of this running. It needs an HTTP server and a key-value store. I'm using some of these built-ins just so that I don't have to configure or host anything myself.

The code is the cool part. This is kvcounter. The first thing I want to highlight is that this is built with WebAssembly modules, not WebAssembly components. With WebAssembly modules, I have to do things like embed a wasmCloud-specific interface. In this case, this is something specific to wasmCloud. Many different application platforms exist for building WebAssembly modules. This is a really common pattern where we have to come up with basically our own language bindings, our own way of passing in different types.

A lot of folks, what they do is serialize types in and back out as JSON, and that has some performance cost. If I look at the code, basically on a GET request, I call increment_counter. Down here in increment_counter, I call IncrementRequest on the keyvalue sender. The code here is pretty brief. It's only 54 lines. The thing that I want to highlight is what's not here. Already with WebAssembly modules, I am not writing things like how to connect my keyvalue client.

Typically, if you're writing an application today, you might have an import here for DynamoDB, or any other key-value store that you want to use. You configure that client and you make a connection. Whereas with this system, and with many other WebAssembly application frameworks, that is done for you by the host. The host is maintaining those connections. It might have different ways of performing the connection pooling, and all that type of work. To me as a WebAssembly author and the person that's writing this code, I'm really just focused on my functional logic here.
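For a sense of what that looks like, here is a rough sketch in the spirit of the wasmCloud kvcounter actor described above; crate names and exact signatures vary between wasmCloud releases, so treat this as an approximation rather than the literal demo code.

```rust
use wasmbus_rpc::actor::prelude::*;
use wasmcloud_interface_httpserver::{HttpRequest, HttpResponse, HttpServer, HttpServerReceiver};
use wasmcloud_interface_keyvalue::{IncrementRequest, KeyValue, KeyValueSender};

#[derive(Debug, Default, Actor, HealthResponder)]
#[services(Actor, HttpServer)]
struct KvCounterActor {}

#[async_trait]
impl HttpServer for KvCounterActor {
    // On a GET request, increment a counter. Note what is *not* here: no
    // key-value client configuration, no connection handling -- the host
    // links an actual key-value provider to this interface at runtime.
    async fn handle_request(&self, ctx: &Context, req: &HttpRequest) -> RpcResult<HttpResponse> {
        let key = format!("counter:{}", req.path.trim_matches('/'));
        let value = KeyValueSender::new()
            .increment(ctx, &IncrementRequest { key, value: 1 })
            .await?;
        Ok(HttpResponse {
            body: format!("counter: {value}").into_bytes(),
            ..Default::default()
        })
    }
}
```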

Neither Web, nor Assembly

I just showed a lot of cool things that I can do with WebAssembly modules, and that it's an open W3C standard, actually a recommended web standard since 2019. It's safe, secure, fast, polyglot, all these things. Why would I want to change it? I'm able to build real applications today. The application that I was showing, that was Cosmonic. Cosmonic is built on top of wasmCloud, which means it's Wasm turtles all the way down. I was running an application in Wasm, built on a PaaS that's built on Wasm, so I can do real, cool things in production. So why would we change?

WebAssembly Component Model

First, we have to talk about what we're changing to. That's called the WebAssembly component model. This is a phase 1 proposal within the W3C. Most of the work is happening within the WASI working group, the WebAssembly System Interface group. We're very interested in finding ways to run WebAssembly everywhere, and finding good ways to embed it. Part of embedding includes being able to call APIs on that component. With the WebAssembly component model, part of the proposal is that I can now pass in high-level types: strings, records (which are basically structs), enums, all kinds of different stuff.

When I look at the difference between a WebAssembly module and a WebAssembly component, you'll see that the very first few bytes are different. One says it's a module, one says it's a component. That part's obvious. You can also see that when I'm trying to do things with strings, I have to convert those into core value types that are supported by WebAssembly. Those are basically all numbers, like I said before, so these are i32s. On the right-hand side, for the WebAssembly component, I'm able to talk about things with real names.

Like, I want to use logging, and in logging, I want to be able to pass a string to the log function. That's really nice when you start talking about ways to pass different types in and out of Wasm. If I could say one thing about what the WebAssembly component model is, it's a way to work with high-level types within WebAssembly.

It really has three key properties that make up the proposal. The first is that you need to be able to link with other WebAssembly components, so I'm able to take them and compose them together. The second is that I'm able to pass in high-level types, which can be defined with the WebAssembly Interface Type (WIT) IDL. This is an IDL that's part of the proposal, specifically for being able to translate types between WebAssembly components.

Then the last one is a new concept, coined by Luke Wagner, called virtual platform layering. We'll talk a little bit more about that towards the end. The high-level idea is that if I have all three of these properties combined, I have something that's composable, virtualizable, and easily language interoperable. Once you have all of those things, that means I can come up with different ways to make the same .wasm run in lots of different environments.

The diagrams here at the bottom are by Lin Clark. She does amazing code cartoons. On the left-hand side, that's representing a WebAssembly runtime, and the lines in between are basically those interface types, those high-level type definitions. The WebAssembly runtime, the host, and the guest module, that .wasm, are able to communicate with each other over high-level types. The same can be said for two different WebAssembly components compiled from different languages; they too will be communicating over these high-level types.

A lot of folks get confused when I talk about the component model and WASI, because in a lot of ways, these two standards have been co-evolving. In a lot of ways, they're tightly coupled. They're designed to be built on top of each other. It all starts with the WebAssembly core specification. Then the component model proposal sits directly on top of that. That's where WASI Preview 2 comes in. WASI Preview 2 is this big milestone release that we've been working on for about two years now.

It's very soon going to be in the draft stage. We've just put out an announcement saying that it's going to be in Wasmtime 10.0, which is not released yet. I'm going to be running on that locally so that you can see a little bit of it in action. The end goal here is that not all of this stuff has to be standardized; anybody can come and write their own modular interfaces and build on top of these proposals. Now you have the ability to take advantage of all this stuff, mix and match, and run components anywhere.

I don't want to bury the lede. I've already talked a little bit about how components are basically going to be available with WASI Preview 2. All the WASI Preview 2 definitions are defined with the WIT IDL. It is very much under construction. Building this demo, a lot of different things were changing at the time. If you want to take my code and run it, you literally have to do it today, because I can't promise that it won't work tomorrow.

Step one, creating a component. Today, nothing actually targets WASI Preview 2 because it hasn't quite entered the draft phase, which is the call for implementers: please implement this. An implementer of WASI Preview 2 would be someone like rust-lang, or TinyGo, or upstream Go. They would be able to add another target: in addition to WASI Preview 1, they would target WASI Preview 2. The thing that comes out once you say go build would be something that adheres to the Preview 2 spec.

Because that doesn't exist yet, what I'll do is use another tool that takes a Preview 1 module, a WebAssembly module, and adapts it to the Preview 2 specification. Then I'm going to use tools that generate guest language bindings. The popular one that a lot of folks use is wit-bindgen, but there are several others cropping up in different spaces, like the JavaScript component tooling called jco. That's all so that you can get something that's a little bit more native to your environment. From the combination of all those different steps and tools, out pops a WebAssembly component.

Writing a Wasm component is not all that different from what I was showing earlier with a WebAssembly module. The key thing to highlight is that at the very top of this file I'm using wit-bindgen. The point is, I'm using a different tool to generate the bindings. At the very top of this, I'm not pulling in things from wasmCloud or from any other proprietary language SDK, or anything like that. I'm able to use upstream tools, code to the WIT IDL, and take an off-the-shelf component and run it in any one of these other application frameworks.

That's the goal: to eliminate all of the silos that we currently have, and to be able to work with different FaaS frameworks. You see this everywhere. If I'm trying to write a Knative service within Knative, I might not be able to use all the different APIs that I'm used to; I have to use their SDK to build things out. The same thing exists within the WebAssembly module ecosystem today, where I build against specific SDKs. Let's eliminate that and have a standardized set, so that I can take the same component and run it anywhere.

If one day a serverless framework works best for me, I run it there. If another day I want to run that exact same thing inside a database, I should be able to do that. That's the first key to writing a component: I don't necessarily care where it's going to run. The other thing that I'm highlighting is the same thing that I showed with WebAssembly modules, which is, I say keyvalue set, I say publish to a message broker.

This little piece of code, all it does is listen on a message broker for a ping. Once it gets a ping, it increments a keyvalue bucket. Then it publishes a pong with the current value. With this little amount of code, I'm still relying on the host to link together that capability and provide the connection to the keyvalue bucket and the message broker. That's the next evolution from WebAssembly modules to WebAssembly components: being able to eliminate the walled gardens between language SDKs in different application frameworks.
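The shape of that guest code might look something like the sketch below (my illustration, not the demo source); the `keyvalue` and `messaging` modules stand in for host-provided interfaces that would really come from generated WIT bindings, so they are stubbed here just to keep the example self-contained.

```rust
// Stand-ins for host-provided capabilities. In a real component these would be
// imports generated from WIT definitions (e.g. a keyvalue and a messaging
// interface) and implemented by whatever provider the host links in at runtime.
mod keyvalue {
    pub fn increment(_bucket: &str, _key: &str, delta: u64) -> u64 {
        delta // stub: a real host forwards this call to Redis, NATS KV, etc.
    }
}
mod messaging {
    pub fn publish(subject: &str, body: &[u8]) {
        println!("publish {subject}: {}", String::from_utf8_lossy(body));
    }
}

// The guest's entire job: on a "ping", bump a counter and publish a "pong".
pub fn handle_message(subject: &str, _body: &[u8]) {
    if subject == "ping" {
        let count = keyvalue::increment("default", "pings", 1);
        messaging::publish("pong", count.to_string().as_bytes());
    }
}

fn main() {
    handle_message("ping", b"");
}
```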

Linking Components

The really incredible thing, and the reason why I think we're here, is that in order to build features faster, we have to be able to take components and libraries written in other languages, whatever is the best language for the job, and bring them together into one single binary. The way that we do this today is we write things as microservices, and then we build out different containers. Containers themselves aren't composable.

With something like the WebAssembly component model, I can actually compose them together so that they're all running within the exact same process. We link that together with this command called wasm-tools compose. This takes three different components, one each written in Rust, Go, and JavaScript, and what results is one single .wasm, one single binary at the end. What I'm really saying is that, in this case, WebAssembly components are a bunch of numbers in a trench coat, pointing at each other. They don't actually necessarily touch. They do say, here's my import and here's my export, and we'll figure out how to translate those types between them.

Once you have that, there's one really cool feature that I want to highlight, one that changes the way I think we'll start writing software. That's the ability to link in different capabilities at runtime. In this screenshot, I'm showing that, at runtime, I linked in NATS JetStream. Then I changed it to Redis. Then I changed it to Vault, because I could. The result of that is my code never had to recompile; all it had to do was relink.

This is a little bit canned, because you're probably not going to jump around between different stores like that. Probably, what we're all going to do is move from one Redis version to the next, because there was some vulnerability. The important thing is that my Wasm component never had to change. It just linked to those different capabilities. That's handled seamlessly, with zero downtime, by the host.

That's why I think declarative linking of components is really cool. I feel bad for Log4j, because we all dunk on it throughout the entire conference, but it earned it. Right now, when we have a vulnerability like Log4j that shows up and is pervasive across an entire language ecosystem, we have to rebuild the entire world. We don't just have to rebuild the world of now, we have to rebuild the world of 15 years ago when we first built that thing, and it's long-term supported.

That, to me, is a problem worth solving, because it takes a ton of our human power and a lot of our energy; just think about the CPU burn and the environmental cost that events like this cause. By not having to recompile my code from 10 years ago, and just relinking it to the right capability that doesn't have that vulnerability, I save so much. I've moved that complexity to the dependency that caused the problem. It's moving the complexity out of my code and into the place where it belongs. I really like that.

If you think about the way that we've moved across epochs of computing, we've moved to greater levels of abstraction. The component model, in my mind, is that next evolution, another step, maybe even the final abstraction that we can build against. We started with what we built within data centers. Then we eventually moved into containerization, where we started building a lot of containers.

Then we realized we needed to orchestrate them, so we all landed on Kubernetes to orchestrate our containers. What I expect to happen next, over the next decade, is that we're going to find new ways to componentize our applications, rather than just containerize them. The benefit being that when we're writing components, we're focused just on our app logic, and all that other stuff underneath is moved to the platform.

Demo

So far, we've hit some of the high-level stuff. I'm going to go a little bit deep, but only briefly, because I think the spec is really cool. Basically, one of the key architecture decisions behind the component model is that it's based on shared-nothing linking. What that means is that a single component instance fully encapsulates the core Wasm module. The core Wasm module has memories, tables, globals, functions: basically everything that makes up a normal Wasm module is in it.

Within that memory, the only way that you can talk to it is through its imports and exports: basically, its interface boundary. When I look at a single Wasm component, there are many different component instances. Each one of those component instances fully encapsulates its module, its logic, and its memory. That means I can do things like apply the principle of least authority. I can't go and reach into my other library dependency and read what it was doing.

Think about how we write apps today. We send in a giant config bundle of all of our environment variables, and we say, good luck. It's got all of our API keys. It's got our database user and password. If a library that is upstream of me, somewhere within my supply chain, has been attacked, and it just calls os.getenv for the AWS key, then it has access to that key and can do all kinds of gremlin things. With separate component instances, I'm able to pass in exactly what each one needs.

I know with WASI exactly what interfaces it wants to use. I know that those interfaces have been standardized, so we've looked at them for things that might have tricky issues, like SQL injection. By building on this design and architecture, I'm able to really focus on modular programming, so that I am no longer worried about what all these different libraries are doing to be able to work with my app. I'm focused just on my app logic, and I know that I'm safe from the libraries that I'm pulling in.
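As a tiny illustration of the difference (my example, not from the talk): in a traditional process, any dependency shares your ambient authority; in a component, a dependency only gets what is explicitly passed across its interface.

```rust
use std::env;

// Ambient authority: any transitive dependency linked into the same process
// can read every environment variable, including secrets it was never meant to see.
fn sneaky_dependency() -> Option<String> {
    env::var("AWS_SECRET_ACCESS_KEY").ok()
}

// Capability style: the caller decides exactly what the callee receives.
// A component works this way by construction -- its only inputs are the
// imports and arguments the host and caller choose to wire in.
fn well_behaved_dependency(api_key: &str) -> usize {
    api_key.len()
}

fn main() {
    println!("{:?}", sneaky_dependency());
    println!("{}", well_behaved_dependency("not-a-real-key"));
}
```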

When you look at the WebAssembly specification, one thing that shows up a lot within the component model specification is lifting and lowering. Lifting means converting a core value, like an i32, to a high-level value, like a string. Lowering is taking that high-level type and bringing it back down to a core value. An example would be taking a value and converting it into a concrete string representation.

If you're familiar with how strings work across different languages, everybody implements them differently. A Python string is not the same thing as a Rust string. That's why you need something to bind to, to know: if I'm going from one language to the other, how do I get it to a common type and back down, so that it will work within that other language? We call that the Canonical ABI. Those instructions are called canon lift and canon lower, plus whatever type is being moved up or down. That boundary is right there; it's like glue code right in between the imports and the exports.
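Conceptually (this is a simplification of mine, not the actual Canonical ABI code), lowering a string turns it into the two core numbers a module can understand, a pointer and a length into linear memory, and lifting reads them back out:

```rust
// Lowering: a high-level string becomes (pointer, length), i.e. two i32s.
// The real Canonical ABI also copies the bytes into the callee's linear
// memory via its exported allocator (cabi_realloc); that step is omitted here.
fn lower(s: &str) -> (i32, i32) {
    (s.as_ptr() as i32, s.len() as i32)
}

// Lifting: given those two numbers, rebuild a string on the other side.
unsafe fn lift(ptr: i32, len: i32) -> String {
    let bytes = std::slice::from_raw_parts(ptr as *const u8, len as usize);
    String::from_utf8_lossy(bytes).into_owned()
}

fn main() {
    let (ptr, len) = lower("hello");
    println!("{}", unsafe { lift(ptr, len) });
}
```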

Let's actually take a look at another demo of this. In this case, I'm going to use a cheat sheet. We call it a guest when it's the WebAssembly component that you're building. In this case, I've got some pretty simple Go code. It is a Go component, and all it says is, "Hey." It's got a Gopher emoji here, so that you know that it's coming from Go. I build it by doing some stuff that's fairly common to Go developers.

I use a tool called TinyGo rather than Go itself. TinyGo is its own compiler ecosystem that targets WebAssembly extremely well. Upstream Go is working on support for WASI Preview 1; today it only supports command modules. Basically, TinyGo is a great way to get started if you want to write Go and do WebAssembly. I tell it to target WASI. I pass it my Go module. I've printed out the WAT file.

I've shown it a few times, but I didn't quite describe what it is. WAT is the WebAssembly Text Format. If I look at the top of this, I'll see that the thing I just produced, by building with tinygo build -target=wasi and printing it with wasm-tools print, is this module here. This is a textual representation of the binary output. If I jump all the way down to the bottom, I'll see that even in core WebAssembly, so this is a WebAssembly module, I have imports and exports. That part of the specification stays much the same. The important part is that I'm basically able to make this greeting, Hello. It has this string for defining what that function name is.

Once you have a WebAssembly module, we need to convert it to a WebAssembly component. Upstream, there's this monorepo called wasm-tools. Inside wasm-tools we have a bunch of goodies. The one that I'm going to use specifically is the wasm-tools component toolchain. What it does is take this component adapter, which is actually a .wasm itself, because once you get things into Wasm, everything starts ending up in Wasm.

I'm able to adapt the Wasm module that came out and convert it into a component. If I want, I can print this out again and say, ok, given that new component binary that I just created, what's its textual format? Its textual format is the same, but a little bit different. Again, at the top, it's a component. Back down at the bottom, I'll see that I have a new way of defining things, so I get instances. I can take that same core module and create multiple different instances.

That's handy if you've got one library that's used in a lot of different places, but you want to pass in different things depending on how it's used. The other thing I wanted to show is that I'm able to pass in something, greeting, and it takes in a string. Then once we start digging in a little bit for where greeting shows up, and where hello shows up, you'll see somewhere in here I've got a canon lift and lower.

There we go. This shows a little bit of it. I'm basically eventually getting back down into the core WebAssembly specification, which means I'm back down to numbers. I was able to define the types to convert from Go's string type back down to core Wasm types, so from strings to numbers.

One more thing: we're going to look at the WIT. Given a WebAssembly component, I can actually get a full definition of all its types. I don't need a WIT IDL file to figure this out. All I need is the WebAssembly component binary. We'll see that it's got a couple of different imports. Those were added for me by the adapter that took me from Preview 1 to Preview 2. The main thing to look at is that this export is component example greeting.

That gives me the ability to say, "Hey," as a Gopher. I've done the same thing, essentially, but now in Rust. If I open up the Rustacean component and look at its source code, you'll see at the top here that I depend on my exports, and I'm saying I'm making a greeting. I have a component and it says hello with a Crab. Now let's build this one, which is significantly shorter: cargo component build. If you're familiar with Rust, this looks basically the same as what you would normally do.

The only difference is that I'm using a toolchain called cargo component, which knows how to produce componentized binaries. That built, because I just built it. Now let's pop up one more WebAssembly component that we're going to build. For this one I'm not going to use cargo component, just to illustrate exactly what the calls are within the cargo ecosystem. Basically, I target wasm32-wasi, which produces a .wasm, and I can convert that into a component.

Then I'm going to go ahead and just print these, in case we want to look at the WIT for those. Here we go. This final thing that I built, this is my app. My app needs a Gopher, and it needs a Rustacean, to work. Those are my imports. Then my export is just run. If somebody loads this component and wants to see what it does, they have to call the run function. Now let's run. I'm going to cd into the host directory. Right here, I've written a really simple Rust host; let me pull up its code. It embeds Wasmtime.

I'm going to scroll down to show one other interesting thing. We're going to skip that part of the demo. In this case, I'm loading our component from file. That is the final linked Wasm component. I'm going to run a command called wasm-tools compose. The config file for that is this: it basically says where each one of those WebAssembly components exists, so one each for the Gopher, the Rustacean, and the app.

I run one command, wasm-tools compose, and I want it to output app_component.wasm and run it. I've got a linked Wasm component. Now let's go back to the host. I should just be able to run cargo run, and there, it runs the function. I really like this WAT stuff, so I've got to show you at least a little bit of what it looks like on the inside. This final thing that we linked together, let's look at its WAT: cd guest, and wasm-tools component.

Notice that when I look at linked.wasm, the combination of app, Rustacean, and Gopher, this final thing doesn't expose any information about Gopher and Rustacean. Even those component dependencies are fully encapsulated inside that top-level component. The only function that I'm running here, or that I even have access to run at the top level from the host's perspective, is that run function. If I show you a little bit of that: I basically take the WebAssembly instance and look for the run function. It's pretty simple. I call run and print the result.
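For reference, a minimal host along those lines might look like the sketch below (my reconstruction, not the talk's source); it assumes the Wasmtime component-model API of roughly the 10.x era, that the composed component has no unsatisfied imports, and that run returns a string, any of which may differ from the actual demo.

```rust
use wasmtime::component::{Component, Linker};
use wasmtime::{Config, Engine, Store};

fn main() -> anyhow::Result<()> {
    // The component model has to be switched on explicitly.
    let mut config = Config::new();
    config.wasm_component_model(true);
    let engine = Engine::new(&config)?;

    // Load the output of `wasm-tools compose`.
    let component = Component::from_file(&engine, "app_component.wasm")?;

    let linker: Linker<()> = Linker::new(&engine);
    let mut store = Store::new(&engine, ());
    let instance = linker.instantiate(&mut store, &component)?;

    // The only top-level export is `run`; everything else is encapsulated.
    let run = instance.get_typed_func::<(), (String,)>(&mut store, "run")?;
    let (greeting,) = run.call(&mut store, ())?;
    run.post_return(&mut store)?;
    println!("{greeting}");
    Ok(())
}
```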

C-You Later Alligator

With that part of the demo, at least you can see that a lot of these implementations are getting built. They're real. What it really means is that I'm able to say, C-you later alligator, to a couple of things that are a real pain in the application development lifecycle today, including having to write my own manual bindings over FFI. When I was showing the Go stuff, that actually is building over cgo today. Right now, a lot of our toolchain and everything that we build is in Rust.

To get things going a lot faster, we used our existing Rust stuff through cgo. I expect what we'll build later will use native Go to do the language bindings, without having to go through a C FFI binding. Effectively, if you take one component, and I showed being able to print out its WIT definition and know all of its imports and exports, I no longer have to take that and then generate a bunch of different language SDKs for other people to use it.

All they need is my component, and then they can generate the language SDKs that they need. I think that's handy, because it's pretty annoying today when I make one change and then have to kick off this giant build matrix for all the different languages of the people that depend on my one library. It also means that we've gotten rid of insecure memory sharing. Within one component instance, I'm not able to reach into a different component instance.

Then I showed that, even at an interface boundary, everything is fully encapsulated, even from the top level. That basically means that we're no longer vulnerable to transitive dependencies. If we start writing apps that just target WebAssembly components, I no longer need to sprinkle ifdefs about what my operating system and CPU architecture are.

Build With the Right Tool for the Job

The goal here is that we'll get to the point where we're able to build with the right tool for the job. A different way of saying that is, like in the Hunger Games, may the best component win, because it's the fastest, the most ergonomic, or the smallest. I don't necessarily expect everybody to go off and build one WebAssembly component that includes every language under the sun.

What I do expect to happen is that folks pull in dependencies written in Rust, or C, or C++, languages that can be highly optimized and super small, and then the people consuming those can write in Python, or JavaScript, or whatever language they're familiar with, or whatever works best in the ecosystem they're targeting, and benefit from those libraries written by others. One thing that kept showing up when I was showing the WIT definition is this concept of a world.

One way to think about worlds is that they are profiles. A world is a definition of the environment that I need in order to run. My component says: I have these imports, and I want to import things like the stuff I would use for running a CLI command, which means things I would need if I'm running a terminal app. A lot of these world files probably won't be standardized. There are a few key ones that we've been talking about within the standardization process.

One of those is WASI CLI. We think a lot of people are going to write CLI applications, and there are certain things that everybody's going to need if they want to write a CLI app. If I'm writing a CLI app, I want the clock. I want to be able to access the file system. At the very end, every component that targets this world file basically just needs to expose a run command. This is really interesting, because while I say that I need a WASI file system, and I know exactly what those interfaces are, these are all also fully virtualizable.

It doesn't mean that I actually have to give it a file system. I could give it a cloud blob store. I could give it an in-memory representation. There's a project called wasi-vfs, a virtual file system that creates a file system in memory. If you're able to do that, then you can solve new problems, like building your own digital twins, and doing that with platform layering.

Digital Twins with Platform Layering

The first example I'll give is that I can take the same .wasm, like I showed being able to run it across different clouds and on different architectures, and have different capabilities linked to it from local to dev to prod. For example, in dev, I probably want a file system so that I can poke around easily and figure out what my program is doing. Then, in dev, I might be connected to a really slow S3 connection, but in prod, I'm connected to something that's HA.

It can be hard, when you're bundling different services and capabilities into your app, to know what it is connected to, where the problem is, and what's going wrong. If I'm able to run literally the same .wasm in all of these different environments, now I have a really great way of having a good development loop. The same thing applies if I'm building digital twins, and I want to target really esoteric hardware or firmware that's hard to update.

I can build something that's a .wasm, the exact same thing, and have two different world files: one world file for being able to run locally, and a different world file for being able to run on that embedded device. That can be really powerful. I think this will unlock a lot of new software paradigms that people will get into.

Conclusion

Imagine a world where a Gopher, a Rustacean, and, if you write JavaScript, maybe a Deno dinosaur, can all work together in a single world file that produces a single component. That's really powerful, because it should break down a number of silos: not just language ecosystem silos, but even the frameworks and libraries and protocols in between all of these different layers. You might think that sounds a little crazy.

All my friends are Gophers, or they're all Rustaceans. Honestly, I didn't have to look far to see that this would solve real problems for me today, because there are real people behind each one of these libraries. On my team I looked and I saw Taylor, who likes Rust, and Lachlan, who works mostly on the frontend side, writing in TypeScript. For us to be able to cooperate and build things in the tools that work best for us is extremely powerful. I think most people can think of a giant network of people they already know, folks that they would like to collaborate with.

Get Involved

If you think this is exciting, then please get involved. We are working on the standardization process. It's phase 1. Input from folks that are building applications today and can provide real use cases that they need solved is invaluable feedback. That's within the W3C. The implementation of those standards is happening within the Bytecode Alliance software foundation, and we have weekly meetings discussing the component model. Most of us hang out in Zulip chat. If you're into distributing and orchestrating Wasm, then I recommend checking out wasmCloud, which is a project I work on.

 


 

Recorded at:

Nov 14, 2023
