WASM in the Wild West: a Practical Application Tale


Summary

Taylor Thomas and Matt Butcher discuss the possibilities afforded by WASM and why they think it will be a major component of application development in the cloud, along with some of the lessons learned.

Bio

Taylor Thomas is a senior software engineer working on Krustlet, Bindle, Wasm, and other open source tooling at Microsoft. Matt Butcher is a principal software developer at Microsoft where he leads the team of open source developers that manage Helm, Krustlet, CNAB, Brigade, Porter, and several other projects.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Butcher: I'm Matt Butcher. This is Taylor Thomas. I lead an open source team at Microsoft. I've worked on Helm and Kubernetes. Even going back, I worked on OpenStack, and things like that. A lot of what you'll hear is deeply grounded in that background.

Thomas: I work with Matt. I am also in the open source world, and I work on Krustlet as a core maintainer. I used to work on Helm as well. I've been doing Kubernetes and Docker things for a long time, which is the basis for where we started a lot of what we're talking about. I came to the Rust side of things, which we'll also be mentioning a bit, by way of Go.

Wet Your Whistle

Butcher: We've been working on Kubernetes for quite a while at this point. In fact, Taylor and I met because of our work on Kubernetes. Kubernetes is really just a way to run containers on a host of different machines or virtual machines. You can think of it as, in the olden days, I would have a virtual machine and it did just one thing. That wasn't a terribly efficient way of utilizing virtual machines or physical pieces of hardware. Containers came along and offered us an abstraction layer where we could run lots of containers, each reasonably isolated, but sharing the same host. In the Kubernetes model, you have one API server, and all the clients are talking to that one API server and saying, can you schedule this container and run it for me? Kubernetes takes that request and says, somewhere out there, do any of you nodes have space for this? We're going to talk about these kubelets. A kubelet is the piece that sits on the individual node, the machine running the container runtime, and says, "Yes, I can run a container here," or, "No, I can't run a container here, I'm full," or, "Your container request doesn't meet my set of requirements." For the majority of the time we've been using Kubernetes, we've been working all around the periphery of what is, at the end of the day, a big scheduler. We wrote Helm, which is basically a package manager where you can install things and say, run this. Then Kubernetes would take it, look at your Helm chart, and say, ok, I can run this. Kubelet, can you run this? Yes, I can run this. Can you run this? No, I can't. Ok, how about the next kubelet?
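That yes/no admission dialogue can be sketched in a few lines. This is our own illustrative code — hypothetical field names and numbers, not Kubernetes' actual data structures:

```rust
// Hypothetical sketch of the kubelet-side admission check described above:
// a node accepts a pod only if the architecture matches and it has room.
struct Node {
    cpu_free_millis: u64,
    mem_free_mb: u64,
    arch: &'static str,
}

struct PodRequest {
    cpu_millis: u64,
    mem_mb: u64,
    arch: &'static str,
}

fn can_schedule(node: &Node, pod: &PodRequest) -> bool {
    node.arch == pod.arch
        && node.cpu_free_millis >= pod.cpu_millis
        && node.mem_free_mb >= pod.mem_mb
}

fn main() {
    let node = Node { cpu_free_millis: 500, mem_free_mb: 1024, arch: "amd64" };
    let small = PodRequest { cpu_millis: 250, mem_mb: 256, arch: "amd64" };
    let big = PodRequest { cpu_millis: 4000, mem_mb: 8192, arch: "amd64" };
    assert!(can_schedule(&node, &small)); // "Yes, I can run this"
    assert!(!can_schedule(&node, &big));  // "No, I can't, I'm full"
    println!("admission checks passed");
}
```

The real kubelet checks far more (taints, volumes, ports), but the shape of the decision — match the request against what this node can offer — is the same.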

The Big Question

We had been working at that layer for a while. At some point, we stopped and said, Kubernetes is interesting, and it's solving a lot of problems for us. If we take a step back and look at the world, we see that there are different tiers of virtualization. We've got these big, bulky virtual machines. A virtual machine runs the entire operating system; kernel and drivers all the way up to the application layer all get bundled into one big virtual machine image, and you execute it somewhere. They tend to be gigabytes in size. Kubernetes gave us a distributed runtime for containers. Containers are a lot smaller, because they don't need the kernel and the drivers packaged up in every single container, but each one still has a pretty big chunk of the operating system packaged up inside it. You might not be pushing around a 9 or 10 GB image, but you might be pushing around anywhere from a 50 MB to a 2 or 3 GB image. We started saying, what if we could trim that down a little? What if we only needed a smaller segment of the operating system features, like just the ability to, say, read environment variables and read and write files on the file system? Could we get smaller executables and a less bulky runtime, in exchange for not being able to do quite everything that we could do with containers? We looked around, and WebAssembly popped up on our radar and looked very interesting.

Tall Tale

That's the groundwork for this tall tale that we're going to tell you. About two years ago now, our whole team, Deis Labs, got together in Canada for our annual on-site. We spent three, maybe four days holed up in conference rooms during the day, and then went out to restaurants and did escape rooms and things in the evening. As we got going, we started talking about this idea. We've got the big, bulky virtual machines, and we've got the slightly less bulky containers; what if we tried to find this svelte thing that we could run in addition to those? We got going on the WebAssembly idea, and tossed around concepts that we could build with WebAssembly for three, four hours one night.

At the end of it, we were all so excited. We said, we got to try this. We got to try something. What if we took Kubernetes and we ran WebAssembly in Kubernetes instead of containers, or better yet, ran WebAssembly side by side in a Kubernetes cluster along with containers? That's where our project called Krustlet came from. Krustlet is a kubelet written in Rust that executes WebAssembly. We worked on a rough draft of this, just barely enough that I could demo it to somebody and say, "Look what we did. Don't touch anything, it'll tip over." It worked and got other people excited. We designated a bigger block of time. We went from a proof of concept to building a minimum viable product, something that we felt like really could provide most of the benefits for WebAssembly modules that one would get with containers when running them inside of Kubernetes. That is the stage at which we have been for just about a year now. We're wrapping up at the end of that. We're getting very close to having the 1.0 release of Krustlet.

The excitement didn't dwindle at all for us over the last year. In fact, we got more excited about the different things we could do with WebAssembly. Consequently, we've started building out a whole bunch of new WebAssembly projects. We built a package storage system called Bindle. We built a very lightweight web application runtime called WAGI. Now we're just tinkering around with other ideas and other goals, to see if we can push this lightweight, cross-platform, cross-architecture, secure environment all the way up to its limits. We'll be talking about that kind of stuff.

It's Never Too Late to Check Your Cinch, or What We Learned

Thomas: That's where we really get into some lessons and things learned. This is where we're going to talk about the very practical things. That's why we called this Wasm in the Wild West, because we've been out on this bleeding edge for a while. Right here is a phrase that I got because my spouse does horse things. One of the things they always say is, it's never too late to check your cinch. If you've never sat on a horse before, there are actually two cinches underneath the saddle. The main one, you have to make sure is the right tightness. If it's too loose, you'll tip over or slide, or all sorts of things can happen. If you leave it too tight, you'll make your horse pass out, which will also result in you falling on the ground. What we're trying to teach here is: what are these things that we've checked? What are we checking? What cinch are we checking here?

The Horse and the Pony Show - Java VM

We're going to talk about the horse and pony show here, as we like to call it: the various runtimes that people like to compare Wasm to. The first one is the Java VM. People always ask, isn't this just the Java VM? In a sense, yes, you could consider it like that. For the purposes of our comparison here, it's the big draft horse. It's tried and true and infinitely configurable. There are people who literally have jobs configuring the JVM. But it's very bulky, and it's limited to a single language ecosystem: Java and its derivatives, all the other languages you can run on top of it. That is what you start off with in this horse and pony show.

Containers

We also have containers, and these containers are more like our Quarter Horse, which is the breed I'm used to personally. They're mid-size. They're big and strong, but not as hefty as a draft horse with its big dinner-plate-sized hooves. Containers are more lightweight than the Java VM, particularly when you have a well-made container, and they can run things written in multiple languages. But they're really not very good at cross-platform. If we're being honest with ourselves, Linux containers only work on Linux systems, and Windows containers only work on Windows systems. There's been a lot of cool work done to bridge that, but each container is built for a single system. They also require extra build steps that developers now have to keep track of. This is things like the Dockerfile, where you have to specify all these different dependencies and everything you build in. There are extra build steps required to get your application into a container.

Butcher: I would add that in addition to Linux containers only working on Linux and things like that, we're seeing a resurgence of interest in architectures like ARM, and now we have containers that can only run on ARM and containers that can only run on x86. The complexity of the problem is compounding.

Wasm Runtimes

Thomas: Last but not least, we have our little Shetland pony, which is the Wasm runtime. It's cute and adorable, and it's very lightweight. That refers both to the runtimes and to the binaries. We can get the Wasm binaries, which are called modules, down to just a few megabytes. It depends on what you're doing, but generally speaking it's around a tenth of the size of a comparable Docker container. They're sandboxed by default, using a capabilities model, meaning the module can only do the specific things you approve: you can only access this file or this file system; everything is granted with an explicit permission. They're also truly cross-platform. There are edge cases we'll talk about that you have to watch out for, but if I build a Wasm binary on my Mac, I can pass it to someone on a Windows machine, and it'll run there. On a Raspberry Pi, it'll run there. On a Linux machine, it'll run there, using the exact same binary. It also allows compilation from almost any language without additional steps. There are a few languages that require more, but we'll get there too.

More than One Way to Run It

Butcher: One of the exciting things about the compilation thing is that when you're dealing with a bytecode file, the host can choose at runtime how it wants to execute it. We're starting to see a lot of different runtimes that are all optimized for different execution models. I'll start out with that first one and talk about Wasm3, an ultra-light interpreter that executes a program as it's reading it in, which means it can use a very scant amount of memory and system resources at the expense of maybe executing something a little bit more slowly.

Thomas: We also have the JIT runtimes, which compile at runtime; each of the things here are examples of them. They have slightly higher space requirements, but they execute faster. That's generally the kind we've seen the most of. Then there's AOT, which stands for ahead-of-time compilation. You lose the platform independence at some level, and you take up more space, but you gain a lot of speed. Like it says at the very bottom, the best thing about this is that the decision is made when it matters. You can do any of these things from the exact same WebAssembly module. That is very powerful, because you can make the decision based on your requirements at runtime, rather than three weeks in advance when your developer is compiling it.

The WASI Spec

Butcher: WebAssembly is standardized, and it's standardized by the W3C, the same place that does HTML and Cascading Style Sheets. What we're excited about is an emerging second spec that complements WebAssembly. WebAssembly itself defines how a module is compiled and how it executes. The WebAssembly System Interface specification, WASI, is where we feel the real future for WebAssembly in the cloud lies. The goal is to have a common interface that lots of different WebAssembly projects can build on, providing POSIX- or libc-like APIs: open a file on the file system, read an environment variable from the environment, but in a common way that can be sandboxed and tooled for security. It's still very much in flux, but we'll be talking about it here and there, because for us, WASI really opens up a lot of options for how we can run things in the cloud, as opposed to WebAssembly's initial goal, which was to run things inside a browser.
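To make the file-and-environment-variable idea concrete, here is a sketch (our own illustrative code, not from the talk) of a plain Rust program whose std calls map onto WASI when built for the wasm32-wasi target:

```rust
// Ordinary Rust using only std APIs. The point, per the WASI goal described
// above, is that this same source compiles natively with `cargo build`, or to
// WebAssembly with `cargo build --target wasm32-wasi`; a WASI host then
// grants file and environment access by capability (for example, wasmtime's
// `--dir` flag pre-opens a directory for the module).
use std::env;
use std::fs;

// Read an environment variable through the standard API; under WASI the host
// decides which variables the module is allowed to see.
fn greeting() -> String {
    let name = env::var("WASI_DEMO_NAME").unwrap_or_else(|_| "world".to_string());
    format!("hello, {}", name)
}

fn main() {
    println!("{}", greeting());
    // File I/O goes through the same std calls; a WASI host only satisfies
    // them for directories it has explicitly pre-opened for this module.
    match fs::read_to_string("config.txt") {
        Ok(text) => println!("config: {}", text.trim()),
        Err(_) => println!("no config.txt visible to this module"),
    }
}
```

Nothing in the source names WASI; the sandboxing lives entirely on the host side, which is what makes the same binary portable across hosts with different policies.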

Keep Your Butt in the Saddle, or How to Avoid the Gap

Thomas: Once again, we've arrived at another random Wild West phrase, but it is very accurate here. There's this phrase, keep your butt in the saddle, which sounds so dumb, but it really is true. Oftentimes, you stop thinking about what you're doing; you're so focused on the reins, or what pressure you're putting on the horse, that you don't keep your butt in the saddle. That makes you lose points at shows in the best case, and fall off the horse in the worst case. So how do we avoid the gap that's there right now? What are the gaps that we currently see?

Community Fracturing

The first one we point out, which could be the biggest threat, though we see efforts right now to pull things together, is community fracturing. Throughout this section, we'll contrast where we are with where we want to be. Right now, there's a whole bunch of competing runtimes. Each one of those runtimes has its own buy-in: you have to use their custom libraries and their custom interfaces, and until recently there was no major community or foundation to use as a gathering place, though that is very much changing. That's why we put it right on the border there: the Bytecode Alliance is acting as a meeting place for everyone wanting to work on these specs. Where we want to be is a place with better collaboration through foundations, and a common specification, which is WASI, with various implementations, so we have less lock-in to those custom libraries. What we really desire here is a good, solid set of things that everyone can then build their competing solutions on. We know we're in a world where people are going to build competing things. That's great. We just need a good common foundation to set it on. We want to avoid that fracturing.

Butcher: We've seen that happen in just about every language and tooling ecosystem before. We're just eagerly awaiting that motion in WebAssembly. It really is starting with the Bytecode Alliance and the W3C. I think we're really seeing a lot of people gather.

Developer Experience

Another interesting one is developer experience. The Wild West theme definitely runs strong when it comes to developer experience. Because right now, it is a lot of work to set up a tool chain. A lot of times we are writing custom low level bindings, and some tools require bespoke tool chains. If nothing else, there's just a lot of steps. You're compiling this. You're using this tool for that. You're signing with this. We're just now starting to see some motion to where we want to be. One of those things is we want to see a unified workflow, where we've got tools that in one step will compile the binary, sign it, package it up, and be able to push it somewhere. Or code generators that can scaffold out projects very quickly. We love VS Code, for obvious reasons. We are excited that as this ecosystem matures, it means more will be able to be done from within VS Code. Scaffold something out. Open it up. Type a couple of commands. Hit your command P button, and compile things. These are all things that we want to see happen, because the developer experience right now is definitely Wild West.

WASI

Thomas: There are still gaps in the WASI spec. Where we are right now is, we're missing networking. It's a mishmash of stopgap solutions; we have even created one to add into that mishmash so that we can get things done, which is our experimental HTTP library. We also have this whole debate going on of streams versus POSIX, and we're leaning towards streams right now, but there's still debate. There's this really cool idea of what are called nanoprocesses, which are concurrent tasks, but it's still just an idea: some really cool ideas, nothing implemented. Where we want to be, because obviously we want all the things, is, in about a year, to have a working streams implementation, which allows for flexible extensions on top, like networking. Then, also, an initial nanoprocesses implementation. That's something we look forward to, but just realize it's still rough there, too.

Butcher: I think the way you can visualize this is: on one hand, we could have a whole bunch of hard-coded libraries that give you a bunch of hard-coded functions that you can access reliably and that always do the same thing. That's the way the original WASI specification was written, and it sounded like a good idea at the time. In many ways, some things like files and environment variables probably still should act that way. But we're coming up with a more pluggable model, where the outside can tell the module, I'm going to plug in an implementation of this thing here, and you just use it in a generic way. Imagine something like key-value storage. The WebAssembly module you write shouldn't have to care whether the actual key-value storage underlying it is Redis, or Memcached, or some hosted cloud provider thing. It should just care that it can ask for a key and get a value, and write a key-value pair. The streams API is really starting to broach the subject of whether we can write a pluggable layer on top of WebAssembly, so that hosts can provide implementations on behalf of the modules running within them. That's why we're interested in it: the flexibility gained in that model really opens things up so that we can all get a lot more done.
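As an illustration of that "generic interface, host-supplied implementation" idea, here is a hypothetical sketch in Rust. The trait and all names here are ours, invented for illustration; they are not part of WASI or any real proposal:

```rust
use std::collections::HashMap;

// Hypothetical host-pluggable key-value interface, in the spirit described
// above: the module codes against the trait, and the host supplies Redis,
// Memcached, or (here) an in-memory implementation behind it.
trait KeyValue {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: &str);
}

// One possible backend the host could wire in.
struct InMemory(HashMap<String, String>);

impl KeyValue for InMemory {
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
    fn set(&mut self, key: &str, value: &str) {
        self.0.insert(key.to_string(), value.to_string());
    }
}

// Module logic that never learns which backend the host wired in.
fn bump_counter(store: &mut dyn KeyValue) -> u64 {
    let n: u64 = store.get("hits").and_then(|v| v.parse().ok()).unwrap_or(0) + 1;
    store.set("hits", &n.to_string());
    n
}

fn main() {
    let mut store = InMemory(HashMap::new());
    assert_eq!(bump_counter(&mut store), 1);
    assert_eq!(bump_counter(&mut store), 2);
    println!("counter works against the generic interface");
}
```

Swapping `InMemory` for a Redis-backed implementation would change nothing in `bump_counter` — which is exactly the decoupling the pluggable WASI model is after.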

Guest Languages

Another thing that we are excited about, for sure, is the host of different languages that you can use in WebAssembly.

Thomas: Where I say -ish, I mean that you can compile to normal browser Wasm, and there's work in progress to support WASI. It's not entirely there yet; that's why it's Go-ish and Python-ish. These are the languages that have some level of support. As much as we love Rust and really think it's the future as a systems development language, the thing is that we need the "enterprise languages" to be there with support. That is C#, and Java, and Python, and Go. Those all need to be able to compile straight to WASI. We hope to get there, but there's not support everywhere yet.

Butcher: The one thing that we are excited about in the where-we-are category is that we're starting to see some languages, like AssemblyScript and Grain, that are first-generation WebAssembly languages, targeting WebAssembly from the start. Those will be good for a certain class of WebAssembly use cases where compact sizes and well-understood runtimes are really important, where you want to squeeze everything you can out of WebAssembly without necessarily embarking on the level of systems programming you'd need with C or Rust.

Why We Are Excited About Wasm

I think we've done a good job of conveying that overall, we are just very excited about the opportunities we're seeing with WebAssembly. We started off with what was an out-there idea: can we substitute a WebAssembly runtime for containers inside of Kubernetes? As we developed it, this could have been the thing where you get halfway through the project and say, "It was fun to try, but it didn't work out." Instead, the opposite happened. We just started recognizing opportunities, one after another, and saying there's so much that can be done in this ecosystem. There's so much improvement that we collectively as a community can bring to this cloud native environment that we're all navigating right now. Distributed applications are hard to write right now, and WebAssembly is promising to fill a lot of the little gaps that will start making it easier. Cross-platform and cross-architecture are huge, especially as architectures like ARM are starting to gain prominence again. Security is obviously a huge deal right now, and the guarantees you can wrap around a virtual machine like the WebAssembly VM are going to be great. We do need to figure out a way, early on, to prevent ourselves from getting too fragmented, and home in again and work together.

Gazing Into the Crystal Ball

Thomas: There's a gap in the cloud native ecosystem that we think this can fill because of its portability and smaller footprint. Let's finish up by gazing into the future a little bit. We could be 100% wrong about all of these things in as little as three months, but let's go ahead and give it a shot, because people often want to know what we think is going to happen. We get asked that all the time.

These are five of the biggest things we thought of as we brainstormed. One is plugin models. This is things like Envoy proxy and Cloudflare Workers, all these things where you can have somebody write a plugin for a system that you have running in any language and just have it run, which sounds amazing. Rather than having weird gRPC interfaces and other things in between.

Butcher: We were talking about being able to use WASI streams, and things like that. The plugin model is where that model is going to shine, because a plugin author can mock things up in one environment, and then the runtime can supply a true implementation of those things. I heard one person refer to WebAssembly as the last true plugin model. Saying, finally, we've gotten to the technology that's going to solve the big problems for us. I don't know if he was right or not, but I liked the sound of that.

Thomas: I could be wrong again in three months. We're talking true microservices here, not people who just slap a microservices label on something because it's a container; actual small services being orchestrated together will work very well with WebAssembly. Same with Functions as a Service. Rather than having bespoke runtimes for each language, you can just have everything target WebAssembly, which makes it even easier to expand language support everywhere. Really, the last two are the interesting ones. We have constrained, IoT, and edge devices. The smaller footprint really comes into effect here, along with the ability to tweak how you want a module to run so that it runs most efficiently in the space given. And trusted compute: all of Wasm is sandboxed by design, so there are some interesting trusted compute possibilities we see coming in the future. That is just some guessing, gazing into the crystal ball. That's where we think it's going to be here soon.

Questions and Answers

Eberhardt: You've built so many different things, I forget some of the things that I've used or things that you've built. I was talking to Lin Clark, and I mentioned that I had been playing around with WAGI, and I really like the simplicity of it. The old-school CGI style interface. She went, "Yes, that was the guys from Krustlet."
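For readers unfamiliar with that old-school CGI style: a WAGI-style handler is just a program that writes an HTTP response — headers, a blank line, then the body — to stdout. A minimal sketch (our own illustration, not code from the WAGI project):

```rust
// A handler in the CGI style that WAGI revives: the guest module reads the
// request from stdin and environment variables, and writes the response to
// stdout, with headers, a blank line, then the body -- classic CGI framing.
fn cgi_response(content_type: &str, body: &str) -> String {
    format!("Content-Type: {}\n\n{}", content_type, body)
}

fn main() {
    print!("{}", cgi_response("text/plain", "Hello from a Wasm module\n"));
}
```

The appeal is that any language that can compile to WASI and print to stdout can serve HTTP this way, with no web framework or socket API needed inside the module.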

Where does Deis Labs fit in with Microsoft? What's the relationship there? Are you an R&D group, or is there some other relationship there?

Butcher: Deis was acquired by Microsoft five years ago now. We were building a bunch of Kubernetes specific stuff at the time. When the company was acquired, they split us in two. One half went off and built AKS. The other half, us, worked on various R&D projects for Kubernetes. Gradually over time, our definition has gone from R&D on Kubernetes, to R&D on containers, to R&D on compute. We keep broadening our scope of work. Right now, R&D on the interesting, emerging trends in cloud native development probably best describes what we do. We do still maintain Helm, and Brigade, and some of those Kubernetes projects we've worked on for a long time. But our R&D work really is edging closer toward WebAssembly and some of these more emerging technologies.

Eberhardt: That's interesting, because a number of talks I've seen in the past and a number of people using WebAssembly, it's a side project. It's R&D, whereas it's front and center in your website. WebAssembly is clearly something that's quite important to you.

Thomas: Have you built production applications with Krustlet? That relates, as well, to everyone using it as a side project.

If you go to the Krustlet site right now, you'll see the big disclaimer that we have on most of our very new work that says, "Please, do not use this in production." We're not supporting this; there are no support contracts from people who know this. This is way out there. That's why we called this talk the Wild West. The good news is, we are very close to releasing the first alpha of Krustlet 1.0, at which point we will be stating that this is ready to use for production use cases. Does that mean it has all the features and all the bells and whistles? No. There's still only the partial networking support that we have, and things like that. But it's at a point where it's nice and stable for people to actually use. The thing is, these are all very important things for us. Krustlet was the first one, for us to just try out these ideas in a familiar environment, which is Kubernetes. Most people in the cloud have at least touched Kubernetes. That's why we have Krustlet hopefully going to the first alpha release for 1.0 in the next few weeks. Yes, it will be production level at that point. We'll be working on more examples of "real life workloads."

Eberhardt: I'm interested to understand how you would use Krustlet to build production applications. Let's take one of the classic applications, say for example, a shopping cart application where typically you'd have maybe a microservice that takes care of your stock levels. You'd have a microservice which actually allows people to populate their carts. You might have it deployed to a cloud provider, and you may use the edge network for caching and so on. Where would you see Krustlet fitting in into that fairly standard cloud and potentially edge architecture?

Thomas: Krustlet, if you're not familiar with Kubernetes, is just a kubelet implementation. It's a node. It registers itself with Kubernetes to make itself part of the cluster. That means you can run container-supporting nodes alongside WebAssembly-supporting nodes. For a real production use case, you probably don't want to go wholesale all-in, but let's say you want to do the stock management piece, because there aren't really too many networking calls there; you just have some database connections and some things that need to be updated. You could create a WebAssembly module that's hooked in to the rest of your things already running in Kubernetes, if you have that set up, and run it all connected together. That's where you get the benefit of Krustlet: it's just sitting there in the middle.
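In practice, scheduling a pod onto a Krustlet node uses ordinary Kubernetes mechanics. As a sketch from memory of Krustlet's demo manifests — check the Krustlet documentation for the exact taint keys and effects, and note the pod name and image reference below are hypothetical — a pod opts in with an architecture selector and tolerations for the node's taints:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wasm-stock-service            # hypothetical name
spec:
  containers:
    - name: stock
      image: registry.example.com/stock-service:v1   # hypothetical Wasm module reference
  nodeSelector:
    kubernetes.io/arch: wasm32-wasi   # land only on WebAssembly nodes
  tolerations:
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoExecute"
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoSchedule"
```

Pods without these tolerations are repelled by the Krustlet node's taints, which is how container workloads and Wasm workloads coexist in one cluster.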

You mentioned some of our other projects like WAGI that are what we see as being part of the building blocks for the future of what WebAssembly could do that's outside of the Kubernetes environment. Kubernetes is familiar and gives people a good starting place to write maybe one part of their little microservice over in WebAssembly, and then start migrating some things over. The big benefit personally comes from developers. In the ideal world, when we get all the language support, developers don't have to worry about stuff like a Dockerfile anymore. They can just build it to the target, and then the SREs and other people running the infrastructure, just now have a whole pool of compute. It doesn't matter if it's Windows, or ARM devices, or Linux machines. It doesn't really matter because WebAssembly can run on all those, which is very helpful for people who already have existing compute.

Eberhardt: Say, for example, I might have some simulation, whether it's a financial simulation or machine learning simulation that's written in C++, potentially in the future, I could compile that to WebAssembly. Run it within Krustlet, alongside my more conventional application logic sitting within containers.

Butcher: Yes. You'll see advantages in startup time and runtime and things like that. One of the most interesting cases, which really is what got us interested in WebAssembly in general, was that we started talking a lot about how there should be clusters everywhere. Why are we thinking of clusters primarily as something that lives in a cloud or a data center? What if we tried to build ad hoc clusters in your home, as different devices came and went? One of the limitations we ran into quickly with Docker was the architecture and OS limitations. You have to have your image compiled and executable on each one of those, and it's very difficult for devices to negotiate that on the fly. With WebAssembly, you can compile once, and it'll run on all the supported architectures and OSs, which is all the major ones: ARM, x86, Windows, Linux, macOS. Immediately, it got us a little further down the road of this idea of data center clusters, and home clusters, and IoT clusters, and having Kubernetes play in all of those different spaces.

Eberhardt: That's really interesting, because the talk from Tim, who is from BBC R&D, they're looking at experimenting with the next generation media player. One of the things that they're excited about, they haven't tried it, is the potential of offloading some of the computation to your home network, specifically using exactly that point. That rather than the cloud being something quite distant that sits in a region somewhere, it's a more global network. I had not thought of that before, because the edge networks mean they're getting closer to the home, but there is compute power around the home. Why not take that one step further?

Butcher: Yes. A lot of that compute power is basically wasted. You have things sitting idle or at 10% capacity most of the time. If you can figure out a way to intelligently and securely move some of that workload closer, the user gets a speedup without paying any penalty anywhere else on their local network for that speedup. It's a really exciting prospect for the future. We're probably still a year, two years away from being able to really execute on that, because all of this WASI stuff is still moving very quickly. The WebAssembly specification is solid, but we're still trying to add a couple more things to it in subsequent iterations that will make all of this stuff possible.

Eberhardt: You mentioned on the developer experience your frustrations around how fragmented the ecosystem is. Part of the reason is because it's young, but I think the bigger issue is because it's a multi-language ecosystem. Do you think you'll ever achieve consistent developer experience when you've got cargo, npm, you've got all these different tooling, is that a realistic game?

Butcher: Consistent here is a broad term that you can interpret in a lot of ways, and we probably mislead people when we use it. Our goal is for a Rust developer to feel like they use their regular tools and spit out WebAssembly. A C++ developer uses their own regular tools and spits out WebAssembly, and so on. Then, to add on to that, when I, in my Go program, import your WebAssembly library that was written in Rust, I don't have to know that it was written in Rust. It just appears to work for me when I call that import. Those are the kinds of experiences we look for: meeting developers where they already are and making that environment comfortable for them, rather than giving them yet another sophisticated set of tooling to master, where everybody has to use the same set of extensions and the same things regardless of language. That approach doesn't really get at developer ergonomics.

What we'd really love to see is, you use the tools you already know, and you add on a --Wasm flag to your compiler and whatever, and that's it. The Rust ecosystem has done a really good job of this so far. AssemblyScript is really getting there. Grain, a highly experimental language, but it's WebAssembly first, and the compilation experience is just great. You're like Hello World in 15 seconds, and then you deploy it to WAGI and it works. That kind of thing is the experience we're really looking for.

Eberhardt: The one other thing I have to pick up on is JavaScript as an enterprise language. Don't forget that one. That one should have been on the right.

You mentioned, C#-ish, considering you're mostly considering WebAssembly in the cloud or the wider network, I presume you're referring to C# but without Blazor or without a lot of what Blazor actually is.

Thomas: Blazor in this case is more towards the frontend, like a browser based WebAssembly. That's something to note. The difference is that there is what we call server-side WebAssembly, which is really where the WASI space hits, and browser-side WebAssembly, which is already in use in some crazy things right now that are fun to look at. That's the difference there. When we're saying C# as you're saying right now, it can't really compile to WASI. You can't compile a C# thing and run it in Krustlet, but you can from Rust, and AssemblyScript, and C, and all the languages that support WASI compilation targets.

Eberhardt: Is that actively being worked on, because you're right, I have not seen anyone do a simple WASI, effectively a command line C# application yet.

Thomas: As far as I know, it depends on the language. Go is working on it with the TinyGo project; I think they're working on it. For C#, there are rumors and rumblings; I'm not entirely sure what their plans are. That's one of the gaps we're hoping to see filled. A lot of times that happens because, as a technology gains traction, like WebAssembly is, people say, we should probably enable this. That's what we're hoping will help drive more languages to add support.

Eberhardt: The challenge that C# probably has is effectively their standard library and the .NET runtime, which is all part of Blazor, whereas with something like TinyGo, it's easier to genuinely create a tiny runtime.

 


 

Recorded at:

Mar 10, 2022
