Matt Butcher on Web Assembly as the Next Wave of Cloud Computing


Today on the InfoQ Podcast, Wes Reisz speaks with long-time open-source contributor and startup founder Matt Butcher. Matt is the CEO of Fermyon Technologies and is at the forefront of the Web Assembly (Wasm) work being done in the cloud. The two discuss Matt's belief that we're at the start of a third wave of cloud computing, the state of the Wasm ecosystem, and what Fermyon is doing in the space. The conversation covers Spin (Fermyon's inner-loop Wasm development tooling), Wasm performance and compile times, similarities to the Docker ecosystem, and language support for Wasm.

Key Takeaways

  • We are seeing the early stages of an evolving ecosystem around the Web Assembly (Wasm) binary. This next wave of cloud computing is a natural evolution from the virtual machine to the container to the Wasm runtime. The Bytecode Alliance, along with companies like Fastly, Suborbital, Cosmonic, and Fermyon, is at the heart of this evolving ecosystem.
  • Spin, an open-source developer tool from Fermyon focused on the inner loop (the fast, iterative local development cycle), lets you quickly build Web Assembly-based applications without having to worry about deployment. Spin has a Visual Studio Code plugin and functions similarly to serverless event-listener models like AWS Lambda.
  • One of the often-stated design goals for server-side Wasm tooling is for inline request compile times (from cold state to executing code at the request level) to be measured in tens of milliseconds. Spin, along with most of the cloud Wasm tooling, is now reaching sub-millisecond times. Such low request-level compile times are possible because Wasm in the cloud knows details about the code much further in advance than browser-based Wasm, so code can be compiled ahead of time, cached, and optimized for performance.
  • Strong language support is arriving for Wasm. Earlier this year, CRuby and CPython announced they are compilable to Wasm via WASI. TinyGo, dotnet (with the Blazor framework and soon directly with C# and F#), Java, and Kotlin all have, or are working on, native implementations for compiling to Web Assembly. There is hope these may all be available by the end of the year.

Introduction [00:17]

Wesley Reisz: Cloud computing can be thought of as two, or as today's guest will discuss, three different waves.

The first wave of cloud computing can be described as virtualization. Along came the VM, and we were no longer running directly on our physical compute. We introduced virtual machines to our apps. We improved density, resiliency, and operations. The second wave came along with containers, and we built orchestrators like Kubernetes to help manage them. Startup times decreased. We improved isolation between teams, we improved flow and velocity. We embraced DevOps. We also really introduced the network into how our applications operated, and we've had to adapt and think about that as we've been building apps. Many have described serverless (or functions as a service) as a third wave of cloud compute.

Today's guest, the CEO of Fermyon Technologies, is working on functions as a service delivered via Wasm (Web Assembly), and that will be the topic of today's podcast.

Hi, my name is Wes Reisz. I'm a technical principal with ThoughtWorks and co-host of the InfoQ podcast. In addition, I chair a software conference called QCon San Francisco. QCon is a community of senior software engineers focused on sharing practical, no-marketing solutions to real-world engineering problems. If you've searched the web for deeply technical topics and run across videos on InfoQ, odds are you've seen some of the talks I'm referring to from QCon. If you're interested in being a part of QCon and contributing to that conversation, the next one is happening at the end of October in the Bay Area. Check us out at qconsf.com.

As I mentioned, today our guest is Matt Butcher. Matt is a founding member of dozens of open-source projects, including Helm, Cloud Native Application Bundles, Krustlet, Brigade, the Open Application Model, Glide, the PHP HTML5 parser, and QueryPath. He's contributed to over 200 open source projects spanning dozens of programming languages. Today on the podcast we're talking about distributed systems and how Web Assembly can be used to implement functions as a service. Matt, welcome to the podcast.

Matt Butcher: Thanks for having me, Wes.

What are the three waves of cloud computing? [02:11]

Wesley Reisz: In that intro, I talked about two waves of cloud compute. You talk about a third, what is the third wave of cloud compute?

Matt Butcher: Yes, and it actually, spending a little time on the first two autobiographically helps articulate why I think there's a third. I got into cloud services really back when OpenStack got started. I had joined HP and joined the HP Cloud group right when they really committed a lot of resources into developing OpenStack, which had a full virtual machine layer and object storage and networking and all of that. I came into it as a Drupal developer, of all things. I was doing content management systems and having a great time, was running the developer CMS system for HP, and as soon as I got my first taste of the virtual machine world, I was just totally hooked because it felt magical.

In the past, up until that time, we really thought about the relationship between a piece of hardware and the operating system as being sort of like one to one. My hardware at any given time can only run one operating system. And I'm one of those people who's been dual booting with Linux since the nineties and suddenly the game changed. And not only that, but I didn't have to stand up a server anymore. I could essentially rent space on somebody else's server and pay their electricity bill to run my application, right?

Wesley Reisz: Yes, it was magic.

Matt Butcher: Yes, magic is exactly the word that it felt like at that time, and I was just hooked and got really into that world and had a great time working on OpenStack. Then along came containers and things changed up for me job wise and I ended up in a different job working on containers. At the time I was trying to wrestle through this inner conflict. Are containers going to defeat virtual machines, or are virtual machines going to defeat containers? And I was, at the time, really myopically looking at these as competitive technologies where one would come out the victor and the other one would fall by the wayside of the history of computing, as we've seen happen so many other times with different technologies.

It took me a while, really all through my Deis days, up until Microsoft acquired Deis and I got a view of what it looked like inside the sausage factory, to realize that no, we weren't seeing two competing technologies. We were really seeing two waves of computing happen. The first one was us learning how to virtualize workloads using a VM style, and then containers offered an alternative way with some different pros and some different cons. But when you looked at the Venn diagram of features and benefits and even patterns that we used, there was actually very little overlap between the two, surprisingly little overlap between the two.

I started reconceptualizing the cloud compute world as having this wavy kind of structure. So here we are at Microsoft, the team that used to be Deis, and then we joined Microsoft and we gain new developers from other parts of Microsoft and we start to interact with the functions as a service team, the IoT team, the AKS team, and all of these different groups inside of Azure, and get a real look, a very, very eye-opening look, at what all of this stuff looks like under the hood and what the real struggles are to run a cloud at scale. I hate using the term at scale, but that's really what it is there. But also we're doing open source and we're engaged with startups and medium-sized companies and large companies, all of whom are trying to build technologies using this stuff: containers, virtual machines, object storage and stuff like that.

We start seeing where both the megacorps and the startups are having a hard time, and we're trying to solve this by using containers and using virtual machines. At some point we started to realize, "Hey, there are problems we can't solve with either of these technologies." We can only push the startup time of containers down to a few hundred milliseconds, and that's if you are really packing stuff in and really careful about it. Virtual machine images are always going to be large because you've always got to package the kernel. We started this checklist of things, and at some point it became the checklist of what is the next wave of cloud computing.

That's where we got into Web Assembly. We start looking around and saying, "Okay, what technology candidates are there that might fill a new compute niche, where we can pack something together and distribute it onto a cloud platform and have the cloud platform execute it?" Serverless at the time was getting popular, and we should come back to serverless later because it's an enticing topic on its own, but it wasn't necessarily solving that problem, and we wanted to address it more at an infrastructure layer and say, "Is there a third kind of cloud compute?"

And after looking around at a couple of different technologies, we landed on Web Assembly of all things, a browser technology. But what made it good for the browser, that security isolation model, small binary sizes, fast startup times, those are just core things you have to have in a web browser. People aren't going to wait for the application to start. They're not going to tolerate a page being able to root your system through the browser. So all these security and performance characteristics and multi-language, multi-architecture characteristics were important for the browser. That list was starting to match up very closely with the list of things that we were looking for in this third wave of cloud computing.

This became our Covid project. We spent our Fridays asking, what would it mean to try and write a cloud compute layer with Web Assembly? And that became Krustlet, which is essentially a Web Assembly runtime for Kubernetes. We were happy with that, but we started saying, "Happy, yes, but is this the right complete solution? Probably not." And that was about the time we thought, "Okay, it's time to do the startup thing. Based on all the knowledge we've accrued about how Web Assembly works, we're going to start without the presupposition that we need to run inside of a container ecosystem like Kubernetes, and we just need to start fresh." And that was really what got us kicking with Fermyon, what got us excited, and what got us to create a company around this idea that we can create the right kind of platform that illustrates what we mean by this third wave of cloud computing.

What is the ecosystem around Web Assembly in the cloud? [08:12]

Wesley Reisz: We're talking about Web Assembly to be able to run server side code. Are we talking about a project specifically, like Krustlet's a project, or are we talking about an idea? What is the focus?

Matt Butcher: Oh, that's a great question, because as a startup founder, my initial thing is, "Well, we're talking about a project," but actually I think we're really talking more about an ecosystem. There are several ecosystems we could choose from, the Java ecosystem or the dotnet ecosystem as illustrations of this. But the Docker ecosystem is such a great example of an ecosystem evolving, and one that's recent enough that we all kind of remember it. There were some core technologies like Docker of course, and early schedulers including Mesos and Swarm and Fleet, and the key-value storage systems like etcd and Consul. So there were a whole bunch of technologies that co-evolved in order to create an ecosystem, but the core of the ecosystem was the container.

And I think we are really in probably the first year or two of seeing that develop inside of Web Assembly. A number of different companies and individual developers and scholars in academia have all sort of said, "Hey, the Web Assembly binary looks like it might be the right foundation for this. What are the technologies we need to build around it, and what's the community structure we need to build around it?" Because standardizing is still the gotcha for almost all of our big efforts. We want things standardized enough so that we can run reliably and understand how things are going to execute and all of that, while we all still want to keep enough space open that we can do our own thing and pioneer a little bit.

I think that the answer to your question is that the ecosystem is the first thing for this third wave of cloud compute. We need groups like the Bytecode Alliance, where the focus is on working together to create specifications like the Web Assembly System Interface (WASI), which determines how you interface with a system clock, how you load environment variables, how you read and write files, and we need that as a foundational piece. So there's that in a community.

There are the conferences like Web Assembly Summit and Wasm Day at KubeCon, and we need those as areas where we can collaborate, and then we need lots and lots of developers, often working for different companies, who are all trying to solve a set of problems that define the boundaries of the ecosystem. I think we are in about year one and a half to year two of really seeing that flourish. The Bytecode Alliance has been around a little longer, but only formalized about a year and a half ago. You're seeing a whole bunch of startups like Fermyon and Suborbital and Cosmonic and Profian bubbling up, but you're also seeing Fastly and Cloudflare buying into this, Microsoft, Amazon, Google buying into this, so we're really seeing once again the same replay of an ecosystem formation that we saw in the Docker ecosystem with Red Hat and Google.

Wesley Reisz: I know of Fastly doing things at the Edge, being able to compile things at the Edge and run Web Assembly (Wasm) there. I can write Wasm applications myself and deploy them, but the cloud part, how do I deploy Wasm in a Cloud Native way? How does that work today?

Matt Butcher: In this case, Cloud Native and Edge are similar. Maybe the Edge is a little more constrained in some of the things it can do and a little faster to deliver on others. But at the core of it, we need to be able to push a number of artifacts somewhere and understand how they're going to be executed. We know, for example, we've got the binary, a Web Assembly binary file, and then we need some supporting files. A good example of this is fermyon.com, which is powered by a CMS that we wrote called Bartholomew. For Bartholomew, we need the Web Assembly binaries that serve out the different parts of the site, and it's created with a microservice architecture. I think at this point it's got five different binary files that execute fermyon.com.

Then we need all of the blog posts and all the files and all the images and all the CSS, some of which are dynamic and some of which are static. And somehow we have to bundle all of these up. This is a great example of where the Bytecode Alliance is a great entity to have in a burgeoning ecosystem. We need to have a standard way of pushing these bundles up to a cloud. And Fastly's Compute@Edge is very similar: we need a way to push the artifacts up to Compute@Edge with Fastly, or any of these.

There's a working group called SIG Registries that convenes under the Bytecode Alliance and is working on defining a package format and defining how we're going to push and pull packages. Essentially, where in the Docker world you think of pushing and pulling from registries and packaging things up with a Dockerfile and creating an image, the same kind of thinking is happening in the Bytecode Alliance specific to Web Assembly. SIG Registries is a great place to get involved if that's the kind of thing people are interested in. You can find out about it at bytecodealliance.org. That's one of the pieces of community building/ecosystem building that we've got to be engaged in.

What is the mission of Fermyon? [12:57]

Wesley Reisz: You started a company, Fermyon, and now what's the mission of Fermyon? Is it to be able to take those artifacts and then be able to deploy them onto a cloud footprint? What is Fermyon doing?

Matt Butcher: For us, we're really excited about the idea that we can create a cloud runtime that can run in AWS, in Azure, in Google, in Digital Ocean, that can execute these Web Assembly modules, and that we can streamline that experience to make it frictionless. It's really kind of a two-part thing. We want to make it easy for developers to build these kinds of applications, and then make it easy for developers to deploy and then manage these applications over the long term.

When you think about the development cycle, oftentimes as we build these new kinds of systems, we introduce a lot of fairly heavy tooling. Virtual machines are still hard to build, even now, a decade and some into the ecosystem. Technologies like Packer have made it easier, but it's still kind of hard. The number one thing that Docker did amazingly well was create a format that made it easy for people to take their applications that already existed and package them up using a Dockerfile into an image, and we looked at that and said, "Could we make it simpler? Could we make the developer story easier than that?"

And the cool thing about Web Assembly is that all these languages are adding support into their compilers. So with Rust, you just add --target wasm32-wasi and it compiles the binary for you. We've really opted for that lightweight tooling.
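For illustration, here is a minimal sketch of what that lightweight path looks like, assuming you have the Rust toolchain and the wasm32-wasi target installed; the program is just an example, not anything Fermyon ships.

```rust
// main.rs: an ordinary Rust program that also compiles unchanged to
// WebAssembly/WASI. With the target installed:
//   rustup target add wasm32-wasi
//   cargo build --target wasm32-wasi --release
// the build produces a .wasm binary that any WASI runtime (e.g. Wasmtime)
// can execute.
fn main() {
    println!("Hello from a wasm32-wasi binary!");
}
```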

What is Spin? [14:22]

Spin is our developer tool, and the Spin project is basically designed to assist in what we call the inner loop of development. This is a big Microsoft-y term, I think, inner and outer loop of development.

Wesley Reisz: Fast compile times.

Matt Butcher: What we really mean is when you as the individual developer are focused on your development cycle and you've blocked out the world and you're just wholly engaged in your code, you're in your inner loop, you're in flow. And so we wanted to build some tools that would help developers when they're in that mode to very quickly and rapidly build Web Assembly-based applications without having to think about deployment so much and without having to use a lot of external tools. So Spin is really the one tool that we think is useful there, and we've written a VS Code extension to streamline that.

And then on the cloud side, you got to run it somewhere, and we built the tool we call Fermyon or the Fermyon platform, to really execute there. And that's kind of a conglomeration of a number of open source projects with a nice dashboard on top of it that you can install into Digital Ocean or AWS or Azure or whatever you want and get it running there.

Wesley Reisz: And that runs a full Wasm binary? Earlier I talked about functions as a service; does it run functions or does it run full Wasm binaries?

Matt Butcher: And this gets us back into the serverless topic, which we were talking about earlier, and serverless I think has always been a great idea. The core of this is can we make it possible so that the developer doesn't even have to think about what a server is?

Wesley Reisz: Exactly. The plumbing.

Matt Butcher: And functions as a service to me is just about the purest form of serverless that you can get where not only do you not have to think about the hardware or the operating system, but you don't even have to think about the web framework that you're running in, right? You're merely saying, "When a request comes into this endpoint, I'm going to handle it this way and I'm going to serve back this data." Within moments of starting your code, you're deep into the business logic and you're not worried about, "Okay, I'm going to stand up an HTTP server, it's got to listen on this port, here's the SSL configuration."

Wesley Reisz: No Daemon Sets, it's all part of the platform.

Matt Butcher: Yes. And as a developer, that to me is like, "Oh, that's what I want. No thousand lines of YAML config." Serverless and functions as a service were looking like very promising models to us. So as we built out Spin, we decided that, at least as the first primary model, we wanted to use that particular model. Spin, for example, functions more like an event listener, where you say, "Okay, on an HTTP request, here's the request object, do your thing and send back a response object." Or, "As a Redis listener, when a message comes in on this channel, here's the message, do your thing and then optionally send something back." And that model really is much closer to Azure Functions and Lambda and technologies like that. We picked that because developers say they really enjoy that model. We think it's a great complement for Web Assembly. It really gets you thinking about writing microservices in terms of very, very small chunks of code and not in terms of HTTP servers that happen to have microservice infrastructure built in.
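To make that request-in/response-out shape concrete, here is a small, self-contained sketch of the event-listener model. The Request and Response types and the handle function are hypothetical stand-ins for illustration, not Spin's actual SDK.

```rust
// A sketch of the event-listener programming model: the platform hands you a
// request, you return a response, and everything else (server, ports, TLS)
// is the platform's problem. These types are illustrative only.

struct Request {
    path: String,
    body: Vec<u8>,
}

struct Response {
    status: u16,
    body: Vec<u8>,
}

// The whole "application" is just this handler; there is no main loop to
// write, no socket setup, no web framework to configure.
fn handle(req: Request) -> Response {
    Response {
        status: 200,
        body: format!("Hello from {}", req.path).into_bytes(),
    }
}

fn main() {
    // Simulate one incoming event, the way a host runtime would invoke us.
    let resp = handle(Request {
        path: "/index".to_string(),
        body: Vec::new(),
    });
    println!("{} ({} bytes)", resp.status, resp.body.len());
}
```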

Wesley Reisz: Spin lets you write this inner loop, fast flow, event driven model where you can respond to the events that are going like the serverless model, and then you're able to package that into Wasm that can then be deployed with Fermyon cloud? Is that the idea?

Matt Butcher: Yes, and when you think about writing a typical HTTP application, even going back to, say, Rails (Rails and Django I think really defined how we think about HTTP applications), you have got this concept of the routing table. And in the routing table you say, "When somebody hits /foo, that executes myFoo module. If I hit /bar, that executes myBar module." That's really the direction that we went with the programming model, where when you hit fermyon.com/index, it executes the Web Assembly module that generates the index file and serves that out. When you hit /static/file.jpeg, it loads the file server and serves it back. And I think that model really resonates with pretty much all modern web application and microservice developers, but everything you're writing in the back end is just a function. I really like that model because it feels like you're getting right to the meat of what you actually care about within a moment of starting your application, instead of a half hour or an hour later when you've written out all the scaffolding for it.

How do you handle state in cloud-based Web Assembly? [18:35]

Wesley Reisz: What about state? You mentioned Redis before, having Redis listeners. How do you manage state when you're working with Spin or with Fermyon cloud? How does that come into play?

Matt Butcher: That's a great architectural discussion for microservices as a whole. Coming from Deis and Microsoft and then on into Fermyon, or Google into Fermyon in the case of some of the other engineers who work at Fermyon, we've seen the microservice pattern be successful repeatedly. And statelessness has been a big virtue of the microservice model as far as the binary keeping state internally goes, but you have got to put stateful information somewhere.

Wesley Reisz: At some point.

Matt Butcher: The easy one is, "Well, you can put it in files," and WASI and Web Assembly introduced file support two years ago, and that was good, but that's not really where you want to stop. With Spin, we began experimenting with adding some additional ones, like Redis support and generic key-value storage, which is coming out very soon. Database support is coming really soon, and those kinds of things. Spin, by the way, is open source, so you can actually go see all these PRs in flight as we work on PostgreSQL support and stuff like that.

It's coming along, and the strategy we want to use is the same strategy that you used in Docker containers and other stateless microservice architectures, where state gets persisted in the right kind of data storage for whatever you're working on, be that a caching service or a relational database or a NoSQL database. We are hoping that as the Web Assembly component model and other similar standards solidify, we're going to see this kind of stuff not be a Spin-specific feature, but just the way that Web Assembly as a whole works, and different people using different architectures will be able to pull in the same kinds of components and get the same kind of feature set.

What is the state of Web Assembly performance in the cloud? [20:20]

Wesley Reisz: Yes, very cool. When we were talking just before we started recording, you mentioned that you wanted to talk a little bit about the performance of Web Assembly and how it's changed. I remember, I guess a year ago, maybe two years ago, I did a podcast with Lin Clark. We were talking about Fastly and running Web Assembly at the Edge, like we were talking about before, and if I remember right, I may be wrong, but if I remember right, it was like 3 ms of overhead for the inline request compile time, which I thought was impressive, but you said you're way lower than that now. What is the request-level inline performance of Web Assembly these days?

Matt Butcher: We're lower now. Fastly's lower now. As an ecosystem, we've learned a lot in the last couple of years about how to optimize and how to pre-initialize and cache things ahead of time. 3 ms even a year and a half ago would've been a very good startup time. Then we were pushing down toward a millisecond, and now we are sub one millisecond.

And so again, let's characterize this in terms of these three waves of cloud computing. A virtual machine is a powerhouse: you start with the kernel, and you've got the file system and the whole process table and everything starting up and initializing and then opening sockets and everything, and that takes minutes to do. Then you get to containers. And containers on average take a dozen seconds to start up. You can push down into the low seconds range, and if you get really aggressive and you're really not doing very much, you might be able to get into the hundred milliseconds or the several hundred milliseconds range.

One of the core features that we think this third wave of cloud compute needed, and one of our criteria coming in, was that it's got to be in the tens of milliseconds. That was a design goal coming out of the gate for us, and the fact that now we're seeing that push down below the millisecond marker, for being able to get from a cold state to something executing, to that first instruction, having that under a millisecond is just phenomenal.

In many ways we've learned lessons from the JVM and the CLR and lots and lots of other research that's been done in this area. And in other ways, some of it just comes about because, with both us and with Fastly and other cloud providers, distinctly from the browser scenario, we can preload code, compile it ahead of time to native, and have it cached there and ready to go, because we know everything we need to know about what the architecture and the system are going to look like when that first invocation hits. And that's why we can really start to drive times way, way down.

Occasionally you'll see a blog post of somebody saying, "Well, Web Assembly wasn't terribly fast when I ran it in the browser." And then those of us on the cloud side are saying, "Well, we can just make it blazingly fast." A lot of that difference is because the things that the runtime has to learn about the system at execution time in the browser, we know way ahead of time in the cloud, and so we can optimize for that. I wouldn't be surprised to see Fastly, Fermyon, and other companies pushing even lower until it really does start to appear to be at native or faster-than-native speeds.
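The ahead-of-time idea can be sketched with off-the-shelf tooling. The snippet below is only an illustration of the principle, not how Fermyon or Fastly actually build their pipelines; it assumes the wasmtime, wat, and anyhow crates as dependencies, and crate APIs may have shifted across versions.

```rust
use wasmtime::{Engine, Module};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // A trivial module in WebAssembly text format, converted to binary.
    let wasm_bytes = wat::parse_str(r#"(module (func (export "noop")))"#)?;

    // Compile ahead of time into a serialized, native-code artifact.
    // A platform could do this at deploy time and cache the result.
    let precompiled: Vec<u8> = engine.precompile_module(&wasm_bytes)?;

    // At request time, loading the cached artifact is far cheaper than
    // recompiling. deserialize is unsafe because the bytes must be trusted.
    let module = unsafe { Module::deserialize(&engine, &precompiled)? };
    println!("loaded module with {} export(s)", module.exports().count());
    Ok(())
}
```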

Wesley Reisz: That's awesome. Again, I haven't really tracked Web Assembly in the last year and a half or so, but some of the other challenges were types and, I think, a component approach where you could share things. How has that advanced over the last year and a half? What's the state of that today?

Matt Butcher: Specifications often move in fits and starts, right? And the W3C, by the way, the same standards body that does CSS, HTML, and HTTP, is the standards body that works on Web Assembly. Types was one of the initial questions: "How do we share type information?" And that morphed in and out of several other models. And ultimately what's emerged out of that, borrowing heavily from existing academic work on components, is that Web Assembly is now gaining a component model. What that means in practice is that when I compile a Web Assembly module, I can also build a file that says, "These are my exported functions, and this is what they do, and these are the types that they use." And types here aren't just ints and floats and strings. We can build up very elaborate struct-like types where we say, "This is a shopping cart, and a shopping cart has a count of items, and an item looks like this."

And the component model for Web Assembly can articulate what those look like, but it also can do a couple of other really cool things. This is where I think we're going to see Web Assembly really break out. Developers will be able to do things in Web Assembly that they have not yet been able to do using other popular architectures, other popular paradigms. And this is that Web Assembly can articulate, "Okay, so when this module starts up, it needs to have something that looks like a key-value storage. Here's the interface that defines it. I need to be able to put a string key with a string value, and I need to be able to get by string and get back a string object, or I need a cache where it lives for X amount of time or else I get a cache miss." But it has no real strong feelings about... it doesn't have any feelings at all. It's binary, it has no real strong...

Wesley Reisz: Not yet. Give it time.

Matt Butcher: Anthropomorphizing code.

And then at startup time we can articulate it: Fastly can say, "Well, we've got a cache-like thing and it'll handle these requests." And Fermyon can say, "Well, we don't, but we can load a Docker container that has those cache-like characteristics and expose a driver through that." And suddenly applications can be built up based on what's available in the environment. Now, because Web Assembly is multi-language, what this means is that, effectively, for the most part we've been writing the same tools over and over again in JavaScript and Ruby and Python and Java. If we can compile all of them to the same binary format and we can expose the imports and exports for each thing, then suddenly language doesn't make so much of a difference. And so whereas in the past we've had to say, "Okay, here's what you can do in JavaScript and here's what you can do in Python," now we can say, "Well, here's what you can do."

Wesley Reisz: Reuse components.

Matt Butcher: And whether the key-value store is written in Rust or C or Erlang or whatever, as long as it's compiled to Web Assembly, my JavaScript application can use it and my Python app can use it. And that's where I think we'll see a big difference in the way we can start constructing applications: by aggregating binaries instead of fetching a library and building it into our application.
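As a rough illustration of the interface shape Matt describes, here is a hypothetical Rust rendering: a module declares it needs something key-value shaped, and the host supplies whatever implementation it has. The names are illustrative only, not a real component-model or WASI definition.

```rust
use std::collections::HashMap;

// The "contract" a module could declare it needs: put a string key with a
// string value, get a string back by key.
trait KeyValue {
    fn put(&mut self, key: &str, value: &str);
    fn get(&self, key: &str) -> Option<String>;
}

// One possible host-side implementation; a different host could back the
// same interface with a cache service or a database instead.
struct InMemoryStore {
    map: HashMap<String, String>,
}

impl KeyValue for InMemoryStore {
    fn put(&mut self, key: &str, value: &str) {
        self.map.insert(key.to_string(), value.to_string());
    }

    fn get(&self, key: &str) -> Option<String> {
        self.map.get(key).cloned()
    }
}

fn main() {
    let mut store = InMemoryStore { map: HashMap::new() };
    store.put("cart:42", "3 items");
    println!("{:?}", store.get("cart:42"));
}
```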

What's happening in the language space when it comes to Web Assembly? [26:23]

Wesley Reisz: Yes, it's cool. Speaking of which, language support was another thing that you wanted to talk about. There's a lot of change and momentum happening with the languages themselves and their support for Web Assembly, like Switches, there's things with Node, and we talked about Blazor for a minute. What's happening in the language space when it comes to Web Assembly?

Matt Butcher: To us, Web Assembly will not be a real viable technology until there is really good language support. On fermyon.com we actually track the status of the top 20 languages as determined by RedMonk, and we watch very closely and continually update our matrix of the status of Web Assembly in these languages. Rewind back only a year or two, and all the check boxes that are checked are basically C and Rust. Both great languages, both well-respected languages, both not usually the first languages a developer says, "Yes, this is my go-to language." Rust is gaining popularity of course, and we love Rust, but JavaScript wasn't on there. Python wasn't on there, Ruby wasn't on there. Java and C# certainly weren't on there. What we've seen over only a year, year and a half, is just language after language first announcing support and then rapidly delivering on it.

Earlier this year, I was ecstatic when I saw, in just the space of two weeks, Ruby and Python both announce that the CRuby and CPython runtimes were compilable to Web Assembly with WASI, which effectively meant that all of a sudden Spin, whose applications were kind of limited to Rust and C at the time, could suddenly run Python and Ruby applications. Go, the core project, is a little bit behind on Web Assembly support, but the community picked up the slack, and TinyGo can compile Go programs into Web Assembly plus WASI. Go came along right around, actually a little bit earlier than, Python and Ruby. But now what we're seeing, now being in the last couple of weeks, is the beginning of movement from the big enterprise languages. Microsoft has been putting a lot of work into Web Assembly in the browser over the past few years with the Blazor framework, which essentially ran by compiling the CLR, the runtime for C# and those languages, into Web Assembly and then interpreting the DLLs.

But what they've been saying is that that was just the first step, right? The better way to do it is to compile C#, F#, all the CLR-supported languages directly into Web Assembly and be able to run them directly inside of a Web Assembly runtime, which means a big performance boost, much smaller binary sizes, and all of a sudden it's easy to start adding support for newly emerging specifications, because it doesn't have to get routed through multiple layers of indirection.

Steve Sanderson, who I think is the lead PM for the dotnet framework, has been showing this off a couple of times since KubeCon in Valencia, now I think in four or five different places: where they are in supporting dotnet to Web Assembly with WASI, and it's astounding. So often we've thought of languages like C# as being sort of reactive, looking around at what's happening elsewhere and reacting, but they're not. They are very forward-thinking engineers, and David Fowler's brilliant, and the stuff they're doing is awesome. Now they've earmarked Web Assembly as the future, as one of the things they really want to focus on. And I'm really excited; my understanding is the next version of dotnet will have full support for compiling to native Web Assembly, and the working drafts of that are out now.

Wesley Reisz: Yes, that's awesome. You mentioned that there's work happening with Java as well, so Java, the CLR, that's amazing.

Matt Butcher: Yep. Kotlin is also working on a native implementation. I think we'll see Java, Kotlin, and the dotnet languages all coming, and I think they'll be coming by the end of the year. I'm optimistic. I have to be, because I'm a startup founder, and if you're not optimistic, you won't survive. But I think they'll be coming by the end of the year. Of the top 20 languages, I think we'll see probably 15-plus of them support Web Assembly by the end of the year.

Wesley Reisz: That's awesome. Let's come back for a second to Fermyon. We're going to wrap up here, but I wanted you to walk through, there's an app that you talk about, Wagi, in one of your blog posts, and how you might go about using Spin and Fermyon cloud. Could you walk through what it looks like to bootstrap an app? What does it look like for me if I wanted to go use Fermyon cloud?

Matt Butcher: Spin's the tool you'd use there. Wagi is actually just a description of how to write an application, so it comes in when you're writing it; think about Wagi as one way of doing that. You download Spin from our GitHub repository and you type in spin new and then the type of application you want to write and the name. Say I want to create Hello World in Rust: it's spin new rust hello-world. And that command scaffolds things out; it runs the cargo commands in the background and creates your whole application environment. When you open it, from there it's going to look like your regular old Rust application. The only thing that's really happening behind the scenes is wiring up all the pieces for the component model and for the compiler, so that you don't have to think about that.

With spin new, you've got your Hello World app created instantly. You can edit it however you'd normally edit; I use VS Code. From there, you type in spin build, and it'll build your binary for you. And again, largely it's invoking the Rust compiler in Rust's case, or the TinyGo compiler in Go's case, or whatever. And then spin deploy will push it out to Fermyon. So assuming you've got a Fermyon instance running somewhere, you can spin deploy and have it pushed out there. If you're doing local development, then instead of typing spin deploy you can type spin up, and it'll create a local web server and run your application inside there, so the local development story is super easy. In total, we say you should be able to get your first Spin application up and running in two minutes or less.

Wesley Reisz: How do you target different endpoints when you deploy out to the cloud? Or do you not worry about it? Is that what you pay Fermyon for, for example?

Matt Butcher: Yes, you're building your routing table as you build the application. There's a TOML file in there called spin.toml where you say, "Okay, if they hit slash, then they load this module. If they hit /foo, they hit that module," and it supports all the normal things that routing tables support. But from there, when you push out to the Fermyon platform, the platform will provision your SSL certificate and set up a domain name for you. The Fermyon dashboard that comes as part of that platform will allow you to set up environment variables and things like that. So as the developer, you're really just thinking merely in terms of how you build your binary and what you want to do. And then once you deploy it, you can log into the Fermyon dashboard and start tweaking and doing the DevOps side of what we would call the outer loop of development.
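For readers who want to picture that routing table, here is a hedged sketch of a spin.toml along the lines Matt describes. The field names follow Spin's early manifest format and may differ in current releases, and the component names and paths are made up for illustration.

```toml
# Sketch of a Spin manifest: each HTTP route maps to a Wasm component.
spin_version = "1"
name = "hello"
trigger = { type = "http", base = "/" }

[[component]]
id = "index"
source = "target/wasm32-wasi/release/index.wasm"
[component.trigger]
route = "/"

[[component]]
id = "foo"
source = "target/wasm32-wasi/release/foo.wasm"
[component.trigger]
route = "/foo"
```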

What’s next for Fermyon? [32:42]

Wesley Reisz: What's next for Fermyon?

Matt Butcher: We are working on our software as a service, because again, our goal is to make it possible for anybody to run Spin applications and get them up and running in two minutes or less, even when that means deploying them out somewhere where they've got a public address. So while right now, if you want to run Fermyon, you've got to go install it in your AWS cluster, your Google Cloud cluster, whatever, as we roll out this service later this year, it should make it possible for you to get that started just by typing spin deploy, and have that up and running inside of Fermyon.

Wesley Reisz: Well, very cool. Matt, thanks for taking the time to catch up and help us further understand what's happening in the Wasm community, and for telling us about Fermyon and Fermyon cloud.

Matt Butcher: Thanks so much for having me.
