Transcript
Laurent Doguin: Welcome to this talk about Wasm: Wasm's Components are a FaaS Best Friend. I'm going to talk about Function as a Service. I'm going to try to give you my opinion on what makes a good Function as a Service runtime, which is different from an actual Function as a Service. We'll talk about a practical example, which is building a FaaS with wasmCloud.
My name is Laurent Doguin. I work as Director of Developer Relations at Couchbase. Couchbase is the developer data platform for critical applications in an AI world. That's the official messaging. It's just an incredible database. If you want to see my writings, it's on this website, ldoguin.name. Might talk about Couchbase here and there, but not really.
What's a FaaS Anyway?
What is a Function as a Service anyway? To me, a Function as a Service is all about running application functions, so functions in a context, when needed, on a serverless runtime. Usually what happens after this phrase is, now what is serverless anyway? Which is a good question. To me, serverless is an execution model. It's how you run code without having to care about servers. Servers in the broad sense, not just racking machines; I used to do that when I worked for a cloud company. Without managing servers.
The extremely important part about this is also allocating resources only when you need them. Meaning you don't pay for something that's not running, and you only use resources when you actually need them, which is pretty cool when everybody is generating images with GPUs. Function as a Service is different from serverless. I just want to put that out there because I have lots of people saying it's the same thing. Function as a Service is a subset of serverless. It's serverless, but running one very specific little bit of code that inscribes itself into a broader context.
Now, if we talk about execution models, usually you get those five; it's probably CaaS in the middle that's missing because that slide was too small. CaaS being Container as a Service. Basically, you have different layers that you have to think about, maybe manage, depending on what you want to do. What people used to do, and might do again because of all the stuff that is going on, is on-premise. On-premise, you have to buy your own hardware, you have to rack your own hardware, you have to take care of the network, you have to take care of your virtualization layer, KVM, QEMU, whatever. You have to manage your own operating system. Usually there's the operating system that manages your virtualization layer, and then there's the operating system that runs on every one of your VMs. All of those need to be managed, updated, and all that stuff, which is quite a bit of work.
Then you have the container layer on top of it. The container also has its own system, so you also need to think about this, manage this, update it, all the things. Then you have the actual runtime, let's say the JVM if you're doing Java, then you have the Java application, and then maybe you have functions. If you're on-prem, you do everything. If you do Infrastructure as a Service, you go to your good old cloud provider, and maybe you just use EC2 on Amazon, or maybe you use other clouds that people have forgotten about, that were there before AWS, like OVH in France, or Hetzner. There are lots of local clouds in every country. Then you have Container as a Service, or Platform as a Service. Instead of managing operating systems or containers and runtimes, now the only thing you have to care about is your code, in theory. Your code as an application, and your code as a function; then you have Function as a Service, where you just use functions.
Then SaaS, which is like, whatever, I'm just going to get a subscription and not manage anything. Instead of having a very narrow and precise definition of Function as a Service, or serverless, what I want to talk about, and what I want to bring you to understand, is that it's more of a mindset shift. Instead of managing all that stuff, you try to manage only what brings business value, which is what I've been saying to my customers when I was running a Platform as a Service. Just write your app, you don't care about the other stuff. Then sometimes it's not just about the apps, it's also about the functions. Just care about what brings value to your business, instead of infrastructure.
Function as a Service Characteristics
Advantages and disadvantages of Function as a Service. The great thing about having a FaaS or serverless runtime is you don't have to manage servers, in the broad sense. You don't have to manage your infrastructure, in the broad sense. I don't know if you all have a software forge; if you use containers, do you rebuild your containers all the time, just to make sure they're always updated? Do you do all of that? If you do, that's great. I don't, and I don't want to. I don't even want to care about this, because that's a waste of time for me. It might not be a waste of time for other people, because what could be business for me could be infrastructure for you. Coming from the database space, we see a lot of caching with most of our customers; they implement their own user session store. Most people don't need their own user session store; they use Okta or whatever, Keycloak, anything, and then just use that. There, you are basically serverless. It doesn't bring any business value, so you just use an off-the-shelf solution.
For some of our customers, we had Sky TV. For Sky TV, it was very important to build their own user management system, because when everybody logged in at the same time to watch Game of Thrones, at the exact same time, you had to check the rights and the permissions of every single customer of Sky TV, and so the usual off-the-shelf user management service that they used blew up. What brings business value to you might actually be infrastructure for someone else, and something that people have to build on their own. I don't want to care about this. I don't want to care about orchestration, don't want to care about scaling up and scaling down, especially if you're managing small bits of code that have a very short life duration. If you manage a server, maybe it doesn't need to scale up and down so often, but if you want a pay-as-you-go model, and you don't want to manage the up and down of your functions all the time, that sounds like a lot of work, so you want something that does it for you, and of course, if it does it for you, you want it to be highly available. Things in the cloud fail all the time. All the time might be a strong word, but it happens.
If it happens, you need to be ready, so you need to have a distributed architecture so that if this node goes down there are still some other nodes, some other hosts that can deploy those functions. Event-driven support is an interesting one. What most people think about for event-driven is, for instance, Kafka or any messaging system. There's a message that shows up, and then you deal with it. But it's also an HTTP call. You can think of an HTTP call as an event happening on your system. Then you need to react to that event, you need to do something, so basically, you have nginx, any proxy. There's a call that happens, and then you need to start a function, execute that function, and then do your thing. Most people want to do polyglot. Most people, as in big companies: you have people that do Java, some people do JavaScript, some people do C#. The great thing about a Function as a Service is that usually you can run any kind of code, which makes this very cool and very polyglot.
Basically, they all talk together through a messaging system most of the time, so you get one function that goes up, gets the result, sends it back to another message stream, and then you get another function that picks it up and does something. Or maybe it's just writing something on S3, and when something is written on S3 or something happens on that bucket, then you can execute another function with another language. Something that most people put forward, though it doesn't necessarily mean it's absolutely needed to do a good Function as a Service, is a rapid way of developing, a rapid cycle of deployment, which makes sense. It's supposed to be a small subset of code, so you shouldn't have as many constraints when you're shipping smaller bits of code as with an actual big application, or a [inaudible 00:09:26], or whatever.
One of the problems you could have with FaaS is cold starts. When that HTTP call comes in, usually there's a user at the other end, and the user doesn't want to wait. If you have to start a new function, which basically means starting whatever you're using to execute that function, could be a simple process, a VM, a container, whatever, I'm going to talk more about this, then you want this to happen as fast as possible.
Cold starts are important. It's hard to run a very long function in most existing Function as a Service runtimes for a variety of reasons, but mostly, when you're building a FaaS and you're selling a FaaS, the idea is to resell the same hardware as much as possible so it doesn't cost you too much money. You try to have a timeout to reduce the duration of the execution of the function because that's how you make money. It all depends on how you slice up your hardware. Are you using VMs? VMs are heavy. If you're just using a process, you can probably have many processes running instead of many VMs, but then is it a good idea? We'll talk about it later. Stateless is an interesting one. It might be seen as a problem or as a good thing. To me, stateless is always a good thing because I come from writing code and operating software, and stateless means it can scale indefinitely, because I can just start a new actor, and then I can start as many actors, as many components, as many functions as I want. It doesn't matter. There's no state. It's not going to talk to a database or anything.
On one end, it's great. On the other end, someone has to take care of all the other stuff, like the database, the S3 bucket, the messaging system, all those things. You have to integrate with all those things. Usually that ends up being a very distributed system. We all know that with distributed systems, the more distributed you are, the higher your failure rate, for a variety of reasons, but basically, the more hardware you have, the more chances you have that some piece of hardware fails. Then vendor lock-in. There are lots of open-source solutions, but the way you write functions for one solution might be different than for another solution, which sucks. It's hard to operate. It's hard to orchestrate, all of that.
From my perspective, a great FaaS platform will give you the following: a very short cold start. It's polyglot, mostly because everybody wants that. You don't have to, but come on, it's 2025. It integrates with all your existing systems. I talked a lot about databases, about stateful stuff, key value or S3. The other big part about integrating with other systems is security, which people tend to forget. You can't just run code on its own somewhere. There are always guardrails, security, stuff like this all around, especially now that people are building agents, and agents tend to do what they want to do. You want to have as many guardrails as possible. It's absolutely important to be integrated with your security systems. I want something that automatically scales up and down, scales to zero. If I'm not using it, I shouldn't pay for it. Something that is secure, observable, always on, all that stuff.
What Makes a Good FaaS Runtime?
What makes a good FaaS runtime? Runtime being the thing that your Function as a Service is running on. Let's look at the early choices that we had when thinking about Function as a Service. I don't know when, as an industry, we started thinking about Function as a Service. My guess would be there was the cloud, the commoditization of hardware. It became much cheaper. If it's cheaper and very well-orchestrated, and we all have APIs to start new things the way we want to, then maybe we can start thinking about scaling up and down when we need it, and all that stuff. At the time, there were three main ways to execute processes. The good old virtualization layer, which is super isolated because it uses all the fantastic instructions that the CPU gives you, like VT-x or ARM VE something. Basically, the hardware thing that makes it so that your VMs are completely isolated from other VMs. It's great. It's very slow. Well, it's not that slow, to be fair. When I used to work at Clever Cloud, a Platform as a Service, we used VMs all the time. We didn't want to use containers, more on that later. If you remove all the unnecessary things from your VM, it starts in a matter of seconds.
At the time we were thinking, it's VirtualBox, it's going to be so slow, or it's going to be VMware. No, KVM is actually pretty good. It starts very fast. Then you have to manage the OS that runs the VMs and then the OS of all the VMs and all that stuff. Then you have containers. Containers, LXC, Linux namespaces: everybody is sharing a kernel. It starts much faster than a VM, but then you still have to manage the OS of the container and everything that's on it. Also, it's not as isolated as a VM, which is a terrible idea when you run code that you don't know. I'm not going to take one machine and run your code and your code if you don't know each other; there's probably a good chance that you're going to tell me I should not do that. You'd be right, depending on your paranoia level.
The good thing is most of those VMs and containers are pretty easy to move around. You have something that knows how to run VMs, you can run them anywhere; same for containers. It's pretty portable. The overhead of a container is super low, that's why it's faster. You're just using the same kernel and you're segmenting things. Whereas with a VM, you emulate the whole hardware and you add all that layer. It's not as lightweight. Big overhead. Then you get native processes, which is a terrible idea because you have zero isolation. You have a system, someone has to run a process, you just run a process. Really, why would you do that? Actually, no one's building any solution, I think, on top of that.
What does it have to do with George Costanza? It's a security problem.
Timmy: What are you doing?
Costanza: What?
Timmy: Did you just double-dip that chip?
Costanza: Excuse me?
Timmy: You double-dipped the chip.
Costanza: Double-dipped? What are you talking about?
Timmy: You dipped the chip. You took a bite, and you dipped again.
Costanza: So?
Timmy: That's like putting your whole mouth right in the dip. From now on, when you take a chip, just take one dip and end it.
Costanza: I'm sorry, Timmy, but I don't dip that way.
Timmy: You don't, huh?
Costanza: No. You dip the way you want to dip. I'll dip the way I want to dip.
Timmy: Give me the chip.
Costanza: Hey, hey, hey.
Laurent Doguin: All of that to tell you that if you think about the dip as a Linux kernel, and if you think about the chip as a container, which is exactly what it is, then that's what happens. All of that is to say that, basically, security is all about the acceptable paranoia level you can have when executing other people's code. That's the moment where you all get angry, because I ask you if you have trust and confidence in the code that your colleagues ship. Would you trust the code that your colleague has shipped, their function, to run on the same runtime as your function? What if that function just kills the kernel that's running down there? Then nothing works anymore. How paranoid do you want to be? Do you want to be the CIO like George, who doesn't care, or do you want to be like Timmy and try to isolate stuff? Turns out most people want to be like Timmy in tech, which is a good thing. There are newer choices than just VMs and containers and straight-up processes.
The first one that got really popular, mostly because of AWS Lambda, was micro-virtualization, when they created this thing called Firecracker. What Firecracker is, is basically a microVM: a VM that starts much faster than a traditional VM, more lightweight, but that still gives you full isolation with all the cool CPU instructions that you get from VT-x and ARM VE. It's extremely portable; it's just a VM, whatever. The overhead is not as big as a traditional VM, still bigger than a container, but you still get that super isolation. This is what AWS Lambda uses, and I'm sure there are also other people using Firecracker.
The other thing, containers. Containers are what you just saw before. It's the dip. When you have two things sharing the same kernel and this container basically [inaudible 00:19:07] the kernel, anything that was running on the same kernel is gone. To avoid this kind of stuff, you can do hardened containerization, and there are a couple of solutions that exist. There's gVisor, which is used by Google on their FaaS. You get Kata Containers, which is maintained mostly by Intel and a bunch of other people. The way they work is by trying to provide a stronger level of isolation than just the cgroups and namespaces that you have on Linux.
It's still about as good as a container, and it's better in terms of security. It's hardened. Then you get WebAssembly. It's like a process, but it's not a native process. It's completely isolated. I don't know why I have the VT-x here. That's a mistake, but whatever. The security premise of a Wasm module is basically: here's a buffer of memory, and whatever happens, you won't be able to get out of that memory buffer. By default, it has the right to do nothing. It's like OpenBSD or FreeBSD. It's completely closed by default, and then you can enable the features that you want, based on the Wasm runtime that you use. It is so much faster, because it's just a process. It's fully isolated, because again, it's just one chunk of memory, instead of a whole OS. You don't have to care about the OS and the runtime and the app, which is also something that I should have removed from that slide.
WebAssembly is amazing. If you make a scorecard of all the things that I just showed you, and you ask ChatGPT to fill it in, then that's what comes out, which is a little cheeky, but basically, Wasm scored the highest in terms of portability and security and everything. The question usually is, why are we not using Wasm everywhere? There's a very good reason for that, and it's this. If you want to move a string from one Wasm module to another, or from a Wasm module to a host, the thing that's executing the Wasm function, that's what you have to do.
Basically, you only have integers and floats, and so basically pointers and numbers. What you have to do with pointers and numbers is get the slice of the buffer that you're interested in, and hopefully, the way it's encoded in that memory buffer is the way you will decode the string. This changes based on the language that you're using. It's a mess, really, if you want to do more than a hello world. And that's just for a string. Can you imagine what that would be if it was a JSON document, for instance, a JSON string? It would probably be the same thing, actually, but with different structures, different objects.
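To make that pain concrete, here is a tiny Rust sketch that simulates the pre-component-model situation: the guest can only hand the host two integers, a pointer and a length, into a shared linear memory, and the host has to slice the buffer and hope the encoding matches. The `LinearMemory` type and its methods are illustrative stand-ins, not a real Wasm ABI.

```rust
/// Stand-in for a Wasm module's linear memory: just a flat byte buffer.
struct LinearMemory {
    bytes: Vec<u8>,
}

impl LinearMemory {
    fn new(size: usize) -> Self {
        LinearMemory { bytes: vec![0; size] }
    }

    /// "Guest" side: copy a string's UTF-8 bytes into memory at `offset`
    /// and return the (pointer, length) pair of i32s the host receives.
    fn write_string(&mut self, offset: usize, s: &str) -> (i32, i32) {
        let data = s.as_bytes();
        self.bytes[offset..offset + data.len()].copy_from_slice(data);
        (offset as i32, data.len() as i32)
    }

    /// "Host" side: given only two integers, slice the buffer and hope
    /// the bytes are valid UTF-8 encoded the way we expect.
    fn read_string(&self, ptr: i32, len: i32) -> Option<String> {
        let start = ptr as usize;
        let end = start + len as usize;
        let slice = self.bytes.get(start..end)?;
        String::from_utf8(slice.to_vec()).ok()
    }
}
```

Every language pair needs its own version of this dance, which is exactly what makes cross-module, cross-language calls such a chore.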
WebAssembly Component Model
This is where things get interesting. This is new, it's called the WebAssembly Component Model, and it's an official specification created by the Bytecode Alliance, which is a bunch of people that basically standardized the way we're supposed to use WebAssembly. What it is, is a set of things. It's, first of all, the WebAssembly Interface Type (WIT), which is a way to describe types and function signatures. You can leave it at that. Then you have an ABI, an Application Binary Interface, that knows how to manage strings, lists, structs, basically all the basic types that you can describe with your WIT.
Then they also allow you to define how you link different Wasm components together. Also, and that's the big part, you don't have to manage memory anymore, which is great, because I don't want to. That's the exact same thing that I showed you earlier, but with the Wasm Component Model. Instead of doing all of this to parse the string — the Rust version is actually much easier to read than the JavaScript or the Go version — all you have to do is this. You create a namespace called my_func. You get your function definition: it's called process, it's a function, there's one parameter, it's called message, and it's a string. What it's going to do is, the tooling is going to generate the bindings automatically, so that each time you import or export this into your WIT-compatible language or project SDK, you can just do this, instead of this.
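As a rough illustration, the interface described above might look like this in WIT syntax. The package name, world name, and the return type are assumptions added for illustration; note that WIT identifiers use kebab-case, so my_func would become my-func:

```wit
// Hypothetical WIT sketch of the example described above.
package demo:example;

interface my-func {
  // One function called process, with one string parameter called message.
  // The string return type is an assumption for illustration.
  process: func(message: string) -> string;
}

world faas {
  export my-func;
}
```

From a definition like this, the binding-generation tooling produces the glue on both sides, so you never touch the byte buffer yourself.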
Basically, everybody in the Wasm world is ecstatic about this, and has high hopes that this is going to allow for a much broader adoption of WebAssembly outside the browser, to do a Function as a Service, for instance. Basically, the whole goal of this is to increase portability between Wasm modules, Wasm components, and across languages as well. Because, again, the way you encode or decode a string might be different depending on the language, but with this, it doesn't matter anymore, so it's super easy to go polyglot with something like this. It's also super easy to have those modules talk to each other, instead of just playing with byte buffer and stuff.
A Practical Example with CNCF wasmCloud
Practical example with CNCF wasmCloud. One of the reasons I'm here today working at Couchbase is because one of our dear customers, American Express, and a company called Cosmonic, which is the main vendor behind wasmCloud, came to us and said: American Express is platforming its own internal FaaS onto wasmCloud. They use Couchbase everywhere, so they need Couchbase support, and that's why they came to us. I'm going to tell you how it works, and what you can do with it, and what we had to do to make that work. I'm going to tell you about wasmCloud. wasmCloud is a set of software that allows you to build and manage polyglot Wasm applications orchestrated anywhere, really. It says across clouds, Kubernetes, containers, data centers, edge environments. It has an extremely low overhead, and it just runs Wasm stuff, so it's very lightweight. It's basically two things, two official CNCF projects: wasmCloud, bottom left, and NATS, bottom right. NATS is a messaging system. wasmCloud is the thing that allows you to manage cross-platform polyglot applications.
The goal is to have just one well-defined way of explaining what a function or an application is. If you can do an application, you can do a function. What's the manifest? How would you describe an application that's running in a serverless environment or a Function as a Service environment? How do you make sure that it can scale easily, that it can do that safely, and all those things? There are lots of different ways to run this. I already talked about this. That's the most important thing: why would you even do something like this? The great thing about a Function as a Service, and what wasmCloud allows you to do also, is you build components. Components can be like actors. It's the thing that will execute code. It's the stateless bit. If it's stateless, you can scale. It's reusable. It can run anywhere. You can use any language because it's Wasm. It's fully distributed.
Observability in a fully distributed system sucks, but you get full support of OpenTelemetry, so it's super easy to get your traces, your logs, your metrics, everything out of wasmCloud. Everything is basically made for that. They have one cool UI that allows you to see everything. I don't know if it's production appropriate. I think most people would build their own custom UI to do this. But it's there, and it's pretty cool. It can integrate with everything. You still have to write the code to do that, but it can integrate with everything. Of course, it runs anywhere. What it is, is basically one piece of software, the control plane, the thing that decides how to run your software. Everything talks to each other through a local version of NATS, the messaging system, and it's NATS that's going to do all the clustering and all the messaging. On top of that, you have actors and providers. I haven't talked about providers yet, but here we are.
A wasmCloud cluster is called a Lattice, and a Lattice is made of several hosts, and on all those hosts, you can run two things. You can run components, and you can run what we call capability providers. Here you see one event. There's an HTTP request that's coming in. The HTTP request is coming in to the HTTP provider. Once that happens, there's a message that goes through the capability provider to the messaging system, NATS. NATS sends it to the component that is linked to that HTTP provider, and it's probably going to manage that request.
Apparently, that request needs to do a key-value get in another provider, called the key-value provider, and this key-value provider is going to go outside of the Lattice, and it's going to talk to whatever can implement a key-value interface: Redis, Memcached, Couchbase, you name it. Then it's going to go back, and then it's going to answer. You can have as many hosts as you want in your cluster. Four main components. I shouldn't use the word components because it's already a thing. Four main things in a Lattice. You get as many hosts as you want, a host being basically an instance of wasmCloud.
An instance of wasmCloud will deploy components and providers. Components are the stateless bits, providers are the stateful stuff. A provider doesn't have to be Wasm. It could be anything that can talk to what you want. It's a process that also imports and exports interfaces and uses the Wasm component model, so that it can talk to components through interfaces, which is basically the WIT definition thing I showed you earlier: how you define functions, how you define types, how you define all those things. Everything goes inside a Lattice, which is a mesh of all those hosts that can deploy any components or providers, with interfaces or links between the two.
The role of NATS in all of that is to be the messaging system, the way things talk to each other. They don't talk directly to each other, unless you're on the same node; they go through NATS. NATS is a messaging system. If you're familiar with Kafka, with Redpanda, with all those things, then that's pretty much what it is. You get a NATS server, and some clients connected to that NATS server. All those NATS servers will create topics, or subjects as NATS calls them, basically message buses tagged with a name. You can send as many messages as you want on those, and those messages will be gathered by clients. You can have any sort of clustering that you want. That's the cool thing about NATS, and that's why I think it's an amazing product. It is super lightweight. It's extremely simple.
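To give a feel for how those named buses address messages, here is a toy Rust sketch of NATS-style subject matching: subjects are dot-separated tokens, and subscriptions can use two wildcards, `*` matching exactly one token and `>` matching one or more trailing tokens. This illustrates the matching rules only; it is not the real server implementation.

```rust
/// Toy NATS-style subject matcher. In real NATS, `>` is only valid
/// as the final token of a subscription pattern.
fn subject_matches(pattern: &str, subject: &str) -> bool {
    let pat: Vec<&str> = pattern.split('.').collect();
    let sub: Vec<&str> = subject.split('.').collect();
    for (i, p) in pat.iter().enumerate() {
        match *p {
            // `>` must match at least one remaining token.
            ">" => return i < sub.len(),
            // `*` matches exactly one token, whatever it is.
            "*" => {
                if i >= sub.len() {
                    return false;
                }
            }
            // A literal token must match exactly.
            token => {
                if sub.get(i) != Some(&token) {
                    return false;
                }
            }
        }
    }
    // Without a trailing `>`, the token counts must line up exactly.
    pat.len() == sub.len()
}
```

This token-based addressing is what lets wasmCloud hosts, components, and providers all find each other on the Lattice without direct connections.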
On the website, they say, we are a topology enabler, and I love that. You can have all sorts of network topologies with NATS. They have two or three different interesting concepts. Single server, you all know what that is. A cluster, so several servers together. You can do clusters of clusters. Basically, you have a gateway node, and another gateway node, and a full cluster behind those two gateway nodes. All the traffic goes through one or several gateways, so that you don't have each-server-to-each-server communication, because that would be a lot of network traffic; instead you basically proxy through what they call gateways, which allow you to do clusters of clusters. And then you have this amazing feature called a leaf node.
The way clusters of clusters and gateways work is you have to have bidirectional communication, which in the cloud means egress, basically. Egress is expensive. With leaf nodes, you only need one-way communication. When we did a workshop, we had one cluster in the cloud, and everybody was using GitHub Codespaces, which is like a developer environment online, and they would stand up their own wasmCloud instance in their Codespaces environment.
It's TCP, so I cannot expose anything from Codespaces, but I can have a unidirectional connection between my Codespaces and my cluster. With NATS, I can create what's called a leaf node, and basically it allows you to do the same thing, clusters of clusters, but instead of having the two-way connection, you have only a one-way connection, because it all goes through the NATS messaging system, and that allows you to do all sorts of cool stuff. Messaging is good. Messaging with persistence is better. They have this thing called JetStream. JetStream is the persistence layer of NATS. It provides several things: persistence for your messages, key value, and an object store. Literally what this thing is going to do is split whatever object you send it, and basically distribute it into several key-value pairs, and it will allow you to store anything, like a movie. And then there's key value, which is a bit like Redis, with the same interface.
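The object store idea described above can be sketched in a few lines of Rust: split a blob into fixed-size chunks, store each chunk under a derived key, then reassemble them in order. The chunk size and the `OBJ.name.index` key format here are made up for illustration; JetStream's real chunking and wire format differ.

```rust
/// Split a large blob into fixed-size chunks, each paired with a
/// derived key, roughly how an object store layers on top of
/// key-value storage. Key format is illustrative only.
fn chunk_object(name: &str, data: &[u8], chunk_size: usize) -> Vec<(String, Vec<u8>)> {
    data.chunks(chunk_size)
        .enumerate()
        .map(|(i, chunk)| (format!("OBJ.{}.{}", name, i), chunk.to_vec()))
        .collect()
}

/// Reassemble the original blob by concatenating chunks in key order.
fn reassemble(chunks: &[(String, Vec<u8>)]) -> Vec<u8> {
    chunks.iter().flat_map(|(_, c)| c.iter().copied()).collect()
}
```

The point is that once you have a replicated key-value layer, "store a movie" reduces to "store a lot of small, ordered key-value pairs."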
Building a Distributed Application
How would you build something with wasmCloud? How do you basically split your existing things, for us it's mostly a function, but it could be an application, into components, and providers, and all that? What I like about what Cosmonic is building, the people behind wasmCloud, is that they try to have capability standards. On the first slide you saw, there was an HTTP server capability provider, and a key-value capability provider. The HTTP server is just an HTTP server, it's just like whatever, it's a server.
The key-value one is just something that can do get, and read, and touch, and what you expect from a key value. It's not tied to a particular datastore, it's just a generic capability, so a generic contract. If you think in terms of WIT, the definition of your functions, there's going to be a function that's called get, there's going to be a function that's called read. It's going to be the same, but the implementation can be different. You can have different capability providers that have the same key-value WIT definition, the same capabilities. That means that if your Wasm component uses the key-value interface, then you can interchange whatever capability provider you have that can also export the key-value interface. You can switch from Redis to Memcached to whatever. That's the case for key value, that's also the case for messaging, that's also the case for SQL, anything you can think of. This is the basic way of seeing things. If you're in a big company, I'm assuming you don't usually use your datastore straight up.
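The generic-contract idea can be sketched in Rust with a trait standing in for the key-value interface: the component codes against the contract, and any provider implementing it is interchangeable. All names here are illustrative, not the real wasmCloud WIT interfaces.

```rust
use std::collections::HashMap;

/// Stand-in for a generic key-value capability contract.
trait KeyValue {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: &str);
}

/// One "provider": an in-memory store standing in for Redis,
/// Memcached, Couchbase, or anything else behind the same contract.
struct InMemoryKv {
    data: HashMap<String, String>,
}

impl KeyValue for InMemoryKv {
    fn get(&self, key: &str) -> Option<String> {
        self.data.get(key).cloned()
    }
    fn set(&mut self, key: &str, value: &str) {
        self.data.insert(key.to_string(), value.to_string());
    }
}

/// A "component" only sees the contract, so swapping the provider
/// underneath requires no code change here.
fn greet_user(kv: &dyn KeyValue, user_id: &str) -> String {
    match kv.get(user_id) {
        Some(name) => format!("Hello, {}", name),
        None => "Hello, stranger".to_string(),
    }
}
```

Replace `InMemoryKv` with a Redis-backed or Couchbase-backed implementation of the same trait and `greet_user` never notices, which is the whole point of the capability contracts.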
Usually there's some sort of middleware that sits in the middle to manage security and a bunch of other things. You could very well also have capability providers that map all your middleware and make it available to all the developers in your organization by just writing one bit of code, which is super powerful, because then you can build your own platform with your own stuff, without having to rely on the crazy architecture that you can see in other clouds that makes you use all that generic stuff.
One way of seeing this could be: what is my application or function going to do? Identify all those capabilities, then choose the capability provider depending on your platform, depending on where you're running it. There's label support in wasmCloud hosts, so you can put a label on a host and say, this host is labeled EC2, and something is going to basically only be deployed on the hosts that have the label EC2. Or whatever other label you can think of. You can basically choose where the components and providers are going to go, based on what you have available. Once you have identified your capability providers, then all you need to do is deploy a wasmCloud application. Let's say you have an incoming request that shows up. There's a component that's going to get that request. Most likely the first thing is going to be an authentication component, an authorization component.
Then it's going to either send it back or go through to the next component. That next component can do a whole bunch of things. It can talk directly to a capability provider or it can go through the inventory management component, which itself can talk to a capability provider or another component. If you're familiar with AWS Lambda, this should feel familiar. Of course, lots of different capabilities are available: HTTP, key value, messaging, Blob storage. Really anything you can think of. This is an attempt to have generic contracts so that you can easily replace capability providers with whatever's available on your cloud, which makes it a great solution for multi-cloud, cloud-to-cloud. Really, a capability can be anything you want. They have a great tool called wash; you can go install wash, it's pretty easy to install. All it does is download NATS, download wasmCloud, download wadm, which is the thing they use to define applications. You can do all that stuff with it. Basically, think of wasmCloud as Kubernetes, but for WebAssembly instead of containers.
How it Works
I'm going to try to show you how it works, opening Gitpod. My goal is to show you how easy it is to deploy something and also to show you the deployment model, the specification that they go after. If you're familiar with Kubernetes, it should feel extremely familiar. It should be available at some point in the near future, very soon, on GitHub. This is my colleague, Ben. We did this workshop together. You can go to that repo, and there's a README to do the workshop. If you want to do it by yourself, you can. That's the bit I really wanted to show you. This is the manifest that you have to write to express what your deployment will be. An application will be your set of components and capability providers all linked together through what they call links. You can see basically a name and a description.
Again, if you know Kubernetes, that should be fairly familiar. You have a component, that's the stateless thing. That's my Wasm file that I've built. Here it's on the file system, but of course you can pull it from any OCI-compatible registry. It has a trait, and this trait is basically a built-in capability. The trait is a spreadscaler. Basically, it says: I'm going to deploy one component and only one component, there's only one instance. If I had 100 instead of one here, it would deploy from one to 100 component instances depending on the need. The other important thing is the next one, which is the capability provider. That one references an image instead of a file; it's going to go to an OCI registry and fetch it.
Then we're defining a link. The link basically says: this component is linked to this capability provider. What this capability provider does is it's an incoming-handler. If anything shows up at that address, then it's going to take it and send it to that component. I think they have two different scaler traits right now. They have spreadscaler, which is the auto-scalable one, from one to whatever. Then they have daemonscaler, which makes sure that you have one of each running on every host available in your cluster.
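Put together, a minimal manifest along those lines might look like this. This is a hedged sketch of the general shape, not the exact workshop file; the names, image references, and port are illustrative:

```yaml
# Hypothetical wadm application manifest: one component, one HTTP server
# capability provider, linked so incoming requests reach the component.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: hello
  annotations:
    description: Hello-world component behind an HTTP server provider
spec:
  components:
    - name: hello
      type: component
      properties:
        # A local build artifact; could also point at any OCI-compatible registry
        image: file://./build/http_hello_world_s.wasm
      traits:
        - type: spreadscaler
          properties:
            instances: 1
    - name: httpserver
      type: capability
      properties:
        # Fetched from an OCI registry
        image: ghcr.io/wasmcloud/http-server:0.23.0
      traits:
        # The link: HTTP requests arriving at the provider are handed
        # to the hello component via the incoming-handler interface
        - type: link
          properties:
            target: hello
            namespace: wasi
            package: http
            interfaces: [incoming-handler]
            source_config:
              - name: default-http
                properties:
                  address: 0.0.0.0:8080
```

The structure deliberately mirrors Kubernetes-style manifests: metadata, a spec, a list of components, and traits attached to each.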
Let me show you a bit of the Hello World code. That was the manifest of my application, defining a component and a capability provider. That's the name of my component. When I build it, the tool wash build, which is a wasmCloud tool, has a target. I'm targeting WASI Preview 2, which is the runtime that runs my Wasm component. The language is Rust. It's a component. There's a WIT definition, which is my contract. It's an actor. It has a request payload. It's using the incoming-handler export, which is basically getting the data from the query. If I go to my code, it's fairly simple. I'm importing, yes, there it is, the HTTP provider. I have the request. What it's doing is basically taking the request and sending back, Hello, wasmCloud, which is what I wanted to show you here. Of course, with a live demo, sometimes it works, because why not?
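The handler logic itself is trivial. Here is a standalone sketch of just the response-building part; in the real component, wit-bindgen generates the wasi:http bindings from the WIT contract and wires this into the incoming-handler export, and the function name below is hypothetical:

```rust
// Standalone sketch: only the pure logic of the hello-world handler.
// In the actual wasmCloud component, this string becomes the body of
// the HTTP response returned through the wasi:http incoming-handler
// export; the binding boilerplate is generated and omitted here.
fn response_body() -> String {
    // Every request gets the same greeting, regardless of path or method.
    "Hello, wasmCloud!".to_string()
}

fn main() {
    // Simulate handling one request by printing the would-be response body.
    println!("{}", response_body());
}
```

The point of the demo is that everything request-specific (parsing, routing, headers) is handled by the generated bindings and the HTTP server provider, so the component body stays this small.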
The point is, if you go here and scan this, it's going to take you to the exact same thing. There's a link to open this in Codespaces, there's a link to open this in Gitpod, whatever your favorite online IDE is. The good thing about these online IDEs is that they come with everything pre-set up. Everything's installed. You can just start using it. You can also clone this and install everything yourself. Basically, you can be hands-on and not rely on Gitpod like I did.
Questions and Answers
Participant 1: How well does this scale when complexity goes up? Obviously, it's like small functions, so maybe some real-world use case that you can share?
Laurent Doguin: I agree. If you think about functions, basically, it's microservices, but smaller. If you think about microservices, there are so many issues that come with distributing your services, whether it's orchestration, observability, making sure that everything works together, increasing the rate of failure, increasing reliance on the network, and the network is the least stable thing in the world. Basically, you're doing this, but worse? Yes, it will rise in complexity, that's for sure. The good thing about wasmCloud is that because you can have your own capability providers and your own middleware, you can "get away with it", strong quotes on that. It's probably easier to reason in terms of middleware and a bunch of capabilities that you can expose directly.
Basically, I'm thinking about Lambdas, and if you start scaling your Lambda architecture and you start using all the services, it's hard. Hopefully, you can hide that complexity by using this. There's no shortcut: the moment you start decoupling, you'll get certain classes of problems. There's no magic. I come from a Platform as a Service background, from the time when people would just deploy Rails in production on Heroku or Clever Cloud, and it still works. Maybe we don't have to do all that stuff. If you do, and let's face it, American Express is a big enough company that they need it, and most Couchbase customers are on the same side of things, then it's easier, and it's better.
Participant 2: I've been using Wasm for frontend, but the backend side is new to me. I've been touching bits of it. There is a tendency to shift cloud architecture towards FaaS; how far away are we from containerized workloads being shifted to FaaS? Because I see a big opportunity there as well.
Laurent Doguin: Everything's a cycle. I know people are going back to monoliths as well. People are distributing more and more software, and that's one way of doing it. Is this a good idea? Is this where everybody is going to go? I'm not sure. It depends. If you look at AI agents and MCP servers, basically, that's a playbook for this stuff. People will want small bits of code, autoscalable from zero to whatever, that can each answer one specific problem.
In that case, it probably makes sense, and people will go all-in on this. At the same time, as these use cases become more common, you get the actual runtimes to run that. If you had tried to do this with just VMs 15 years ago, that would have been a terrible idea, but now we have hardened containers, and now we have Wasm, so it enables people to do it. We're tech: if we're able to do something, there's a good chance we'll do it anyway, because it's fun, and that's what we do.
Participant 2: I'm just thinking further that FaaS itself helps us solve the green software problem as well, because you're only using the CPU that you need.
Laurent Doguin: I don't think everybody needs that, but in terms of CPU, in terms of being green, it's great, because you only use what you need to use. At the same time, it depends if your job is to sell hardware. Clever Cloud has a Function as a Service based on Wasm, and when we tried to do the pricing of this thing, the pricing unit was so small that it was absolutely not profitable for us. On the one hand, it's great, because it means that it's going to be cheaper for you to run stuff in the cloud, because it allows providers to increase the density of workloads running on one machine dramatically.
Basically, it's great for the planet, it's great for your money, but when your job is to sell as much hardware as possible (I call them retail clouds: a retail cloud is just, I'm going to buy some square meters of hardware and sell it as much as possible), then it becomes a problem, because it's too cheap. Is that a problem for everyone? Not really, no.
Participant 3: Is there a good resource to go to to see the idioms and the standard ways of doing things? Is there a standard set of providers and such?
Laurent Doguin: Best resources: the wasmCloud Slack and the wasmCloud Community Call, every Wednesday; they record them all. You can search for those wasmCloud Calls, and basically, it's people joining from the community and showing what they're building. That's pretty great. On the WasmCon YouTube channel, there's a bunch of case studies. Amex is one of them, and it's a pretty good one. I invite you to take a look at it.