
Unikernels Aren’t Dead, They’re Just Not Containers


Summary

Per Buer looks in depth at one of the IncludeOS applications they have built, how they built it and how it has worked out in production.

Bio

Per Buer is CEO of IncludeOS. He founded Varnish Software ten years ago and he has spent his life working on infrastructure-related software that has been tied to performance in some way or another.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Buer: I thought I'd start you off by asking you a question. Have you ever considered why our computers don't have a control plane? To give you some background on what I mean by control plane, it's a pretty well-known term if you're dealing with networking equipment. The control plane is where things like firewall rules are applied to your traffic; they decide how the traffic flows through your system. The data plane, on the other hand, is where the data actually moves. So if we translate these terms to compute, we can have a compute plane and a control plane.

Why don't we distinguish between these? I think it would be a really neat architecture if we could have something like this, where this is where we compute, where we do stuff. Clients talk, they send requests, and we respond to them: how much is two plus two? It's four. And this is where we decide how that thing behaves.

So the real question is why have we granted computers the power to modify themselves? Because generally, we have. We've given them the ability to mess around with their own internals. The reason is, as with everything else in software, that it's always been done like that. The history goes back to these two guys, Thompson and Ritchie, working on the PDP-11, implementing Unix. Now, Unix was a third party system and it was running on bare metal. If you were to try to design a system where you separate the compute and the control, you'd probably have to modify the hardware. They weren't in a position to modify the hardware, because theirs was a third party system.

So basically, they ended up with Unix being granted the ability to modify itself. And 20 years later, the same thing happened with the other dominant operating system, Windows, which is also a third party operating system. A few of you might have dealt with other operating systems. I remember, in the early 2000s, I bought a couple of mini-machines from IBM, and then I had to buy another machine, because I got these two mini-machines, turned them on, and nothing happened. You need another computer to configure these two, because IBM had actually done the proper work of separating the compute from the control. And now we've been given the opportunity to do something about this. With virtualization, you can actually get away with separating control and compute without having to modify hardware.

So welcome to my talk, "Unikernels Aren't Dead. They're Just Not Containers." My name is Per Buer. I run a company called IncludeOS. Before that I ran a company called Varnish Software. And before that, I worked with Tomas over there in a little open source consultancy and product development company called Linpro. We worked exclusively with open source from 1996, so I've been in a very privileged position to work exclusively with open source my entire career. What I'll try to do in this talk is share my experience working with Unikernels. What are they good at? What aren't they good at? What sort of workloads could we put on them? Concretely, what is a Unikernel? How does it behave? Where can it be applied, and what are the experiences?

Unikernel Primer

I'll start off by giving a Unikernel primer. What is a Unikernel? Well, if you want to make one, you start off by making the operating system a library. You want to have a library function for sending a packet, you want a library function for reading a block from disk, so you take your operating system, you deconstruct it, and you put everything in libraries. So we have these functions that are now in a library.

Then as you build your application, you link that functionality into the application itself. So now you have an application that knows how to send packets over the internet. The application knows that; it doesn't need the operating system anymore. It actually has a NIC driver, and it has memory management linked into it as well. The last thing you do is add a boot loader or something on top of it, plus some code to initialize hardware at startup, and that's basically it.

You end up with something that looks like this. This is the memory space, or a map of the ELF binary, where at the beginning here you have the boot loader. There's some application code. Here there is a driver. There's some kernel code there, the memory management. There's a TLS library. Compare this to your typical stack: on Linux we have the application and custom application libraries, those link with the system libraries, and they run on top of a kernel which consists of all kinds of stuff.

I should also mention that we don't make any assumptions about the underlying system. I think that is perhaps one of the things where we diverge a bit from the people that have come before us and done work on Unikernels: we have very few opinions. We don't really care where you run this. When we started development, we assumed VT-d was there, so hardware virtualization, and hardware-virtualized machines on Intel are more or less identical to physical machines. So the first time, we just took our operating system image, dumped it to a USB stick, stuck it in a computer, turned it on, and it booted up just fine.

We're not required to run in a virtual machine. I'm going to try to do a very, very quick demonstration of what this actually looks like. Let me see. Can you see this? Is this readable? Yes, I'm writing just plain C here really; you can tell by where we got our name. I think cat is underutilized as an editor; it even has basic line editing. So the vi and Emacs wars, I stay out of them and stick with cat. So this is our basic "Hello World." And we just grabbed the boot command, because there was actually no other command on Linux named boot, so we just took that. That builds. It needs root privileges because it sets up a network. So this is the hardware initialization, and then it says, "Hello World." That's super, super simple. I think building that image takes approximately three seconds. Most of the operating system is already compiled, sitting in libraries somewhere, so we basically just link it. Mostly we're I/O bound on how fast we can do this.
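To make this a little more concrete, here is a minimal sketch of roughly what such a service can look like. It is illustrative rather than taken from the demo: it assumes the IncludeOS SDK provides an <os> header and declares a Service class whose start() function the application defines, and those details may differ between versions.

```cpp
// Illustrative sketch of an IncludeOS-style "Hello World" service.
// Assumption: the SDK's <os> header declares a Service class whose
// static start() the application implements. Exact names may differ.
#include <os>
#include <cstdio>

void Service::start()
{
  // There is no shell and there are no other processes; this function,
  // plus the OS libraries linked into the binary, is the whole image.
  printf("Hello World\n");
}
```

Everything the application needs, from the NIC driver to memory management, is linked into that one binary, which is why the build is mostly a link step.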

Unikernel Characteristics

Let me see. Was that reasonably clear? Yes? Let's talk about how these systems behave. What do they do? How do they differ from our more traditional systems? What we've built is, well, it's a bit weird to talk about it as an operating system; you might as well talk about it as an operating system kit. So what I actually did was write an application operating system that writes "Hello World": the Hello World operating system. That's what I actually wrote. You can view it in two different ways.

I would say one of the biggest differences is that the system is fundamentally immutable. At some point, we sat down and noticed that we, as operating system developers, had never given the operating system the ability to modify itself. We hadn't written that. There is no code in the operating system that allows you to replace a function with another. So that makes these systems, in a very fundamental way, immutable. We have not granted them the power to modify themselves. Modifying an image is just not possible, or it's really hard.

So it does change things a bit. If we find an old running VM that's been running for a year, we're not scared of it. When I was a Linux sysadmin and found a system that had been running for a year or two, we treated it with a bit of caution and we would absolutely never, ever reboot it, because God knows what lives inside there. We don't really know. With Unikernel systems, you can be reasonably sure that what's running now is what was running initially. I believe this leads to greater security. In addition to this fundamental inability to modify itself, there are a few practical implications.

We do have operating system functionality, but the way we access it is through function calls; it's just a pointer that points to, say, the read function. So even if you write a buggy application and that bug potentially gives remote code execution, what are you really going to do with that code? The shellcode that you ship, what is it going to do? Okay, so you can spend a year trying to find the read call or the write call, but what are you going to read or write? There's not necessarily a file system here either, because we typically just boot up a binary ELF image; we don't load it off a file system. We just boot it up and it runs. I think that they're really, really, really immutable.

Another thing we found is that they are perfectly predictable. If an operation takes 5.3 microseconds to execute, it will take 5.3 microseconds every time. I mean, memory caching will of course change this a bit. But compare this to Linux, where you have page faults, various internal locks, scheduling jitter, and other factors that can lead to undefined behavior when it comes to timing. A shop I know does HFT, or high frequency trading. They have this thing where, yes, an operation typically takes 2.3 microseconds, but every time all the moons of Jupiter align, it takes 100 milliseconds. 99.9% of the time it's fine. But once in a blue moon, it'll take forever. And that's because there's a page fault, and it can't be resolved because there's a lock being held, and there are a ton of other things that happen.

In Unikernels in general, there are no background processes, no tasks, no nothing. It's only the code that you put in there. Like my "Hello World" example: that's all the code that's there. They're self-contained and they're simple. The only thing we rely on is the underlying hardware, so unless you explicitly need to talk to hardware, it's not going to talk to anything else. Earlier today we talked about Secure Enclaves, in the operating systems track at least; maybe you were in other tracks. Anyway, I know Jesse might not agree, but I believe this is pretty much perfect to run inside a Secure Enclave.

For those of you who don't know, the idea of a Secure Enclave is that you have an encrypted, software-defined part of your computer where whatever happens inside it is completely opaque to the rest of the host. The host has no idea what's running inside there. That's scary as well, but it's also a great way to protect your secrets. But you need something to run the application code inside there, and a tiny little operating system that only has some predefined interface fits well: you could, for instance, give it your email, it signs it, spits it back out signed, done.

Also, I would say a fundamental characteristic is limited compatibility with Linux, or generally, a very limited runtime. Unikernels normally don't offer full POSIX compatibility, because they can't: Unikernels are single process, single address space. In order to be fully POSIX compliant, you have to be able to clone processes, and that doesn't make sense here. So nobody is fully POSIX compliant. And personally, I don't believe we should strive for that either, because POSIX is really just an after-the-fact description of Unix. They didn't write POSIX first, pick whatever spin of the operating system you like, describe it, and then implement it. No, it was the other way around: Unix was implemented first, and then it was described 15 years later.

I just don't believe it makes sense for Unikernels to try to do everything that these complex operating systems do. With Linux, you can do really crazy stuff. You could take your application and implement it halfway in Haskell and halfway in x86 assembler, and that system supports it perfectly. So you have a place to run those crazy things, and limiting the scope of what you can do here, I believe, is in general a good thing.

Until now, I've talked about Unikernels in general. I will now talk a bit more about things specific to IncludeOS. There are a couple of things that are different. The best known Unikernel is probably MirageOS, written in OCaml. IncludeOS, on the other hand, came out of an engineering college, so it's very pragmatic about everything. It's written in C++, C++17, because C++ is performant, it's an industry standard, and you can use it to solve real problems.

There are some good things about C++, or I would say many good things, but for an operating system in particular, being implemented in a language like this allows you to ingest a lot of other runtimes. One of the things that we hope to do this year is implement support for Node. Since the V8 engine is written in C++, we basically need libuv and then we should be able to compile and run Node. Its event model aligns perfectly with ours, which is not by accident. We're also multicore; we don't have opinions about threads and multicore systems. In essence, I think we're as much a library operating system as we are a Unikernel. Our library operating system has no limitations on what you can do with it. We provide you with the tools to do whatever you need to do, and that's basically it.

There are a few practical things as well. I would say we have a party trick; this is what I do at cocktail parties and such. It happened because developing for Unikernels has not always been unicorns and ponies. At some point, we were developing on Google Compute Engine and it was a pain in the ass. It was horrible, because every time we changed something, we needed to shut down the VM, replace the image, and bring it back up again. That adds five minutes. If you add five minutes on top of compilation, you pretty soon have developers with pitchforks.

What we created as a pragmatic approach was this. We call this system live update. It basically relies on the fact that we run in a single address space. On Linux, you don't have a single address space. You have your server, it accepts a connection, and you have some state in your application. The Linux kernel also has some state for that same connection. The TCP socket resides both in your application and in the kernel, and you don't have access to the bits that are in the kernel. We're in a single address space; we have access to everything. The TCP connection is a C++ object. We can serialize it. That gives us the ability to do the following.

This is basically a map of memory on a system, and we have my application running on the system. Now, what happens is that when this thing boots up, it connects to a service somewhere on the internet or on the local net, and that service guides the system; it's the control plane. It tells the system what it should do. Because we don't have a shell, there's no shell, if you want to change something you basically have to do it from that node. What's happening here is that the control plane decides we need to update. So it pushes down an update, and it gets split into three chunks because we couldn't find contiguous memory. Then we have functionality to serialize all the state in the application/operating system, like the list of TCP connections, or if it's a firewall, the connection tracking table, open files, or whatever. And we write that somewhere here.

Now, what we do then is that we have a little handcrafted piece of C code that's stuck way down in low memory. We just overwrite the binary, then we boot it up and run it. We basically run through the whole boot, except that we now know, because there's stuff here in high memory, that this system has run before. Once it's done initializing, it will restore the state and continue to run. Then we discard the state. And we have now replaced 100% of the running code on the system without downtime. Well, there's some downtime: between 5 and 100 milliseconds, depending on how slow your PCI emulation is. I think there's room for optimization there. But in general, over an internet connection, you should not be able to detect that this happened. Is that reasonably clear? Yes? Isn't it pretty cool? For me this is the coolest thing I've ever seen.
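To make the mechanism a little more concrete, here is a rough sketch of the kind of interface this implies. The names below (Serializable, TcpConnection, and so on) are hypothetical illustrations of the pattern described above, not the actual IncludeOS live update API: objects that should survive an update know how to dump themselves into a buffer, the saved bytes sit in high memory across the binary swap, and the new image restores them before resuming.

```cpp
// Hypothetical sketch of the live-update pattern described above.
// These names are NOT the real IncludeOS API; they only illustrate
// the idea: serialize state, swap the binary, boot, restore state.
#include <cstdint>
#include <cstring>
#include <vector>

// Anything that wants to survive a live update can dump itself into a
// buffer and restore itself from one.
struct Serializable {
  virtual std::vector<uint8_t> serialize() const = 0;
  virtual void restore(const std::vector<uint8_t>& data) = 0;
  virtual ~Serializable() = default;
};

// Example: in a single address space a TCP connection is just a C++
// object, so its state can be captured directly.
struct TcpConnection : Serializable {
  uint32_t seq = 0, ack = 0;

  std::vector<uint8_t> serialize() const override {
    std::vector<uint8_t> out(sizeof(seq) + sizeof(ack));
    std::memcpy(out.data(), &seq, sizeof(seq));
    std::memcpy(out.data() + sizeof(seq), &ack, sizeof(ack));
    return out;
  }
  void restore(const std::vector<uint8_t>& data) override {
    std::memcpy(&seq, data.data(), sizeof(seq));
    std::memcpy(&ack, data.data() + sizeof(seq), sizeof(ack));
  }
};
```

On boot, a new image that finds saved state in high memory would call restore() on the corresponding objects and then carry on serving traffic, which is what makes the swap invisible over a network connection.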

Configuration

An interesting thing, of course, is that when you build an operating system based on other principles, you start questioning a lot of the design decisions of our current systems. Why do we have configuration files? Well, we have configuration files mostly because our operating system is vendor supplied; the program code is vendor supplied. You need somewhere to put local adaptations so that you can retain them across upgrades. There's a cost to configuration files: every time you add a non-trivial configuration option, you add complexity to the system.

Do you guys remember in the '90s, we used to have a lot of Unix applications that didn't have configuration files? There would be a config.h, and you would edit config.h, compile, and install. And that would actually be it. My 3D printer has the same thing, actually. I punch in what stepper drivers I have and how it should behave, and compile it. A configuration file isn't the only way to solve that problem. I mean, there's a lot of great stuff about configuration files, but I'm not necessarily sure they're always the right answer.
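As a reminder of what that compile-time style of configuration looks like, here is a small illustrative sketch; the option names are made up for illustration, not taken from any particular project.

```cpp
// Sketch of compile-time configuration in the style of 90s Unix
// applications and 3D-printer firmware: edit the options, recompile,
// install. In a real project these #defines would live in config.h.
#define LISTEN_PORT     8080
#define MAX_CONNECTIONS 256
#define ENABLE_SYSLOG   1

#include <cstdio>

int main() {
  std::printf("listening on port %d, max %d connections\n",
              LISTEN_PORT, MAX_CONNECTIONS);
#if ENABLE_SYSLOG
  std::printf("syslog enabled\n");
#endif
  return 0;
}
```

The trade-off is exactly the one described above: changing behavior means rebuilding the image, but the running system carries no mutable configuration state at all.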

NaCl: an Alternative to Configuration Files

So I'm going to show you something we've done that gave us an alternative way of solving what many people solve with configuration files. At some point, we had to create a firewall. This was early in 2017. The nice thing about firewalls is that you just shovel packets back and forth. If you can route packets, you just disable that and you basically have a firewall, because for a firewall the default position is to not forward the packet. Also, we weren't really that sure how robust our TCP stack was at the time, and it turns out it wasn't very robust. But the thing about firewalls is that they just push packets back and forth, so you don't exercise your TCP implementation as much as you otherwise would have to. It's a lot simpler to push TCP packets along than to receive them.

So that's why we wrote a firewall. We started out by looking at Netfilter. Netfilter is the firewall that lives inside the Linux kernel, and we thought that what we wanted was semantically quite close to that. So we started doing it, creating these chains of rules and populating them. It struck me that I'd seen this thing before, having things move along rules in a performance-dependent situation. One of the really cool things about Varnish is the way it is configured. It doesn't have a traditional configuration file. It has a VCL file, and VCL stands for Varnish Configuration Language. Basically, that file is written in a high level, non-Turing-complete language, and when we load it, we transpile it to C code and throw GCC at it. That creates a shared object, and that shared object is then loaded and executed. It's a really interesting pattern which I really implore you to study a bit if you haven't seen it. I think there are great things you can do with it.
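For a feel of the pattern, here is a minimal sketch of the "transpile the configuration, compile it, load it as a shared object" idea. It is not Varnish's actual implementation; it assumes a POSIX host with gcc on the PATH (and -ldl when linking on older glibc), and the generated function is invented for illustration.

```cpp
// Minimal sketch of the "config becomes compiled code" pattern.
// Assumptions: POSIX host, gcc available; not Varnish's actual code.
#include <cstdio>
#include <cstdlib>
#include <dlfcn.h>

int main() {
  // 1. In a real system this C code would be generated from the
  //    high-level configuration language (e.g. VCL).
  FILE* f = std::fopen("generated_conf.c", "w");
  if (!f) return 1;
  std::fputs("int decide(int request_type) { return request_type == 1; }\n", f);
  std::fclose(f);

  // 2. Compile it into a shared object.
  if (std::system("gcc -shared -fPIC -o generated_conf.so generated_conf.c") != 0)
    return 1;

  // 3. Load it and call the compiled decision logic directly.
  void* handle = dlopen("./generated_conf.so", RTLD_NOW);
  if (!handle) return 1;
  auto decide = reinterpret_cast<int (*)(int)>(dlsym(handle, "decide"));
  std::printf("decision for request type 1: %d\n", decide(1));
  dlclose(handle);
  return 0;
}
```

The configuration ends up running at full compiled speed, with no rule interpreter in the hot path, which is exactly what made the approach attractive for a firewall.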

But we thought that what we're trying to do here is more or less the same thing: take a packet, write a bunch of rules that describe how the packet flows through the system, and then take actions. So the language itself looks something like this. There are some definitions up at the top, I don't know if you can see them. A bastion host is defined with an IP address. There's a list of allowed services; those are just ints. Allowed hosts is a range. And then there are some rules there, which say that if the connection tracking state is established, then syslog it and accept the packet.

This was actually really great. I really hate reading iptables scripts because they're missing this one crucial thing: the if statement. The if statement, best shit ever. It allows you to simplify things. There's so much stuff that becomes simpler if you have if statements. You can emulate if statements with sub-chains and such, but it's not as nice as this. This is perfectly human readable. You've never seen this before, and you can likely understand everything in it. And that's nice. For security, it's also quite important that your people actually understand what they're doing.

Now, we wrote it. I would say it was a naive implementation; it took between two and three months. It hadn't struck me until just last week that perhaps the most important takeaway was the fact that we were actually able to implement a firewall in two to three months. The Netfilter team, Rusty Russell and the people around him, I think they spent two years writing Netfilter, and they were quite experienced people. Whereas Annika, who wrote our firewall, had never touched networking code before. And she wrote something that was semantically quite close, and that performance-wise beat the crap out of Netfilter.

This graph was created by a student at the local engineering college. It adds more and more rules to the firewall script and then sees how that impacts performance. Performance here is just throughput. It would maybe have made more sense to measure packets per second instead of gigabits, but yes, whatever. This is our firewall, and it's a completely flat line; I think there's a 3% slowdown when we have 5,000 firewall rules, which is where the test stops. This is Linux. This one is the source filter, so here we filter on the source address of the IP packet, and this one is the destination port. Anyone here want to take a guess at why it's slower to filter on destination port rather than IP source address?

I think it's at least one extra layer of indirection. TCP is a module in Netfilter, so you also have to parse the TCP part of the packet, which you can skip if you're just filtering on source. But you see that it dramatically slows down. And I feel bad about doing this; I really like Netfilter. I was a huge fan when it came out, because it was so good, so much better than ipchains, which was pretty horrible. But I think it's interesting how we were able to write an implementation so quickly, without any experience in that field. And it has been almost completely bug free.

I could talk for hours about just that, and why I believe we were able to do it so quickly. I should also note that this thing here is scheduled to be taken out behind the barn and shot soon; I think nftables or eBPF will replace iptables pretty soon, and it has the exact same characteristic. It's almost as flat as we are, although it's like 15% further down. When they built Netfilter, they had to build a runtime for the system. You have to create all these data structures and push the packet through them, and there's all this complexity you have to deal with. We basically just created an ingestion point where there is C++ code that accepts the packet, runs through it, and spits it out the other end. It's so much simpler.
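To illustrate what "rules as generated code" can look like, here is a hedged sketch of the kind of C++ a rule set like the one above might transpile into. The Packet struct, field names, addresses, and rules are invented for illustration; this is not the actual NaCl compiler output.

```cpp
// Illustrative sketch of a transpiled rule set: the whole
// "configuration" becomes one function the packet runs through,
// built from plain if statements rather than interpreted rule chains.
// All names and values here are invented, not real NaCl output.
#include <cstdint>

enum class Verdict { accept, drop };

struct Packet {
  uint32_t src_ip;      // source address, host byte order
  uint16_t dst_port;    // destination port
  bool     established; // connection-tracking state
};

constexpr uint32_t bastion_host = 0x0A000005;  // e.g. 10.0.0.5

Verdict filter(const Packet& p) {
  if (p.established) {
    // syslog(p);  // logging elided in this sketch
    return Verdict::accept;
  }
  if (p.src_ip == bastion_host && (p.dst_port == 22 || p.dst_port == 443))
    return Verdict::accept;
  return Verdict::drop;
}
```

Because the rules compile down to straight-line branches, adding more of them costs the branch predictor very little, which is consistent with the flat throughput line in the graph.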

When Are Unikernels Relevant?

I'm trying to get to a conclusion here. So when are Unikernels relevant? I think there are a couple of things. For me, currently, the most exciting thing is the predictability. We're able to build predictable systems that perform the same operation again and again, without this long tail of latency. We've looked at systems that are using FPGAs today, where we can perhaps replace the FPGAs because we are as predictable as FPGA-based systems. Because these are small operating systems, we can do all kinds of weird stuff with them. We can turn off interrupts; we don't actually need interrupts in order to run. And likely there's nothing else happening on that core, so we might as well poll.

It's also quite performant, specifically when you use these tricks. I think that's interesting. I think security could be an interesting thing as well. It used to be that our software was here and our infrastructure was there. There's this thing now where we are embedding software into our infrastructure. There are people building houses that ship with a Linux box inside them. That house is estimated to last 30, 40, 50 years, with the Linux box in it. And that thing controls power, heating, maybe the door locks. I think that's great and everything, but would I buy a house that was controlled by a 25-year-old Linux computer in the basement? I'm not necessarily sure about that. Of course, I have no idea how well IncludeOS will stand up to 25 years of scrutiny. But I suspect it will do better than the alternatives because of the fundamental immutability of the design.

Of course, there will be denial of service vectors against it, which would be unfortunate if you can't get in and it's minus 15 degrees. But at least an attacker won't necessarily have the ability to make some of the power relays jitter so that you get power spikes that make a relay catch fire and burn down your house. At least that would be a lot harder. Security, I think, is an important thing. And some people like this: there's no kernel/user boundary. I struggle to come up with a good example of why that is relevant. There's nothing keeping you from snooping raw Ethernet frames; if that's what you want to do, it's super simple. If you want to hook in something that processes the raw Ethernet frames as they come off the wire, it's super simple. If you want to build a system that consists of two machines on the same network with a good interconnect that cooperatively work on the same TCP connection, I have no idea why you would do such a messed up thing, but it's possible, because there are very few boundaries on what you can do.

Now that I've come to the end of the talk, I should of course mention that you can interrupt me with questions anytime. Here's some blue sky stuff, since we have those 10 minutes. We have this internal concept, which is just a concept at the moment, that we call Shadow kernels. Shadow kernels are basically the ability to run multiple Unikernels on a single VM. We could do some really interesting stuff when we start doing that; we can prop up security even further. The idea here is to load kernel zero, Unikernel zero, which runs in privileged mode, ring zero. What that does is boot up another kernel on another virtual CPU. That one is unprivileged and runs in ring three. It's completely read only, and since it runs in ring three, it doesn't have the ability to modify its own page tables. It could also be running in whatever the next generation of Intel Secure Enclaves is, so that nothing else can snoop on it.

And then it will fire up another one, which is also non-privileged and also runs in read-only memory, which is the load balancer, or the one that actually takes TLS connections and terminates them. This is where your TLS keys reside; this is what talks to strangers on the Internet. And kernel zero is the only one that has the hardware capability to modify the others. We have write-but-not-execute on things. I think that would be an interesting thing. I'll skip this one. And I think I'll just take your questions, if there are any.

Questions & Answers

Participant 1: One thing I was wondering about was, I think, you possibly paused for questions at that point when you were talking about the live updates?

Buer: Yes.

Participant 1: The state in that case, was that just a state of where things were laid out in memory, or was it the actual state of the application?

Buer: That was the state of the application.

Participant 1: So when I do an update, say, well, I guess a proxy or cache is as good an example as any, it will come back up with the cache hot?

Buer: Yes. The thing is, you'd have to write a method that implements serialization so that the object is serializable. The operating system will just give it a pointer to where it should dump its data.

Participant 1: Yes. That's exactly what my question was going to be, because when you do an upgrade, what if the data structure changes?

Buer: Yes, it does. And of course, if you have breaking changes, you get to resolve them before it works, naturally.

Participant 1: Can I ask a quick second question? Different topic, but I guess, when you have everything laid out in memory, as you said, isn't that something of a security issue as well, because an attacker can figure out where things are in memory?

Buer: I don't think so. I mean, it's virtual memory, so I have no idea how things end up in physical memory. We used to have a really simplistic randomization, not ASLR, but a static one where we would mangle the address space a bit when we compiled. That didn't really add anything, so we removed it, but the linker still dramatically changes where everything is every time we rebuild. Things are fairly randomly placed. I think we will try to do proper ASLR at some point, so that every time we reboot, things come up slightly different. I don't see it as a vector.

Participant 2: What's the relation of IncludeOS to containers? Will it be possible in the future to run containers like Docker or Kubernetes on the OS, or would you run the OS within a container?

Buer: So the question was how we relate to containers. When we started out, there was this infamous blog post about how Unikernels would kill containers in five years. Then Docker had this knee-jerk reaction and went and bought Unikernel Systems. One way of looking at it is the control plane versus the compute plane, but another way of looking at it is that Unikernels are perfect for predefined systems: I want a system that behaves like this, this, this, this, and this. This is what I want, now I build it. That's very much akin to LinuxKit, by the way.

So these very, very generic tools like containers are runtime-defined, and they rely a lot on tools like Chef and Puppet. And as for us trying to catch up with Linux and be compatible with Linux: I don't think there's ever been a successful operating system that tried to emulate another. OS/2 died. One of the reasons it died was that it had this brilliant Win32 emulation that allowed vendors to just skip writing support for OS/2, because it emulated Windows practically perfectly. FreeBSD emulates Linux; it's probably not going to win that way.

And I know that there are efforts, I think [inaudible 00:45:23]. I think it's really interesting the way they try to take a Linux ELF binary and create a Unikernel around it. That's technologically impressive. But what I'm afraid of is that being 99.9% compatible with another system is probably not going to make it. So for us, I think, it was important to find the exact things that we can do: we can do this, Linux can't do that. That was the goal of this talk, to try to share that experience with you. If you want predictable, ultra-low latency, you can try to do it on Linux. It's going to be painful, it's going to cost you a lot of resources, and 1 in 10,000 transactions are still going to be hit over the head with a baseball bat.

Participant 3: That's a good lead into my question. Where in the real world do you see actual pickup or interest? Is it, as you mentioned, build time versus runtime? Is it load balancer or firewall builders, or the high frequency trading people? Who expresses real interest in it?

Buer: I think the high frequency trading people are the ones that express real interest. We've tried doing the firewall and there was some limited interest in it. First and foremost, it's super neat the way we do firewalls with this live update thing. But the only reason it's neat is because I tell you what's happening on the backend. If you just saw the system, gave it another set of rules, and it changed the rules, that's not really that impressive. But if I tell you that we built a new operating system and hot swapped your operating system, it's a lot cooler. But that doesn't really give you any business value. So there's been a lot of work on figuring out exactly what we can do that you can't do on other operating systems. Yes?

Participant 4: Yes. It's kind of tied into who is going to use you.

Buer: Can I supplement that a bit?

Participant 4: Yes, sure.

Buer: I think HFT is one thing. It might also be telcos, because there are lots of really latency sensitive applications in there. I thought that in really big data centers, if you have more than 100,000 cores, there might be latency sensitivity as well, but I'm not entirely sure. If you have more than 100,000 cores in a single network and you have latency problems, I'd like to hear how that manifests itself. But I think the second thing is going to be appliances. Appliances, IoT, or whatever you call appliances these days, because they do one thing: "Do one thing." And Unikernels have actually been around forever, just as microcontroller systems; there's FreeRTOS and other things, where the library operating system is built into a single image. My 3D printer at home, for all practical purposes, runs a Unikernel.

It's just that as people need more and more CPU power and need to leverage GPUs, microcontrollers aren't going to cut it anymore. And hopefully, some of those people who jump over to a CPU based platform will want to retain the control that they used to have over their microcontroller systems. I think that could potentially be where we go. That was a very long-winded answer, sorry.

Participant 5: I spent the last half hour trying to figure out what to compare you to. Should I compare you to a container, or should I compare you to the JVM? I think the JVM is a bit fairer. Sure, you end up being polyglot; compared to Graal, it would actually be multiple languages supported. So do you have any big advantage other than, obviously, speed and the predictability?

Buer: The security. Yes.

Participant 5: Yes. Security needs to be proven though, but you definitely have the advantage that to attack your system, someone needs to figure it out first, obviously...

Buer: Yes. That's just true.

Participant 5: So is it mostly speed that ends up being ...

Buer: I think predictability is much more important than speed, actually. Currently, we're not real-time capable. Basically, that is because for an operating system to be real-time capable, you need to have interrupts, and you need to be able to run code immediately, like throw whatever is running on the CPU away and then put on the brakes; literally put on the brakes, because if not, you're going to kill that poor lady that's being detected by the LiDAR. I think the stop sign is up, so if there are any more questions, please come forward and talk to me. I really, really like to hear your questions. I'll be here until Wednesday, so if you have other questions and see me later, please come and find me.

 


 

Recorded at:

May 10, 2019
