





A Journey into Intel’s SGX


Summary

Jessie Frazelle takes a deep dive into Intel's SGX technology. Frazelle covers an overview of computer architecture as background, then walks the audience through one version of the hardware and its flaws, as well as what changed in the next version.

Bio

Jessie Frazelle is an infrastructure engineer at GitHub. She has served as a maintainer of Docker and a contributor to runc, Golang, and other open source projects. She loves building systems and is typecast as the person who runs everything in containers. She is writing a book on eBPF with David Calavera.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

[Note: Please be advised this transcript might contain strong language]

Transcript

Frazelle: I'm Jessie [Frazelle]. I'm going to talk to you about this journey I took into Intel's SGX, the hardware module that does secure execution. It's also called a secure enclave, and I'll go over what that means. A little bit about me - I'm currently unemployed, maybe doing consulting, maybe not, I haven't set up the whole LLC thing. A little bit about how this all unfolded and a little bit about enclaves. For those that don't know, a secure enclave is something that's run in encrypted memory. The original use case was actually DRM, for the likes of Netflix or Microsoft, so you could store certificates in there and have your digital rights management in this secure place. It's pretty cool for that. It's made by Intel, ARM has one as well that does the same thing, and other manufacturers have things too.
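[Editor's note: to make "code running in encrypted memory" concrete, here is a minimal host-side sketch in the style of Intel's SGX SDK. sgx_create_enclave() is the SDK's real entry point for loading an enclave; the enclave image name, the generated enclave_u.h header, and the ecall_seal_secret ECALL are hypothetical placeholders for illustration only.]

```c
/* Minimal host-side sketch, assuming Intel's SGX SDK (linux-sgx).
 * "enclave.signed.so" and ecall_seal_secret() are hypothetical;
 * sgx_create_enclave() is the real SDK call for loading an enclave. */
#include <stdio.h>
#include "sgx_urts.h"
#include "enclave_u.h"  /* untrusted proxies generated from the EDL file */

int main(void) {
    sgx_enclave_id_t eid;
    sgx_launch_token_t token = {0};
    int updated = 0;

    /* Load the signed enclave image into EPC (encrypted) memory.
     * Note that launching still goes through Intel's launch control. */
    sgx_status_t ret = sgx_create_enclave("enclave.signed.so", 1 /* debug */,
                                          &token, &updated, &eid, NULL);
    if (ret != SGX_SUCCESS) {
        fprintf(stderr, "enclave launch failed: 0x%x\n", ret);
        return 1;
    }

    /* ECALL: transition into the enclave; the argument is copied across
     * the boundary and the secret then lives only in encrypted memory. */
    ecall_seal_secret(eid, "my TLS private key");

    sgx_destroy_enclave(eid);
    return 0;
}
```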

How I started looking into this at first was for the use case of running a container in there, but I think it's almost better to tell the story and start with this really good paper, "Intel SGX Explained." If you want to get into this yourself, I would say start here. The cool thing is that the writers, or one of them, I'm not sure which, I forget the story, wrote this 118-page paper, which is crazy, that goes over not just SGX but computer architecture, because they were trying to prove that SGX was bad so that they could then go make a secure enclave for RISC-V, which is what they're working on now that they've finished this 118-page paper.

That's really cool, but what's also great about this paper in general is that half of it is just on computer architecture, because to explain SGX, they first had to explain all of computers, which is a little bit insane. If you ever want to know about all of computers and SGX, you've got it all in one paper, but you have to read 118 pages - that's pretty cool. They have some really good diagrams, and I'll go over some of them. This is what secure remote computation looks like, and it hasn't been solved. This is the problem where you are trying to run something and you trust your data, your computer, your verification, all that, but you don't trust the infrastructure owner - think of the cloud, say. This problem hasn't been solved, and it isn't what SGX solves, basically, is what they're trying to say.

But what they do solve is trusted computing, where you trust the manufacturer of the hardware, in this case Intel, but you don't trust the cloud provider, like Microsoft or Google or Amazon. You want to give some code to this cloud provider and run it there, but you're like, "I don't trust them, so I'm going to use this Intel license that I have and run it in SGX." So that's what this ends up looking like: you're on the cloud infrastructure, but you're running in the trusted hardware in encrypted memory. This is the dream, but it isn't actually reality, as it turns out, and that's what the paper is trying to show. I'll go over all the problems with it, but this is the overall problem they're trying to solve, which is cool. Again, if you think about it, in the case of DRM it's still the same, it still looks like this as well.

This is how the software privilege levels pan out. You start with really privileged: you're in the BIOS, you own everything. That's also where the Intel Management Engine is, which people seem to be paranoid about. I also am now, because I read all these papers about that too. Never read things, I guess. Also, I'm not an expert in SGX. I just read a bunch of shit and entered the rabbit hole and I couldn't get out. So this is me taking you with me. Maybe we can all get out together. That'd be cool. After the BIOS, you have your hypervisor, and then it goes up and you're in the kernel. On top of that is your application, and SGX. So you're in the less privileged space, or so you think. All these things are "so you think" until you realize that everything is vulnerable and broken.

How I got into SGX and how this all started - like I said, it was originally for DRM. But then Microsoft Research, this small group of people inside Microsoft, were like, "Let's run some code in there." They wrote this paper called the Haven paper. The way it sets up running code in there is that they just shoved an entire Windows OS into the enclave, which is a little bit insane. If you think about it, if Windows has a bug, your bug still exists inside the enclave, which kind of sucks, but this is cool in that no one had done it before. They ignored the original use case entirely and said, "We're going to shove an entire OS in there and do code execution," which is pretty cool.

Challenges in the Design

There are a lot of challenges in the design of trying to execute code in one of these, and you have to make a lot of trade-offs. Now you see there are a bunch of different ways that people are going about this, and there have been a bunch of different papers that I've read, and they all make these trade-offs in different ways. If you're thinking of using this, you have to keep all these things in mind, which is crazy. You want to keep the code small. (Sorry, I just realized that the slide was one behind me every single time.) In the case of shoving an entire OS in there, that's not really a small trusted base. You want to keep the trusted computing base small, with very few dependencies, so that you lessen your risk of actually having a vulnerability inside your code base. If everything's inside the sandbox, then it's not really a sandbox, kind of.

Then there are performance issues. One of the papers, SCONE, solved this by running an actual container inside the enclave, and that's how I got interested in this, because I was like, "Whoa, containers inside the enclave. That's cool." But the way they did it wasn't like Haven; they didn't shove an entire OS in there. They decided to leave and come back from the enclave. So you have your host OS, which is untrusted - especially if you're saying the cloud is untrusted, your host OS is also untrusted - but the enclave is secure. Then you're like, "I'm going to take the syscalls and run them back in the untrusted OS." At that point, you have to do a bunch of stuff which I'll get into, but it's a little bit insane, and you also have to encrypt every single IO call going in and out of this enclave. And then also, where is your boundary if you're popping in and out of the enclave? Either you put everything in there, or you put half in there. It gets very complicated.
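[Editor's note: here is a toy sketch of that syscall-relay idea, not SCONE's actual code. The "enclave" side never issues a syscall itself; it puts a request into shared (untrusted) memory and a host thread services it. In the real design the buffer would be ciphertext, as the next section discusses.]

```c
/* Toy sketch of SCONE-style syscall relaying (not SCONE's actual code). */
#include <pthread.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

typedef struct {
    long   sysno;         /* e.g. SYS_write */
    int    fd;
    char   buf[4096];     /* would be ciphertext in the real shield layer */
    size_t len;
    long   result;
    int    ready, done;
} syscall_slot;

static syscall_slot slot;  /* lives in untrusted shared memory */
static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

/* Host-side worker: executes syscalls so enclave threads never exit. */
static void *host_syscall_thread(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&m);
        while (!slot.ready) pthread_cond_wait(&cv, &m);
        slot.result = write(slot.fd, slot.buf, slot.len); /* real syscall */
        slot.ready = 0;
        slot.done  = 1;
        pthread_cond_broadcast(&cv);
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

/* "Inside the enclave": enqueue the request and wait for the response. */
static long enclave_write(int fd, const char *data, size_t len) {
    pthread_mutex_lock(&m);
    slot.sysno = SYS_write;
    slot.fd    = fd;
    slot.len   = len < sizeof slot.buf ? len : sizeof slot.buf;
    memcpy(slot.buf, data, slot.len);
    slot.done  = 0;
    slot.ready = 1;
    pthread_cond_broadcast(&cv);
    while (!slot.done) pthread_cond_wait(&cv, &m);
    long r = slot.result;
    pthread_mutex_unlock(&m);
    return r;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, host_syscall_thread, NULL);
    enclave_write(1, "hello from the 'enclave'\n", 25);
    return 0;
}
```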

Memory pages for an enclave live inside the enclave page cache, which I'll also get into, because it turns out that if you were to patch your kernel for Spectre and Meltdown, the patch doesn't apply to where memory is for the enclaves. So you're still vulnerable to Spectre and Meltdown, because kernel page-table isolation doesn't work for it, which is horrible. After a cache miss, the cache lines are fetched from memory and decrypted.

The threat model for this: in the world of container runtimes and everything else today, once someone has superuser access to the computer - in this case, the cloud - it's game over. That's also the whole premise of not trusting the cloud: you're running the VMs, and I don't know if I trust you. But the promise with SCONE, or anything else here, is that it's not game over if someone has access to your host environment, or the environment that you're running in - or so the promise goes. All these things assume that an attacker has superuser access. They also have access to your physical hardware - if you think of the cloud providers, they do. Cloud providers control the entire software stack, and they can run privileged code, like the hypervisor and all that.

Design Trade-offs

What it doesn't cover, and what SGX in general doesn't cover, is denial-of-service attacks and side-channel attacks, like timing and page-fault attacks. So you could consider Spectre and Meltdown to be in those as well. The SCONE paper also goes over the design trade-offs made in the Haven paper, which was the first one, from Microsoft. So I'll go over those.

This is what the Haven design looks like. You have your untrusted host OS, and then everything in the trusted part is basically your entire OS - they put the entire OS in there. SCONE instead tried to do this weird shielding layer, which I'll get into next. Haven tried to minimize the external interface by placing everything inside the enclave, but that's the trade-off you're making against having a very small base. Then, if you start trying to pull more things out, you're like, "Maybe let's have the C library in the untrusted part and then we'll have some shim code." And then you're like, "Well, if I start going outside the enclave, everything will be decrypted and you can read it anyway, right?" That kind of defeats the whole point of having this all encrypted and good.

What they ended up doing for the SCONE paper is they made this shielding layer where everything gets encrypted as it gets passed back into the host OS. This is super slow. Even executing inside the enclave, like in the Haven paper, is slow, but passing the syscalls outside to the OS is just super slow. It's really a trade-off of what you want when it comes to that. So, anything that's doing IO needs to be encrypted, and you also can't fork, and you can't use a lot of syscalls that you'd want to use. System calls performed outside of the enclave are super expensive, memory page faults have a huge overhead, and L3 cache misses were 12 times as slow, which is just absurd.
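[Editor's note: a minimal sketch of the shield-layer idea - every buffer crossing the enclave boundary gets authenticated encryption first. OpenSSL's AES-128-GCM stands in for whatever cipher the real shield uses; the fixed key and IV here are demo-only assumptions and must never be reused in practice.]

```c
/* Sketch of shield-layer encryption for IO leaving the enclave.
 * Build with -lcrypto. Key/IV handling is deliberately simplified. */
#include <openssl/evp.h>
#include <string.h>
#include <stdio.h>

int shield_encrypt(const unsigned char *key, const unsigned char *iv,
                   const unsigned char *pt, int pt_len,
                   unsigned char *ct, unsigned char tag[16]) {
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len, ct_len = 0, ok = 0;
    if (ctx
        && EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv) == 1
        && EVP_EncryptUpdate(ctx, ct, &len, pt, pt_len) == 1) {
        ct_len = len;
        if (EVP_EncryptFinal_ex(ctx, ct + ct_len, &len) == 1) {
            ct_len += len;
            /* The GCM tag lets the enclave detect host tampering. */
            ok = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) == 1;
        }
    }
    EVP_CIPHER_CTX_free(ctx);
    return ok ? ct_len : -1;
}

int main(void) {
    unsigned char key[16] = {0}, iv[12] = {0};   /* demo only */
    unsigned char ct[64], tag[16];
    const char *msg = "write() payload leaving the enclave";
    int n = shield_encrypt(key, iv, (const unsigned char *)msg,
                           (int)strlen(msg), ct, tag);
    printf("ciphertext bytes: %d\n", n);
    return 0;
}
```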

This is what they came up with. It shows that you have these queues where the syscalls get added, and then the responses get passed back over to the actual enclave. Then they have all this crazy ass software where they basically reinvent all of computing inside the enclave just to do this passing back and forth. I don't know, it's just a bunch of software that you can't trust. I mean, they wrote it, so they trust it, but anyone else who uses it is like, "Do I trust this?" I don't know. This probably has vulnerabilities too, so it's really something you've got to think about.

There have been further iterations on this, and one of the cool ones was by Joanna Rutkowska - I don't know how to say her name, but she's really well-known in the community - who wrote a series of blog posts on the trade-offs she was making. She went over a lot of those pain points that I went over earlier. What they ended up doing was actually more in line with the Haven paper, in that they shoved an entire OS in there, because they wanted to put more emphasis on creating the boundary around the enclave and making sure that nothing got out. I think that's a really good trade-off to make at the end of the day, because if you can trust all the code that you put into the enclave, that seems fine.

I almost feel like it's way more insecure to do the toss back and forth, because that's a lot of data being transferred, and maybe you can't always guarantee it's secure against any sort of side-channel or timing attack. Although it's not like enclaves even protect against that anyway, but you know. So that's their project, which is pretty cool. And she works at a startup that does all untrusted cloud computing, and they pay for it with Bitcoin. They have a real use case for this.

The Weird Thing about Launch Control

There's another thing about SGX that's super weird, and this is also when I was starting to dive into this again: I got super nerd-sniped by it, which is not normal, because it's this really weird detail. It's called launch control. In order to actually launch an enclave, you need an Intel key. A lot of these cloud providers are creating software for allowing people to launch enclaves - Google has one and Microsoft has one - but you also need the Intel license to actually launch it. That ties you into a hardware vendor, and you have to go through this whole launch control step. So at the end of the day, everything's going through Intel, and developers and everyone kind of freaked out about that.

One of the main points in the paper was that even if Intel were to fix all the bugs and make SGX secure, you'd still have this problem with launch control, where they're controlling the market for this thing. That's really interesting to me, because if you're going to make a product, which is what these cloud providers are doing around SGX, you still have this user interaction going through Intel, and if, say, your customer doesn't want that interaction with Intel's licensing, you can't get around it.

Intel fired back and made this thing called flexible launch control, which allows you to do this token process yourself to verify that you can spin up SGX enclaves. It's questionable to me - although I didn't really try it or anything - how this would solve the problem for the cloud providers, because they can't do this for a user; at that point, why do I trust the cloud if they're doing launch control for me? The user would have to do this flexible launch control thing themselves, and they'd have to clone this repo and follow all these steps and do all the things.

It's super weird to me that in order to actually launch an enclave, you need a whole Intel license key and you need to go through their whole process for getting a token, and then you can finally launch the enclave. The whole thing is super complex, and if you're going to build a product on this, that's asking a lot of customers. I'm not actually sure if they are doing that in reality, but it depends.

Another actually really cool use case of SGX is Signal, the app. Signal uses SGX to solve the contact discovery problem - figuring out which of your contacts use Signal and which don't, in a way that doesn't let Signal know who your contacts are. They wrote a whole paper on that as well, and their use case is super interesting. A lot of these use cases don't necessarily touch on the attacks on SGX in terms of side channels and stuff like that. It's not technically, I guess, part of their threat model, because in some cases it's not solvable, and there are a bunch of attacks, which I'll go over, that still haven't been solved with SGX.

Attacks on SGX

Let's see some of those. I tried to put these in chronological order, and I'm not sure if they are. Foreshadow is interesting in that it uses the same exact attack as Meltdown did. But the problem is, like I said, the enclave address space is different from your actual kernel address space, so the mitigations against Spectre and Meltdown don't work against this, because kernel page-table isolation doesn't cover the enclave address space. In the paper, they end up stealing secrets from inside the enclave, which in most cases is game over - you get the secrets from inside the enclave and you're done, everything's gone, because at that point they have your keys. But they actually took it a step further and ended up getting the attestation private keys for the enclave, which means they can then impersonate any enclave. They can be like, "Hey, I'm an enclave, put your data in me," and then they actually have all the control over what's happening. That's really bad. And they can create all these fake attestations and stuff like that. So they can basically also then become Intel. It's not good.
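[Editor's note: the primitive underneath these transient-execution attacks is cache timing - Flush+Reload. Here is a sketch of the measurement side only, on x86 with GCC intrinsics; it is a demo of the timing channel, not an exploit, and the threshold numbers are machine-dependent.]

```c
/* Flush+Reload timing sketch: flush a cache line, then time a reload.
 * A fast reload means something (e.g. transient execution) touched it. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t probe[4096];

static uint64_t time_load(volatile uint8_t *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);  /* timestamp before the load */
    (void)*p;                      /* the load we're timing */
    uint64_t t1 = __rdtscp(&aux);  /* timestamp after the load */
    return t1 - t0;
}

int main(void) {
    _mm_clflush(probe);            /* evict the line from all caches */
    _mm_mfence();
    uint64_t cold = time_load(probe);  /* miss: served from DRAM */
    uint64_t warm = time_load(probe);  /* hit: line is now cached */
    printf("cold: %llu cycles, warm: %llu cycles\n",
           (unsigned long long)cold, (unsigned long long)warm);
    return 0;
}
```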

Then it escalated even further than that. There's this other one, based on Foreshadow. Meltdown attacks can usually only read privileged data within their own virtual address space; they can't cross into other address spaces - until this attack, where you could actually bypass all of virtual memory and get every single thing that's in it, which is, again, absolutely insane. They got everything in the cache, not just their own virtual memory, so things are escalating pretty quickly.

Then there's another one. People started putting malware in these, so you can actually use enclaves as a feature and conceal your own cache attacks by running them inside the enclave. I got this paper from someone at Amazon, because I think a lot of the cloud providers are looking into this. And it's interesting to me that Google launched this kind of experiment event for hackers that runs from now until a month from now. So, if you want to play with SGX, you can submit your experiments to Google - although you can't actually run them in the Google cloud yet; you have to find somewhere else to run them.

But I'm just curious why they're doing that when you can't run it in Google cloud yet. It's almost like they're trying to see whether they want to actually implement this or not. Microsoft actually has SGX as a feature if you're in the beta, and Amazon doesn't have it yet, or they don't have an answer for it. So it was interesting to me that a bunch of the papers came from Amazon people. This was one of them, because they want to make sure that if you are using an enclave, you're not using it for malicious reasons, to conceal your own attacks on other things. Because if you can imagine running this in the cloud, having access to all of virtual memory, and then concealing your own attack in it - that would be pretty insane.

Another one is about cache attacks as well. This one just amplifies the actual attack - when you run it, it makes the attack more performant. Everything's better. So, there are a lot of hidden features in this technology that are not good; they're more like features for hackers. And then the last one that I saw come out was another malware one. This one claims to be the first malware one, but I'm pretty sure there was malware in the other ones before - maybe this one is just the first practical enclave malware. Using this as a method of attack when it's running in a cloud would give you access to a lot of privileged information in a cloud provider scenario.

My main takeaways from this talk are not that SGX is bad or anything, but it's actually interesting to me that things got so bad when the original use case was just DRM. This escalated so quickly after the Haven paper, when people started using it for remote code execution. It would be nice if Intel started iterating on the actual design of this, so that it was more realistic against side-channel attacks like Spectre and Meltdown, and also against a lot of these things that people are trying to do, because there should be a solution. If we go back to the beginning, to trusted computing, the solutions we have today were actually made for a different use case.

My hopeful outlook for the future is that the author who wrote the 118-page paper is now going to be working on a RISC-V architecture for secure enclaves. It will be really cool to see what he comes up with, because with RISC-V you can better design your hardware against certain things and choose the trade-offs you want. I think that will be a better answer than using Intel's - not only because of all the problems with the side-channel attacks, but also the licensing issues and having to deal with launch control. So that was my fun story, and I guess I will take questions now. I talked really fast.

Questions & Answers

Participant 1: I've tried to explain secure enclaves to people, and I tried to view them as software-defined trusted computing modules - except that Apple has the T2 chip or something that does a lot of the same things. I don't know, maybe there's speculation about alternatives to the pure software-defined methods of achieving the same things. What are the more specific things we're trying to solve? Are there other ways of doing it?

Frazelle: Yes, that's interesting. Because if you have your TLS keys or something like that and you want to store them somewhere secure, you would want a hardware security module - maybe just use one of those. At that point, that would be a way easier way to solve this problem, and I think a lot of people obviously use those; I've worked at companies that did. But when you're trying to run code execution, you need something that can do that. So I don't know - could you run code execution in a hardware security module? Maybe, depending on the actual implementation of it. I'm not sure, actually.

Participant 1: Yes, I think you could just pop a microcontroller in there and you could have ways of loading and unloading the code into it. So it could be like a RISC-V CPU or something.

Participant 2: Do you think it's realistic, really, in the long run to not trust your cloud provider, or do you think that's something you should accept as your trust boundary? It's a nice idea, but I can see so many practical areas where there could be problems with this.

Frazelle: Even when I worked for a cloud provider and they said they had this product coming, I was like, "But why?" It just doesn't make sense to me that someone's like, "Here I am, going to do this in the cloud," when they could just run it on their own. And if you're going to go so far as to say, "I don't trust Intel and this launch control thing, I'm going to do that on my own," then just host the service on your own. I don't know why you have to do it in the cloud; it's absurd. You're doing so much work, you could totally just host it yourself. A lot of the places that want this thing also have on-premise and do a kind of hybrid cloud thing. So I'm like, "Why are you doing it in the cloud if you also have on-premise? You have the place to run the thing." It seems really weird to me. Honestly, I'm very skeptical. I also haven't met a person who's like, "I can only run in the cloud," so I don't know.

Participant 3: Do you think there is an instruction set revolution coming? There's a lot of talk about RISC-V these days and other kinds of novel ISAs coming out. Is that something that cloud providers are going to start adopting, do you think?

Frazelle: Shout out to the RISC-V talk that's later in the day - you should go to that. I'm a huge fan, though. I want it to win, and for it to win, in my opinion, someone big is going to have to adopt it, so that it gets all the integrations into all the programming languages. Although a lot are picking it up now - I saw Rust and Go did, or are working on it. But it would be nice to actually know that Apple is putting this into their computers, so that all the developer tooling and all of that is going to work. I would love to see it, but from what I've seen from cloud providers and such, they're more into making their own thing, which is an anti-pattern. Sorry, I feel like I'm really bleak about the cloud right now, having just left a job there. I swear, it's not related. Maybe.

Participant 4: Is there anything in the ARM space that's comparable to SGX?

Frazelle: Yes. They have one, I forget the name.

Participant 5: TrustZone.

Frazelle: I think it has the same problems. TrustZone, I'm not sure. Someone was telling me about one of them. It's either AMD or ARM that has the same problems. They probably both do. Don't quote me on that.

Participant 5: You don't have to get permission from Intel to use it though.

Frazelle: No, see then that's way better actually for them.

Participant 5: Or from ARM.

Frazelle: That's good.

Participant 5: Do you kind of feel that there's this desire for hardware to fix all the problems - "Let's get a hardware solution for this problem"? It sounds better than "Let's get a software solution for these problems." Is this what's driving people to want to use these things?

Frazelle: I'm not sure, actually. I think most people would be like, "Is hardware better?" I mean, is it? It really depends on the use case. From what I've seen when it comes to firmware by Intel in general - I've heard from Trammell, who's a really big firmware hacker, that the Intel teams that work on firmware don't even talk to each other, which is a little bit insane. He's had to relay messages from one firmware team to another to say, "Your thing is broken." So I'm not sure the hardware vendors are necessarily better than anyone else. I mean, everyone has those problems in organizations, but it's weird.

Participant 6: You mentioned the cloud providers offering this for customers, which is the customer saying, "I don't want to trust you." Do you think the cloud providers have any interest one way or the other in being able to essentially run zero-knowledge - to say, "I would prefer that I don't even know what you're doing, so that I have plausible deniability and can reject government requests"? I'm thinking of things like managed databases; they're huge. Do you think cloud providers would actually prefer to be able to say, "I don't even know what you've got there," or is there a desire the other way, or is it just neutral - "Well, if customers want it, we'll give it to them, but we don't care one way or the other"?

Frazelle: I think it depends on the cloud provider. Azure, okay. I worked at Google and Azure, and my first week at Google I ran a Tor relay on the cloud, because I was like, "YOLO, I work here now." Not great. I got so many emails from people. I was like, "Oh, I'm going to be fired in my first week." I was just messing around, and they knew - there was monitoring on the network and stuff like that. And of course I was like, "I'm going to switch on all the ports. Even port 80 and 443." Usually with relays, you're like, "I'm smart, I'm not going to do that." No, I turned it all on.

They definitely knew. I did the same exact thing when I joined Microsoft, and they had no clue. For the longest time, they didn't know. I just ended up shutting it down because I was like, "Whatever." But you can also mine Bitcoin on Azure. I'm not sure if I should be saying this, but people have done it. Startups get this huge amount of cloud compute credits. One of them went under, and I guess they just used all their credits to mine Bitcoin, and they made a bunch of money in the end, even though they were basically bankrupt. They did it all on Azure, because they don't have any sort of systems that check that stuff, which is crazy to me. Maybe watch that, because I don't know what they're doing exactly with Hyper-V, but you have to think that at some point the neighbors are going to know that you're kind of stealing all the CPU, although maybe Hyper-V [inaudible 00:32:53] that. But honestly, I don't know.

It really depends on the provider and what they choose to do, because I filed multiple bugs internally and was like, "Look, maybe you all should stop these things I'm trying to do," and they were like, "We don't care." But then there are the actual cloud providers, like the one that Joanna works on, that really, really don't want to know. That's why they're doing everything inside enclaves - because they want plausible deniability against things. But real plausible deniability, not hands-off, "we give no shits." So it is interesting to see the different opinions among cloud providers and what they value. Maybe people should line up their values with the cloud providers', I don't know. Also, don't do what I do when you join clouds; that's a terrible thing. I don't know why I don't get fired.

Participant 7: My question is, from a realistic standpoint, is this problem worth solving? You're describing a super hard problem that's in hardware, which is really hard to iterate over, and what are the actual big use cases that you can't solve in any other way? Sure, you don't trust your cloud provider, but as you mentioned, you should probably run it on-premise if you have that. What are the big targets?

Frazelle: Well, trusted computing in general - any use case where you don't want the host to know what you're doing, because in any other scenario, if the host knows what you're doing, it's game over. I also am just a fan of hard problems. If I see one that's unsolved, I'm like, "Whoa, that's a hard problem." I mean, I want it to be solved just for closure.

Participant 8: There are also software approaches to the whole [inaudible 00:35:05] encryption thing, which has been this amazing dream for decades now - that you can do computations without knowing what the results mean, and those kinds of things. I mean, are those equally implausible?

Frazelle: I haven't looked into that, but now I will. Definitely going to.

Participant 9: You mentioned something that just piqued my interest. With most of the Spectre/Meltdown-type attacks, you can get the data inside your own virtual address space, at a more privileged level, but there are now some attacks where you can get to another address space. That was kind of scary.

Frazelle: You can get all the cache from all the address spaces, which yes, it's pretty intense. But I think it's only with SGX that you can do that.

Participant 9: So from all of the SGX enclave memory?

Frazelle: Yes.

Participant 10: SGX seems to be implemented in a mixture of hardware and software. One criticism was that there were thousands of pages of amendments to the Intel manuals when they released it. I mean, is it just too complex?

Frazelle: I think that's it. The first paper, the 118-page one - they wrote that before it was in the manual. So they basically reverse-engineered all of this, which I think is why they also needed the "let me first show you how computers work" part. But now that it's in the manual, yes, you have thousands of pages of trying to figure out how this thing works. It's just insane. It's way too complex. They probably could have gotten rid of a bunch of the complexity if they had known the use case was just, "run this thing." I don't know.

Participant 10: But then you could have only used it for DRM. It would have been boring.

Frazelle: Yes. Then people would've been like, “But I want DRM,” and I'd be like, “Really?”

Participant 10: Does anyone use SGX for DRM?

Frazelle: I don't actually know. I haven't seen those use cases, which is weird, because I didn't even know it was made for DRM until after I had done a lot of this. Then I met up with Trammell, who does firmware stuff, and he was like, "Did you know it was actually for DRM?" And I was like, "Whoa. That's a real thing." Then I was like, "Actually, I'm going to give them the benefit of the doubt on it being shitty, because that was not their original use case." So I decided to be a nicer person.

Participant 11: Do you think SGX will still exist in five years’ time? Or will Intel just can it, because it sounds fairly terrible?

Frazelle: Intel just got a new CEO, so I feel like maybe he's going to clean up all the messes. That's the dream. But to do that, he'd also have to can a bunch of things - their chips and all this. I just hope they iterate further and make things better, but the iteration process on hardware is going to be four years or whatever. I don't know how slow their development cycle is, but it's probably a long time before this gets to a place where it's actually usable, or people trust it.

 


 

Recorded at:

May 21, 2019
