
Making 'npm install' Safe


Summary

Kate Sills talks about some of the security issues using NPM packages, the EventStream incident that created a security breach in a package, and Realms and SES (Secure ECMAScript) as possible solutions to NPM package security vulnerabilities.

Bio

Kate Sills is a software engineer at Agoric, building composable smart contract components in a secure subset of JavaScript. Previously, she has researched and written on the potential uses of smart contracts to enforce agreements and create institutions orthogonal to legal jurisdictions.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Sills: Today I want to talk about making NPM install safe. Code has power, there's this great quote from the structure and interpretation of computer programs and it says, "In effect, we conjure the spirits of the computer with our spells." As programmers, we have tremendous power, and this power can be used for good and it can be used for evil. Code can read our files, it can delete our files, it can send all of our data over the network, it can steal our identity, drain our bank account, and much more.

My name is Kate Sills and I work for a startup called Agoric. There are two things that you should know about Agoric. Number one, we're building a smart contract framework that we hope can handle millions of assets, or millions of dollars in assets. Number two, we're doing it in JavaScript. This probably sounds like a terrible idea and we'll talk about how we're able to do that.

At Agoric, we're at this intersection of cryptocurrencies and third-party JavaScript code. We use a lot of JavaScript packages, and it turns out that this intersection is just really an A-plus target for attackers. It's like an Italian-chef-kissing-fingers type of situation. What exactly is going on here? NPM has some great stats, they say that there are over 1.3 billion downloads of NPM packages on an average Tuesday. That's a lot of downloads. JavaScript has this rich culture of code reuse. Here are some more stats from NPM: there are over 800,000 packages in NPM, making it the largest open source code repository in the world. The average modern web application has over 1,000 dependencies. If we look at something like create-react-app, which supposedly bundles together all of the dependencies that you need as a beginning React developer, that has over 1,700 dependencies, so there's a lot of code reuse going on here.

NPM has this great quote, they say that 97% of the code in a modern web application comes from NPM, and an individual developer is responsible only for the final 3% that makes their application useful and unique. We can think of all the time that's saved by not having to reinvent the wheel. That's hundreds of millions of coding hours saved. I think this is a really beautiful example of the kind of cooperation that humankind is capable of. We should be proud, NPM should be proud of everything that they've accomplished. We have this rich civilization of code, we have specialization, we have experts. We don't have to build everything ourselves. As long as we can rely on all of these package developers to be good people, we're fine. But not everyone is good, not everyone is good all the time, and people make mistakes. What happens when it goes bad?

When It Goes Bad

Using other people's code is risky, and it's risky because whatever package we install can do whatever it wants. There are no restrictions and we may not find out what happens until it's too late. In real life, we also have this rich ecosystem. We're able to buy and use the things that other people create. We don't have to grow our own food or sew our own clothes. Imagine if everything that you bought, all of the interactions that you had throughout the day, that coffee that you bought at the airport, imagine if that had the power to completely take over your life. That would be ridiculous, it'd be absurd, but that's the situation that we're in right now with JavaScript packages. We can't safely interact with the things that other people have made without it potentially ruining us.

Let's look at how authority works in Node.js. I'm going to focus on Node.js for a bit here. By authority, I mean the ability to do things. Authority in Node comes through imports, through require, and it comes through global variables. It turns out that anyone or anything can import modules, especially the Node built-in modules, and they can use global variables. The effects of this use are often opaque to the user. There's no notification that says, "Alert. Your files are being sent over the network." It's all opaque to the user. Imports can happen in dependencies that are many levels deep. This means that all packages are potentially risky: something that seems like it's doing something simple and confined could have malicious code that's actually doing a lot more. Node provides no mechanisms to prevent this kind of access.

To illustrate this, let's imagine that we're building a web application and we want to add a little excitement. Let's say we install this package, add-excitement, that just takes in a string and adds an exclamation point to it. If this notation isn't familiar, that's just a template literal, so it's adding an exclamation point to a string. Our "hello" turns into "hello!", this is very simple, it's just string manipulation. It turns out that add-excitement could actually be written like this. We have the same functionality, we're adding an exclamation point to a string. From the user's perspective, we get the same effects. "Hello" gives us "hello!", but we are also importing the file system. fs is the file-system module in Node, and we are also importing https, so both fs and https are Node built-in modules.

What we're able to do, or what this attacker is able to do, is to say "fs.readFile" and let's say that's our bitcoin private key file. It's able to read that and send it over the network. This code is simplified a little bit, but I think you can see what's going on here. Just through access to fs and access to https, any attacker, through the installation of any package, is able to read all of our data and send all of it over the network. In the cryptocurrency world, where we're dealing with bitcoin wallets that have private keys, this is a big deal. This is a problem.

Let's go over the steps to read any file. All we have to do is get the user or another package to install our package. Then step two, we have to import the Node built-in module fs. Step three, we have to know, or we can guess, the file path, and there are no penalties for guessing. That's it, that's all we have to do. When we look at this, we see that all we had to do was import fs, and then we were able to read the file and send it over the network.

A Pattern of Attacks

This actually is a pattern of attacks. You might have heard of the event-stream incident that happened last November. What happened there was that there was an open source JavaScript developer, Dominic Tarr, who had over a hundred open source packages that he was maintaining. A volunteer came up to him and said, "You know what? I would really like to take over the event-stream package. I'll take that off your hands." Dominic Tarr said, "Great. That sounds great." This volunteer made some changes and he actually added a malicious dependency, and that malicious dependency added to the event-stream package targeted a specific cryptocurrency wallet. What it did was that it tried to take the bitcoin private keys, send them over the network, exfiltrate them, and ultimately steal bitcoin.

We saw this pattern again just this month. The electron-native-notify package also had a malicious dependency that tried to target cryptocurrency wallets. It added a malicious package as a dependency, and it required access to the file system and the network. NPM was able to stop this attack, luckily, but I'm sure there are many attacks like this that we're going to see in the future, and if we don't do something to prevent it, we're going to see this pattern again and again.

Solutions

What are the solutions? One of the solutions is to write everything yourself. Believe it or not, there are actually companies out there that are doing just this in the cryptocurrency world because the stakes are so high. They are writing everything themselves, this is a real solution. Obviously, it's not a very good one and it's not very practical for all of us here. It's not very scalable, if I had to write everything myself, I would not get very much done, and we would lose the millions of coding hours that we've saved by reusing other people's code. I don't think this is a very practical solution.

Another solution that people have proposed is paying open-source maintainers, so at least there's someone responsible for the security of the package. I think this is a good solution, I'm not against it. Even when someone is paid to maintain a package, they still may be compromised, people make mistakes. I think we will still see that this attack will happen even if we have people who are responsible for the security of the packages. Then lastly, the solution that people recommend is code audits. Code audits are great, but they're not a panacea. They don't solve everything, there are things that code audits miss.

Here's some code courtesy of David Gilbertson. Does anyone know what this code does? Any guesses? What is this code actually doing? It's doing a fetch, it's doing a window.fetch, this is accessing the network. If we were doing a code review, we would probably never guess that. How it's able to do this is that the const i is actually "fetch" with every character shifted over by one. Then self is an alias for window. If we have thousands of dependencies and we're just trying to do a code audit, maybe we've even gotten better and we have manual tools that are grepping for certain things, we would probably never find this. Code audits alone are not going to solve our problem.
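The trick, as described, can be sketched like this (the shifted string and the helper are illustrative reconstructions of Gilbertson's obfuscation, not his exact code):

```javascript
// 'gfudi' is 'fetch' with every letter shifted up by one.
// Shifting each character back down recovers the real name:
const decode = (s) =>
  s
    .split('')
    .map((c) => String.fromCharCode(c.charCodeAt(0) - 1))
    .join('');

const i = decode('gfudi'); // 'fetch'

// In a browser, `self` is an alias for `window`, so
// self[i](urlWithStolenData) is window.fetch(...) in disguise,
// and a grep for "fetch" or "window" never sees either word.
```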

If we go back to the steps to read any file, we see that a lot of these solutions are focused on number one. They want us to only install packages that we trust, and they want us to be able to trust the open-source code maintainers. I think this is admirable and I don't want to stop people from doing this, but I think what we should really be focusing on is step number two and step number three. What would happen if the malicious code tries to import fs and it just can't? Or, it knows the file path, it knows the file name, but it can't use it? It just can't access those files. There's a great quote from Alan Karp that goes towards this point. He says, "The mistake is in asking, ‘How can we prevent attacks?’ when we should be asking, ‘How can we limit the damage that can be done when an attack succeeds?’" The former assumes infallibility, the latter recognizes that building systems is a human process.

What We Need: Code Isolation

If we were actually able to focus on preventing the import of fs and making sure that even if someone does know the file path or they can guess the file path, that it's of no use to them, how would we do that? What we actually need is code isolation. It turns out, through an accident of history, JavaScript is especially good at code isolation. JavaScript has a clear separation between pure computation and access to the outside world. If we severed that connection to the outside world, we can actually get rid of a lot of the harmful effects. This is not true of other languages, if you look at something like Java, it doesn't have this clear separation. As JavaScript developers, we're actually in a really good place here for being able to do this code isolation.

In JavaScript, we already have the concept of a realm. Each webpage is its own realm, and JavaScript that executes in one webpage can't affect JavaScript that executes in another. A realm is roughly the environment in which code gets executed. It consists of objects that must exist before code starts running. These are objects like Object, Object.prototype, Array.prototype.push. It also consists of a global object and global scope. Historically, one way that people have isolated third-party code was to isolate it in a same-origin iframe, but using iframes is clunky, and we need to be able to create realms without the overhead of iframes.

This was the concept behind the TC39 standards proposal called Realms. What if we could actually create these realms without the overhead of the iframe? Without the DOM? What if we could do it in a very lightweight and easy way? Creating a new realm creates a new set of these primordials, these objects that we get at the start of our code running, as well as a new global object and global scope. A realm is an almost perfect sandbox: whatever code is isolated in a realm is isolated from the rest of the world. It has no effect, or no ability to cause effects, in the world outside itself. Malicious code that runs in a realm can do no damage.

The Realms API allows us to create another type of realm, known as a featherweight compartment. Rather than duplicating all of the primordials - Array, Object.prototype, that sort of thing - a featherweight compartment just shares them, and this makes the compartment much lighter. This is a proposal before TC39, which is the JavaScript standards committee, and it's at stage two. This is the stage at which it's still a draft. We're looking for input on the syntax and the semantics and things like that. We're hoping to push it forward so that it actually gets put into the JavaScript language itself. Even though it's still at the proposal stage, there's a Realms shim, and you can use it now. The Realms shim has really been a team effort between Agoric, the company that I work for, and Salesforce. Mark Miller, JF [Paradis] and Caridy [Patiño] have been working on this, so you can use the Realms shim now.

Let's see if this will work. I want to show what happens when we try to execute that piece of code that we saw earlier inside a realm. We've stringified it, so let's just evaluate it to see what happens. That actually worked fine, it was just that the attacker URL wasn't defined, but that attack succeeded. Now, if we use Realms and we make a compartment, and then we evaluate that code in that compartment, it turns out that self, window, all of those things are just not defined. That fetch just doesn't work.

We have these featherweight compartments and we're sharing these primordials, but we still have a problem, because we're sharing these primordials and there's a thing called prototype poisoning. What prototype poisoning does is reset these objects that we get, these primordials, and set them to other values. Here's an example of an attack. Our attacker is taking Array.prototype.map, setting it to something else, and saving the original functionality. From the user's perspective everything looks fine, our array map is still working, no problems here. What it's actually doing in the background is that in addition to doing the mapping, it's sending all of the data in the array over the network.
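A minimal sketch of the poisoning attack described here, with the network exfiltration replaced by a harmless local array so the sketch can run safely:

```javascript
// Where a real attacker would send data over the network,
// this sketch just records it in a local array.
const exfiltrated = [];

const originalMap = Array.prototype.map;

// Poison the shared primordial: every array in the realm now runs
// the attacker's code whenever it calls .map().
Array.prototype.map = function (...args) {
  exfiltrated.push([...this]); // steal the array's contents
  return originalMap.apply(this, args); // then behave normally
};

// From the user's perspective, nothing has changed:
const doubled = [1, 2, 3].map((x) => x * 2);

Array.prototype.map = originalMap; // undo the poisoning in this sketch
```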

Prototype poisoning is a huge problem. To solve the problem of prototype poisoning, there's a thing called SES, or Secure ECMAScript, and you can think of SES as realms plus transitive freezing, or hardening. What this means is that when someone tries to make changes to the primordials, when they try to do the attack that we saw, when they try to set Array.prototype.map to something else, it just simply won't work. The object is frozen, you can't change the values.
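A rough sketch of what "transitive freezing" means. The real SES harden is considerably more careful; here the shared primordials are pre-seeded as already frozen, so the walk stops at that boundary instead of freezing them for real:

```javascript
// Walk an object graph, freezing everything reachable, including up
// the prototype chain. In real SES the primordials are already frozen;
// this sketch pre-seeds them into `seen` so the walk stops there.
const PRIMORDIALS = new Set([
  Object.prototype,
  Array.prototype,
  Function.prototype,
]);

function harden(value, seen = new Set(PRIMORDIALS)) {
  if (Object(value) !== value || seen.has(value)) return value; // primitives, cycles
  seen.add(value);
  Object.freeze(value);
  for (const key of Reflect.ownKeys(value)) {
    const desc = Object.getOwnPropertyDescriptor(value, key);
    if ('value' in desc) harden(desc.value, seen); // skip accessors in this sketch
  }
  harden(Object.getPrototypeOf(value), seen);
  return value;
}

const config = harden({ api: { retries: 3 } });
// Any later attempt to reassign config.api.retries now fails.
```

Plain Object.freeze would only freeze the outer object; the recursion is what makes the freezing transitive.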

Using SES is very easy. It's a package right now, so you can do npm install ses, and then you just require it. You call makeSESRootRealm and then you evaluate the code. You might have noticed in the video that I showed that you actually have to stringify the code. This is a developer-ergonomics issue that we're still working out, but you do have to stringify anything that you want to evaluate in a safe way. You might be saying, "Well, that's great, isolation is great, but my code needs to do things. It needs access to the file system, and it needs access to the network."

POLA

What do we do when our code actually does need a lot of authority? There's this thing called POLA, POLA stands for the Principle of Least Authority. You may have also heard this as the Principle of Least Privilege, but POLP doesn't sound as great, so we're going to stick with POLA. What POLA means is that we should grant only the authority that is needed and no more. We should eliminate ambient and excess authority.

What does no ambient authority mean? Ambient authority is easy access without any specific grants. In our example with the add-excitement package that we installed, we saw that for access to fs and access to https, because they were built-in Node modules, we didn't have to ask anyone. The attacker could just say import, or require, and it got that authority. That's an example of ambient authority. We also should not have excess authority. In the add-excitement example, the package didn't need any authority at all. It was taking in a string and it was returning a string. It didn't need access to anything, but it had access to the file system and the network. So if we were to rewrite this under POLA, it wouldn't have the authority to do any of that.

To illustrate this, let's use an example, a command-line to-do app. It's a very simple, pretty stupid app. What it does is add and display tasks. These tasks are just saved to a file, and it uses two packages, Chalk and Minimist. You might have heard of them, they're very widely used JavaScript packages, they have over 25 million downloads each. What Chalk does is add color to whatever you're logging to the terminal, and Minimist parses command-line arguments.

Here's an example of how you might use it, and as I said, it's very simple and pretty stupid. First, we want to add a to-do, pay bills, and then we want to add another to-do, do laundry, and we add a third to-do, pack for QCon. This is priority high, because it's really important. We're able to display it, and when we display it through Chalk, we see we have pay bills, do laundry, and pack for QCon is in red because it's really important. If we analyze the authority, things get a little interesting. Our simple command-line to-do app needs to use fs. It needs to use the file system because it's saving to the file and reading from the file. That makes sense. It also uses Minimist, and Minimist just takes in arguments. It's pretty much a pure function, it doesn't need a lot of authority.

Chalk, on the other hand, does something really interesting, and this is a very widely used NPM package. It uses a package, "supports-color," which needs to use process and needs to use os. process is a global variable provided by Node, and os is one of the built-in Node modules. These are both a little dangerous, so let's look at that. If you have access to process, then you can call process.kill. You can kill any process that you know the process ID of, and you can send various signals as well. That sounds great: this simple package that's just supposed to turn the color of our logs to green is able to do this. If we have access to os, we're also able to set the priority of any process. All we need to do is know the process ID and we can set the priority. This is crazy, but this is how the JavaScript package ecosystem works right now: things that we think are really simple can be doing all of these things.

If we want to enforce POLA, then we're going to need to attenuate access, and we want to attenuate access in two ways. We want to attenuate our own access to fs, and we want to attenuate Chalk's access to os and process. To attenuate our own access to fs, let's first create a function that just checks the file name. If it's not the file that we expect, if it's not the to-do app path, then we'll throw an error, that's pretty simple. We'll use this function when we actually attenuate fs. This is a pattern that you'll see again and again in this kind of attenuation practice. What it's doing is taking the original fs from Node and saying, "You know what? We don't need the whole fs. What we actually need is appendFile and createReadStream, we only need those two things. Not only do we only need those two things, we only need them for this one file."

What we're able to do is create a new object. We harden it using SES. That means that it's frozen, it can't be modified. It only has two methods, appendFile and createReadStream. Within those methods, it checks the file name. It checks to see if it's what we expect. If it's not, it throws an error. This is an attenuation pattern that we're able to use on our own access to fs. You might ask, "Why do we actually want to restrict our own access? That seems crazy. I trust myself." It turns out that people slip up; I may do something that compromises my computer. It's important to attenuate even your own access to things.
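Putting the pattern together, a sketch might look like this (the to-do file path is illustrative, and plain Object.freeze stands in for SES's harden):

```javascript
const TODO_PATH = '/tmp/todo.txt'; // the one file the app owns

// Throw for any path other than the to-do file.
const checkFileName = (filePath) => {
  if (filePath !== TODO_PATH) {
    throw new Error(`Access to ${filePath} is not allowed`);
  }
};

// Take the full fs and hand back a frozen object exposing only the
// two methods the app needs, and only for the one file.
const attenuateFs = (originalFs) =>
  Object.freeze({
    appendFile: (filePath, data, callback) => {
      checkFileName(filePath);
      return originalFs.appendFile(filePath, data, callback);
    },
    createReadStream: (filePath) => {
      checkFileName(filePath);
      return originalFs.createReadStream(filePath);
    },
  });

// In the app itself, this would wrap the real module:
// const attenuatedFs = attenuateFs(require('fs'));
```

Anything that receives the attenuated object simply has no readFile, no writeFile, and no way to touch any other path.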

How do we attenuate Chalk's access to os and process? First, let's solve the ambient authority issue. Instead of Chalk or supports-color just being able to import os and use process, let's actually pass those in. Let's make it a functional design here. Let's change Chalk to take in os and process, and let's change supports-color to take in os and process. It turns out that Chalk only needs os. This really powerful package only needs os to know the release, and release is just a function that returns a string identifying the operating-system release. The reason why it does this is because there's a certain color of blue that doesn't show up well on certain Windows computers. That's the only reason why it needs this authority. That means that we can attenuate this, and we can reduce it down to a much smaller authority. We can say that we're going to attenuate os: we take the original os module and we create a new object that just gives that string, that just gives the release.

We can do the same thing to process, but it does need a bit more of process. We see the same pattern. We provide it with platform, with versions, and so forth. There's something else interesting that we can do here: we can lie. We don't have to tell the truth. I have a Mac right now, but we can say that our platform is actually win32 and this dependency won't know the difference. We can virtualize things, which is another really important pattern in enforcing POLA.

If you've liked these patterns, if you've liked the attenuation and the virtualization, you might really like a programming paradigm known as object capabilities. The main message in object capabilities is that we don't separate designation from authority. It's an access control model that is not identity-based. You can think of object capabilities as a key: versus having a list of people who are approved to do certain things, it's like the car key.

What's really great about object capabilities is that they make it very easy for us to enforce POLA, and to do it in creative ways. We can grant just the authority that is needed, and we don't have to have complicated permission lists, we can just do it with objects. It also makes it really easy to reason about authority, because it turns out that the reference graph, what things have access to what, is the graph of authority. We can see that if this has no access to that, it doesn't have that authority, and we're protected from it. If you're interested in more on object capabilities, there's a great post by Chip Morningstar at habitatchronicles.com, and I really encourage you to check it out. Object capabilities allow us to do some really cool things in enforcing POLA.

SES as Used Today

I want to go over how SES is used today. It's a stage two proposal at TC39, so it's still in progress, but there is a shim for Realms and SES, and people have already started using it. First, Moddable. Moddable is a company that does JavaScript for the Internet of Things. Your Whirlpool washing machine might have JavaScript running on it, and they have an engine, XS. It's the only complete ECMAScript 2018 engine optimized for embedded devices, and they're the first engine to implement SES, or Secure ECMAScript. What's really cool about this is that because they have this security, because they have this code isolation, they can actually allow users to safely install apps written in JavaScript on their IoT devices. You can imagine you have your oven, your washer, your light bulb, and you're able to control all of those things with code that you write. The manufacturer can let you do that because it's safe, because they've isolated your code and it can only do certain things, like make the light change color. That's really cool, that's really exciting.

Another company that has been using Realms and SES is MetaMask. MetaMask is one of the main Ethereum wallets. What they're able to do is allow you to run Ethereum dapps without having to run an Ethereum full node, and they actually have over 200,000 JavaScript dependencies. You can tell that they're very eager to be able to isolate this third-party code and make it safe. What they've done is create a Browserify plugin that puts every dependency into its own SES realm. They also have permissions that are tightly confined to a declarative access file. If you use Browserify, check out Sesify.

Lastly, Salesforce, who has been one of the primary coauthors of Realms and SES, is using SES right now in their Locker Service. This is an ecosystem of over five million developers. Locker Service is a plugin platform: you can create third-party apps, and people can install them in Salesforce. Since this handles a lot of people's business data, it's really important to them that these third-party apps are safe. They're using a version of this right now in Locker Service.

I want to be clear about the limitations. This is a work in progress, we're still solidifying the API, we're still working on performance. There are definitely developer-ergonomics issues, you actually have to stringify the modules to be able to use them right now, and we're working on fixing that. It's still stage two in the TC39 process. That said, SES is able to provide nearly perfect code isolation. It's scalable, it's resilient, it doesn't depend on trust. It doesn't depend on us trusting these open-source developers. It enables object-capability patterns like attenuation that I think are very powerful.

Most importantly, it allows us to safely interact with other people's code, and we could use your help. Here are the three repos that, if you're interested, you can look at: there's proposal-realms at TC39, there's the Realms shim at Agoric, and SES at Agoric. Please take a look at these things, play around with them. We would appreciate and love your comments, your pull requests, all of that. We could definitely use your help.

Questions and Answers

Participant 1: With content security policy like report only, I can say, here are places where I've set a policy, but we're asking for something that's beyond, and that's going to trigger potentially problems down the road. Is there anything with SES where I could say, here are the things that are requesting file system or whatnot?

Sills: Yes. If I understand the question, it's: is there any way with SES that we can record the intention, or what we're allowing things to do? It's still a work in progress, but people have been building tools to be able to do that. There is a tool called TOFU, or Trust On First Use. What it does is create a declarative manifest of all of the authorities that all of the packages you're using actually use. When something uses fs, it says this package uses fs.

It's like a package-lock.json or something like that. You can see where, if a package does all of a sudden request more authority, well, first of all, it won't get it; it's restricted only to what's declared in the file. If you do want to grant it that authority, then, if you're doing a pull request or something like that, it'll show up in the diff, and you can say, "Hold on, do we really want this simple string function to have access to this?" People are working on that. It's still a work in progress, but if you Google SES TOFU, it should come up.

Participant 2: Thanks for your talk. I'm new to JavaScript, so maybe this question won't make sense, but how are some of the key capabilities of SES different from what we already have? I'm used to using Object.freeze to do something similar to harden, I guess. The second part of my question is, for the particular package that you talked about, Chalk, how do you figure out that the package itself only needs os.release without going through the package's code?

Sills: First, the question was how is SES different than Object.freeze? Object.freeze only freezes that particular object. JavaScript has prototypal inheritance, and Object.freeze doesn't go up the prototype chain to actually freeze any of the primordials or the prototypes, things like that. When we see prototype poisoning, doing Object.freeze doesn't actually prevent that. What you need is transitive freezing. You need to freeze that object, then freeze everything that it's connected to, and so forth, until you've frozen everything. Can you repeat the second question?

Participant 2: You talked about the Chalk package and that we could restrict its access to os.release by just virtually telling it that you're on win32 or something. How do I know that that's the only thing that it needs the os package for, without going through the code? Would that mean that I need to analyze the entire code of all the packages that I'm importing?

Sills: The question was, how do I know the authorities that the packages are using? Do I have to go through all of the code in that package, or is there a simpler way? We're still developing the patterns of use for these kinds of tools. I think this goes back to the TOFU tool that has been built. Let's say that you don't want to spend any time on security whatsoever, you just want to lock things down to where they are, so that when Chalk is using os, we don't actually give it os, we just give it os.release. Something like TOFU, something that creates this declarative manifest based on the authority that your packages are using, would actually be able to attenuate it to that point automatically, which is great.

The other thing that you can do if you want to try to really attenuate things further is that you can just try running it in complete isolation and see what fails. It's not a perfect solution, it'll take some time. You may have to rewrite the module, you may have to ask the maintainers to rewrite the module in a way that actually allows you to attenuate it. That is what you can do.

Participant 3: I'm not a JavaScript developer either, but I have some basic questions. You said there were three things we can do to prevent malicious code from running on our machine: one was to write the code ourselves, another was to pay the maintainers, and the third was to limit permissions. With NPM, we do npm install, add a dependency, and run. In other languages, for example Python, we do a pip install, add a dependency, and run. We don't hear this with Python as much as we hear it with NPM. What is Python or pip doing in this industry that NPM isn't?

Sills: The question was, we hear a lot about problems with npm install, but there are other languages with other package managers, so why don't we hear about attacks that go through them? I think it's a number of things. The first is that those languages and those package managers are not actually protected. They still have problems; they haven't solved the problem either. What's more, depending on the language, it may be harder to achieve this kind of code isolation. I mentioned Java, for instance: it doesn't have this clear separation of pure computation and outside effects, so it can be harder to achieve that kind of code isolation.

I think the reason why we hear about it so much in the news, why we hear about npm install attacks so much, is that JavaScript is so widely used. It's the most widely used language, and it has this culture of creating really tiny packages and creating a lot of them. There's a lot of code reuse, which I think is a really good thing; I don't think we should get rid of that practice. The fact that it's so widely used and the fact that projects have so many dependencies both contribute.

Participant 4: Was it possible for this to happen before? Is there nothing additional that they are doing in the industry to prevent malicious code from being [inaudible 00:40:00]?

Sills: Not that I know of, but I am not the subject matter expert on that.

Participant 5: I want to respond to that question for Java. Java does have protection: Sonatype, which is one of the large binary repository companies, monitors Maven Central and large feeder repositories. It's part of their business model to do it for certain customers, and that winds up protecting the ecosystem. While they can't prevent malicious code from getting into the repository, they can prevent it from spreading very quickly. I do not work for Sonatype, but it's a nice benefit.

Sills: Thank you.

Participant 6: Do you think that the security will be pushed to the package maintainers, to be able to say, "My package requires only these few things," and you can see that when you're installing? Or do you think it will be on the consumer to say, "These are the only things I want to allow," like an SELinux kind of thing? Or do you think it could be a combination of both going forward?

Sills: I actually think it will be both. There's this quote, and I'm going to butcher it, that says someone who can look into the future and see cars, that's great, but someone who can look into the future and see traffic is really looking ahead. I think you're really looking ahead here. I think there's going to be this burden on the package maintainers to explain the authorities that they're using, and it's going to be seen as really sloppy, just really gross, to be doing something simple and yet be using all of these authorities and allowing that kind of access, the ambient authority and excess authority that I was talking about.

I think we're going to see a programming practice come to pass where, if you're not doing this kind of attenuation, if you're not being careful about the authorities that you're using, then you're going to be seen as a bad developer. I think that's going to be driven by usage. We could imagine a world in which NPM highlights what kinds of authorities are being used. If you have a choice between two packages as a user, and one of them takes all of these authorities and the other doesn't, you're going to choose the one that's safer. I think different code repositories, whether it's NPM or GitHub, or what have you, can surface the information about the authorities that are being used, and that will help us make those decisions and push the process forward.

Participant 7: You answered my question with the previous question, but just elaborating on that: our web stack and our team are very Angular-focused. Do you see any potential for this type of security to be bolted into web frameworks like that?

Sills: The question was, will this kind of security be bolted onto web frameworks like Angular or React, that sort of thing? I haven't seen development towards that point, but I hope it will happen. If we see this pattern in the greater JavaScript ecosystem, then anywhere there's use of third-party code, whether it's a spreadsheet that uses JavaScript for manipulating cells or a plugin architecture like Salesforce's Locker Service, anything that could possibly use packages or third-party code, we're going to see a push towards practices like this. I hope so.

Participant 8: I tried Googling SES TOFU and it didn't work. I'm guessing I spelled it wrong. Can you either spell it or add more words?

Sills: Let's see. You probably did spell it right. It's just TOFU. I can try to provide a link in the slides afterwards.

Participant 9: I saw that class syntax was recently added to JavaScript, and TC39 approved it. Does that help with the abstraction and maybe writing secure code? Do you have any opinion on that?

Sills: The question was that classes were recently added onto the JavaScript standard, and does that help with code security? It's funny that you should mention that. Our chief scientist at Agoric, Mark Miller, who was previously at Google, is on the TC39 standards committee, and Agoric is part of the TC39 standards committee. Mark really hates class, for some reason. I'm not exactly sure why, but I can tell you from his perspective, it's not helping security-wise. I wish I could give a better answer, but we'll have to ask him.

Participant 10: It's still prototype within the [inaudible 00:45:09].

Sills: It's still prototypes under the hood, so that explains it. You could do something like prototype poisoning through class. It's obscuring certain attacks that could happen.
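The prototype-poisoning risk mentioned here can be shown in a few lines. The Wallet classes are contrived examples; freezing the prototype, which is what SES-style hardening does, is the mitigation:

```javascript
// Class syntax is still prototypes under the hood, so shared methods
// can be replaced ("poisoned") after instances already exist.
class Wallet {
  constructor(balance) { this.balance = balance; }
  getBalance() { return this.balance; }
}

const w = new Wallet(100);

// Code with access to the class can poison the shared prototype...
Wallet.prototype.getBalance = function () { return 0; };
console.log(w.getBalance()); // 0: every existing instance is affected

// Freezing the prototype blocks the overwrite.
class FrozenWallet {
  constructor(balance) { this.balance = balance; }
  getBalance() { return this.balance; }
}
Object.freeze(FrozenWallet.prototype);

const f = new FrozenWallet(100);
try {
  FrozenWallet.prototype.getBalance = function () { return 0; };
} catch (_) {} // throws in strict mode, silently ignored otherwise
console.log(f.getBalance()); // still 100
```

This is the kind of tampering that SES prevents globally by freezing the shared primordials before any third-party code runs.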


Recorded at:

Jul 29, 2019
