
Securing Serverless by Breaking in



Guy Podjarny breaks into a vulnerable serverless application and exploits multiple weaknesses, helping better understand some of the mistakes people make, their implications, and how to avoid them.


Guy Podjarny is a cofounder at Snyk, focusing on open source and cloud security. He was previously CTO at Akamai following their acquisition of his startup, and he worked on the first web app firewall and security code analyzer. He is a frequent conference speaker, and the author of "Responsive & Fast", "High Performance Images", and the upcoming "Securing Open Source Code".

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Hi, everybody, thank you for tuning in and for joining us. We will spend some time in the terminal and the browser, playing around with a live application. We will talk about serverless security; it is an annoying pair of words, because it can mean either doing security in a serverless fashion or securing serverless itself, so we call this talk securing serverless by breaking in.

And a little bit of context about me: I have a security background, I was in the cyber security part of the Israeli army, then I moved to the DevOps side, building performance tools, and then back into security, here at Snyk, with open source security. I will also be doing an AMA on open source security today.

Serverless Security: The Theory

I give a talk on serverless security that is more theory-focused, about what is left to protect. If you want a bit more structure around these concepts, you can check it out; it is on YouTube, and there is a blog post that complements it.

But today, we are going to get our hands dirty. We are going to use a demo serverless application that is vulnerable, that has a bunch of security mistakes in it. We will break into it, we will hack in, and we will walk through all sorts of security flaws: how we did it, the mistake that allowed it, and how you can avoid it. There are some bits that require audience participation, and I'm relying on it, and then we will do a summary and Q&A. Sound good?

Going Terminal

So, with that, we will go straight to the terminal, and we are going to start in the code. Let me introduce our serverless application. This is serverless, so it is not very visual; a lot of it is API-driven. I have a to-do application that has a bunch of functions; it is copied from the serverless framework examples and then modified just a little bit, and most of the vulnerabilities it comes with are, in this case, intentional. It is a very simple to-do application: you can create an item, you can list the items, you can get them, update them, delete them, and so on. Just a few basic actions around to-do lists.

So, is the font size big enough? I will zoom in more and see how much of this is excessive. I have some helpers set up for me. We are going to curl this little entry point, which gives us the list of to-do items, but we don't have any to-dos on the list yet. So I will curl and add a text in the JSON input, "call mom", very important, and add that item. And maybe I also want to, say, learn serverless.

And so, that is another to-do item. The application also has a nifty reminder capability we will talk about; maybe I want to learn serverless in two days, if that is sufficient time. So we will run it, and once we have done that, we will do the initial query again and see that the items were added. We have populated a very simple application. And in case you are wondering, this application, which is vulnerable and which we will break into, is not deployed on the Snyk network, but on my wife's Amazon account; she is vaguely aware of it. So, no Snyk servers were hurt during the making of this demo.

So we have the application, and it has a bunch of components. Like most applications today, this is a Node.js one; the same premise, and I would say the same concerns, are language-agnostic in the world of serverless, but JavaScript is the dominant language there, so my demos are JavaScript-related. It has a package.json and uses a bunch of dependencies. The first thing we are going to talk about is those dependencies. I will go back into the deck for this, so it is readable for you.

Vulnerable Libraries

The first security concern is vulnerable libraries. Serverless is serverless: you are not running the server, you are not imaging the server, so a whole class of security concerns, vulnerable dependencies at the operating-system level, are handled by the platform provider; AWS, Google, or Microsoft handle patching the servers. What they do not do is touch the Maven libraries and npm libraries you are deploying as part of the function, and that introduces a risk we will see in a moment.

I will show the scale of this; here is a simple serverless function from the examples. It fetches a file and stores it in S3, and it does so in 19 lines of code. You don't have to read it; the point is that you can do a logical action in a small amount of code. To help it out, it is using a couple of direct dependencies, and those dependencies use other libraries; there are 19 in there in total. Any guesses how many lines of code the application, including its dependencies, has? A little bit of participation here.




You are all such optimists. 191,000 lines of code. So there is a bit more code in the dependencies than in the original application. And, you know, that's not a bad thing. It means you can take advantage of 190,000 lines of code of value easily by pulling in these libraries; you don't need to write them, which is a good thing. But it has its concerns; we have a vivid example of this in the Java world, where Equifax was breached. Maybe you have heard of that; maybe some of you were called into the data center to talk about how to secure your data. They were breached through a vulnerable Struts 2 library, an outdated library they were using. If you have not updated your Struts library, you should. The vulnerability was disclosed in March, they were breached in May, and in July the leak came to light. It shows how problematic these libraries can be.

And serverless addresses the Heartbleeds and the Shellshocks of the world, but it does not protect you at the app level. So let's try to tackle this. This is the portion where I am going to use my own service, Snyk, to connect to Lambda, download the zip files, expand them, find the vulnerable libraries, and do something about them. So I connected Snyk, configured it with the credentials for access, and I will add all of the functions because I'm lazy. What this does is go off, download the zip files, expand them, see which libraries are inside, and intersect that with the vulnerability database to see if they are vulnerable.

And so, this is an interesting point about vulnerable libraries and their role in serverless. Application libraries as a whole are in a twilight zone between infrastructure and code: they are packages you put into the app and they are part of the function, but they are not something the platform provider thinks of as being in their jurisdiction, and as the consumer, you often don't think of them either; you just roll them out. In fact, Google and Microsoft can also build for you, Heroku or Cloud Foundry style: they will fetch the dependencies and build the code for you, and the result disappears from view. They build the zip file that is packaged up there, and you have no visibility into it. So you want to elevate that visibility.

Here is the project that got created, and it includes the render function. So let me introduce that function. We were playing in the terminal, but I want to make this accessible; I wanted customers to be able to browse the to-do list I created. This is the JSON view of the functions that I have; there is another function in there called render, and render is an HTML rendering of these to-do items. It is pretty, I know, you can stop. And in case it wasn't obvious, it is also a mobile-first application; I can add device=desktop, and I will get the desktop version of it. This is basically the extent of my CSS skills.

And so, to achieve this wonder, we are using a templating library; we don't want to write this ourselves. In the to-dos, we have a little templating library, Dust.js. It is a templating environment that was created by LinkedIn, then used at PayPal and a bunch of other services; it was one of the first to make a claim to fame for running server-side templates at the time. And there is a condition here that says: if device equals desktop, use this style, otherwise use the other style. That is all good and well. But somewhere lurking behind this is an eval statement that evaluates those two conditions.
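To make the mechanism concrete, here is a hypothetical sketch of the kind of template logic just described; Dust helpers like `{@if}` evaluated their `cond` expression with JavaScript's `eval()` under the hood, which is what makes the injection shown later possible:

```
{! hypothetical render template; the @if helper's cond runs through eval() !}
{@if cond="'{device}'=='desktop'"}
  <link rel="stylesheet" href="desktop.css"/>
{:else}
  <link rel="stylesheet" href="mobile.css"/>
{/if}
```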

And so, if we look at the report here, it has uncovered a bunch of libraries; there is a set of dependencies used by this application, some have vulnerabilities, and we will look at this Dust.js vulnerability. To be fair, Dust.js is a fairly secure library; it had this one security bug that allowed the eval to be exploited, and I will explain the vulnerability. It is not the core of what we are dealing with here, just the fact that it used eval under the hood. It had good sanitization, but missed some cases.

This is the application: device equals desktop. If I try to add a single quote here, to break out and get into the eval, there is a lot of protection against it. If I do this, it will not succeed; I just get the mobile version back again, maybe not that obvious. However, as it happens, this execution flow used a library called qs to parse the query string, and qs allows the client to control the type of the incoming parameter.

And so I can do something like this: device[]=desktop. On the service, nothing really happens; it just gives me a different value, since JavaScript converts the array to a string easily. But it turns out that if the value is an array, the sanitization in Dust.js, the real vulnerability, is skipped; it never asked whether the input was an array. And if I now put the single quote in, I get an error. So I will go back to the terminal; this is the foundation for the subsequent hacks. I have some helpers here for me.

And so this is just repeating the curl over HTTP, and getting us to the exception that we had. We will use dust4; there is a lot of text here, and I don't want to repeat it every time. What we will do is put that single quote in; after some experimentation, we have an encoded single quote in here, and then it does a console.log that says that Guy is here, very humble of me, and we run it. And we didn't get the exception; something ran. So if I go back to the CloudWatch logs over here and open the render function's logs, at the top, I will see that Guy was here. Now, if the attackers have access to your AWS console, you have other problems, so logging there is not useful for attackers. We will do what attackers would actually do: run JavaScript that sends data out to them. To do that, we will run another little server on the side and... of course, my SSH died. I will kill that. Got to love live demos; this demonstrates that it is a live demo. I'm going to roll with that, it was entirely intentional. There we go: SSH, netcat, okay.

And so we have a listening server over here, and we will do something more elaborate in dust5. We will require child_process and use exec from it; child_process is a native module in Node, so it is always available. Then we run a curl that sends /etc/passwd to the listener, if you can spot it amongst the encoding here. If you run this, voila. A little bit more work than the console.log. And this gets you into the land of exploring: in these Lambda instances, what does Amazon run, why are these users here, like the sbx users, and which other users exist? It is interesting to explore; I have wasted some time on that.

Cool. So this demonstration is to say: vulnerable libraries can hurt you, and you need to understand whether you have them. This is one example; vulnerabilities in software come in many shapes and forms, and we will use this one as a landing point to demonstrate a bunch of other flaws. So, to conclude: be aware of vulnerable libraries. And I would be remiss not to mention that Snyk is not serverless-only; it can connect to GitHub and other environments. Plug done.

Denial of Service

And so, so far, so good. That is one. The second problem we will talk about is denial of service; Jared mentioned this a little bit before. To talk about denial of service, we will look at another vulnerability, in another function. (Sorry, this heavy zoom is confusing me a little bit.) We are going to look at the create function, which I use to create my to-do items. In here, there is another vulnerable library, called ms. ms has a regular expression denial of service vulnerability, a.k.a. a ReDoS. That is just one type of denial of service, trying to make your server unavailable to other clients. Beyond DDoS, the distributed denial of service attacks that make your data center crumble, a lot of attacks are built around a single request that takes a very long time for your server to process.

In the world of Node, and especially in JavaScript, this can be done by keeping the event loop occupied. If you get an algorithm to run for a long period of time, it keeps the loop busy and the process does not serve other clients.

In the case of serverless, this is less of an issue, since each instance serves a single client, but we will talk more about that. The most common algorithm you run, though you don't think of it as an algorithm, is a regular expression. Regular expressions are algorithms; they are state machines that try to process some match on a string. And ms had a vulnerability where a crafted input makes the regular expression run for a very long period of time.

So we come back here, and we use another set of aliases. Once again, as we talked about before, we can do this type of action: learn serverless "in two days". That is what ms is used for in this application; it is a library that converts "in two days" into a number of milliseconds, and it uses a regular expression to do it. We will run it again; it runs quickly. What we do next is play a little with the string: we will send 60 fives in that field, and if we do this, it still completes quickly. The trick is that regular expressions, and state machines in general, typically take much longer to process input that does not match. If the input matches, they go down one path, succeed, and are done. The problem happens when the input does not match, and the engine needs to exhaust every relevant path in the state machine before it can conclude that there is no match.

So we are going to make a small change: we will change the last character, so the regular expression gets to the end and then backtracks, trying to decide whether that five should belong to the other group. We run it, it completes quickly; we run it a little longer. We add a zero, it runs pretty quick; add another zero, and another zero, and then it gets stuck.
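The mechanism, catastrophic backtracking on a failed match, can be shown with a toy regex. This is not the actual expression from ms, just the classic textbook shape of the problem:

```javascript
// /^(a+)+$/ is the canonical ReDoS pattern: two nested quantifiers that
// can split the same run of characters between them in many ways.
const evil = /^(a+)+$/;

function timeMatch(input) {
  const start = process.hrtime.bigint();
  const matched = evil.test(input);
  const micros = Number(process.hrtime.bigint() - start) / 1000;
  return { matched, micros };
}

// Matching input: the engine walks straight through and succeeds quickly.
const fast = timeMatch("a".repeat(22));
console.log(fast.matched); // true

// Break the match with one trailing character: now the engine must try
// every way of dividing the a's between the two quantifiers before it can
// give up, and that work grows exponentially with the input length.
const slow = timeMatch("a".repeat(22) + "b");
console.log(slow.matched); // false
console.log(slow.micros > fast.micros); // the failed match dominates
```

Each extra character roughly doubles the failure time, which is why adding a zero or two to the input length in the demo tips it from "quick" to "stuck".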

And so, after a few seconds, we get an exception; we will see it in a moment. This gets us thinking about the vulnerability in the context of serverless. What happened? First of all, the ReDoS ran: the extra compute happened, and the request took a long time to process. This was a vulnerability in a library, but it could just as well be in your own code. And then, on top of the ReDoS itself, a timeout occurred. If we go back to the logs, we have a different function now; this is the create function. We are going to open its logs and take a look.

If this loaded properly, we can see the sequence of activity that happened here. The first request took 44 milliseconds; the next one took 356 milliseconds; and the next one took six seconds. So what happened is: the ReDoS kicked in, the requests took longer and longer, and at six seconds Lambda cut it off; it timed out. In the meantime, if another client came along (it is tricky to demo this), then unlike a regular server, capacity is provisioned right away. So DoS is a case where serverless helps security: it reduces the risk of running out of capacity, because capacity is naturally elastic, provisioned for you as you need it, and that is true for good demand and for bad demand. The one catch is that you are still paying for these timeouts. A friend of mine coined the term "billing DoS": it denies access to your bank account by accumulating the bill. So you still care, you still need to worry about it, but it is not quite as severe as having your server taken down for real. So DoS is an interesting one; I demonstrated it with a vulnerable library, it could be in your code, and a ReDoS can be costly even though the situation is better than before.

Beware: Resource Exhaustion Attacks

And a couple of caveats; Jared mentioned resource exhaustion, which is a perfect tee-up. When you are running a serverless application, not everything around you is serverless. Some of the components you use as part of the serverless application are very stateful, have limits, and are not elastic. You can exhaust them and effectively still get a DoS. This might be DynamoDB, or some other third-party resource; I believe a talk later today mentions taking down a third-party API this way. You have to be careful, because these functions can scale up very, very fast, but they might live in a world where not everything around them can, so you have to worry about that. And that was finding number two. I think I'm good for time so far. So far so good? Okay, we will keep playing.


Secrets: the next one, everybody's favorite, I know. Secrets. So, let's go back to our remote command execution, because it is fun. We had done this before, right? Just to get set up again, we will get access, and that is cool: we have /etc/passwd, and some of us are happy. What can we do with this? When we are on the machine, we can start goofing around and looking at things. One thing we can do is a simple ls. So we can curl with ls -l of the current directory; I will run this, and it basically sends us the folder listing. And as we browse around, we notice this interesting admin folder. If you are an attacker, you look for keywords, and admin is a good one. The reason admin is over here is a set of functions that I have yet to introduce to you, which are these functions over here.

And so, these internal backup, internal restore, and admin API functions are, as you might have guessed, admin functions. This is a simple to-do application, there is not a lot of administration to do, but you need to back up and restore your database occasionally. These two functions, note, do not have an API gateway in front of them; they are protected by the admin API, and we are going to talk about that more in a second. The admin API has a secret (hint, hint, about what is going to happen next), and it protects them: it only allows the invocation of the restore or the backup if the secret is correct, using lambda.invoke. In a real-world scenario, this would go through SSH or some similar channel.
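The gate just described looks roughly like the self-contained sketch below. The function names and the secret are hypothetical, and a stub stands in for the AWS SDK's Lambda client so the sketch runs anywhere; the real handler would call `lambda.invoke` from aws-sdk:

```javascript
// Records which internal functions were invoked, instead of calling AWS.
const invoked = [];
const lambda = {
  invoke: (params) => invoked.push(params.FunctionName), // stub, not aws-sdk
};

const ADMIN_SECRET = "s3cr3t"; // in the vulnerable app, this lived in the code!

function adminHandler(event) {
  if (event.secret !== ADMIN_SECRET) {
    return { statusCode: 403, body: "forbidden" };
  }
  // Gate passed: fan out to the internal function. Note that the internal
  // functions' only other "protection" is the absence of an API gateway.
  lambda.invoke({ FunctionName: `internal-${event.action}` });
  return { statusCode: 200, body: "ok" };
}

console.log(adminHandler({ secret: "nope", action: "backup" }).statusCode);   // 403
console.log(adminHandler({ secret: "s3cr3t", action: "backup" }).statusCode); // 200
console.log(invoked); // [ 'internal-backup' ]
```

Both weaknesses exploited later are visible here: the secret is a constant in the source, and nothing stops a caller with Lambda permissions from invoking `internal-backup` directly, skipping the gate entirely.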

And we will continue, where was I, with dust7. We will start exploring: what we are going to do is cat all of the ./admin files and have them sent over to us. Many of these things could be done through a reverse shell that you set up; I'm trying to keep them as discrete, actionable steps. So we run dust7, and over here, I will zoom in a little, we have all of the admin files. If we scroll up, we see our secrets. Hooray.

So, what does that mean? This opens up a whole new can of worms. One: do not store secrets in code. Don't. You might think this is not really serverless-specific, and that is true, but serverless makes everything easy; it is a creature of convenience. And it is great: we can just deploy these functions and get them going. However, handling secrets properly is cumbersome, and you have to think about the mindset: when you are writing code, you want to run it, you don't care how it is operated, so it is tempting to put secrets in the code. Don't. It makes you very fragile if somebody gets to see it. I don't know if it needs saying again, but: avoid secrets in code.

Serverless Platforms Offer a Key Management System

And you should use a KMS. Lambda has environment variables that are encrypted using KMS, so all you have to do is set up a config value and it is stored encrypted. In the scenario where I could only list and leak files, not execute code, storing the key in a config item and making it available to the application as an environment variable would have hidden that key from the attacker, even with access to the files on the system.
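With the serverless framework, that move looks roughly like the fragment below. The names and parameter path are hypothetical, and the exact secret-resolution syntax varies by framework version; the point is that the secret lives in configuration, not in the source:

```yaml
# serverless.yml (fragment) - hypothetical names
functions:
  adminApi:
    handler: admin.handler
    environment:
      # resolved at deploy time; stored encrypted with KMS on Lambda's side
      ADMIN_SECRET: ${ssm:/todo-app/admin-secret~true}
```

The handler then reads `process.env.ADMIN_SECRET` instead of a constant baked into the code.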

However, we have remote code execution here, not just file browsing. So it is worth noting that, while not storing secrets in the code would have protected me against file listing, if I can run code, I can use a simple command here, env, and run it. If I scroll down, I get the environment variables; and for emphasis, I put a secret environment variable in there that can very much not be hidden. Don't let that discourage you: it means the attacker needs a more severe vulnerability, code execution rather than information leakage, to access the data. It is still a big step up in security to store this as an environment variable versus storing the secret in the code.

And so, so far... what happened to my mouse? Here we go. Fine: secrets, that was step three for us. The next thing we are going to talk about is granularity. I have presented this reality to you, that I have all these functions. But you might stop and wonder: why are these admin functions in the same serverless.yml as that render function? Why were these admin libraries, this code, even on my render function? It is a render function; it has nothing to do with admin. And the answer, again, comes back to convenience.

Why? Because it is easy. Look, it is just a single, simple application, and I have a single serverless.yml file. The functions run individually, and it saves me some cents on storage to deploy the whole thing as a single zip file. It is so easy to do. But easy is not secure. There is really no reason to deploy the admin code along with my render function. All I have done is expose it and increase the attack surface.

So keep that in mind; think about the granularity of your deployments. This is a very simple but very, very important point. Think about why these functions are bundled together. And remember, whenever you configure policies or deployments: bundling might be easier, but granular is safer. Think about each one of these functions as a perimeter; I will talk about that a little more later. Treat each one as an entity of its own; don't bucket it up with the others.

And so, shifting from the deployment aspect, let's talk about a different attribute of the same problem: permissions. I will do that through a demo rather than just talking about it. So far, I have extracted a lot of information. Just for fun, we can do this as well: we can run, and I don't know if you were quick enough to see it, require('aws-sdk').config.credentials, and curl that out. And that gives us the credentials, the access key and the secret; I will make it pretty. This is my wife's Amazon account; if you want to buy her a dress, you can. But this does not compromise a production service... just kidding, the credentials are very limited in scope. Though this video is recorded, so any attacker in the world can now target my wife's shopping habits. There is going to be hell to pay when I get back home.

Cool. And so this was just a bit of a distraction; let's talk about what we do next. As you may recall, the admin function protected the internal backup and internal restore, and it did so by this line: the way we protected these two functions was by not exposing them to the world. Right? We just didn't make them available. And that is not a very strong protection. We assume that because they are not accessible through the API gateway, an attacker can never get to them. But what we have achieved by doing so is exposing ourselves to the weakest link: anything deployed within reach of those Lambda functions can invoke them, circumventing that key. Say I could not get the key; I could still invoke those functions. We are going to do that, just for demonstration. This one is a little bit harder, so let me decode it for you. I think I can do this.

So we are running a fairly elaborate piece of JavaScript: we create a new Function, require the SDK, instantiate Lambda, and invoke internal restore. As an attacker, I would have needed some enumeration beforehand to know the exact name of the function, but this is exactly what the payload does. If we run it, nothing happens externally; but if we come back to the log groups, we can see that internal restore was just called. It was just invoked. And so what I have done here is circumvent the protection mechanism, which was "do not create an API gateway".


And, again, this is all too common, and it was only possible because of convenience. There is a whole set of permissions my render function shouldn't have had. It shouldn't have been within network reach of the backup function; it shouldn't have had permission to invoke another Lambda function; that was not part of its core functionality. A lot of those concerns are there. Maybe it shouldn't even have been deployed in the same VPC as the backup functions.

Use Granular Policies

And so you want to use granular policies. When you bucket things together, the policy becomes the union of everything they all need. But that is not where the problem ends. Once you attach multiple entities, and it does not need to be many, just a handful of functions, to a single policy, that policy can never shrink. You look at a permission in there, you don't know who is using it, so you keep it. That is how permissions work: they expand until somebody adds an asterisk, and they never shrink back.
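One way to express that granularity with the serverless framework is per-function IAM roles. Note that stock serverless.yml applies `iamRoleStatements` at the provider level, shared by all functions, so a per-function split like this sketch assumes a plugin such as serverless-iam-roles-per-function; all names here are illustrative:

```yaml
# serverless.yml (fragment) - illustrative names
functions:
  render:
    handler: todos/render.handler
    iamRoleStatements:          # read-only; notably, no lambda:InvokeFunction
      - Effect: Allow
        Action:
          - dynamodb:Query
          - dynamodb:GetItem
        Resource: !GetAtt TodosTable.Arn
  internalBackup:
    handler: admin/backup.handler
    iamRoleStatements:
      - Effect: Allow
        Action:
          - dynamodb:Scan
          - s3:PutObject
        Resource: "*"           # scope this down too in a real deployment
```

With a split like this, a compromised render function cannot invoke the internal functions at all, because its role simply lacks the permission.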

A Function is a Perimeter

Think of every function as a perimeter; this is what most people deploy, but you have to think of each one as a perimeter of its own. (I'm really happy with my graphics skills on this; it is not my forte.) This is core to the existence of serverless. Over time, we will deploy many, many functions, and those functions are going to be combined in very flexible ways, in many permutations; that flexibility is the core advantage of serverless. But inevitably, that implies that if we don't secure each of the components independently, we are going to create broken chains: functions that assume another function ahead of them did something that it did not. We have to treat every one of these functions, and I do mean every function, as an independent entity that needs to be independently secured and has its own perimeter.


So we have a couple more things to do; again, I think I'm okay on time, though I might use the full 50 minutes. The next thing I will tackle is immutability. The favorite myth of serverless is that there are no servers. To demonstrate the lack of perfect immutability, let me introduce another small piece of code. In my render function, for no good reason at all, I store every item that comes in into a file under /tmp. I'm doing it for no good reason here, but in practice the /tmp folder is often used for sensitive information, as you extract and convert data.

We have invoked render several times at this point; I forget which invocation number we are on right now, so let's see. We are going to run a simple ls -l on /tmp. We will see that, over the course of this session, we have created quite a few files. These are on the same machine: every single serverless vendor keeps containers warm.

You know, it is just not feasible, latency-wise and cost-wise, to spin up a new instance on every function invocation. So instead the services, and specifically Lambda in this case, launch a container and manage wisely how long to keep that container alive; you can see evidence of this in the logs, which are often grouped by the container instance that got created. They keep it warm for a while, and if it is not being used, it gets taken down. Not only that, the same container is reused as sequential requests come in. So the same "server" is reused across requests: if I compromise that container while making my request, it is likely that other users will be invoking code on the same machine. Even if I cannot be persistent on the machine forever, if I can run a tcpdump over here and capture data as another user comes in, I can see their data.

So serverless does reduce impact: it keeps the long-lived, persistently compromised server from existing. You cannot have a server that has been compromised and stays compromised for a very long time. However, it is not perfect in that regard; it does reuse that server for other clients. We shouldn't think of these servers as perfectly immutable, or as entirely disposable. If we do, we are making ourselves weak, and once there's a weakness, it can be expanded. It is hard; I keep thinking about how to demo the parallel requests and all that. Maybe I will have that by the next time. So don't rely on immutability.

Serverless User is Typically Low Privilege

And one thing I will give them credit for: all of these cloud operators are pros at how they run this. If I wanted to modify the application, for instance, so the next person coming along would run my code, I can't: the application is mounted on a read-only file system. These services are managed well and operated well, but they are still not perfectly immutable.
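The read-only mount can be probed with a sketch like this; on AWS Lambda the deployed code lives under `/var/task`, and the directory path and helper name here are my assumptions for illustration:

```python
import os

def try_tamper(code_dir="/var/task"):
    """Illustrative probe (not from the talk): attempt to drop a file
    next to the deployed code. On AWS Lambda /var/task is mounted
    read-only, so the write raises an OSError."""
    target = os.path.join(code_dir, "backdoor.py")
    try:
        with open(target, "w") as f:
            f.write("# injected\n")
        os.remove(target)  # clean up if we ran somewhere writable
        return "writable"
    except OSError:
        return "read-only"
```

Run inside a Lambda container this returns "read-only", which is exactly the defense being credited above.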

So the last bit I want to talk to you about, sort of the last function, is -- I will go back here. The last one I want to talk about is here; it is in Java, and it shows that Java applications can be vulnerable too. And I'm not entirely sure what this function is; it is here, I must have deployed it at some point. And the reason I deployed it is, well, why not? There is no cost to deploying a serverless function. It inverts the question: you go from asking, should I deploy this, to asking, why wouldn't I deploy it? The cost is minimal, marginal. If it is not used, it barely costs you anything, just a minute amount of storage.

But over time, you have to understand that each function is attack surface. Each one of these functions is a risk that you are persisting in your system. If you let that get out of control, wrangling servers is the least of your problems. You will have a ton of functions, and if you are not tracking them well, you do not know whether you can delete them or not. You just added a function, and people talk about moving their cron jobs to be functions, or some pet project to be a function. That is great, but then you are looking at a function and don't know if you can delete it. Maybe it is used rarely, or it is a disaster-recovery function; if there is no documentation or tracking of why the hell that function is there and what it is doing, then you are never going to be able to get rid of it.

And this is maybe the biggest concern I have around serverless security. All of the others are actions: you set them up, you have to do them, there are good practices and the tooling will get better. But with serverless you get an explosion of attack surface: functions that stay there, never get removed, and whose libraries grow stale and easier to exploit over time. Remember, zero cost of deployment or operation is not the same as zero cost of ownership. Jared talked about that in terms of monitoring your application, and it is just as true of security and what you are exposed to over time.

Worry about All Functions

So just keep this in mind; put your practices around that, and deploy the mission-critical functions in a different account or setup from the ones that are more disposable. Just keep it in mind and track it. So that's it for my talk today. This is a summary view, again not exactly the flow I just went through, of the security risks in there. I encourage you to watch the more meta talk; it is less engaging, but important to know.

Serverless is Defined Now: Let’s Build Security in

And serverless is a concept that we are defining right now, so we have the opportunity to build the right security controls in and make them a natural part of how we develop serverless applications. Thank you.

Live captioning by Lindsay @stoker_lindsay at White Coat Captioning @whitecoatcapx.




Recorded at:

Mar 10, 2018