Serverless Security: What's Left to Protect?


Key Takeaways

  • FaaS takes on the responsibility for “patching” the underlying servers, freeing you from OS patching
  • Denial of Service (DoS) attacks are naturally thwarted by the (presumed) infinite capacity Serverless offers.
  • With serverless, we deploy many small functions that can have their own permissions. However, managing granular permissions for hundreds or thousands of functions is very hard to do.
  • Since the OS is unreachable, attackers will shift their attention to the areas that remain exposed – and first amongst those would be the application itself.
  • Known vulnerabilities in application libraries are just as risky as those in the server dependencies, and the responsibility for addressing vulnerable app libraries falls to you – the function developer.


Serverless is an exciting development in the modern infrastructure world. It brings with it the promise of dramatically reduced system costs, simpler and cheaper total cost of ownership, and highly elastic systems that can seamlessly scale to what old-timers (like me) call a “Slashdot moment” – a large and immediate spike in traffic.

The cost savings Serverless offers greatly accelerated its rate of adoption, and many companies are starting to use it in production, coping with less mature dev and monitoring practices to get the monthly bill down. Such a trade-off makes sense when you balance effort vs reward, but one aspect of it is especially scary – security.

This article aims to provide a broad understanding of security in the Serverless world. We’ll consider the ways in which Serverless improves security, the areas where it changes security, and the areas where it hurts security.

Are we talking Serverless or FaaS?

Serverless means different things to different people. It’s largely made up of two parts:

  • A Function-as-a-Service (FaaS) platform, allowing deployments of functions which the platform will provision on-demand (or similar platforms that elastically manage small compute units, like AWS Aurora or various hooks);
  • An event-based execution model, used to trigger functions in response to a wide variety of activities.

While event-based communication does have some security implications, they tend to be quite specific and smaller in scope compared to the implications FaaS has on security. Therefore, this article will focus on FaaS security implications, not events. I will use the term Serverless and FaaS interchangeably to make the article easier to read. 

Now that we’ve established our taxonomy, let’s dive in!

How does Serverless help security?

Let’s start positive – how does Serverless make us more secure?

No need to manage OS patches

Serverless is a highly controversial name. Since code needs to run somewhere, clearly it will always need some server to run on! A more accurate (if not as catchy) name may be Server-management-less. When using FaaS, the underlying platform handles the servers for you, offloading the need to provision, manage and monitor these beasts.

By offloading the servers from you, FaaS also takes on the responsibility for “patching” those servers – updating the operating system and its dependencies to safe versions when they’re affected by newly disclosed vulnerabilities. Known vulnerabilities in unpatched servers and apps are the primary vector through which systems are exploited, due to their frequency and broad deployment, along with the fact that updating apps and servers at scale is hard.

Serverless takes the unpatched servers risk off your hands, moving it to the “pros” running the platform, and by doing so makes you substantially more secure overnight. Platform-as-a-Service (PaaS) solutions, such as Heroku and Cloud Foundry, have been doing the same for a while now, so this isn’t an entirely new concept, but this is still a substantial security boost you’ll get from Serverless out of the box.

It’s key to understand that while Serverless offloads patching OS dependencies, it does nothing to help with vulnerable application dependencies. We’ll discuss that in more depth a bit further down.

Short-lived servers don’t stay compromised for long

Serverless offers an opinionated way of operating software, expressing these opinions through various constraints. One key constraint, arguably the most important one, is that your functions have to be stateless. In a FaaS environment, you don’t know – and shouldn’t need to care – which server is assigned to run your function. The platform provisions and de-provisions servers as it sees fit, often destroying a (virtual) server seconds after it was created.

Unlike what you may see in the movies, security breaches rarely happen in one pass. In most attacks, the hacker finds a breach, exploits it, and attempts to install a malicious agent on the target machine. This dormant agent then – as quietly as it can – further cements itself into the current system, and then starts probing to see what data and other systems it can reach. Big data leaks like those seen at Target and the Panama Papers rely on such slow penetration and exfiltration of data.

Serverless doesn’t give attackers the luxury of time. By repeatedly resetting its machines, it eliminates any compromised server, forcing attackers to compromise it again and again, risking failure or exposure each time. Stateless and short-lived systems, including all FaaS functions, are therefore inherently less likely to be compromised at any given point in time, a real and immediate win for your security posture.

Extreme elasticity means Denial of Service resistance

The last FaaS claim to fame is its immediate and seamless provisioning of functions. This automated setup leads to extreme elasticity, allowing us to have no servers running – and thus pay nothing – when our customers sleep, while being able to handle huge demand spikes on a dime.

The same scalability that helps handle good demand can also cope with its evil equivalent. Attackers often try to take down systems by submitting a large volume of compute- or memory-intensive actions, exhausting server capacity and thus keeping legitimate users from using the application.

These Denial of Service (DoS) attacks are naturally thwarted by the (presumed) infinite capacity Serverless offers. More requests – good or bad – would simply make the platform provision more ad-hoc servers, and good users will continue to be served. That said, you will still be paying for all of those executions… so it’s still worth monitoring for such activity, lest you find yourself denied access to your bank account instead!

It’s important to note that while Serverless helps with DoS, it doesn’t completely eliminate it. Platforms don’t really have infinite capacity, and some types of DoS attack, such as Distributed Denial of Service (DDoS), target the network bandwidth or DNS rather than the application. To handle such concerns, you can consider a DDoS protection solution such as those offered by Akamai, Fastly, Incapsula and some of the cloud platforms themselves.
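One practical defense against the “pay for every malicious execution” problem mentioned above is to cap how often a function performs its expensive work. As a rough illustration (the rate and capacity numbers are hypothetical, and a production setup would more likely use API gateway throttling or platform concurrency limits), a token-bucket guard might look like this:

```python
import time


class TokenBucket:
    """Token bucket: allows `rate` calls per second, with a burst of `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, then spend one if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Hypothetical guard in front of a costly downstream call.
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
print(results.count(True))  # the initial burst passes; excess calls are throttled
```

Throttled calls should still be logged and alerted on, since a spike in rejections is exactly the denial-of-wallet signal you want to catch early.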

How does Serverless change security?

So far, we’ve discussed security concerns which are mitigated – if not eliminated – by Serverless. In other areas, Serverless neither helps nor hinders security, but it does change it, shuffling security priorities or other details. Let’s review the key areas that change in this manner.

Highly granular permissions offer risk and opportunity

Serverless is, in some respects, an extreme version of microservices. Instead of deploying a large monolith app, we deploy many small(ish) functions, and combine them together.

This split allows us far better granularity over the permissions of each piece of code. For each function, we can explicitly state what data it’s allowed to access, which actions it can take, and who can invoke it in the first place. This better granularity lets us implement the “least privilege” concept well, minimizing the damage a compromised function can cause. In theory.

In practice, however, managing granular permissions for hundreds or thousands of functions is very hard to do. Instead, developers naturally gravitate to grouping functions together, giving all functions the sum of the permissions they all need.

Better permission granularity is an opportunity FaaS offers us, and I highly recommend investing in automated and scalable permission management of your functions. Until these practices are easy enough to do, I deem this a change in security, not an improvement. If you’re not sure where to start, I recommend watching Aaron Kammerer’s talk explaining (amongst other Serverless Ops topics) how iRobot auto-tunes Lambda permissions.
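A first step towards automated permission management is simply detecting where “least privilege” has been abandoned. As a sketch (the policy shape follows the general IAM document structure, but the statements here are made-up examples), a small audit can flag wildcard grants:

```python
def find_broad_permissions(policy: dict) -> list:
    """Return (action, resource) pairs in an IAM-style policy that use wildcards."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Action/Resource may each be a string or a list; normalize to lists.
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        for action in actions:
            for resource in resources:
                if "*" in action or resource == "*":
                    findings.append((action, resource))
    return findings


# Hypothetical policy: one tightly-scoped grant, one overly broad one.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::my-bucket/*"},
        {"Effect": "Allow", "Action": "dynamodb:*", "Resource": "*"},
    ]
}
print(find_broad_permissions(policy))  # flags only the dynamodb:* statement
```

Running such a check in CI, per function, keeps the “sum of all permissions” anti-pattern from creeping in as the function count grows.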

Stateless servers require better data security

Even though functions are stateless (excluding some caching), an app’s logic often requires data, such as session info or performance-focused caches. In a stateful application, such info stays on the machine handling the request, at times even staying in memory and off disk. In a stateless function, however, we use external storage (e.g. Elasticache) to persist it across calls.

The performance implications of not having the data on the same machine are usually small, but storing sensitive data outside the server has significant security implications. The data is at risk when transferred, is likely to persist longer and be accessible to more machines, and if the data store is compromised, more users will be impacted at once. Simply put, data stored outside the machine is at higher risk than data stored within it.

While Serverless isn’t the only case where off-box storage is used, it does increase the frequency and therefore importance of securing such data. Consider encrypting data stored in session stores, using short lived caches, and carefully managing who has access to these repositories. Also make sure you encrypt data in transit, for instance by using AWS’s built in in-transit encryption abilities.
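As a minimal sketch of the “protect data that leaves the machine” idea, the snippet below signs session data with an HMAC and gives it a short TTL before it goes to an external store. The secret and field names are hypothetical; note that signing only provides integrity and expiry, so truly sensitive fields would additionally need real encryption (e.g. via your platform’s KMS):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical key; in practice, fetch this from a secrets manager.
SECRET = b"replace-with-a-managed-secret"


def seal(session: dict, ttl_seconds: int = 300):
    """Serialize session data with an expiry and an HMAC so tampering is detectable."""
    payload = json.dumps({"data": session, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(sig + payload).decode()


def unseal(token: str):
    """Return the session data, or None if the token was tampered with or expired."""
    raw = base64.urlsafe_b64decode(token)
    sig, payload = raw[:32], raw[32:]  # SHA-256 digests are 32 bytes
    if not hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).digest()):
        return None
    doc = json.loads(payload)
    if time.time() > doc["exp"]:
        return None
    return doc["data"]


token = seal({"user": "alice"})
print(unseal(token))  # {'user': 'alice'}
```

The short TTL bounds how long a stolen cache entry stays useful, which is exactly the property the short-lived-cache advice above is after.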

Application security rises in prominence

While it’s awesome that FaaS auto-magically handles the server level security concerns, we can’t expect attackers to simply give up! Instead, attackers will shift their attention to the areas that remain exposed – and first amongst those would be the application itself.

The chance a given function will have a SQL Injection, Cross-Site Scripting or Command Injection vulnerability is neither higher nor lower with Serverless, but these vectors would get extra attention from attackers in the FaaS world – so be sure to give them extra defender attention too.

Since functions are typically smaller in scope, they offer a great opportunity to create stricter controls over which input is and isn’t allowed. Such controls can be written in code, but API gateways also offer an opportunity to create schemas and whitelists that further reduce the chance of a malicious input making its way in. When you need to support broader inputs, try to create automated tests for anticipated attack patterns and add them to your function deployment pipeline.
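The whitelist approach described above can live in code as well as in the gateway. As a sketch (the field names and patterns are hypothetical), a per-function schema that rejects both unexpected fields and out-of-pattern values might look like this:

```python
import re

# Hypothetical schema: field name -> regex the value must fully match.
SCHEMA = {
    "username": r"[a-zA-Z0-9_]{3,20}",
    "file_id": r"[a-f0-9]{8}",
}


def validate(event: dict) -> dict:
    """Return a dict of field -> error; empty means the input passed the whitelist."""
    errors = {}
    for key in event:
        if key not in SCHEMA:
            errors[key] = "unexpected field"
        elif not re.fullmatch(SCHEMA[key], str(event[key])):
            errors[key] = "invalid value"
    return errors


print(validate({"username": "alice_1", "file_id": "deadbeef"}))  # {}
print(validate({"username": "a; rm -rf /", "cmd": "x"}))  # both fields rejected
```

Because each function’s input surface is small, writing (and testing) such a schema per function is far more tractable than it would be for a monolith.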

Vulnerable app dependencies hidden inside

Somewhere between the server, which the platform secures, and the app, which the developer secures, sits a third group – application dependencies. These are open source libraries pulled from the likes of npm, PyPI and Maven, offering great functionality at minimal effort and cost. The libraries fall in the twilight zone between app and server, as the platform neither manages nor secures them, but they’re pulled in blindly like a piece of infrastructure.

Given the fine-grained nature of functions, these libraries often make up the majority of code in the actual deployed function. For instance, this 25-line sample function that fetches a file and stores it in S3 uses 2 libraries, which in turn use 19 libraries, totaling over 190,000 lines of code! This difference in code volume typically means there’s a higher risk of vulnerabilities in your libraries than in your code.

Known vulnerabilities in application libraries are just as risky as those in the server dependencies, which FaaS automatically addresses. In addition, the natural protections FaaS offers at the OS level will draw attackers to focus on app dependencies more often, as they are the next “easy way in”, making them even more important to secure. The Equifax breach, caused by a Remote Command Execution in a Maven library, demonstrated how badly that can turn out.

The responsibility for addressing vulnerable app libraries falls to you – the function developer. When developing functions, make sure you continuously monitor for and fix known vulnerabilities in your libraries, using tools such as Snyk, Victims DB, or the OWASP Dependency-Check. You can learn more about this in my book, “Securing Open Source Libraries”, also posted in a series of blog posts on the O’Reilly blog.
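The core of what those tools do is match your dependency versions against an advisory database. As a toy sketch (the package names, versions and advisory data here are entirely made up; real tools consume curated vulnerability feeds), the matching step looks roughly like this:

```python
# Hypothetical advisory data; in practice this comes from a vulnerability DB.
ADVISORIES = {
    "left-pad-ish": [("<", (1, 2, 0))],  # versions below 1.2.0 are vulnerable
}


def parse(version: str):
    """Turn '1.2.3' into a comparable tuple (1, 2, 3)."""
    return tuple(int(p) for p in version.split("."))


def vulnerable(name: str, version: str, advisories=ADVISORIES) -> bool:
    """Check a single dependency against the advisory list."""
    for op, bound in advisories.get(name, []):
        if op == "<" and parse(version) < bound:
            return True
    return False


# Hypothetical lockfile contents for one function.
deps = {"left-pad-ish": "1.1.3", "requests-ish": "2.31.0"}
flagged = [name for name, ver in deps.items() if vulnerable(name, ver)]
print(flagged)  # ['left-pad-ish']
```

The important operational point is that this check runs continuously, not just at deploy time, since a library that was clean yesterday can have a vulnerability disclosed against it tomorrow.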

How does Serverless hurt security?

While Serverless isn’t inherently bad for security, its sheer scale amplifies some very real security risks. Here are the key ones to keep an eye out for.

Greater dependency on third party services

Serverless apps are practically never built only on FaaS. They typically rely on a mesh of services, connected through events and data. While some of these are your own functions, many are operated by somebody else. In fact, the smaller scope and statelessness of functions drives a substantial increase in the use of third party services, both those of the cloud platform and from external services.

Each third-party service is a potential point of compromise. These services receive and provide data, influence workflows, and provide rich and complex input into our system. If such a service turned out to be malicious, or perhaps just got compromised, it could do substantial damage. If you’re using 10 services, and each only has a 0.1% chance of being hacked, your own system has a roughly 1% chance of using a compromised service…
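The compound risk quoted above follows directly from treating the services as independent: the chance that at least one of n services is compromised is one minus the chance that none are.

```python
def any_compromised(n: int, p: float) -> float:
    """Probability at least one of n independent services (each with risk p) is compromised."""
    return 1 - (1 - p) ** n


# 10 services, each with a 0.1% risk -> just under 1% overall.
print(round(any_compromised(10, 0.001) * 100, 2), "%")
```

The risk grows nearly linearly while p is small, which is why adding “just one more” third-party integration is never free from a security standpoint.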

To control this risk, validate each service, and minimize the impact it can have if compromised. Here are a few suggested steps:

  • Require a valid TLS certificate to validate that the service you’re talking to is indeed the one you think it is (and to secure data in transit!). With Let’s Encrypt, every service can reasonably offer a verified (not self-signed) certificate.
  • Apply input validation on responses from third party services. Such responses are often processed blindly, even when user input is tightly managed.
  • Minimize and anonymize the data you send the service, keeping it to the information it needs to receive to operate properly.

These are just a few examples of a broader mindset you should have, thinking of each third-party service as a potential malicious actor.
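Two of the steps above can be sketched in a few lines. Python’s default SSL context already enforces certificate and hostname verification, and response validation is just input validation applied to the service’s output (the payment-response schema below is hypothetical):

```python
import ssl

# The stdlib default context verifies the certificate chain and hostname.
# Pass it to e.g. urllib.request.urlopen(url, context=ctx); never disable it.
ctx = ssl.create_default_context()
assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED


def validate_payment_response(doc: dict) -> dict:
    """Treat the third-party response as untrusted input (hypothetical schema)."""
    amount = doc.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        raise ValueError("bad amount")
    if doc.get("currency") not in {"USD", "EUR", "GBP"}:
        raise ValueError("bad currency")
    # Re-build the object explicitly, dropping any unexpected fields.
    return {"amount": amount, "currency": doc["currency"]}


clean = validate_payment_response({"amount": 12.5, "currency": "USD", "debug": "x"})
print(clean)  # {'amount': 12.5, 'currency': 'USD'}
```

Rebuilding the response object field by field, rather than passing the raw payload onward, is what keeps a compromised service from smuggling unexpected data deeper into your system.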

Every function expands the attack surface

While functions are technically independent of each other, most are only invoked in a handful of sequences within our apps. As a result, many functions start assuming another function ran before them and sanitized the data in some fashion. In other words, functions start trusting their input as they believe it came from a trusted source – another first party function.

This approach leaves your security extremely fragile. First, these functions may be invoked directly by an attacker, even if it doesn’t make sense for the business logic. Second, the function may be added to a new flow tomorrow, which won’t sanitize the input. And third, an attacker may compromise one of the other functions, and then have easy and direct access to such a poorly defended peer. Note the last two points are also true for network access – a function without an API gateway may get one tomorrow, and should still verify its inputs.

To avoid being as strong as your weakest function, make sure you treat every function as an independent entity with a secured perimeter. Since this is hard, try to make it easier for the dev team by creating shared libraries to validate input and output, access sensitive resources, and apply potentially risky operations. The easier it is to protect a function, the more likely your team is to do so.
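One way such a shared library can lower the per-function cost is a validation decorator every handler applies to its own input, regardless of who the caller is. A sketch (the handler, schema and response shape are hypothetical, loosely following the Lambda-style `(event, context)` signature):

```python
import functools


def validated(schema: dict):
    """Shared-library decorator: each handler checks its own input, trusting no caller."""
    def wrap(handler):
        @functools.wraps(handler)
        def inner(event, context=None):
            missing = [k for k in schema if k not in event]
            if missing:
                return {"statusCode": 400, "body": f"missing fields: {missing}"}
            bad = [k for k, typ in schema.items() if not isinstance(event[k], typ)]
            if bad:
                return {"statusCode": 400, "body": f"bad types: {bad}"}
            return handler(event, context)
        return inner
    return wrap


@validated({"order_id": str, "quantity": int})
def handle_order(event, context=None):
    # Business logic runs only after the perimeter check passed.
    return {"statusCode": 200, "body": f"ok: {event['order_id']}"}


print(handle_order({"order_id": "A-1", "quantity": 2})["statusCode"])  # 200
print(handle_order({"quantity": 2})["statusCode"])  # 400
```

Because the check is one decorator line, the path of least resistance for the dev team is now also the secure one.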

Ease of deployment leads to explosion of functions

Deploying apps is costly, requiring people time, hardware costs and potentially unpleasant paperwork. Deploying functions, however, is easy and automated, and doesn’t cost anything unless the function is heavily used. With such low costs, we don’t ask why we should deploy, but rather why shouldn’t we?

Such a low threshold leads to a great many functions being deployed, even if many of them are very lightly used. Unless you track them well, deployed functions are very hard to remove, as you never know who may be relying on their existence. Top that with excessive permissions that are similarly hard to reduce, and you end up with an explosion of hard to remove, overly powerful functions, offering a rich and ever-growing attack surface to attackers… If you thought wrangling servers is hard, just wait until you’ve deployed your 10,000th function!

To address this concern, we need to remember that (effectively) zero operational cost doesn’t mean zero cost of ownership, and that each function represents a security risk. We need to be diligent in our deployed function processes and management, to avoid future problems. Here are a few tips you can consider:

  • Don’t deploy all functions. Just because certain functionality can be a deployed function doesn’t mean it should be.
  • Deploy non-critical functions to different accounts or regions
  • Use naming conventions and provide a README file that explains your function well
  • Make sure you monitor ALL functions for security risk, including their network and resource activity, app layer activity, and scanning them for vulnerable libraries within.
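The naming-convention and ownership tips above only pay off if they’re enforced. As a sketch (the `<team>-<service>-<purpose>` convention and the `owner` tag are hypothetical; a real version would pull the inventory from your cloud provider’s API), an audit over your function inventory could look like this:

```python
import re

# Hypothetical convention: <team>-<service>-<purpose>, plus a mandatory "owner" tag.
NAME_RE = re.compile(r"^[a-z]+-[a-z]+-[a-z][a-z0-9]*$")


def audit_functions(functions: list) -> list:
    """Flag deployed functions that would be hard to track down and remove later."""
    findings = []
    for fn in functions:
        if not NAME_RE.match(fn["name"]):
            findings.append((fn["name"], "non-conforming name"))
        if "owner" not in fn.get("tags", {}):
            findings.append((fn["name"], "missing owner tag"))
    return findings


# Hypothetical inventory: one well-kept function, one orphan.
inventory = [
    {"name": "billing-invoices-render", "tags": {"owner": "payments-team"}},
    {"name": "testFunc2", "tags": {}},
]
print(audit_functions(inventory))
```

Functions that fail such an audit are prime candidates for the “should this even be deployed?” question, before they become permanent attack surface.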


Summary

Serverless is an exciting evolution in the world of infrastructure. It isn’t inherently better or worse for security than other infrastructure models, but it does change how we operate our software, and requires that we adapt how we secure it.

We went over 10 aspects of security which FaaS helps, hurts or changes. Here’s a quick list:

  • How Serverless reduces security risks
    • No need to manage OS patches
    • Short-lived servers don’t stay compromised for long
    • Extreme elasticity means Denial of Service resistance
  • How Serverless changes security risks
    • Highly granular permissions offer risk and opportunity
    • Stateless servers require better data security
    • Application security rises in prominence
    • Vulnerable app dependencies hidden inside
  • How Serverless increases security risks
    • Greater dependency on third party services
    • Every function expands the attack surface
    • Ease of deployment leads to explosion of functions

My recommendation is to not see security as the primary reason to decide whether or not to use FaaS. Its security advantages and disadvantages are comparable to other approaches, especially given the early stage of relevant security tools. However, I’d encourage you to understand how Serverless changes security priorities, and invest your security resources accordingly.

I’ll leave you with one last thought. The Serverless ecosystem, including best practices and tools, is being shaped now. If we give it the right attention and collaborate as a community, we can make security a natural part of developing in a FaaS world. Achieving that would be the ultimate Serverless security win.

About the Author

Guy Podjarny (@guypod) is Snyk’s co-founder and CEO, focusing on using open source and staying secure. Guy was previously CTO at Akamai following their acquisition of his startup, and worked on the first web app firewall & security code analyzer. Guy is a frequent conference speaker & the author of the O’Reilly books “Securing Open Source Libraries”, “Responsive & Fast” and “High Performance Images”.
