
Serverless Platforms Compared for Performance


Most major cloud providers have serverless platforms that offer functions as a service (FaaS). A recent benchmark studied the differences in their performance with respect to runtime, cold start times, dependencies, and resource allocation.

AWS Lambda, Google Cloud Functions, Azure Functions and IBM Cloud Functions were the serverless providers tested by Bernd Strehl to compare their performance. Although these tests, performed using Node.js functions, reveal some facts about how these providers respond to varying request loads, the test methodology has been criticized for its small sample size, as well as for not taking factors such as the underlying instance types into account. Tests by other teams have taken a different approach (PDF).

Serverless providers charge not only for CPU, memory and number of requests, but also for network and storage. Providers differ in how CPU is allocated for a given memory configuration. AWS, for example, gives more CPU cycles (PDF) to instances with higher memory. Google follows a similar strategy, whereas Azure varies in how CPU is allocated, with "4-vCPU VMs tending to gain higher CPU shares".
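As an illustration of how this coupling surfaces in practice on AWS, the memory size is the only knob a developer sets, and the CPU share scales with it. The sketch below, assuming the AWS SDK for JavaScript v3 and a hypothetical function name, changes a function's memory allocation:

```typescript
import {
  LambdaClient,
  UpdateFunctionConfigurationCommand,
} from "@aws-sdk/client-lambda";

// Region and function name are placeholders for this illustration.
const client = new LambdaClient({ region: "us-east-1" });

async function setMemory(functionName: string, memoryMb: number): Promise<void> {
  // On AWS Lambda, raising MemorySize also buys a larger CPU share.
  await client.send(
    new UpdateFunctionConfigurationCommand({
      FunctionName: functionName,
      MemorySize: memoryMb, // in MB
    })
  );
}

setMemory("my-benchmark-function", 1024).catch(console.error);
```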

Concurrent requests change the average response time of a function. For non-concurrent requests, the resource allocation remains almost the same for all providers except Google, where it varies by around 30%. The compute time in AWS increased by 46% for concurrent requests when the same call was invoked 50 times at once. For Google and Azure it was 7% and 3% respectively, whereas it increased by 154% in IBM. Other tests reveal AWS to have the best performance in terms of concurrent execution.
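A minimal sketch of this kind of comparison, assuming an HTTP-triggered function and Node.js 18+ (for the built-in fetch), times 50 sequential calls against 50 calls fired at once; the endpoint URL is a placeholder:

```typescript
// Hypothetical endpoint of an HTTP-triggered function.
const ENDPOINT = "https://example.com/my-function";

async function timeRequest(): Promise<number> {
  const start = Date.now();
  await fetch(ENDPOINT);
  return Date.now() - start;
}

async function main(): Promise<void> {
  // Warm-up call so a cold start does not skew the comparison.
  await timeRequest();

  // 50 sequential invocations.
  const sequential: number[] = [];
  for (let i = 0; i < 50; i++) {
    sequential.push(await timeRequest());
  }

  // 50 concurrent invocations fired at once.
  const concurrent = await Promise.all(
    Array.from({ length: 50 }, () => timeRequest())
  );

  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  console.log(`sequential avg: ${avg(sequential).toFixed(1)} ms`);
  console.log(`concurrent avg: ${avg(concurrent).toFixed(1)} ms`);
}

main().catch(console.error);
```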

Cold start time is the time required for a serverless function to respond to the first request after being unused for a while. According to the results, maintaining constant performance here is a challenge for all providers. A pool of unspecialized, i.e. generic, workers (instances) is kept running at all times by the cloud provider. The first incoming request configures one of these workers for the function and is served by it. The instance is kept alive after the first request, but how long it is kept running varies between providers. Mikhail Shilkov, in his article, measures this as 20 minutes for Azure and variable times for Google Cloud Functions. AWS officially states five minutes, but in practice it is longer due to alleged tweaks by their engineering team. Cold starts can also occur when the service has to scale horizontally and new instances have to be brought up.
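A probe in the spirit of Shilkov's measurements can be sketched as follows: keep a function warm, let it sit idle for increasing intervals, and watch for the jump in first-request latency that signals the warm instance was recycled. The endpoint URL is again a placeholder, and Node.js 18+ is assumed for fetch:

```typescript
// Hypothetical endpoint of an HTTP-triggered function.
const ENDPOINT = "https://example.com/my-function";

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function probe(idleMinutes: number): Promise<number> {
  await fetch(ENDPOINT);              // ensure an instance is warm
  await sleep(idleMinutes * 60_000);  // let it sit idle
  const start = Date.now();
  await fetch(ENDPOINT);              // first request after the idle period
  return Date.now() - start;
}

async function main(): Promise<void> {
  for (const idle of [1, 5, 10, 20, 30]) {
    const latency = await probe(idle);
    console.log(`idle ${idle} min -> first-request latency ${latency} ms`);
  }
}

main().catch(console.error);
```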

The choice of runtime also impacts performance. Node.js apps don't need much CPU to start, whereas the .NET Core runtime requires more memory (in AWS Lambda). Cold start times drop as the allocated memory increases. For JavaScript, the tests reveal that AWS has the fastest cold start times, followed by GCP and Azure.
