Serverless Platforms Compared for Performance

by Hrishikesh Barua, Sep 23, 2018. Estimated reading time: 2 minutes

Most major cloud providers have serverless platforms that offer functions as a service (FaaS). A recent benchmark studied the differences in their performance with respect to runtime, cold start times, dependencies, and resource allocation.

AWS Lambda, Google Cloud Functions, Azure Functions and IBM Cloud Functions were the serverless providers tested by Bernd Strehl to compare their performance. Although these tests, performed using Node.js functions, reveal some facts about how these providers respond to varying request loads, the test methodology has been criticized for its small sample size and for not taking other factors, like the underlying instance types, into account. Tests by other teams have taken a different approach (PDF).

Serverless providers charge not only for CPU, memory and number of requests, but also for network and storage. Providers differ in how they adjust CPU for specific memory configurations. AWS, for example, gives more CPU cycles (PDF) to instances with higher memory. Google follows a similar strategy, whereas Azure varies in how CPU is allocated, with "4-vCPU VMs tending to gain higher CPU shares".
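The billing dimensions above combine into a cost estimate that is typically dominated by memory-seconds of compute plus a per-request fee. The sketch below shows the shape of such a calculation; the rates used are hypothetical placeholders, not any provider's published price list.

```javascript
// Illustrative only: the rates below are assumed placeholders,
// not an actual provider price sheet.
const PRICE_PER_GB_SECOND = 0.0000166667; // assumed compute rate
const PRICE_PER_MILLION_REQUESTS = 0.20;  // assumed request rate

// Estimate monthly compute + request cost for one function:
// memory in MB, average duration in ms, and invocation count.
function estimateMonthlyCost(memoryMb, avgDurationMs, invocations) {
  const gbSeconds = (memoryMb / 1024) * (avgDurationMs / 1000) * invocations;
  const computeCost = gbSeconds * PRICE_PER_GB_SECOND;
  const requestCost = (invocations / 1e6) * PRICE_PER_MILLION_REQUESTS;
  return computeCost + requestCost;
}

// A 512 MB function running 200 ms per call, 3 million calls/month
console.log(estimateMonthlyCost(512, 200, 3e6).toFixed(2)); // → 5.60
```

Because memory and CPU are coupled on most platforms, raising the memory setting can shorten the duration term, which is why higher memory does not always mean a higher bill.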

Concurrent requests change the average response time of a function. For non-concurrent requests, the resource allocation remains almost the same for all providers except Google, where it varies by around 30%. The compute time in AWS increased by 46% for concurrent requests when the same call was invoked 50 times at once. For Google and Azure it was 7% and 3% respectively, whereas it increased by 154% in IBM. Other tests reveal AWS to have the best performance in terms of concurrent execution.

Cold start time is the time required for a serverless function to respond to the first request after being unused for a while. According to the results, maintaining constant performance here is a challenge for all providers. The cloud provider keeps a pool of unspecialized, i.e. generic, workers (instances) running at all times. The first incoming request configures a worker for that function and is served by it, and the instance is kept alive afterwards. How long it is kept running, however, varies between providers. Mikhail Shilkov, in his article, measures this as 20 minutes for Azure and variable times for Google Cloud Functions. AWS officially mentions it as five minutes, but in practice it's longer due to alleged tweaks by their engineering team. Cold starts can also occur when the service has to scale horizontally and new instances have to be brought up.
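The warm-instance behavior described above is visible in handler code: anything outside the handler body runs once per worker instance, so the first request pays that cost and subsequent requests on the same warm instance do not. A minimal sketch, with illustrative names rather than any real SDK:

```javascript
// State declared outside the handler survives between invocations
// on the same warm instance (names here are illustrative).
let connection = null;
let coldStart = true;

function expensiveInit() {
  // Stands in for loading dependencies or opening a DB connection.
  return { openedAt: Date.now() };
}

function handler(event) {
  if (connection === null) {
    connection = expensiveInit(); // paid only on a cold start
  }
  const wasCold = coldStart;
  coldStart = false;
  return { wasCold, event };
}

console.log(handler('a').wasCold); // true: first call on this instance
console.log(handler('b').wasCold); // false: warm path
```

This is also why the idle-timeout figures above matter: once the provider reclaims the instance, the next request starts from `connection === null` again.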

The choice of runtime also impacts performance. Node.js apps don't need much CPU to start, whereas the .NET Core runtime requires more memory (in AWS Lambda). Cold start times drop with an increase in the allocated memory. For JavaScript, the tests reveal that AWS is the fastest in cold start times, followed by GCP and Azure.
