
AWS Lambda Updates from re:Invent - Cost Savings, More Memory Capacity and Container Image Support


At its annual re:Invent conference, AWS announced several updates to its Function-as-a-Service offering Lambda. The newly announced features revolve around billing, memory capacity, and container image support.

The billing of any AWS service is an essential aspect of its total cost of ownership (TCO). AWS announced that Lambda compute duration is now billed in 1ms increments instead of being rounded up to the nearest 100ms increment per invoke, the billing model in place since Lambda launched in 2014. The finer granularity means customers will be billed less for short-running functions.

Danilo Poccia, chief evangelist (EMEA) at Amazon Web Services, provides an example of the new 1ms billing granularity in his blog post:

The Lambda monthly compute charges with the old 100ms rounded up pricing would have been:

60 million invocations * 100ms * 1G memory * $0.0000166667 for every GB-second = $100

With the new 1ms billing granularity, the duration costs are:

60 million invocations * 28ms * 1G memory * $0.0000166667 for every GB-second = $28
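
To make the arithmetic concrete, here is a minimal Python sketch of the same calculation; the workload figures (60 million invocations of a 28ms, 1 GB function) and the $0.0000166667 GB-second rate come from Poccia's example, and only duration charges are modeled (request charges are ignored):

import math

GB_SECOND_PRICE = 0.0000166667  # Lambda duration price per GB-second, from the example above
INVOCATIONS = 60_000_000        # invocations per month in Poccia's example
MEMORY_GB = 1                   # configured memory
DURATION_MS = 28                # actual duration per invocation

def duration_cost(duration_ms, granularity_ms):
    # Round the duration up to the billing granularity, then price it per GB-second.
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    return INVOCATIONS * (billed_ms / 1000) * MEMORY_GB * GB_SECOND_PRICE

print(f"Old 100ms granularity: ${duration_cost(DURATION_MS, 100):.2f}")  # ~$100
print(f"New 1ms granularity:   ${duration_cost(DURATION_MS, 1):.2f}")    # ~$28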

Also, Corey Quinn, cloud economist at the Duckbill Group, said in a tweet:

Ah, it's public now! Billing granularity is now 1ms for Lambda invocations instead of 100ms. This will save a lot of money for small, frequently invoked functions, and almost none at all for my giant horrible ones because I am bad at programming. #reinvent

Source: https://www.reddit.com/r/aws/comments/k9svpy/why_would_we_save_79_with_the_switch_to_lambda/

From December onwards, the 1ms billing granularity is available for Lambda functions in all regions where AWS Lambda is available, except the China regions.

Besides the new billing for AWS Lambda, the public cloud provider announced that customers can now provision Lambda functions with up to 10,240 MB (10 GB) of memory, a more than 3x increase over the previous limit of 3,008 MB. The additional memory helps memory-intensive workloads such as batch jobs, extract, transform, load (ETL) jobs, and media processing applications. Furthermore, since Lambda allocates CPU power proportionally to the amount of memory provisioned, customers now also have access to up to 6 vCPUs, which is useful for compute-intensive applications.

Source: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-functions-with-up-to-10-gb-of-memory-and-6-vcpus/

The additional CPU power and memory can decrease the duration of function invocations and thus reduce costs. In another blog post, Poccia explains the cost implications:

Lambda charges are related to memory and duration, so if I increase memory, and this is reducing duration by the same proportion, the overall charges are about the same. For example, looking at the graph above, when I configure 5 GB of memory, I have the same costs as when I have 1 GB of memory (about $61 for one million invocations), but the function is 5x faster. If I need lower latency, I can increase memory up to 10 GB, where the function is 7.6x faster, and I pay a little more ($80 for one million invocations).
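
To illustrate that trade-off, the following Python sketch reproduces the quoted figures. The ~3.66-second baseline duration at 1 GB is a hypothetical value chosen so the numbers line up with Poccia's example, the speed-up factors (5x at 5 GB, 7.6x at 10 GB) are taken from his post, and request charges are again ignored:

GB_SECOND_PRICE = 0.0000166667   # Lambda duration price per GB-second
INVOCATIONS = 1_000_000          # one million invocations, as in the quote
BASELINE_DURATION_S = 3.66       # hypothetical duration at 1 GB chosen to reproduce the quoted costs
SPEEDUP_BY_MEMORY_GB = {1: 1.0, 5: 5.0, 10: 7.6}  # speed-up factors from Poccia's example

for memory_gb, speedup in SPEEDUP_BY_MEMORY_GB.items():
    duration_s = BASELINE_DURATION_S / speedup
    cost = INVOCATIONS * duration_s * memory_gb * GB_SECOND_PRICE
    print(f"{memory_gb:>2} GB: {duration_s:.2f}s per invocation, ~${cost:.0f} per million invocations")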

Users can configure up to 10 GB of memory for new or existing Lambda functions through the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, and the AWS Serverless Application Model (SAM). Support for the additional memory and compute is available in various AWS Regions.
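
For the SDK route, a minimal boto3 sketch looks like the following; the function name is a hypothetical placeholder:

import boto3

lambda_client = boto3.client("lambda")

# Raise an existing function's memory to the new 10,240 MB (10 GB) maximum.
# Since CPU is allocated proportionally to memory, this also grants up to 6 vCPUs.
response = lambda_client.update_function_configuration(
    FunctionName="my-etl-function",  # hypothetical function name
    MemorySize=10240,                # in MB; the previous ceiling was 3,008 MB
)
print(response["MemorySize"])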

Besides the 1ms billing granularity and additional memory for AWS Lambda, the public cloud provider also announced support for container images as a packaging format, allowing customers to package and deploy AWS Lambda functions as container images of up to 10 GB.

Source: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/

With the ability to package images of up to 10 GB, AWS wants to help customers quickly build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data-intensive workloads. Furthermore, according to another blog post from Poccia, functions deployed as container images benefit from the same operational simplicity, automatic scaling, high availability, and native integrations with many services as functions packaged as ZIP archives.

The public cloud vendor also provides base images for all the supported Lambda runtimes (Python, Node.js, Java, .NET, Go, Ruby), as well as base images for custom runtimes based on Amazon Linux that developers can extend to include their own runtime implementing the Lambda Runtime API. Moreover, to make building your own base images easier, Lambda Runtime Interface Clients are available, implementing the Runtime API for all supported runtimes.
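
As an illustration, a function packaged as a container image still centers on an ordinary handler. The minimal sketch below (with hypothetical module and handler names) could be copied into one of the AWS-provided Python base images and referenced as the image command, e.g. app.handler:

# app.py - a minimal handler baked into a Lambda container image.
# In a Dockerfile based on an AWS-provided base image, this file is copied into
# the image and the image command points at "app.handler" (module.function).
import json

def handler(event, context):
    # Echo the incoming event back as a simple JSON response.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }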

Next to the Lambda Runtime Interface Clients, an open-source Lambda Runtime Interface Emulator is now available that enables developers to test the container image locally and check that it will run when deployed to Lambda. The emulator is included in all AWS-provided base images and can also be used with arbitrary images, such as those based on Alpine or Debian Linux. Furthermore, developers can use the Lambda Extensions API to integrate monitoring, security, and other tools with the Lambda execution environment within the images.
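
With the emulator, a local test is a plain HTTP call. The sketch below assumes the image runs locally with container port 8080 mapped to 9000 (the docker run -p 9000:8080 <image> convention) and posts a hypothetical test event to the emulator's invocation endpoint:

import json
import urllib.request

# Invocation endpoint exposed by the Runtime Interface Emulator inside the container.
URL = "http://localhost:9000/2015-03-31/functions/function/invocations"

payload = json.dumps({"name": "local-test"}).encode("utf-8")  # hypothetical test event
request = urllib.request.Request(URL, data=payload, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))  # the handler's response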

A respondent in a Hacker News thread commented on the AWS-specific customizations required in the images:

The entire point of containerization is portability across different services and platforms. It seems like a massive miss for the team to tout "container support" but then still require platform-specific customizations within the container.

Developers can use container image support in AWS Lambda with the console, AWS Command Line Interface (CLI), AWS SDKs, AWS Serverless Application Model (SAM), AWS Cloud Development Kit (CDK), the AWS Toolkits for Visual Studio, VS Code, and JetBrains, and solutions from AWS Partners, including Datadog, HashiCorp Terraform, and Pulumi. Container support for Lambda is generally available in several AWS Regions. Pricing-wise, customers pay only for the Amazon ECR repository and the standard Lambda charges.
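
As a minimal SDK sketch, creating a function from a container image stored in Amazon ECR uses the Image package type; the function name, image URI, and IAM role ARN below are hypothetical placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Create a function from a container image in Amazon ECR instead of a ZIP archive.
response = lambda_client.create_function(
    FunctionName="my-container-function",                                              # hypothetical
    PackageType="Image",
    Code={"ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"},   # hypothetical
    Role="arn:aws:iam::123456789012:role/my-lambda-execution-role",                    # hypothetical
    MemorySize=2048,
    Timeout=30,
)
print(response["State"])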

Lastly, AWS also released another service called AWS Proton, which enables users to manage Lambda assets and provides a single place to monitor the state of their infrastructure and roll out updates to their code. Furthermore, Amazon CloudWatch Lambda Insights is now generally available, enabling users to monitor, troubleshoot, and optimize the performance of AWS Lambda functions.
