Why AWS Lambda Pricing Has to Change for the Enterprise


Key Takeaways

  • AWS Lambda supports 10GB of RAM and Container Images up to 10GB. This makes it comparable to container services like AWS ECS or Fargate.
  • For 2 CPUs and 4GB of RAM, AWS Lambda pricing per hour/unit is approximately 7.5 times greater than AWS Fargate Spot.
  • Enterprise batch processing workloads, such as financial modeling, require the capability to burst into thousands of containers for daily processing.
  • Lambda can scale to 1500 containers in a second. With AWS Fargate, this can take up to 1 hour.
  • Selecting AWS Lambda for development, test, and production workloads will be challenging for enterprises performing batch processing. A cost comparison will lead customers to choose other services and lose out on Lambda's scalability and integrations.

AWS Lambda is, contrary to popular belief, ideally suited for many enterprise batch workloads like financial modelling, machine learning and big data processing. At scale, a cost comparison with instance or container-based platforms reveals that Lambda pricing and batch processing are not a perfect fit. If cost deters enterprise teams from adoption of serverless, an opportunity to increase agility and productivity is missed. In this article, I'll share thoughts on cost management and some stark pricing data.

Managing Cloud Cost

Cloud cost management has become a topic on the radar of developers and architects. One factor is the switch to on-demand, pay-per-use billing for managed cloud services. Autonomous teams following an innovative, DevOps culture are also empowered to create resources at will in both development and production cloud environments. While this practice has proven to help teams accelerate, there is a clear risk of cloud bill shock. This can be offset by combining good cost monitoring practices with a sound level of cloud cost awareness within development teams.

There is a difference, however, between cost awareness and cost FUD (fear, uncertainty and doubt). Cost FUD is seen when the idea of the pay-per-use pricing of services like AWS Lambda keeps developers away, despite the fact that the TCO (total cost of ownership) and productivity benefits can far outweigh service costs (source). When workloads are unpredictable, on-demand pricing is a good fit: you don't pay for what you don't use. There is a flipside, however. When workloads require large-scale concurrency for lengthy periods of time, the cost can start to raise eyebrows. Cloud workflows with variable load running on AWS Lambda can easily use thousands of concurrent invocations over a period of one hour. This is not limited to batch processing; spikes in web-based traffic can result in similar invocation patterns.

Comparing Pricing for Lambda, Fargate and EC2

We will compare pricing models for AWS Lambda, Fargate and EC2 before comparing the total running cost for an illustrative example. Pricing for the us-east-1 (Virginia) region is used here. Most regions have the same pricing, but some can be more expensive, so check before making any assumptions.

Lambda Pricing

  • AWS Lambda allows you to run functions in response to many supported events. Functions run for up to 15 minutes and avail of up to 10GB of memory and 512MB of temporary storage.
  • Lambda usage is billed per request and by duration, rounded up to the nearest millisecond. The billing rate depends on the memory allocated to each function, a value that can range from 128MB to 10GB. vCPU and network bandwidth allocation in AWS Lambda is always proportional to the amount of memory allocated, with 1 vCPU equating to roughly 1765 MB. In us-east-1, the current invocation cost is $0.0000166667 for every GB-second and $0.20 per 1 million requests. (Source)
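
As a rough sketch of how this metering works, here is the per-invocation arithmetic in Python, using the us-east-1 rates quoted above (the helper name is mine for illustration, not an AWS API):

```python
# Estimate the cost of one AWS Lambda invocation from the us-east-1 rates above.
import math

GB_SECOND_RATE = 0.0000166667    # $ per GB-second
REQUEST_RATE = 0.20 / 1_000_000  # $ per request

def lambda_invocation_cost(memory_mb: int, duration_ms: float) -> float:
    """Cost of a single invocation: duration is rounded up to the nearest 1 ms."""
    billed_ms = math.ceil(duration_ms)
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * GB_SECOND_RATE + REQUEST_RATE

# A 1792 MB function running for 30 seconds:
print(round(lambda_invocation_cost(1792, 30_000), 6))  # → 0.000875
```

Note that duration-based cost dominates at batch scale; the per-request charge is negligible for long-running jobs.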

Fargate Pricing

  • AWS Fargate is a managed environment for running container-based workloads, marketed by AWS as a Serverless Compute Engine because you don't see or manage the underlying instances or container orchestration infrastructure. Instead, users create Clusters with Services comprising individual tasks, or run individual tasks (containers) on demand.
  • Fargate is priced on two dimensions: vCPUs per hour and memory (GB) per hour, with duration rounded up to the nearest second (Source). CPU allocation is charged at $0.04048 per vCPU per hour and memory at $0.004445 per GB per hour.
  • Fargate Spot offers up to 70% in cost savings if you are willing to accept that some workloads can be interrupted. It makes use of spare compute capacity. When this capacity is required by an on-demand workload, you are given a two-minute warning before the container is terminated.
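
When a Fargate Spot task is reclaimed, the container receives a SIGTERM, followed roughly two minutes later by a SIGKILL. A minimal sketch of handling that warning in Python; the work loop and checkpoint logic here are illustrative placeholders, not part of any AWS SDK:

```python
# Sketch: graceful shutdown on Fargate Spot's two-minute interruption warning.
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # Flag the main loop to checkpoint and exit before SIGKILL arrives.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

def run(work_items):
    done = []
    for item in work_items:
        if shutting_down:
            break  # persist progress here so the task can be resumed later
        done.append(item * 2)  # stand-in for real batch work
    return done

print(run([1, 2, 3]))  # prints [2, 4, 6]
```

Designing each unit of work to be small and resumable is what makes the Spot discount safe to take for batch workloads.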

EC2 Pricing

  • EC2 provides virtual instances, giving users much greater control over the underlying compute infrastructure. The tradeoff is taking on a larger share of the shared responsibility model: using EC2 means you are responsible for creating and maintaining the network configuration, high availability, I/O configuration and operating system. EC2 pricing is also much more variable, with various pricing models and a wealth of options to balance compute, memory, network and storage requirements. For this simple comparison, we will stick to on-demand instances and use a general purpose instance type. It would be tempting to select a Graviton2-based t4g instance for a quick 20% saving and better performance, but we'll keep it traditional and assume that, for now, most users are sticking with Intel-based CPUs rather than Arm. A t3.medium gives us 4GB of memory and 2 vCPUs at $0.0336 per hour.
  • EC2 Spot is an option that makes use of spare capacity and, like Fargate Spot, means your workload must be interruptible. Cost savings can be up to 90%. The actual price depends on the demand at any given time.

To make these prices somewhat comparable, let's assume we have a workload requiring 2000 concurrent containers or instances with 2 vCPUs and 4GB of memory. With AWS Lambda, the nearest we can get is a 4096MB memory configuration, providing 2.41 vCPUs*. We will exclude Lambda's price per 1 million requests on the basis that the number of requests for batch processing workloads will be relatively low, making the cost impact negligible. Given that our batch processing workload is expected to run many processing jobs over the course of hours, we'll base our calculation on one hour of fully-utilized processing.
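
The arithmetic behind this comparison can be sketched in a few lines of Python, using the unit rates quoted in the previous sections:

```python
# Hourly cost comparison for 2000 concurrent containers/instances,
# each with 2 vCPUs and 4 GB of memory, at us-east-1 rates.
CONTAINERS = 2000
VCPUS, MEMORY_GB = 2, 4

lambda_cost  = 0.0000166667 * MEMORY_GB * 3600 * CONTAINERS
fargate_cost = (0.04048 * VCPUS + 0.004445 * MEMORY_GB) * CONTAINERS
ec2_cost     = 0.0336 * CONTAINERS                      # t3.medium on-demand
fargate_spot = (0.01314179 * VCPUS + 0.00144306 * MEMORY_GB) * CONTAINERS
ec2_spot     = 0.01008 * CONTAINERS                     # ~70% spot saving

for name, cost in [("Lambda", lambda_cost), ("Fargate", fargate_cost),
                   ("EC2", ec2_cost), ("Fargate Spot", fargate_spot),
                   ("EC2 Spot", ec2_spot)]:
    print(f"{name:<12} ${cost:,.2f}")
```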


AWS Lambda
  • Unit pricing: $0.0000166667 per GB-second
  • Calculation: $0.0000166667 × 4 GB × 3,600 seconds × 2,000 containers
  • Price per hour: $480.00

AWS Fargate
  • Unit pricing: $0.04048 per vCPU-hour + $0.004445 per GB-hour
  • Calculation: ($0.04048 × 2 vCPUs + $0.004445 × 4 GB) × 2,000 tasks
  • Price per hour: $197.48

Amazon EC2
  • Unit pricing: $0.0336 per hour (t3.medium)
  • Calculation: $0.0336 × 2,000 instances
  • Price per hour: $67.20

Fargate Spot
  • Unit pricing: $0.01314179 per vCPU-hour + $0.00144306 per GB-hour
  • Calculation: ($0.01314179 × 2 vCPUs + $0.00144306 × 4 GB) × 2,000 tasks
  • Price per hour: $64.11

EC2 Spot
  • Unit pricing: $0.01008 per hour (based on an estimated 70% spot saving)
  • Calculation: $0.01008 × 2,000 instances
  • Price per hour: $20.16

All figures assume 2,000 concurrent containers/instances with 2 vCPUs and 4GB of memory (2.41 vCPUs for Lambda's 4096MB configuration).


Lambda is starting to look pretty pricey in this context. In our simplified calculation, AWS Lambda is:

  1. 2.4 times the cost of Fargate
  2. 7.1 times the cost of EC2
  3. 7.5 times the cost of Fargate Spot
  4. 23.8 times the cost of EC2 Spot, albeit with EC2 Spot's pricing variability. Spot instances may not be suitable if you require strict guarantees on time to complete the workload.
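
These ratios follow directly from the hourly totals in the table above:

```python
# Cost ratios relative to AWS Lambda, from the hourly totals computed earlier.
lambda_hr = 480.00
ratios = {name: round(lambda_hr / cost, 1)
          for name, cost in [("Fargate", 197.48), ("EC2", 67.20),
                             ("Fargate Spot", 64.11), ("EC2 Spot", 20.16)]}
print(ratios)
```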

This is likely to prompt any or all of the following responses from people who are not yet used to large-scale Lambda workloads.

  1. “Yikes, Lambda is way too expensive! That's a hard no from me!”
  2. “This isn't a fair comparison. Lambda is designed for bursty, event-driven workloads where it works out much cheaper.”
  3. “Batch processing isn't suitable for Lambda. For that you need HPC clusters or big data processing frameworks distributed over large instances.”

Let's take a step back from pricing and think about why, in spite of all of this, Lambda is going to take over more and more batch processing.

Why consider Lambda for batch processing?

Sure, you have your Hadoops and your Sparks and what have you, but we are seeing batch processing and computationally intensive data processing running in AWS Lambda. By creating platform-agnostic business logic and splitting jobs into small, stateless units, you gain maximal flexibility and can run jobs in any environment. Compared to Fargate and EC2, Lambda provides many benefits:

  1. Unlike any other environment, Lambda provides instant scalability to thousands of containers. By comparison, scaling to 1000s of containers in Fargate takes over an hour (source). When you can scale this fast, you can scale higher, giving a shorter aggregate time to execute the workload for the same cost.
  2. A Lambda execution environment provides the lowest level of isolation, with each small job running in a secure, short-lived environment with minimal privileges. This has a security benefit. Workload isolation at this level also means that the blast radius for failures is smaller. Individual failures can be isolated to a single event in a single container which can make troubleshooting much simpler.
  3. Lambda and serverless adoption encourage smaller, single-purpose components with minimal coupling to other elements of the system architecture. This is a step further than the isolation brought about by SOA and then microservices. The practice makes it simpler to reason about individual pieces of the system.
  4. You can now allocate 10GB of RAM and use Docker/OCI container images to deploy Lambda functions, making it easier to run existing workloads built for typical container-based runtimes.
  5. Lambda is tightly integrated with many event sources and services.
  6. Lambda delivers on the promise of removing undifferentiated heavy lifting: the burden teams take on in order to create, manage, maintain and secure clusters of instances or containers. Committing your company to offloading this kind of burden allows you to dedicate more time to business features that matter.
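
The fan-out pattern behind point 1 can be sketched as follows. The function name and chunk size are assumptions for illustration; in practice you would pass a boto3 Lambda client's `invoke` method (which accepts `FunctionName`, `InvocationType` and `Payload`) as the `invoke` callable:

```python
# Sketch: splitting a batch into small, stateless units and fanning out
# one asynchronous Lambda invocation per chunk.
import json

def chunk_jobs(jobs, size):
    """Split a batch of jobs into fixed-size chunks, one per invocation."""
    return [jobs[i:i + size] for i in range(0, len(jobs), size)]

def fan_out(jobs, size, invoke):
    """Invoke once per chunk; `invoke` is e.g. boto3 lambda client's invoke."""
    for chunk in chunk_jobs(jobs, size):
        invoke(FunctionName="batch-worker",         # hypothetical function name
               InvocationType="Event",              # async: returns immediately
               Payload=json.dumps({"jobs": chunk}))

# Example with a stand-in for the real client:
calls = []
fan_out(list(range(10)), 4, lambda **kw: calls.append(kw))
print(len(calls))  # 3 chunks: 4 + 4 + 2 jobs
```

Because `InvocationType="Event"` returns without waiting, all chunks start almost simultaneously, which is what makes Lambda's near-instant scale-out usable for batch work.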

Assuming that the workload is designed to make money for your business, decreasing the time to produce results should equate to earnings. On top of that, reducing processing time opens up the possibility of moving from just daily batches to on-demand or near real-time computation, a competitive advantage and a path to new streams of revenue. For many workloads, you may decide that AWS Lambda is not a fit, particularly when a specific CPU architecture or operating system is required. For a guide to choosing the right compute service, see my article dedicated to this topic.

Does compute cost really matter?

Of course, cost matters, but how much of the overall cost of building and running the application is it? This depends on the company and the context. In many enterprises, the cost of people is far greater. You have to take into account the opportunity cost of making a choice that takes time away from skilled team members. If your organisation spends a lot of time keeping systems running and firefighting issues, you will instinctively know that this has a massive cost, even if you haven't tried to quantify it.

Besides, what's $480 to a large, profitable enterprise? Well, if the workload is successful in Lambda, the demand to run it will increase. This follows Jevons paradox, the concept in economics stating that, as efficiency increases, resource consumption counterintuitively grows due to increased demand. If the daily $480 workload runs 4 times a day, then gets adopted by an increasing number of users with different input data, the cost is suddenly thousands of dollars per day. Don't forget to take into account test and development workloads. In an actively-maintained internal enterprise system, development and test environments can consume more resources than the production system. You can see how a $480 daily estimate can turn into a seven-figure annual bill.
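
To put rough numbers on that growth path (the 4 runs per day and the dev/test doubling are illustrative assumptions, not figures measured anywhere):

```python
# Back-of-envelope: how a $480 batch run becomes a seven-figure annual bill.
HOURLY_COST = 480.00          # one run of the workload, from the table above
RUNS_PER_DAY = 4              # assumed adoption growth
DEV_TEST_MULTIPLIER = 2       # dev + test consuming as much as production

daily = HOURLY_COST * RUNS_PER_DAY * DEV_TEST_MULTIPLIER
annual = daily * 365
print(f"${daily:,.0f}/day -> ${annual:,.0f}/year")  # prints $3,840/day -> $1,401,600/year
```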

Sure, that seven-figure bill might be money well spent if it frees up millions of hours of developer time and generates enough revenue. Unfortunately, that doesn't really matter. Lambda costs are still seen as unpredictable and significantly greater than other options when you stack them up as we just did. These costs are visible. The time and cost of people working on cluster scaling, instance maintenance, or troubleshooting complex issues in a distributed batch processing framework is as good as invisible. Regardless of the benefit, raw cost will always tip the scales in key technology decisions.

In order for AWS Lambda to become a true, general-purpose enterprise compute service, the pricing model will have to adapt. This could take the form of an across-the-board price reduction or bulk discounting for sustained workloads. Currently, Azure's pricing model for Functions does not vary significantly from AWS Lambda's, but this is an area where we can expect competitors to differentiate. Steps to make pricing more comparable to alternatives would make the decision to adopt AWS Lambda a no-brainer for enterprises.


References

  1. Financial Engines Cuts Costs 90% Using AWS Lambda and Serverless Computing
  2. You are thinking about serverless costs all wrong, Yan Cui
  3. Amazon EC2 On-Demand Pricing
  4. AWS Fargate Pricing
  5. AWS Lambda Pricing
  6. Amazon EC2 Spot Instances Pricing
  7. Spot Instances Advisor
  8. Fighting COVID with serverless with Denis Bauer
  9. Pricing - Azure Functions
  10. Lambda, EC2 or Fargate? A Simple Approach to Choosing AWS Compute for Enterprise Workloads

About the Author

Eoin Shanaghy is a seasoned technology leader, architect and developer with experience building and scaling systems for dynamic startups and large enterprises, including 3G network management systems, Enterprise Java, real-time trading applications, digital video and e-learning. Prior to co-founding fourTheorem, Shanaghy founded Showpiper, a video content marketing startup. He is the author of AI as a Service.
