Recently, AWS introduced AWS Virtual Waiting Room, a new open-source solution that integrates with existing web and mobile applications. The solution protects backend systems from resource exhaustion by buffering user requests during sudden traffic bursts.
The open-source solution holds users in a waiting room until their turn arrives, shielding the backend environment from traffic spikes and removing the need to scale the backend to accommodate all users at once. According to a recent AWS Compute blog post, AWS Virtual Waiting Room integrates with a web or mobile application in one of four scenarios:
- Upstream traffic redirection from the primary target site through AWS Virtual Waiting Room – an option that routes all user traffic through the waiting room, admitting an initial capacity of users to the protected system.
- Downstream redirection from the target site to the virtual waiting room – an option in which all traffic reaches the target site first, which then redirects users to the waiting room.
- Direct target site API integration for buffering users from an existing website without any redirection – an option that integrates the virtual waiting room at the API level (a sketch of this flow follows the list).
- OpenID Connect (OIDC) adapter – an option that provides no-code native integration of the waiting room with OpenID Connect-enabled system components, such as the AWS Application Load Balancer (ALB).
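In the API-level scenario, a client would typically request a place in line, poll until its position is being served, and then exchange that position for a time-limited token accepted by the protected APIs. The Python sketch below illustrates this flow; the host name, endpoint paths, and payload fields are placeholders, not the solution's actual public API, which is documented in the implementation guide.

```python
import time
import requests  # third-party HTTP client: pip install requests

# Placeholder values for illustration only; the real paths and payloads
# are defined by the waiting room's core public API.
WAITING_ROOM_API = "https://waiting-room.example.com/api"
EVENT_ID = "sample-event"

def wait_for_turn(poll_interval: int = 5) -> str:
    """Join the waiting room, wait to be served, and return an access token."""
    # 1. Ask the waiting room for a place in line.
    entry = requests.post(f"{WAITING_ROOM_API}/enter",
                          json={"event_id": EVENT_ID}).json()
    my_position, request_id = entry["position"], entry["request_id"]

    # 2. Poll the counter of positions currently being served.
    while True:
        status = requests.get(f"{WAITING_ROOM_API}/serving",
                              params={"event_id": EVENT_ID}).json()
        if status["serving_position"] >= my_position:
            break
        time.sleep(poll_interval)

    # 3. Exchange the served position for a time-limited token that the
    #    protected target system's APIs will accept.
    token = requests.post(f"{WAITING_ROOM_API}/token",
                          json={"event_id": EVENT_ID,
                                "request_id": request_id}).json()
    return token["access_token"]
```

The returned token is then attached to calls against the protected target system, where it is validated as described in the components below.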
The AWS Virtual Waiting Room solution implementation includes three main components:
- Core APIs provide the basic mechanisms for tracking clients entering the waiting room. The core APIs' primary resources include two Amazon API Gateway deployments, a VPC, several AWS Lambda functions, an Amazon DynamoDB table, and an Amazon ElastiCache cluster.
- A waiting room front-end, a static website shown to users awaiting their turn. The site dynamically updates the position currently being served and the user's place in line at a configurable interval.
- A Lambda authorizer for the protected target system that wraps the downstream target system's APIs, ensuring every user invocation carries a validated, time-limited token issued by the waiting room core API (a minimal authorizer sketch follows below).
Source: https://aws.amazon.com/blogs/compute/introducing-aws-virtual-waiting-room/
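The authorizer component amounts to a token check placed in front of the target system's APIs. As a minimal sketch, assuming the waiting room issues RS256-signed JWTs and that the verification key and expected issuer are supplied via environment variables (the variable names and claims here are illustrative, not the solution's actual settings), an API Gateway Lambda authorizer using the simple-response format could look like this:

```python
import os
import jwt  # PyJWT: pip install "pyjwt[crypto]"

# Assumed configuration; names are placeholders for illustration.
PUBLIC_KEY = os.environ["WAITING_ROOM_PUBLIC_KEY"]
EXPECTED_ISSUER = os.environ.get("WAITING_ROOM_ISSUER", "waiting-room")

def lambda_handler(event, context):
    """Allow the request only if it carries a valid, unexpired token
    issued by the waiting room core API."""
    auth_header = event.get("headers", {}).get("authorization", "")
    token = auth_header.removeprefix("Bearer ")
    try:
        # PyJWT verifies the signature, the exp claim, and the issuer.
        claims = jwt.decode(token, PUBLIC_KEY,
                            algorithms=["RS256"],
                            issuer=EXPECTED_ISSUER)
        return {"isAuthorized": True,
                "context": {"request_id": claims.get("sub", "")}}
    except jwt.PyJWTError:
        return {"isAuthorized": False}
```

Because the tokens are time-limited, access to the protected APIs remains bounded even after a user has passed through the waiting room.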
Mark Nunnikhoven, a cloud strategist at Lacework, stated in a tweet:
Bottom line on the AWS Virtual Waiting Room is that if you need this pattern, you're probably going to end up building something like this anyway. A pre-built & well-tested solution from @awscloud can help jump-start your efforts here.
Yet, a developer advocate at Lumigo, Yan Cui, tweeted:
I love seeing this sort of thing, but I'd prefer a managed service, not a CFN stack I have to deploy and run in my account, esp given it's not a trivial stack - I end up owning the uptime, but not the underlying code.
And a respondent on a Reddit thread commented on why the solution fits some organizations:
The relevant customers for AWS are organizations like my employer, which pay AWS a nice 6+ figure sum each month. And why do we do that? Because it is still a heck of a lot cheaper than doing it all yourself. You probably would need our staff of backend engineers (a dozen people) just to manage the database aspect of it.
The AWS Virtual Waiting Room solution is available at no additional cost and is provided as open source under the Apache 2.0 license. More details and guidance for the solution are available in the implementation guide.