
Recap of AWS re:Invent 2019


Last week in Las Vegas, AWS held their annual re:Invent conference and unveiled a slew of new products, while updating many existing ones. Here's a review of announcements impacting compute, data and storage, app integration, networking, machine learning, identity management, enterprise services, and development.


Compute

Unlike in years past, the news about compute services had less to do with new virtual machine options—there are already 30 EC2 instance categories and over a hundred instance types—and more to do with a wide range of compute environments. First up, AWS Outposts are ready for purchase. This hardware goes into customer facilities and is managed by AWS. Initially, Outposts serves up Amazon EC2, Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), and Amazon EMR. Relatedly, AWS also launched a new type of infrastructure deployment called Local Zones. The first, based in Los Angeles, offers a slimmed-down set of services accessible within this region at very low latency. Another Los Angeles-based Local Zone is slated for 2020, with AWS "giving consideration to other locations."

Container-based computing also received attention from AWS at re:Invent. First, AWS brought Amazon EKS, a hosted Kubernetes service, to AWS Fargate, a serverless computing option for containers. According to AWS, Amazon EKS on AWS Fargate aims to simplify provisioning and management of compute clusters running containers. 

With AWS Fargate, customers don’t need to be experts in Kubernetes operations to run a cost-optimized and highly-available cluster. Fargate eliminates the need for customers to create or manage EC2 instances for their Amazon EKS clusters.

Customers no longer have to worry about patching, scaling, or securing a cluster of EC2 instances to run Kubernetes applications in the cloud. Using Fargate, customers define and pay for resources at the pod-level. This makes it easy to right-size resource utilization for each application and allow customers to clearly see the cost of each pod.
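To make this concrete, here is a sketch of how a cluster with a Fargate profile can be declared in the eksctl ClusterConfig format; the cluster name, region, and namespaces below are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster      # placeholder name
  region: us-east-1
fargateProfiles:
  - name: fp-default
    selectors:
      # Pods in these namespaces are scheduled onto Fargate
      - namespace: default
      - namespace: kube-system
```

Pods whose namespace matches a selector run on Fargate, so no EC2 node group needs to exist for them.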

Amazon also announced the availability of AWS Fargate Spot. This capability offers up to a 70% discount on compute cost, in exchange for giving AWS permission to reclaim the capacity—interrupting your running tasks—whenever it needs to. Finally, for those running containers on Amazon ECS, there's new cluster auto-scaling functionality.
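As an illustration, an ECS service can split work between on-demand Fargate and Fargate Spot through a capacity provider strategy; the weights below are arbitrary example values:

```json
{
  "capacityProviderStrategy": [
    { "capacityProvider": "FARGATE", "base": 2, "weight": 1 },
    { "capacityProvider": "FARGATE_SPOT", "weight": 3 }
  ]
}
```

Here the first two tasks always land on regular Fargate, while additional tasks are spread 1:3 between on-demand and Spot capacity.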

AWS also shipped a handful of new features for their function-as-a-service platform, AWS Lambda. This is in addition to the capabilities shipped the week prior to re:Invent. Provisioned Concurrency keeps a configurable number of pre-warmed Lambda instances ready at all times. In exchange for paying for these always-on instances, the user eliminates the cold-start penalty for infrequently called functions. This functionality resembles behavior in the Premium plan for Azure Functions that Microsoft released a few weeks back. Finally, AWS released AWS Step Functions Express Workflows for those using AWS Step Functions for fast, high-volume scenarios that tolerate async communication and at-least-once delivery.
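As a sketch, Provisioned Concurrency can be configured on a function alias with a single CLI call; the function name, alias, and count below are placeholders:

```shell
# Keep 50 pre-warmed execution environments ready for the "prod" alias
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier prod \
  --provisioned-concurrent-executions 50
```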

AWS also opened up a preview of Amazon Braket, a managed service for quantum computing. Additionally, AWS unveiled the next generation of their Arm-based EC2 instances. They're powered by the AWS-built Graviton2 processors, which offer up to 7x the performance of the previous Arm-based (A1) instances and a significant improvement over the Intel-based, general-purpose EC2 instances. Lastly, AWS launched four new instance sizes powered by the custom-designed AWS Inferentia chips made for deep learning.

Storage and Database 

Amazon EBS volumes provide durable block storage for EC2 virtual machines. For quite a while, users have been able to take point-in-time snapshots of EBS volumes and stash them in Amazon S3. At re:Invent, AWS released EBS direct APIs that offer access to snapshot content. AWS says that these APIs are "designed for developers of backup/recovery, disaster recovery, and data management products & services, and will allow them to make their offerings faster and more cost-effective."
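For example, the contents of a snapshot can be enumerated and diffed from the CLI; the snapshot IDs below are placeholders:

```shell
# List the blocks held in a snapshot
aws ebs list-snapshot-blocks --snapshot-id snap-0123456789abcdef0

# List only the blocks that changed between two snapshots of the same volume
aws ebs list-changed-blocks \
  --first-snapshot-id snap-0123456789abcdef0 \
  --second-snapshot-id snap-0fedcba9876543210
```

A backup product can use the changed-block list to copy only deltas rather than full snapshots.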

Amazon S3, the original AWS service, has a new feature called Amazon S3 Access Points. It's designed for shared data sets, providing "distinct permissions and network controls for any request made through the access point."

Next up, two database-related updates of note. First, AWS released an entirely new database service. Amazon Managed Apache Cassandra Service (MCS) is a fully-managed service that's labeled as Apache Cassandra-compatible and implements the 3.11 CQL API. Like Amazon DocumentDB, which is compatible with MongoDB but doesn't actually run MongoDB software, Amazon MCS appears to be an AWS service that exposes a Cassandra API only. Their documentation outlines the differences between MCS and Cassandra itself.
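Because MCS speaks the CQL 3.11 API, ordinary CQL DDL like the following should work against it (the keyspace and table names here are invented for illustration; service-managed settings such as replication differ from self-hosted Cassandra, per the AWS documentation):

```cql
-- Plain CQL 3.11 data definition; names are placeholders.
CREATE TABLE orders.order_events (
    order_id   uuid,
    event_time timestamp,
    status     text,
    PRIMARY KEY (order_id, event_time)
) WITH CLUSTERING ORDER BY (event_time DESC);
```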

AWS also added a new storage tier to their Amazon Elasticsearch Service. The UltraWarm tier is available in preview and claims to help customers "economically retain large amounts of data while keeping the same interactive analysis experience." It's said to offer up to 900TB of storage at nearly 90% lower cost than existing options.

Amazon Redshift is the AWS data warehouse service, and at re:Invent it got an upgrade. First, AWS released a new compute instance with 48 vCPUs, 384 GiB of memory, and up to 64 TB of storage. Users can also leverage a new managed storage model that uses SSD-based storage backed by Amazon S3. AWS also released a new data export feature to unload data from a Redshift cluster to S3, and a federated query feature that looks across Redshift clusters, RDS instances, and S3.
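The new export feature is surfaced through Redshift's existing UNLOAD command. A sketch, where the table, bucket, and IAM role ARN are placeholders:

```sql
-- Export query results from Redshift to S3 as Parquet
UNLOAD ('SELECT event_date, revenue FROM sales')
TO 's3://example-bucket/sales-export/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS PARQUET;
```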

Database connection management for serverless functions can be a pain point, especially when dealing with many, short-lived connections. The new Amazon RDS Proxy aims to relieve that tension.

Your Lambda functions interact with RDS Proxy instead of your database instance. It handles the connection pooling necessary for scaling many simultaneous connections created by concurrent Lambda functions. This allows your Lambda applications to reuse existing connections, rather than creating new connections for every function invocation.

The RDS Proxy scales automatically so that your database instance needs less memory and CPU resources for connection management. It also uses warm connection pools to increase performance.

App Integration 

AWS shared a preview release of Amazon EventBridge schema registry and discovery. EventBridge is positioned as "a serverless event bus that makes it easy to connect applications together using data from your own applications, integrated Software-as-a-Service (SaaS) applications, and AWS services." The new schema registry lets you search for AWS-provided schemas as well as custom schemas that describe your event payloads. The service also offers the ability to detect and generate schemas from sources. The available IDE integration means that devs using JetBrains IntelliJ or Microsoft Visual Studio Code can ingest bindings and get auto-complete when working with schemas in code.
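A hand-rolled sketch of what such a code binding buys you: a typed wrapper generated from a registered schema. The `OrderPlaced` event shape below is invented for illustration, not an AWS-provided schema:

```python
# Sketch of a schema-derived "code binding": a typed class that parses
# a raw EventBridge-style event dict into named, typed fields.
from dataclasses import dataclass

@dataclass
class OrderPlaced:
    order_id: str
    amount: float

    @classmethod
    def from_event(cls, event: dict) -> "OrderPlaced":
        detail = event["detail"]
        return cls(order_id=detail["order-id"], amount=float(detail["amount"]))

sample_event = {
    "source": "com.example.orders",   # hypothetical custom event source
    "detail-type": "OrderPlaced",
    "detail": {"order-id": "A123", "amount": "19.99"},
}
order = OrderPlaced.from_event(sample_event)
```

With generated bindings, the IDE can auto-complete `order.order_id` and `order.amount` instead of leaving developers to spelunk through raw dictionaries.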


Networking

At re:Invent, AWS shared a pair of updates to the AWS Transit Gateway, described by AWS as "a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway." One new feature of AWS Transit Gateway is inter-region peering:

As customers expand workloads on AWS, they need to scale their networks across multiple accounts and VPCs. Customers can connect pairs of VPCs using peering or use PrivateLink to expose private service endpoints from one VPC to another. However, managing this is complicated. AWS Transit Gateway inter-region peering addresses this and makes it easy to create secure and private global networks across multiple AWS regions. Using inter-region peering, customers can create centralised routing policies between the different networks in their organisation, simplifying management and reducing costs.

The new accelerated site-to-site VPN feature improves connection performance by routing VPN traffic through the nearest AWS edge location. And the new AWS Transit Gateway Network Manager experience gives users a global view of their private network. From this dashboard, administrators can visualize devices and sites on a map, and see details of VPN connections in each site. Users also get a monitoring view that aggregates network events.
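For instance, peering two transit gateways in different regions is a single API call; the gateway IDs, account ID, and region below are placeholders:

```shell
aws ec2 create-transit-gateway-peering-attachment \
  --transit-gateway-id tgw-0aaaaaaaaaaaaaaaa \
  --peer-transit-gateway-id tgw-0bbbbbbbbbbbbbbbb \
  --peer-account-id 123456789012 \
  --peer-region us-west-2
```

The attachment must then be accepted in the peer region before routes can be pointed at it.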


Machine Learning 

Amazon SageMaker—the managed service to build, train and deploy machine learning (ML) models—received heavy attention at re:Invent. AWS launched Amazon SageMaker Studio which they claim unifies all the tools needed to develop ML models. The single IDE is meant to help developers "write code, track experiments, visualize data, and perform debugging and monitoring all within a single, integrated visual interface."

Amazon SageMaker Studio offers five experiences of note. First, Amazon SageMaker Notebooks helps users create and share Jupyter notebooks. The Amazon SageMaker Experiments feature lets devs organize, track, and compare ML jobs. AWS says that these jobs can be "training jobs, or data processing and model evaluation jobs run with Amazon SageMaker Processing." Next, Amazon SageMaker Debugger is set up for debugging and analyzing model training issues, with real-time advice for optimizing models. Relatedly, Amazon SageMaker Model Monitor detects "quality deviations for deployed models." Finally, Amazon SageMaker Autopilot automatically inspects data sets and generates candidate models on the developer's behalf.


Identity Management 

AWS Identity and Access Management (IAM) is used to create and manage users and groups and define resource permissions. It's a ubiquitous part of most AWS environments. At re:Invent, AWS shipped the IAM Access Analyzer. It "mathematically analyzes access control policies attached to resources and determines which resources can be accessed publicly or from other accounts." It offers continuous policy monitoring against services like S3 and AWS Lambda, and provides all-up visibility for security professionals. 

SaaS and Enterprise Services

AWS also branched out with additional high-level services targeting end users. They announced Amazon Kendra, an enterprise search service that heavily uses machine learning. It offers a natural language interface and connects to a wide variety of data sources. At this time, it can index S3, SharePoint Online, and databases, with promises of additional connectors at general availability.

Amazon Connect is a service for contact centers. AWS announced a preview of Contact Lens for Amazon Connect which integrates machine learning into Amazon Connect to deliver sentiment analysis, trends and more. It transcribes contact center calls to help supervisors quickly search or discover themes emerging from interactions.

Finally, AWS extended their Amazon Transcribe service—it offers developers a speech-to-text capability—to specifically empower healthcare professionals. Amazon Transcribe Medical serves physicians who dictate clinical notes and want an immediate conversion to text. AWS says that the managed service "is HIPAA eligible and integrates easily with clinical documentation applications and any device with a microphone."

Architecture and Coding

re:Invent delivered a pair of notable updates targeting developers and architects. First up is Amazon CodeGuru, a managed service that uses machine learning to recommend code quality and performance improvements. It includes CodeGuru Reviewer, which analyzes pull requests for Java-based code, and CodeGuru Profiler, which analyzes running apps. It requires read-only access to your existing code repositories in GitHub or an AWS CodeCommit environment. AWS says that CodeGuru Profiler complements existing Application Performance Monitoring (APM) tools, and works with Java apps running in EC2, ECS, EKS, and Fargate. Venture capitalist and former Microsoft exec S. "Soma" Somasegar liked what he saw here.

"As a former DevDiv guy, this one definitely caught my attention," he said. "I know that Visual Studio team has been incorporating ML and AI into some of the core development tasks as a part of the IDE to make it easy for developers. This is a service that was exciting to see and I am eager to see the adoption and usage of this service."

The second important announcement here was the Amazon Builders Library. This collection of articles captures AWS best practices for building and running complex systems. The articles, categorized either as "Architecture" or "Software Delivery and Operations," touch on topics like continuous delivery, software rollbacks, caching strategies, leader election in distributed systems, health checks, and more.
