Amazon S3 Increases Request Rate Performance and Drops Randomized Prefix Requirement

Amazon Web Services (AWS) recently announced significantly increased S3 request rate performance and the ability to parallelize requests to scale to the desired throughput. Notably, this performance increase also "removes any previous guidance to randomize object prefixes" and enables the use of "logical or sequential naming patterns in S3 object naming without any performance implications".

Amazon Simple Storage Service (Amazon S3) is a cloud object storage service "built to store and retrieve any amount of data from anywhere". Given its wide industry adoption as a storage backend for a huge variety of large-scale use cases, customers often need very high throughput when transferring objects to and from the service. As per the S3 request rate and performance guidelines, applications can now achieve "at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second", up from the former "300 PUT/LIST/DELETE requests per second or more than 800 GET requests per second".

Importantly, S3 now provides this increased throughput automatically "per prefix in a bucket", and "there are no limits to the number of prefixes", which means that applications can simply use as many prefixes in parallel as required to reach the desired throughput and effectively scale their S3 performance "by the factor of your compute cluster". Unlike before, such large-scale, high-performance S3 usage no longer requires randomized object naming.
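
The per-prefix scaling can be exploited directly from application code. The following minimal Python sketch, built on boto3, spreads uploads across a handful of prefixes and issues them in parallel; the bucket name, prefix layout, and worker count are illustrative assumptions rather than values from the announcement.

# Spread uploads over several prefixes and issue them in parallel.
# Bucket name, prefix layout, and object count are placeholders.
from concurrent.futures import ThreadPoolExecutor
import boto3

s3 = boto3.client("s3")
BUCKET = "examplebucket"                      # hypothetical bucket
PREFIXES = [f"shard-{i}/" for i in range(4)]  # each prefix scales independently

def upload(index):
    # Round-robin objects over the prefixes; every prefix brings its own
    # request-rate baseline, so aggregate throughput grows with the prefix count.
    key = f"{PREFIXES[index % len(PREFIXES)]}object-{index}.bin"
    s3.put_object(Bucket=BUCKET, Key=key, Body=b"example payload")

with ThreadPoolExecutor(max_workers=32) as pool:
    # map() is lazy; consuming it via list() surfaces any upload errors.
    list(pool.map(upload, range(1000)))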

The technical details of this notable change are not yet documented, but a previous version of the performance guidelines (captured by the Internet Archive on 2017-12-29) illustrates the underlying challenge with the common scenario that "customers sometimes use sequential numbers or date and time values as part of their key names" when uploading a large number of objects:

examplebucket/2013-26-05-15-00-00/cust1234234/photo1.jpg
examplebucket/2013-26-05-15-00-00/cust3857422/photo2.jpg
examplebucket/2013-26-05-15-00-00/cust1248473/photo2.jpg
examplebucket/2013-26-05-15-00-00/cust8474937/photo2.jpg
examplebucket/2013-26-05-15-00-00/cust1248473/photo3.jpg
...
examplebucket/2013-26-05-15-00-01/cust1248473/photo4.jpg
examplebucket/2013-26-05-15-00-01/cust1248473/photo5.jpg
examplebucket/2013-26-05-15-00-01/cust1248473/photo6.jpg
examplebucket/2013-26-05-15-00-01/cust1248473/photo7.jpg
...

Storing many objects with sequential prefixes used to cause performance problems, because it increased "the likelihood that Amazon S3 will target a specific partition for a large number of your keys, overwhelming the I/O capacity of the partition". This could only be alleviated with artificial naming conventions such as adding a hash key prefix or reversing embedded IDs to randomize key names and thus spread partition access.
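
To make the former workaround concrete, here is a minimal Python sketch that derives a short hash prefix from each key so that otherwise sequential names spread across partitions; the digest length and key names are illustrative, not prescribed by AWS.

# Former workaround: prepend a short hash-derived prefix to randomize key names.
import hashlib

def randomized_key(original_key):
    # Use the first four hex characters of an MD5 digest as an artificial prefix.
    prefix = hashlib.md5(original_key.encode("utf-8")).hexdigest()[:4]
    return f"{prefix}/{original_key}"

print(randomized_key("2013-26-05-15-00-00/cust1234234/photo1.jpg"))
# prints a key of the form "<4-char hash>/2013-26-05-15-00-00/cust1234234/photo1.jpg"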

Such technical limitations are counter-intuitive for application design, and AWS admitted that this "randomness does introduce some interesting challenges" of its own, for example "if you want to list object keys with specific date in the key name". Going forward, based on the apparent re-engineering of the S3 partitioning mechanics, architects and developers can design and implement S3-backed applications with naming schemes driven strictly by the use case.
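
One practical consequence is that date-based listings no longer fight the naming scheme. A minimal boto3 sketch, assuming the hypothetical bucket and date layout from the example above, lists a day's uploads with a plain prefix query.

# With natural, date-based key names, listing becomes a simple prefix query.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# List everything uploaded under a given date prefix; no hash prefix to strip.
for page in paginator.paginate(Bucket="examplebucket", Prefix="2013-26-05-"):
    for obj in page.get("Contents", []):
        print(obj["Key"])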

Cloud economist and Last Week in AWS author Corey Quinn lauds this improvement in his post "S3 is faster" doesn't do it justice:

[…] this was an atavistic relic from a time when implementation details unfortunately tended to be presented to the customer. You shouldn’t have to know or understand how the service works under the hood in order to get acceptable performance. I’m very happy that this artifact has now been consigned to the dustbin of history […]

For GET-intensive workloads, AWS continues to recommend using its content delivery network (CDN) Amazon CloudFront to further optimize latency and transfer rates while also reducing cost.

According to its storage performance and scalability checklist, Microsoft Azure's Blob Storage uses a "range-based partitioning scheme to scale and load balance the system", and according to optimizing your Cloud Storage performance, Google Cloud Platform's Cloud Storage "auto-balances your upload connections to a number of backend shards […] through the name/path of the file". As a result, both services still recommend explicit hash-prefix-based rather than sequential naming schemes to optimize performance for large-scale use.

In related news, Amazon S3 has recently announced selective cross-region replication based on object tags, and feature enhancements for S3 Select, both of which can further improve performance for particular use cases.

The Amazon S3 documentation features a developer guide, including a section on performance optimization, and the API reference. Besides supporting the regular S3 API, the AWS CLI provides higher-level S3 commands to copy, move, and sync large numbers of objects efficiently. Support is provided via the Amazon Simple Storage Service (S3) forum. The referenced improvements are automatically available to all customers at no additional charge beyond the regular usage-based Amazon S3 pricing.
