
Dropbox Improves Sync Performance Using a Modified Brotli


After analyzing the performance of several common lossless compression algorithms, Dropbox engineers slightly modified Google's Brotli encoder to improve the performance of their sync engine. This allowed them to reduce median latency and data transfer by more than 30%, Dropbox engineers Rishabh Jain and Daniel Reiter Horn maintain.

Dropbox has been using compression techniques to improve syncing performance since they wrote their original sync engine. However, recent advancements in the field of lossless compression motivated Dropbox engineers to explore the possibility of moving away from the old zlib algorithm they had originally adopted.

The result of their effort, dubbed Broccoli, is essentially an extension of Brotli that makes it possible to concatenate Brotli files with one another. This enables compression speeds of up to 3x that of the original Brotli implementation, thanks to the use of multiple cores to compress chunks of a file in parallel and then concatenate the results. Jain and Reiter Horn enumerate a number of key advantages that Brotli had in their view, including the possibility of dynamically selecting Huffman tables at runtime, a safe, Rust-based implementation, and being a standard encoding for HTTP.

To unlock multithreaded compression (and concatenate-ability), we would need to compress chunks of the file and generate a valid Brotli output. In our design process we discovered a subset of the original Brotli protocol, if modified, could allow files to be stitched together after being compressed.
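A minimal sketch of this chunked, multi-core approach might look as follows in Python. The broccoli_concat stub is a hypothetical placeholder: plain Brotli outputs cannot simply be byte-concatenated into one valid stream, and making that stitching possible is precisely the protocol modification Broccoli introduces.

```python
import brotli  # the standard Brotli bindings: pip install Brotli
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 4 * 1024 * 1024  # assumed chunk size, 4 MiB


def broccoli_concat(frames: list[bytes]) -> bytes:
    """Hypothetical stand-in for Broccoli's stitching step.

    Naively joining independent Brotli streams does not produce a valid
    Brotli file; Broccoli's modified subset of the protocol is what makes
    this concatenation legal."""
    raise NotImplementedError("requires the Broccoli bitstream changes")


def compress_parallel(data: bytes) -> bytes:
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    # Compress each chunk on its own core, then stitch the results together.
    with ProcessPoolExecutor() as pool:
        frames = list(pool.map(brotli.compress, chunks))
    return broccoli_concat(frames)
```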

One thing Dropbox engineers had to take into account was the possibility of data corruption while compressing a chunk on the client, which they tackled with an extra decompression call to make sure the compressed output actually encoded the original data. This verification step is not free, since the decompressor processes about 300MiB per second per core, but Dropbox engineers deemed the cost acceptable given the overall improvements in latency and storage cost.
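A minimal sketch of that self-check, using the standard Python Brotli bindings (the helper name and fallback behavior are assumptions for illustration):

```python
import brotli  # pip install Brotli

def compress_verified(data: bytes) -> bytes | None:
    """Compress a chunk and confirm it round-trips to the original bytes."""
    compressed = brotli.compress(data)
    # Guard against corruption (e.g. faulty RAM or a codec bug) by
    # decoding the output and comparing it with the source data.
    if brotli.decompress(compressed) != data:
        return None  # caller falls back to sending the chunk uncompressed
    return compressed
```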

Similarly, when decompressing, Dropbox engineers had to consider the possibility of a failure. In this case, instead of reporting an error to the user or, worse, crashing the client, the server will simply send back uncompressed data. The same approach is taken for incompressible data, where sending compressed data would only impose a performance penalty on the client.
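The fallback logic on the download path could look roughly like the following hypothetical client helper; the client object and its get method are placeholders for illustration, not Dropbox's actual API:

```python
import brotli  # pip install Brotli

def fetch_block(client, block_id: str) -> bytes:
    """Fetch a block, preferring compressed transfer but degrading gracefully."""
    payload, is_compressed = client.get(block_id, accept_compressed=True)
    if not is_compressed:
        # The server chose raw bytes, e.g. because the block is incompressible.
        return payload
    try:
        return brotli.decompress(payload)
    except brotli.error:
        # Undecodable payload: re-request raw bytes instead of surfacing
        # an error to the user or crashing the client.
        payload, _ = client.get(block_id, accept_compressed=False)
        return payload
```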

As mentioned, this effort brought significant benefits for Dropbox and its users, both when downloading and when uploading files.

For the upload path, we noticed that the daily average percentage savings in bandwidth usage were around 33%. When aggregated daily and normalized, we found that 50% of the users save around 13%, 25% of users save around 40%, and 10% of the users benefit the most by saving roughly 70% while uploading files.

With download compression, we noticed that the average daily savings for all requests was around 15%. When normalized by hosts, we saw 50% of the users saving 8%, 25% of the users saving around 20%, and 10% of users benefitting most from download compression saving around 50%.

Jain and Reiter Horn provide plenty of additional details in their article, including a lower-level overview of the protocol, so do not miss it if you are interested in optimizing remote file syncing.
