Making the Case for RAMClouds

| by James Vastbinder on Jan 04, 2011. Estimated reading time: 2 minutes |

Since early 2008, researchers and technologists alike have been tantalized by the possibility of using DRAM to scale high-performance storage through In-Memory Data Grids (IMDGs).  In June 2008, our own Steven Robbins covered it as a hot topic.  How has the discussion progressed since then?

Most prominent among the proponents is Stanford researcher John Ousterhout, who authored "The Case for RAMClouds: Scalable High-Performance Storage Entirely in DRAM".  Proponents argue that disk-oriented approaches to online storage are problematic and do not scale gracefully: while disk capacity has exploded, access latency and bandwidth have not kept pace.

To solve this problem, the essential idea is to shift the home of online data from disk to DRAM, creating a new class of storage above disk.  Stanford is home to the RAMCloud project, where researchers are building an open-source implementation, based on the premises of Ousterhout's original paper, that currently runs on top of Linux/Unix.

The proposed cluster will contain 40 nodes built on commodity hardware, each configured with 24-32GB of RAM plus CPU and disk, and costing between $2,000 and $2,500 per node.  The intent is to provide a durable and available solution with the following goals:

  • 1M operations/sec per server
  • Low-latency access: 5-10 microsecond RPCs
  • All data is always in RAM
  • Multi-tenancy
  • Automated management
  • Storage for datacenters
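A quick back-of-envelope sketch (my own arithmetic, using only the node counts, RAM sizes, and prices quoted above) shows what those figures imply for aggregate capacity and cost per gigabyte:

```python
# Back-of-envelope figures for the proposed 40-node RAMCloud cluster.
# All inputs come from the article; the ranges are illustrative, not a spec.
nodes = 40
ram_per_node_gb = (24, 32)      # GB of DRAM per commodity server
cost_per_node = (2000, 2500)    # USD per node

total_ram_gb = tuple(r * nodes for r in ram_per_node_gb)   # aggregate DRAM
total_cost = tuple(c * nodes for c in cost_per_node)       # cluster price
cost_per_gb = (total_cost[0] / total_ram_gb[1],            # best case
               total_cost[1] / total_ram_gb[0])            # worst case

print(f"Aggregate DRAM: {total_ram_gb[0]}-{total_ram_gb[1]} GB")
print(f"Cluster cost:   ${total_cost[0]}-${total_cost[1]}")
print(f"Cost per GB:    ${cost_per_gb[0]:.0f}-${cost_per_gb[1]:.0f}")
```

In other words, the proposal amounts to roughly a terabyte of DRAM-resident storage for on the order of $100,000, or somewhere around $60-$100 per gigabyte held entirely in memory.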


Opponents disagree, with Jeff Darcy among the most vocal:

Simple fact: a real data-storage system that uses tried-and-true OS caching to serve most requests from memory will beat a system that was designed to be memory-only and then added spill-to-disk as an afterthought. It will perform as well, and it will have better behavior when it comes to protecting data. It will handle a full data-center power outage as well as a single server failure. It will allow the full range of backup and forensics and compliance behaviors that form part of a real data management strategy. That doesn’t mean any representative of category X is better than any representative of category Y for all times and places, but all of those fancy data-lookup algorithms and such can be – and often have – been implemented in a real storage system too.  It’s IMDGs that want to be real storage when they’re all grown up, not the other way around.

Murat Demirbas continues in a recently published review of The Case for RAMClouds:

I think cost trends and size trends have not been taken into account appropriately for the analysis in the paper. Also, there are several research challenges to be addressed before we can reap the benefits of the latency and bandwidth trends. So I contend that RAMCloud is not cost-effective now, and it may not be cost-effective for sometime soon.


Over the last 30 years, disk latency has improved only by a factor of two, from around 20ms to 10ms.  This is a very tough problem space, and the RAMCloud project is focused on exactly that: combining scale with very low access latency.  If successful, the project would enable developers to make more powerful use of information at Internet scale, as well as provide significant advances in database and storage research.
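To put the latency target in perspective, a small sketch (my own arithmetic, using the ~10ms disk figure and the project's 5-10 microsecond RPC goal from above) shows the gap is roughly three orders of magnitude:

```python
# Illustrative latency comparison; rough orders of magnitude only.
disk_latency_s = 10e-3   # ~10 ms disk access today (down from ~20 ms 30 years ago)
rpc_latency_s = 7.5e-6   # midpoint of RAMCloud's 5-10 microsecond RPC target

speedup = disk_latency_s / rpc_latency_s
print(f"Disk vs. RAMCloud RPC: ~{speedup:.0f}x lower latency")
```

That thousand-fold reduction, rather than raw capacity, is the trend the RAMCloud authors are betting on.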


Latency on Disks by uwe schaefer

While it might be true that drive latency has not evolved tremendously, what about SSD RAIDs? No longer limited to the SATA bus (OCZ, for instance, delivers cheap PCIe x4 SSD cards), the sustained write throughput on these is considerable, while latency is quite low compared to disks.
Using memory-mapped files on such media should perform radically differently from using spinning hard drives. SSDs have become really affordable by now, provided you can stay below terabytes per node.

A solution based on that could even be faster than a purely DRAM-based one, because of the smaller number of nodes and hence less inter-node communication. It is certainly far cheaper.
