Scalable System Design Patterns

by Jean-Jacques Dubray on Dec 01, 2010. Estimated reading time: 2 minutes

One of the key achievements of the past decade has been both the wide availability of scalable systems for the masses, especially Cloud-based systems, and the super-scalability of some Web applications. Facebook, for instance, serves 13 million requests per second on average, with peaks of 450M per second. Even with such results, the concepts and architectures behind scalable systems are still rapidly evolving. About three years ago, Ricky Ho, a software architect based in California, wrote a blog post detailing the state of the art of scalable systems. Three years later, he felt it was time to revisit the question.

Ricky defines scalability as:

Scalability is about reducing the adverse impact due to growth on performance, cost, maintainability and many other aspects

In his latest post, he lists the following patterns (a minimal sketch of the Result Cache pattern follows the list):

  • Load Balancer
  • Scatter and Gather
  • Result Cache
  • Shared Space (a.k.a the Blackboard)
  • Pipe and Filter
  • Map Reduce
  • Bulk Synchronous Parallel
  • Execution Orchestration
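
To make these concrete, here is a minimal sketch of the Result Cache pattern in Python. The names (ResultCache, query_backend) and the TTL-based expiry are illustrative assumptions, not details from Ricky's post; the point is simply that repeated requests for the same key skip the expensive backend call.

    import time

    class ResultCache:
        """Memoizes expensive results with a simple time-to-live expiry."""
        def __init__(self, ttl_seconds=60):
            self.ttl = ttl_seconds
            self.store = {}                      # key -> (result, expiry time)

        def get_or_compute(self, key, compute):
            entry = self.store.get(key)
            if entry and entry[1] > time.time():
                return entry[0]                  # cache hit: reuse stored result
            result = compute(key)                # cache miss: do the expensive work
            self.store[key] = (result, time.time() + self.ttl)
            return result

    def query_backend(key):
        time.sleep(0.1)                          # stand-in for a slow database query
        return "result for " + key

    cache = ResultCache(ttl_seconds=30)
    print(cache.get_or_compute("user:42", query_backend))   # slow: computed
    print(cache.get_or_compute("user:42", query_backend))   # fast: served from cache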

While Load Balancing, Result Cache and Map Reduce have been around for a while, some of these patterns target new problems introduced by social media. For instance, Bulk Synchronous Parallel, which was invented in the 1980s, is now used as part of the Google Pregel graph processing project, which supports three general processing patterns (a minimal vertex-centric sketch follows the list):

  • Capture (e.g. When John is connected to Peter in a social network, a link is created between two Person nodes)
  • Query (e.g. Find out all of John's friends of friends whose age is less than 30 and who are married)
  • Mining (e.g. Find out the most influential person in Silicon Valley)
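
To illustrate the vertex-centric style that Pregel popularized, here is a minimal single-machine sketch of a Bulk Synchronous Parallel computation in Python. The toy social graph and the hop-count query are illustrative assumptions; in a real Pregel deployment each superstep runs in parallel across a cluster, with a global barrier between supersteps.

    # Each superstep: every vertex processes its incoming messages, updates
    # its state, and sends messages to its neighbours; a barrier separates
    # supersteps. Here we compute how many hops each person is from "John".

    graph = {                                    # adjacency list: person -> friends
        "John": ["Peter", "Anna"],
        "Peter": ["John", "Mary"],
        "Anna": ["John"],
        "Mary": ["Peter"],
    }

    state = {v: None for v in graph}             # hops from "John" (None = unknown)
    inbox = {v: ([0] if v == "John" else []) for v in graph}

    while any(inbox.values()):                   # stop when no messages remain
        outbox = {v: [] for v in graph}
        for vertex, messages in inbox.items():   # "compute" phase of the superstep
            if not messages:
                continue
            best = min(messages)
            if state[vertex] is None or best < state[vertex]:
                state[vertex] = best
                for neighbour in graph[vertex]:  # notify neighbours of the new value
                    outbox[neighbour].append(best + 1)
        inbox = outbox                           # barrier: deliver messages next superstep

    print(state)                                 # {'John': 0, 'Peter': 1, 'Anna': 1, 'Mary': 2}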

Ricky also introduces the Execution Orchestrator pattern:

This model is based on an intelligent scheduler / orchestrator to schedule ready-to-run tasks (based on a dependency graph) across a cluster of dumb workers.
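
A minimal, single-process sketch of that idea in Python could look like the following; the task names, the dependency graph, and the use of a thread pool in place of a cluster of workers are all illustrative assumptions.

    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    # Task name -> set of tasks it depends on (a directed acyclic graph).
    dependencies = {
        "extract": set(),
        "clean": {"extract"},
        "aggregate": {"clean"},
        "report": {"aggregate", "clean"},
    }

    def run_task(name):                          # the "dumb worker": no graph knowledge
        print("running", name)

    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=2) as workers:
        while len(done) < len(dependencies):
            # The orchestrator schedules every task whose dependencies are satisfied.
            for task, deps in dependencies.items():
                if task not in done and task not in running and deps <= done:
                    running[task] = workers.submit(run_task, task)
            # Wait for at least one running task to finish, then mark it as done.
            finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for task, future in list(running.items()):
                if future in finished:
                    done.add(task)
                    del running[task]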

He reports that this pattern is used as part of the Microsoft Dryad project, which allows programmers "to use thousands of machines without knowing anything about concurrent programming".

A Dryad programmer writes several sequential programs and connects them using one-way channels. The computation is structured as a directed graph: programs are graph vertices, while the channels are graph edges. A Dryad job is a graph generator which can synthesize any directed acyclic graph. These graphs can even change during execution, in response to important events in the computation.
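
The following conceptual sketch, in Python rather than Dryad's actual programming interface, mimics that model on a single machine: each vertex is an ordinary sequential function, and the one-way channels connecting them are plain queues.

    import threading, queue

    SENTINEL = None                              # marks the end of a channel's stream

    def produce(out_channel):                    # vertex 1: emits some numbers
        for i in range(5):
            out_channel.put(i)
        out_channel.put(SENTINEL)

    def square(in_channel, out_channel):         # vertex 2: transforms its input
        while (item := in_channel.get()) is not SENTINEL:
            out_channel.put(item * item)
        out_channel.put(SENTINEL)

    def consume(in_channel):                     # vertex 3: the final sink
        while (item := in_channel.get()) is not SENTINEL:
            print("got", item)

    # The graph: produce -> square -> consume; the two queues are its edges.
    a, b = queue.Queue(), queue.Queue()
    vertices = [
        threading.Thread(target=produce, args=(a,)),
        threading.Thread(target=square, args=(a, b)),
        threading.Thread(target=consume, args=(b,)),
    ]
    for v in vertices:
        v.start()
    for v in vertices:
        v.join()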

The kind of scalability that we routinely achieve today was unthinkable just 10 years ago. Where are the next limits? What is your experience in building scalable systems? What is missing?

Scalability progress of real-time web applications, by Mihai Rotaru

Over the last few years, we at Migratory Data Systems (migratory.ro) have implemented a number of patterns and strategies to create an extremely scalable Comet server used to build real-time web applications.

For a Comet server, scalability is mainly defined by the number of concurrent users receiving real-time data and by the quantity of data published in real time.

We achieved real-time data publication to up to 1,000,000 concurrent users and scaled up to 1 Gbps from a single instance of Migratory Push Server running on a small server, while keeping data latency (the delta between the time the data is created on the server side and the time it is received by the user) very low, on the order of milliseconds. Detailed benchmark results are available here:

migratory.ro/data/MigratoryPushServerBenchmarks...

This is important progress compared to a traditional web server, which cannot handle more than a few thousand concurrent users on a small server when streaming real-time data.

What are the next limits? It depends on the use case. If you distribute large data to many users, you will reach the 1 Gbps limit even though the data overhead introduced by Migratory Push Server is very low (~20 bytes). If you run Migratory Push Server on a small server and distribute data to more than 1 million users, you will eventually reach the memory limit. If you publish data at a frequency of 1 million messages per second with Migratory Push Server from a small server, you will eventually reach the CPU limit.
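
As a rough back-of-envelope illustration of the 1 Gbps limit mentioned above (in Python; the 100-byte payload is an assumed figure, only the ~20-byte overhead comes from the comment):

    link_capacity_bps = 1_000_000_000            # a 1 Gbps network link
    payload_bytes = 100                          # assumed size of one pushed message
    overhead_bytes = 20                          # approximate per-message overhead
    bits_per_message = (payload_bytes + overhead_bytes) * 8

    messages_per_second = link_capacity_bps / bits_per_message
    print(round(messages_per_second))            # ~1,040,000 messages per second
    # With 1,000,000 concurrent users, that is roughly one message per user per
    # second before the link saturates, regardless of how fast the server is.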

Another important feature for scaling further is clustering. We've implemented clustering so that multiple instances of Migratory Push Server installed on multiple machines act as a single push server, offering more scalability while also adding load balancing and fault tolerance to the system.
