Article: Scalability Principles

At the simplest level, scalability is about doing more of something. This could be responding to more user requests, executing more work or handling more data. While designing software has its complexities, making that software capable of doing lots of work presents its own set of problems. In this article, Simon Brown presents some principles and guidelines for building scalable software systems. 

Read Scalability Principles, by Simon Brown

The principles covered in the article include:
  1. Decrease processing time
  2. Partition
  3. Scalability is about concurrency
  4. Requirements must be known
  5. Test continuously
  6. Architect up front
  7. Look at the bigger picture
Most of the principles in this article are drawn from notes taken during a scalability discussion at a private summit for architects held in London, UK, in late 2005. The summit was organized by Alexis Richardson, Floyd Marinescu, Rod Johnson, John Davies, and Steve Ross-Talbot. The video entitled "JP Rangaswami on open source in the enterprise & the future of information" is also from the summit.

Community comments

  • Up-front Concurrency Design

    by Luiz Esmiralha,

    Thanks for your article, Simon. It will prove useful in my next projects!

    But I am somewhat puzzled by your statement that "any design for concurrency must be done up-front". Can you elaborate on the reasons for that, ideally with a real-world scenario where postponing concurrency design proved to be a very expensive decision?

  • Re: Up-front Concurrency Design

    by Simon Brown,

    Thanks Luiz, I'm glad that you found the article useful.

    Adding concurrency later is definitely possible ... I just think it's trickier. If I write something with concurrency in mind, I tend to *test* it with concurrency in mind. If I add concurrency afterwards, I tend not to test it as thoroughly and/or I introduce a bunch of nasty side-effects!

    This principle can be applied to data too. If you're building a big distributed system, thinking about concurrent data access (e.g. how data will be locked/synchronized/shared) is easier to do when you have a blank canvas. As another example, think about what you might need to do to add concurrency features to a GUI application - you'd need to figure out your concurrency strategy (e.g. pessimistic locking by the user vs optimistic locking by the application) and then modify code right from the GUI through to the back-end.

    At the end of the day, there's no "right answer". I just find that I make a better job of concurrency if I think about it up front.
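
    To make the pessimistic/optimistic distinction concrete, here is a minimal optimistic-locking sketch in Java - all names are hypothetical illustrations, not code from the article. A write only succeeds if the version the writer read is still current; on conflict the caller must re-read and retry.

    ```java
    // Optimistic locking sketch: the record carries a version number and a
    // write only succeeds if the version is unchanged since it was read.
    public class VersionedRecord {
        private String data;
        private long version;

        public synchronized long currentVersion() {
            return version;
        }

        public synchronized String read() {
            return data;
        }

        // Returns false on conflict; the caller must re-read and retry.
        public synchronized boolean tryWrite(String newData, long expectedVersion) {
            if (version != expectedVersion) {
                return false; // another writer got in first
            }
            data = newData;
            version++;
            return true;
        }
    }
    ```

    Retrofitting this means finding every read/modify/write path in the application and threading the version through it - which is exactly why the up-front design tends to be easier.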

  • good article

    by Surender Kumar,

    Thanks for a good article, Simon. I believe it makes points in line with the KISS approach, which we tend to overlook in the quest for abstraction and loose coupling. My current simple application gives me a throughput of ~20 requests/second on a single-CPU Linux server, which I definitely want to improve, though I don't know what's optimal. Is that count an embarrassment :-) or is it okay? Can you please cite some statistics on this? I'm also sure that increasing the number of servers in a cluster would increase it many fold.

  • Re: good article

    by Simon Brown,

    Thanks ... exactly, simple is good.

    Unfortunately, the answer to your question is "it depends". ~20 requests/sec doesn't sound like much, but you don't say what each of those requests does. If they are very large in nature, then "20" might be an excellent result. Increasing the number of servers will help you scale this number, but it might not provide you with linear scalability. That too depends on things like shared state, contention and so on.

    The best advice I can give is this - if your project sponsors are happy with the performance/scalability of your system, then your job is done. If you need additional scale, then get another server and see what sort of numbers you get out. Stats are useful, but not as useful as testing your software yourself. :-)
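
    In the spirit of "test it yourself", here is a crude, self-contained throughput probe in Java. The URL, thread count, and time window are placeholders to adapt to your own environment; it simply hammers one endpoint from a fixed pool of workers and reports completed requests per second.

    ```java
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class ThroughputProbe {
        public static void main(String[] args) throws Exception {
            final String url = "http://localhost:8080/";  // placeholder endpoint
            final int threads = 10;                       // concurrent workers
            final long windowMillis = 30_000;             // measurement window

            final AtomicLong completed = new AtomicLong();
            final long deadline = System.currentTimeMillis() + windowMillis;
            ExecutorService pool = Executors.newFixedThreadPool(threads);

            for (int i = 0; i < threads; i++) {
                pool.submit(() -> {
                    try {
                        while (System.currentTimeMillis() < deadline) {
                            HttpURLConnection conn =
                                    (HttpURLConnection) new URL(url).openConnection();
                            try (InputStream in = conn.getInputStream()) {
                                while (in.read() != -1) { /* drain the response */ }
                            }
                            completed.incrementAndGet();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(windowMillis + 5_000, TimeUnit.MILLISECONDS);
            System.out.printf("%.1f requests/sec%n",
                    completed.get() * 1000.0 / windowMillis);
        }
    }
    ```

    Run it against one server and then against two: if the number doesn't roughly double, shared state or contention is eating the difference.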

  • Good principles

    by Randy Shoup,

    Hi, Simon --

    Good summary of some critical scaling principles. You will see your points on partitioning echoed in my article on Scalability Best Practices: Lessons from eBay.

    I particularly like the point that scaling is about concurrency. Very clearly stated. That is, after all, the fundamental reason why partitioning helps.

    Ditto the point that scaling out rarely comes for free. If your only option is to scale up, sooner or later you will run out of runway. eBay has seen this time and again in its history, and the rearchitecture efforts to remove those bottlenecks (first in the database, and then in search) were long and painful. What I would add, though, is that this does not necessarily mean it is wrong to design such a system -- just that it is important to be aware that it will not scale. While it is inarguably cheaper to design in scaling from the beginning, the additional time and effort may not be worth it at that moment. Just make the tradeoff in full recognition that when the time comes, it will be more expensive than it would otherwise have been.

    Take care,

    -- Randy

  • Re: good article

    by Surender Kumar,

    Thanks for the comments, Simon. I will test/profile to gauge the optimal throughput. Good work on your site, by the way - keep it up.

  • advice for junior programmers

    by Giuseppe Proment,

    This article is only enough for junior programmers. What about something more profound, like patterns for building a scalable domain model?

  • Nope. Concurrency is not about Scalability

    by navr sale,

    You got it all wrong. A system can scale well on performance metrics (throughput, responsiveness) yet still fail its concurrency metric. For example, the concurrency requirement might be 1000 while the system can only support 100, even though the system scales linearly as the total number of users and data grows toward the projected targets. If concurrency was an afterthought, it is too late to address it even by adding servers, because adding servers is about throughput, not concurrency. There are inherent limits in the system - internal TCP/IP queues, switches, routers, the maximum number of threads in the pool, and so on. Once the system is delivered and peak load exceeds that capacity, it is too late. A system should specify its concurrency requirement explicitly before implementation starts; most of the time it is not possible to remediate later.
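
    To illustrate the distinction being drawn here, consider a hypothetical server-side guard: a semaphore caps the number of simultaneous in-flight requests, regardless of how quickly each one completes. Adding servers raises aggregate throughput, but each node still rejects the caller who exceeds its designed-in concurrency limit.

    ```java
    import java.util.concurrent.Semaphore;

    // Caps *concurrency* (simultaneous in-flight requests), not throughput.
    public class ConcurrencyGuard {
        private final Semaphore inFlight;

        public ConcurrencyGuard(int maxConcurrent) {
            this.inFlight = new Semaphore(maxConcurrent);
        }

        // Returns false when the request would exceed the concurrency limit.
        public boolean handle(Runnable request) {
            if (!inFlight.tryAcquire()) {
                return false; // over the limit: reject rather than queue
            }
            try {
                request.run();
                return true;
            } finally {
                inFlight.release();
            }
        }
    }
    ```

    With maxConcurrent = 100, the guard turns away the 101st simultaneous caller even if the machine is otherwise idle; raising that limit means revisiting the thread pools, queues, and socket limits behind it, which is hard to do after delivery.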
