Article: Scalability Principles
Read Scalability Principles, by Simon Brown
The principles covered in the article include:
- Decrease processing time
- Scalability is about concurrency
- Requirements must be known
- Test continuously
- Architect up front
- Look at the bigger picture
Up-front Concurrency Design
But I am somewhat puzzled by your assertion that "any design for concurrency must be done up-front". Can you elaborate on the reasons for that, ideally with a real-world scenario where postponing concurrency design until later proved to be a very expensive decision?
Re: Up-front Concurrency Design
Adding concurrency later is definitely possible ... I just think it's trickier. If I write something with concurrency in mind, I tend to *test* it with concurrency in mind. If I add concurrency afterwards, I tend not to test it as thoroughly and/or I introduce a bunch of nasty side-effects!
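A minimal Python sketch (my own illustration, not from the article or the comment above) of the kind of bug that retrofitted concurrency can hide: an unsynchronized read-modify-write that may lose updates under multiple threads, next to the locked version that a concurrency-minded test would exercise.

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment_unsafe(self):
        # Read-modify-write with no lock: another thread can interleave
        # between the read and the write, silently losing updates.
        current = self.value
        self.value = current + 1

    def increment_safe(self):
        # Same operation under a lock. Only a test that actually runs
        # multiple threads will tell these two methods apart.
        with self.lock:
            self.value += 1

def hammer(increment, threads=4, iterations=10_000):
    """Call `increment` from several threads at once, then wait for all."""
    workers = [
        threading.Thread(target=lambda: [increment() for _ in range(iterations)])
        for _ in range(threads)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

A single-threaded test passes for both methods; only hammering `increment_unsafe` from several threads has a chance of exposing the lost-update bug, which is exactly the "test with concurrency in mind" point.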
This principle can be applied to data too. If you're building a big distributed system, thinking about concurrent data access (e.g. how data will be locked/synchronized/shared) is easier to do when you have a blank canvas. As another example, think about what you might need to do to add concurrency features to a GUI application - you'd need to figure out your concurrency strategy (e.g. pessimistic locking by the user vs optimistic locking by the application) and then modify code right from the GUI through to the back-end.
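As a rough sketch of the optimistic side of that locking trade-off (my own illustration; the class and method names are invented): each writer remembers the version it read, and a write only succeeds if nobody else has bumped the version in the meantime.

```python
import threading

class VersionedRecord:
    """Optimistic locking: no lock is held while the user edits; a
    conflict is detected at write time by comparing versions."""
    def __init__(self, data):
        self.data = data
        self.version = 0
        self._lock = threading.Lock()  # guards only the compare-and-set below

    def read(self):
        with self._lock:
            return self.data, self.version

    def try_update(self, new_data, expected_version):
        with self._lock:
            if self.version != expected_version:
                return False  # someone updated since we read; caller must retry
            self.data = new_data
            self.version += 1
            return True
```

The pessimistic alternative would have the user hold a lock on the record for the whole edit session; threading this choice through from the GUI to the back-end is the modification the comment above is describing.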
At the end of the day, there's no "right answer". I just find that I do a better job of concurrency if I think about it up front.
Re: good article
Unfortunately, the answer to your question is "it depends". ~20 requests/sec doesn't sound like much, but you don't say what each of those requests does. If they are very large in nature, then "20" might be an excellent result. Increasing the number of servers will help you scale this number, but it might not provide you with linear scalability. That too depends on things like shared state, contention and so on.
The best advice I can give is this - if your project sponsors are happy with the performance/scalability of your system, then your job is done. If you need additional scale, then you need to get another server and see what sort of numbers you get out. Stats are useful, but not as useful as testing your software yourself. :-)
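One way to see why adding servers rarely buys linear scalability is Amdahl's law — a standard model, applied here as my own back-of-the-envelope illustration, not a figure from the comments above. If some fraction of each request is serialized on shared state, that fraction caps the speedup no matter how many servers you add.

```python
def amdahl_speedup(parallel_fraction, servers):
    """Speedup predicted by Amdahl's law when only `parallel_fraction`
    of the work can be spread across servers; the rest stays serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / servers)

# If just 5% of each request contends on shared state:
for n in (2, 8, 64):
    print(n, "servers ->", round(amdahl_speedup(0.95, n), 1), "x")
# 2 servers give ~1.9x, 8 give ~5.9x, and 64 give only ~15.4x - far from linear.
```

Real systems can behave even worse than this model suggests, since coordination overhead tends to grow with the number of servers — which is why measuring your own software, as suggested above, beats relying on stats.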
Good summary of some critical scaling principles. You will see your points on partitioning echoed in my article on Scalability Best Practices: Lessons from eBay.
I particularly like the point that scaling is about concurrency. Very clearly stated. That is, after all, the fundamental reason why partitioning helps.
Ditto the point that scaling out rarely comes for free. If your only option is to scale up, sooner or later you will run out of runway. eBay has seen this time and again in its history, and the rearchitecture efforts to remove those bottlenecks (first in the database, and then in search) were long and painful. What I would add, though, is that this does not necessarily mean that it is wrong to design such a system -- just that it is important to be aware that such a system will not scale. While it is inarguably cheaper to design in scaling from the beginning, the additional time and effort it requires may not be worth it at that moment. Just make that tradeoff in full recognition of the fact that when the time comes, it will be more expensive than it otherwise would have been.
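To make the partitioning/concurrency link concrete (a generic sketch of hash partitioning, not eBay's actual scheme): hashing a key to a partition lets each partition be served by its own database or worker, so requests for different partitions never contend with each other.

```python
import hashlib

def partition_for(key, num_partitions):
    """Deterministically map a key to one of `num_partitions` shards.
    A stable hash (not Python's per-process-randomized hash()) keeps
    the mapping consistent across processes and restarts."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# Requests for different keys land on different shards and can be
# handled concurrently, with no shared state between partitions.
```

Note that `% num_partitions` means changing the partition count remaps most keys — one reason repartitioning a live system, like the rearchitecture efforts described above, is so painful.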