Scalability in a Reactive World
Jonas defines performance as the capability of a system to provide a certain response time; it is essentially about three things: latency, throughput and scalability. Scalability he defines as the capability of a system to maintain that response time as load grows.
Scaling up means utilizing multi-core architectures efficiently, and for Jonas clean code and good practices matter here: small methods doing one thing, and simple logic. Two things really matter. Maximising locality of reference, making sure data stays local to its processing context, is extremely important. Contention is the second scalability killer, e.g. overuse of synchronous locks.
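These two points can be made concrete with a small sketch (my own illustration, not from Jonas' talk): N threads bumping one shared counter all contend on a single lock, while giving each thread its own local tally keeps data local to its processing context and removes the contention entirely.

```python
import threading

def contended(n_threads, n_increments):
    """Every increment fights for one shared lock: heavy contention."""
    total = 0
    lock = threading.Lock()
    def work():
        nonlocal total
        for _ in range(n_increments):
            with lock:            # all threads serialize here
                total += 1
    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return total

def local_then_combine(n_threads, n_increments):
    """Thread-local tallies, combined once at the end: no contention."""
    results = [0] * n_threads
    def work(i):
        local = 0                 # private to this thread, no lock needed
        for _ in range(n_increments):
            local += 1
        results[i] = local        # a single write per thread at the end
    threads = [threading.Thread(target=work, args=(i,))
               for i in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(results)

# Both compute the same result; the second avoids the lock entirely.
assert contended(4, 10_000) == local_then_combine(4, 10_000) == 40_000
```

The result is identical either way; what changes is that the second version never forces threads to wait on each other.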
A general rule is to never block. By going asynchronous, fully embracing asynchronous message passing, you get a system that is concurrent by design. Instead of dealing with low-level threads and locks, you can raise the abstraction level and work with events flowing through the system.
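A minimal actor-style sketch of this idea (my own illustration, using only the Python standard library rather than an actor framework): callers never touch the state directly, they send messages to a mailbox, and a single worker thread processes the events one at a time, so no locks around the state are needed.

```python
import queue
import threading

class CounterActor:
    """A toy actor: state is confined to one thread, driven by messages."""
    def __init__(self):
        self._mailbox = queue.Queue()   # events flow in through here
        self._count = 0                 # private to the actor's thread
        self._done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self._mailbox.get()   # process one event at a time
            if msg == "stop":
                self._done.set()
                return
            self._count += msg

    def send(self, msg):
        # Non-blocking from the caller's point of view: fire and forget.
        self._mailbox.put(msg)

    def result(self):
        self.send("stop")               # FIFO: runs after earlier messages
        self._done.wait()
        return self._count

actor = CounterActor()
for _ in range(1000):
    actor.send(1)
print(actor.result())  # → 1000
```

Because the mailbox is FIFO and only the actor's own thread mutates `_count`, there is no shared mutable state to protect, which is the point of the message-passing style.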
Jonas thinks that asynchronous programming has a higher initial hit of essential complexity to get started, but that it then usually stays fairly constant, with very low accidental complexity. A synchronous system feels familiar from the beginning, with low essential complexity, but as the system grows it can easily get out of hand, and you may drown in the accidental complexity of shared mutable state and tightly coupled components.
To truly scale we also need a way of adding resources to our system: scaling out is about managing elasticity, adding nodes or resources.
Scale on demand is an important factor: efficient utilization of cluster and cloud computing architectures, being able to add resources on the fly, enables our applications to react to increasing as well as decreasing load.
Jonas’ conclusion is that by adhering to core principles that have been proven to work for ages we can write really scalable systems.