Scaling Web Applications using Cache Farms and Read Pools
Michael Nygard, author and No Fluff Just Stuff speaker, recently wrote about two approaches to improving web application performance and scalability: Cache Farms and Read Pools.
The idea behind Cache Farms is that application nodes in a cluster share an external cache instead of each maintaining its own. This eliminates redundancy and gives heap space back to the application server:
By moving the cache out of the app server process, you can access the same cache from multiple instances, reducing duplication. By getting those objects out of the heap, you can make the app server heap smaller, which will also reduce garbage collection pauses. If you make the cache distributed, as well as external, then you can reduce duplication even further.
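The cache-aside pattern against a shared external cache can be sketched as follows. This is a minimal illustration, not any vendor's API: `ExternalCache` is a stand-in for a cache process (memcached, Coherence, etc.) shared by every app-server node, and `load_user` is a hypothetical data-access helper.

```python
class ExternalCache:
    """Stands in for a cache process shared by all app-server nodes."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def put(self, key, value):
        self._store[key] = value


def load_user(cache, db, user_id):
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is None:          # cache miss: hit the database once
        value = db[user_id]
        cache.put(key, value)  # now visible to every app node
    return value
```

Because the cache lives outside the app server process, a second node calling `load_user` with the same id is served from the shared cache without touching the database, and none of that data occupies the app server's heap.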
Read Pools take advantage of the fact that most data-driven applications perform many more reads than writes. By directing reads to a dedicated set of read-only replicated databases, you can relieve the burden on the database handling writes:
How do you create a read pool? Good news! It uses nothing more than built-in replication features of the database itself. Basically, you just configure the write master to ship its archive logs (or whatever your DB calls them) to the read pool databases.
Michael points out that updating the read hosts may not happen in real time depending on what database you are using, but notes that this might be a perfectly acceptable tradeoff. MySQL users can take advantage of Read/Write Splitting with MySQL-Proxy.
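The read/write split that MySQL-Proxy performs at the network layer can also be done in application code. Below is a hedged sketch of that idea, with plain strings standing in for connection objects: statements beginning with `SELECT` are spread round-robin across the replica pool, and everything else goes to the write master.

```python
import itertools


class ReadWriteRouter:
    """Routes SQL statements to a read pool or the write master."""

    def __init__(self, master, replicas):
        self.master = master
        self._replicas = itertools.cycle(replicas)  # round-robin over the pool

    def route(self, sql):
        if sql.lstrip().lower().startswith("select"):
            return next(self._replicas)  # reads spread across the replicas
        return self.master               # writes always hit the master
```

A real router would also need to pin reads-after-writes to the master when replication lag matters, which is exactly the tradeoff Michael describes.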
The reflexive answer to scaling is, "Scale out at the web and app tiers, scale up in the data tier." I hope this shows that there are other avenues to improving performance and capacity.
Oracle Coherence supports cache farms, offering:
* Dynamic scale-out, i.e. easily adding and removing servers without interruption to the application (and without losing any data).
* Configure any level of redundancy, including no redundancy.
* By using dynamic partitioning, Coherence linearly scales out both cache capacity and throughput.
* Read-through and read-coalescing for database access.
* Write-through and write-behind for database updates, including write-coalescing.
* Ability to layer caches, e.g. small on-heap caches layered on top of a large out-of-VM partitioned cache.
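The last item in the list, layered caches, can be illustrated with a short sketch. This is not Coherence's API, just the shape of the idea: a small, bounded, per-process L1 cache (LRU-evicted) sits in front of a large shared L2 cache, represented here by a plain dict.

```python
from collections import OrderedDict


class LayeredCache:
    """A small on-heap cache (L1) layered over a large shared cache (L2)."""

    def __init__(self, l2, l1_capacity=128):
        self.l1 = OrderedDict()        # small, per-process, LRU-evicted
        self.l2 = l2                   # large, shared tier (dict stands in)
        self.l1_capacity = l1_capacity

    def get(self, key):
        if key in self.l1:
            self.l1.move_to_end(key)   # refresh LRU position
            return self.l1[key]
        value = self.l2.get(key)       # fall through to the shared tier
        if value is not None:
            self._admit(key, value)
        return value

    def _admit(self, key, value):
        self.l1[key] = value
        if len(self.l1) > self.l1_capacity:
            self.l1.popitem(last=False)  # evict least recently used
```

Hot items are served from the tiny on-heap tier with no network hop, while the bulk of the data stays out of the app server's heap in the partitioned tier.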
Oracle Coherence: Clustered Caching for Java and .NET
On the commercial side, GigaSpaces provides distributed, external, clustered caching. It adapts to the "hot item" problem dynamically to keep a good distribution of traffic, and it can be configured to move cached items closer to the servers that use them, reducing network hops to the cache.
And in another post he writes:
What can I say about GigaSpaces? Anyone who's heard me speak knows that I adore tuple-spaces. GigaSpaces is a tuple-space in the same way that Tibco is a pub-sub messaging system. That is to say, the foundation is a tuple-space, but they've added high-level capabilities based on their core transport mechanism.
So, they now have a distributed caching system. (They call it an "in-memory data grid". Um, OK.) There's a database gateway, so your front end can put a tuple into memory (fast) while a back-end process takes the tuple and writes it into the database.
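The front-end/back-end flow described above is essentially write-behind, and can be sketched with a queue standing in for the in-memory space. This is an illustration of the pattern, not GigaSpaces' API: the front end enqueues a record and returns immediately, while a background worker drains records into the (simulated) database.

```python
import queue
import threading


def start_write_behind(db):
    """Start a background worker that drains pending writes into db."""
    pending = queue.Queue()

    def worker():
        while True:
            record = pending.get()
            if record is None:   # sentinel: stop draining
                break
            key, value = record
            db[key] = value      # the slow, durable write
            pending.task_done()

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return pending, t
```

The front end's latency is bounded by the in-memory `put`, not by the database write, which is the point of the gateway Michael describes.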
Just this week, they announced that their entire stack is free for startups. (Interesting twist: most companies offer the free stuff to open-source projects.)...
I love the technology. I love the architecture.
To check out our free offer to start-ups and individuals go here.
GigaSpaces: The Scale-Out Application Platform