How to Make Your In-memory NoSQL Datastores Enterprise-Ready
In-memory NoSQL datastores such as open source Redis and Memcached are becoming the de facto standard for any web or mobile application that cares about its users' experience. Still, large enterprises have struggled to adopt these databases in recent years due to challenges with performance, scalability and availability.
Fortunately, modern application languages (Ruby, Node.js, Python, etc.) and platforms (Rails, Sinatra, Django, etc.) have already built sets of tools and libraries that leverage the performance and rich command set of in-memory datastores (especially in the Redis case) to implement a set of very popular use cases.
These use cases, well covered by open source projects, include job management, forums, real-time analytics, Twitter clones, geo search and advanced caching.
However, for each of these applications (and many use cases we may not have even imagined yet), the availability, scalability and performance of the datastore is extremely critical to success.
This article will outline how to make an in-memory NoSQL datastore enterprise-ready, with tips and recommendations on how to overcome the top seven challenges associated with managing these databases in the cloud.
1. Availability
No matter what you do, your dataset should always be available to your application. This is extremely important for in-memory datastores because, without the right mechanisms in place, you will lose part or all of your dataset in one of the following events:
- node failure (frequently happens in the cloud);
- process restart (you may need this from time to time); or
- scaling event (let’s hope you will need this).
There are two main mechanisms that must be implemented in order to support cases #1 and #2 above (case #3 will be discussed later in this article):
- Replication: You should make sure you have at least one copy of your dataset hosted on another cloud instance, preferably in a different data center if you want to be protected against data center failure events, like those that happened at least four times to Amazon Web Services (AWS) during 2012. Is that easy? Unfortunately, it’s not. Here is just one scenario that makes replication challenging:
- Once your application “write” throughput increases, you may find that your application servers are writing faster than your replication speed, especially if there are network congestion issues between your primary and replica nodes. And once this starts happening, if your dataset is large enough, there is a great chance that your replica will never be synced again.
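The "replica never syncs again" failure mode above can be detected early by watching replication lag against the replication backlog. Below is a minimal sketch of that check; the function names are illustrative, and the offsets would in practice come from the `master_repl_offset` and replica `offset` fields that Redis reports via `INFO replication`.

```python
def replication_lag(master_offset, replica_offset):
    """Bytes of the master's write stream the replica has not yet applied.

    Both offsets are byte counters of the replication stream, as reported
    by Redis's INFO replication output."""
    return master_offset - replica_offset

def replica_at_risk(master_offset, replica_offset, backlog_bytes):
    # If the replica falls further behind than the replication backlog can
    # hold, the master can no longer serve a partial resync and must start
    # a full resync -- under sustained write pressure, this loops forever.
    return replication_lag(master_offset, replica_offset) > backlog_bytes
```

A monitoring job can poll these values every few seconds and alert (or throttle writers) before the replica drops into the full-resync loop.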
- Auto failover: Why is this needed? Well, your in-memory datastore usually processes 100 times more requests/second than your other databases, so every additional second of downtime means more delayed processes for your app and bad experiences for your users. Be sure to follow this list when implementing your auto-failover mechanism:
- Make sure the switchover to your replica node is done instantly as soon as your primary node dies. It should be based on a robust watchdog mechanism that constantly monitors your nodes and switches over to the healthiest replica on failure.
- The process should be as transparent as possible to your app; ideally no configuration change is needed. The most advanced solutions just change the IP behind the DNS address of your datastore, making sure the recovery process takes only a few seconds.
- Your auto-failover should be based on quorum and should be either fully consistent or eventually consistent. More on this below.
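The watchdog's promotion decision described above can be sketched as pure logic: from the health reports it collects, promote the healthy replica that has applied the most of the master's replication stream, i.e. the one that loses the least data. The data shape and function name here are illustrative, not a specific product's API.

```python
def pick_failover_target(replicas):
    """Choose which replica to promote when the primary dies.

    `replicas` maps node name -> {'healthy': bool, 'repl_offset': int},
    where repl_offset counts bytes of the master's replication stream
    already applied by that replica."""
    healthy = {name: r for name, r in replicas.items() if r['healthy']}
    if not healthy:
        # Nothing safe to promote; staying down beats risking split-brain.
        return None
    return max(healthy, key=lambda name: healthy[name]['repl_offset'])
```

In a real deployment this decision should itself be taken by a quorum of watchdogs, so that a single partitioned monitor cannot promote a stale replica on its own.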
2. Consistency during and after network splits
The cloud is a breeding ground for network splits, probably the most complicated problem for any distributed datastore system. Once a split happens, your app might see only some of your in-memory NoSQL nodes, if any, and each of your in-memory NoSQL nodes might see only some of the other nodes.
Why is this such a big problem? Well, if your datastore has any inadvertent design flaws, you may find your app writing to the wrong node during the network split event. This means that all the write requests your app made during the split will be gone once the situation is recovered. This is a significant issue with in-memory NoSQL datastores, as the number of “writes” per second is much higher than all other NoSQL datastore systems.
And what if your in-memory NoSQL is properly designed? No magic here, you’d unfortunately just need to choose between two very bad alternatives (actually one…) as listed below.
- If your in-memory NoSQL datastore is considered fully consistent, you should be aware that in some situations it won’t allow you to write anything until the network split is recovered.
- If your in-memory NoSQL datastore is considered eventually consistent, your app probably uses a quorum mechanism on read request, which either returns a value (on quorum) or is blocked (wait for a quorum).
Note – as there is no eventually consistent in-memory NoSQL datastore available in the market today, only option #1 is a real-world scenario.
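Although, as noted, option #2 is not available in today's in-memory datastores, the quorum-read behavior it describes is worth understanding. A minimal sketch, with illustrative names: with N nodes and a read quorum of R = N/2 + 1, a read either returns the newest value seen by a quorum, or signals the caller to keep waiting.

```python
def quorum_read(responses, n_nodes):
    """Sketch of an eventually consistent quorum read.

    `responses` is a list of (version, value) pairs from the nodes that
    have answered so far. With a majority read quorum R = N//2 + 1, the
    read completes only once R nodes have replied; otherwise the caller
    must block or time out."""
    r = n_nodes // 2 + 1
    if len(responses) < r:
        return None  # no quorum yet -- wait for more replies
    return max(responses)[1]  # highest version wins
```

During a split, the minority side can never assemble R replies, which is exactly why reads there block instead of silently returning stale data.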
3. Data Durability
Even if your in-memory NoSQL solution allows many replication options, you should still consider data persistence and backups for several reasons:
- Perhaps you don’t want to pay the extra cost of in-memory replication but still want to make sure your dataset is available somewhere and can be recovered (even if slowly recovered) from a node failure event.
- Let’s say you want to be able to recover from any failure event (e.g. node failure, multi-node failure, data-center failure, etc.), and you want to preserve the option of having a copy of your dataset in a safe place, even if it’s not up to date with all the latest changes.
- There are many other reasons for using data persistence, such as importing your production dataset to your staging environment for debugging purposes.
Now that you are hopefully convinced data persistence is needed, in most cloud environments you should use a storage device that is attached to your cloud instance (like AWS’ EBS, Azure’s Cloud Drive, etc). If you instead use your local disk, this data will be lost at your next node failure event.
Once you have data persistence in place, your number one challenge becomes maintaining the speed of your in-memory NoSQL datastore while simultaneously writing your changes to persistent storage.
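With open source Redis, the usual way to balance that trade-off is to combine the append-only file (bounded data loss without an fsync per command) with periodic RDB snapshots as compact backup artifacts. A sketch of the relevant redis.conf directives; the exact thresholds are illustrative and should be tuned to your workload:

```
# redis.conf -- append-only file: log every write, fsync once per second.
# Bounds data loss to roughly one second of writes without paying the
# latency cost of an fsync on every command.
appendonly yes
appendfsync everysec

# RDB snapshots for backups: snapshot if at least 1 key changed in 900s,
# 10 keys in 300s, or 10000 keys in 60s.
save 900 1
save 300 10
save 60 10000
```

Snapshot files can then be copied off the instance (e.g. to object storage) to survive the data-center failure scenarios discussed above.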
4. Stable performance
In-memory NoSQL datastores such as Redis and Memcached are designed to process over 100K requests/second at a sub-millisecond latency. But you won’t be able to reach these numbers in your cloud environment unless you follow the principles listed below:
- Make sure the solution you choose uses the most powerful cloud instances (such as AWS’ m2.2xlarge/m2.4xlarge instances or Azure’s A6/A7 instances) in a dedicated environment. Alternatively, implement a mechanism that prevents the “noisy neighbor” phenomenon across different cloud accounts: it should monitor the performance of your dataset in real time, per command and on a regular basis, and react, for example by automatically migrating the dataset to another node, if it finds that latency has crossed a certain threshold.
- To avoid storage I/O bottlenecks, make sure your solution uses a powerful persistent storage device, preferably configured with RAID. Then make sure your solution will not block your app in bursty situations. For instance, with open source Redis you can configure your slave to write to a persistent storage device, leaving the master to deal with your app requests and avoid timeout situations during bursty periods.
- Test cloud vendors’ suggestions for storage I/O optimization like AWS’ PIOPS. In most cases these solutions are good for random access (read/write), but don’t provide much benefit over the standard storage configuration for sequential write scenarios like those used by in-memory NoSQL datastore systems.
- If your in-memory datastore is based on a single-threaded architecture, as Redis is, make sure you are not running multiple databases on the same single-threaded process. This configuration creates a potential blocking scenario, where one database blocks the others from executing commands.
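The per-command latency monitoring suggested above can be sketched as a thin timing wrapper around each datastore call. This is a minimal illustration (the function names and 1 ms threshold are assumptions, not a specific product's API); a real system would feed these alerts into the migration mechanism described earlier.

```python
import time

def timed_call(fn, *args, threshold_ms=1.0, alerts=None):
    """Run one datastore command and record whether its latency crossed
    the alert threshold (sub-millisecond is the target for in-memory
    datastores)."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > threshold_ms and alerts is not None:
        # In a managed setup this is where a migration or rebalancing
        # action would be triggered instead of just recording the event.
        alerts.append((getattr(fn, '__name__', 'call'), elapsed_ms))
    return result
```

Sampling every command is usually too expensive in production; sampling a fixed fraction of calls per command type gives the same signal at a fraction of the cost.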
5. Network speed
Most cloud instances are configured with a single 1Gbps NIC. In the context of in-memory NoSQL datastores, this single 1Gbps interface should deal with:
- app requests
- inter-cluster communication
- storage access
This single interface can easily become the bottleneck of the whole operation, so here are a few suggestions for solving this problem:
- Use cloud instances with 10Gbps interfaces (however, be prepared – they are very expensive).
- Choose a cloud, like AWS, that offers more than one 1Gbps NIC in special configurations, such as instances running inside a VPC.
- Leverage a solution that efficiently allocates resources across the in-memory NoSQL nodes to minimize network congestions.
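A quick back-of-the-envelope calculation shows how little headroom a single 1Gbps NIC leaves. Assuming (illustratively) 100K requests/second with an average payload of 1KB, app traffic alone consumes about 0.82Gbps, before replication and storage traffic share the same link:

```python
def nic_utilization(requests_per_sec, avg_payload_bytes, nic_gbps=1.0):
    """Fraction of a NIC's capacity consumed by app traffic alone
    (replication and persistence traffic come on top of this)."""
    bits_per_sec = requests_per_sec * avg_payload_bytes * 8
    return bits_per_sec / (nic_gbps * 1e9)

# 100K req/s x 1KB payloads on a 1Gbps NIC: ~82% utilized by app
# traffic alone; the same load on a 10Gbps NIC uses under 10%.
print(nic_utilization(100_000, 1024))
print(nic_utilization(100_000, 1024, nic_gbps=10))
```

This is why the throughput numbers quoted for in-memory datastores are often network-bound, not CPU-bound, in the cloud.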
6. Scalability
For a simple key/value caching solution like Memcached, or for the simple form of Redis, scaling is not considered a big problem: in most cases it only requires adding/removing a server to/from a server list and changing the hashing function. However, users with some scaling experience realize that it can still be a painful task. A few recommendations to address this issue:
- Use consistent hashing. Scaling with a simple hash function like modulo means losing all your keys on a scaling event. On the other hand, most people are not aware that with consistent hashing you also lose part of your data when scaling: while scaling out, you lose 1/N of your keys, where N is the number of nodes you have after the scaling event. So if N is a small number, this can still be a very painful process (for instance, scaling out a two-node cluster with consistent hashing means losing 1/3 of the keys in your dataset).
- Build a mechanism that syncs all of your in-memory NoSQL clients with the scaling event to prevent a situation where different app servers write to different nodes during the scaling event.
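The 1/N key-loss figure above is easy to demonstrate empirically with a toy consistent-hash ring (node names and virtual-node count are illustrative):

```python
import hashlib
from bisect import bisect

def ring(nodes, vnodes=100):
    """Build a consistent-hash ring: each node owns `vnodes` points."""
    points = []
    for node in nodes:
        for i in range(vnodes):
            h = int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16)
            points.append((h, node))
    return sorted(points)

def owner(points, key):
    """A key belongs to the first ring point clockwise from its hash."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    i = bisect(points, (h, "")) % len(points)
    return points[i][1]

# Fraction of keys that change owner when scaling from 2 to 3 nodes.
keys = [f"key:{i}" for i in range(10_000)]
before = ring(["node-a", "node-b"])
after = ring(["node-a", "node-b", "node-c"])
moved = sum(owner(before, k) != owner(after, k) for k in keys) / len(keys)
print(f"{moved:.0%} of keys remapped")  # roughly 1/3, as the text predicts
```

For a cache those remapped keys are simply misses, but for a datastore they are lost unless a rebalancing mechanism migrates them during the scaling event.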
Scaling becomes a real issue when dealing with complex commands like UNION or INTERSECT of Redis. These commands are equivalent to the JOIN command in the SQL world and cannot be implemented over a multi-shard architecture without adding significant delay and complexity. Sharding at the application level can address some of these issues, as it allows you to run complex commands at the shard level. However, it requires a genuinely complex application design that is tightly coupled to the configuration of your in-memory NoSQL nodes: the sharded application must be aware of the node on which each key is stored, and scaling events like re-sharding require significant code changes and operational overhead.
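One common way to make shard-level complex commands possible is to route keys by a tag embedded in the key name, so that related keys co-locate on one shard (a convention also used by Redis Cluster's hash tags). A minimal sketch, with illustrative key names:

```python
import zlib

def shard_for(key, n_shards):
    """Pick the shard for a key. Keys sharing the same {tag} land on the
    same shard, so multi-key commands like SINTER or SUNION can run
    shard-locally instead of across the whole cluster."""
    tag = key
    if "{" in key:
        rest = key.split("{", 1)[1]
        if "}" in rest:
            tag = rest.split("}", 1)[0]  # hash only the tag portion
    return zlib.crc32(tag.encode()) % n_shards
```

With this scheme, `user:{42}:friends` and `user:{42}:posts` always share a shard, so an intersection of the two sets never crosses shard boundaries; the cost is that the application must choose tags carefully to avoid hot shards.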
Alternatively, some people claim that new ultra-high RAM instances, like the High Memory Cluster Eight Extra Large 244 GB memory instance of AWS (cr1.8xlarge), solve most scaling issues for complex data types by scaling up your instances. The reality is a bit different: once a dataset grows beyond roughly 25GB-30GB, many other operational issues with in-memory NoSQL datastores like Redis make scaling up impractical. These are related to the challenges described earlier in this article, such as replication, storage I/O, a single-threaded architecture limited to a single core, network bandwidth overhead, etc.
7. Huge Ops overhead
Dealing with all the operational aspects of an in-memory NoSQL datastore creates real overhead. It requires deep understanding of all the bits and bytes of these technologies in order to make the right decisions at the most critical times. It also requires that you stay constantly up to date on the latest changes and trends for these systems, as the technologies change very often (maybe too often).
As I’ve outlined above, it’s critical to have a good understanding of the issues with Redis and Memcached in order to take full advantage of all the benefits of these open source technologies. It is especially important for enterprise IT teams to know how to best overcome these challenges in order to leverage in-memory NoSQL datastores within an enterprise environment. I’m admittedly biased, but see a ton of value in looking at commercial solutions that overcome things like scalability and high-availability limitations without compromising on functionality or performance, since executing these operations in-house requires a high level of domain expertise that is rare.
There are a few in-memory NoSQL-as-a-service solutions in the market for Redis and Memcached; I recommend that you conduct a comprehensive comparison between each of the available services, as well as a DIY approach, in order to determine the best way for your app to overcome the challenges of managing in-memory NoSQL in the cloud. It’s a good idea to get some real-world experience with your top candidates, and some of these services offer a free tier plan exactly for that purpose.
About the Author
Yiftach Schoolman is co-founder and CTO of Garantia Data. Yiftach is an experienced technologist, having played leadership engineering and product roles in diverse fields including application acceleration, cloud computing, Software-as-a-Service (SaaS), Broadband Networks and Metro Networks. Yiftach was founder, President & CTO of Crescendo Networks (acquired by F5, NASDAQ:FFIV), VP Software Development at Native Networks (acquired by Alcatel, NASDAQ: ALU) and part of the founding team of ECI Telecom broadband division, where he served as VP Software Engineering. Yiftach holds a B.S. in Mathematics and Computer Science and has completed studies for a master’s degree in computer science at Tel-Aviv University.
Just pick a working solution!
Start by looking at the leading solution in this space, Oracle Coherence: coherence.oracle.com/
Cameron Purdy | Oracle
Re: Just pick a working solution!
Start by spending a little more money on marketing campaigns.
Re: Just pick a working solution!
I do understand that there are a significant number of architects and developers who do not want to pay for commercial software, no matter what, and would rather spend people's time instead. Sometimes that can be a better decision -- for example, there are some very high quality "free" solutions.
However, if you noticed by reading the article, there are many use cases that are not simple, and many of those problems have already been solved, and solved well. In those cases, spending large amounts of time to cobble together and operate a partially working solution out of supposedly "free" software is likely to be an irresponsible choice, at least if one considers one's fiduciary responsibility.
You talk about commercial software like it is the enemy; that is amazingly irrational, and it demands a close personal examination of your biases. After all, this is the business of software development, and not some early 20th century political revolution. Decisions about what software to use should be rational and well-considered, and should take price into consideration as one attribute, just as one should consider the costs of development, operations, maintenance, and other factors such as reliability and suitability to purpose. While every organization is different, there are a vast number of cases in which commercial solutions are far less expensive than supposedly "free" solutions. In those cases, avoiding a commercial solution based solely on the fact that it is a commercial solution is irrational and irresponsible.
We are fortunate to have a vast array of options in software, including some outstanding open source and commercial options. Closing your mind to an entire swath of options just doesn't make sense.
Cameron Purdy | Oracle
The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.