Scaling Stateful Services

by Hrishikesh Barua on Nov 26, 2015. Estimated reading time: 5 minutes

At the Strange Loop conference last September, Caitie McCaffrey, distributed systems engineer at Twitter, summarized the benefits of stateful services, which are less well known in the industry than their stateless counterparts, and explained how they can be scaled. She counts data locality (achieved by the "function shipping" paradigm), higher availability, and stronger consistency models among these benefits, and also gives some real-world examples and a few pitfalls.

Data locality is the idea that each request is routed (or "shipped") to the machine that holds the data it operates on. The data store is hit only the first time a request is made; after the request is processed, the data stays on the service, so similar future requests can be served quickly by the same service from in-memory data. This leads to low latency because the data store does not have to be accessed again. This "function shipping" paradigm is the key difference between stateful and stateless services.
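The cold-path/warm-path behavior described above can be sketched in a few lines of Python. This is an illustrative toy (the class and names are not from the talk): the first request for a key hits the backing store, and later requests for the same key are served from the service's in-memory state.

```python
# A minimal sketch of the function-shipping idea: the first request for a key
# hits the backing store; later requests are served from in-memory state.

class CountingStore(dict):
    """Backing store that counts reads, to make store traffic visible."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.reads = 0

    def __getitem__(self, key):
        self.reads += 1
        return super().__getitem__(key)

class StatefulShard:
    """One service instance that keeps the data it has already loaded."""
    def __init__(self, store):
        self.store = store          # backing data store
        self.state = {}             # in-memory data owned by this shard

    def handle(self, key):
        if key not in self.state:   # cold path: hit the store once
            self.state[key] = self.store[key]
        return self.state[key]      # warm path: served from memory

store = CountingStore({"user:42": {"name": "Ada"}})
shard = StatefulShard(store)
first = shard.handle("user:42")    # loads from the store
second = shard.handle("user:42")   # served from in-memory state
```

After two requests for the same key, the store has been read only once; a stateless instance would have paid the store round trip both times.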

A stateful service also allows stronger consistency models. These are framed by the CAP theorem (Consistency, Availability, and Partition tolerance), which essentially asserts that it is impossible to build a distributed system satisfying all three properties at once. This easy-to-read FAQ covers the basic concepts of CAP. Since no distributed system can avoid the "P" in CAP (#10 in the FAQ), real-world systems that choose availability over consistency are called AP, and those that choose consistency over availability are called CP. Stateful services can be built using sticky connections, where a client's requests are always routed to the server that served it previously, so that a single user's data lives on that server. In such implementations, the degree of consistency in AP systems can be increased. These stronger models include Pipelined Random Access Memory (PRAM) and read-your-writes consistency. The first guarantees that writes issued by a single client are observed in the order they were issued (writes from different clients may be interleaved differently). The second guarantees that once a client has issued a write, it will always read the updated value and never see an older one. Werner Vogels summarizes these and more in his article:
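Why stickiness strengthens consistency can be shown with a toy simulation (illustrative code, not from the article): two replicas replicate lazily, so a client pinned to one replica always reads its own writes, while a client that hops replicas may see stale data.

```python
# Sticky routing and read-your-writes, simulated with two lazily-replicated
# replicas. Names and structure are hypothetical, for illustration only.

class Replica:
    def __init__(self):
        self.data = {}

    def write(self, key, value):
        self.data[key] = value      # replication to peers is deferred

    def read(self, key):
        return self.data.get(key)

def sync(src, dst):
    """The deferred replication step of this toy AP system."""
    dst.data.update(src.data)

a, b = Replica(), Replica()
a.write("x", 1)                 # sticky client writes to replica a
sticky_read = a.read("x")       # same replica: sees 1 (read-your-writes)
stale_read = b.read("x")        # different replica before sync: sees nothing
sync(a, b)
after_sync = b.read("x")        # eventually consistent: now sees 1
```

The sticky client gets read-your-writes "for free" because its reads and writes land on the same server; the non-sticky read only becomes correct after replication catches up.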

Whether or not read-your-writes, session, and monotonic consistency can be achieved depends in general on the "stickiness" of clients to the server that executes the distributed protocol for them. If this is the same server every time, then it is relatively easy to guarantee read-your-writes and monotonic reads. This makes it slightly harder to manage load balancing and fault tolerance, but it is a simple solution. Using sessions, which are sticky, makes this explicit and provides an exposure level that clients can reason about.

McCaffrey says that the industry often focuses on stateless services for achieving scalability and ignores stateful services. Eric Evans in his book Domain Driven Design writes:

When a significant process or transformation in the domain is not a natural responsibility of an entity or value object, add an operation to the model as a standalone interface declared as a service. Define the interface in terms of the language of the model and make sure the operation name is part of the ubiquitous language. Make the service stateless.

Stateless service architectures can be scaled horizontally simply by adding backend servers behind a front-end load balancer. Such applications follow what is called the "data shipping" paradigm: the data needed for a request is fetched from the backend data store. On future requests, the same data is fetched again, regardless of whether the request lands on the same service instance or another, because the instances hold no state.
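The data-shipping pattern is the mirror image of the earlier function-shipping idea, and a short sketch (hypothetical code) makes the cost visible: a stateless handler keeps nothing between requests, so every request pays a store fetch no matter which instance serves it.

```python
# Data shipping: the stateless handler fetches from the store on every
# request. The store counts reads so the repeated fetches are visible.

class CountingStore(dict):
    """Backing store that counts reads."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.reads = 0

    def __getitem__(self, key):
        self.reads += 1
        return super().__getitem__(key)

def stateless_handle(store, key):
    # No instance-local state: data is "shipped" to the service every time.
    return store[key]

store = CountingStore({"user:42": {"name": "Ada"}})
for _ in range(3):
    stateless_handle(store, "user:42")
# Three requests cost three store reads; the stateful shard paid only one.
```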

Is this model always appropriate? According to McCaffrey, it is wasteful for "chatty" applications with frequent communication between client and server, where stateful services are the better option. Kai Wähner agrees, listing these pros of statefulness:

  • Easier to develop when state is shared across invocations;
  • Does not require an external persistence store;
  • In general, optimised for low latency.

Sticky connections can be implemented using persistent connections, but these tend to distribute load unevenly across backend servers: since clients get tied to servers, some servers may sit underutilized while others are overwhelmed with requests. One way to mitigate this is backpressure, which rejects requests once a certain threshold is reached. Stickiness can also be implemented with routing logic, which lets any client talk to any server while the request gets routed to the "correct" one. Two problems in implementing routing are cluster membership (who is in my cluster?) and work distribution (who does what?). Cluster membership can be static or dynamic; the latter can be implemented using gossip protocols or consensus systems. Work distribution can be done by various mechanisms: random placement, consistent hashing, and distributed hash tables.
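Of the work-distribution mechanisms above, consistent hashing is the one a short sketch illustrates best. This is a minimal toy ring (illustrative code, not Ringpop's actual implementation): any front end can compute the "correct" server for a key locally, so no per-client sticky connection is needed.

```python
# A minimal consistent-hash ring. Each node owns many points ("virtual
# nodes") on the ring; a key is routed to the first point clockwise of it.

import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=64):
        self.ring = []                      # sorted (hash, node) points
        for node in nodes:
            for i in range(vnodes):         # virtual nodes smooth the load
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def route(self, key):
        # Find the first ring point at or after the key's hash, wrapping.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.route("user:42")   # the same key always maps to the same node
```

Because routing is a pure function of the key and the membership list, every server that agrees on cluster membership agrees on placement; when a node joins or leaves, only the keys on its ring segments move, unlike naive modulo hashing where nearly everything reshuffles.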

McCaffrey talked about three real world examples of stateful services:

  • Scuba, an in-memory database at Facebook used for code analysis, bug reports, revenue and performance debugging. A single request fans out to multiple backend servers, and the responses are collected and returned along with a measure of how complete the overall response is;
  • Uber’s Ringpop, an application-layer sharding library that also provides request forwarding;
  • Orleans, an actor-based distributed-systems programming model from Microsoft Research.

The stateful pattern is already well known in the MMO game-development world, and the examples above suggest it has recently been seeing more adoption in other domains as well.

Towards the end of the talk, McCaffrey gives an example of how Facebook achieves fast restarts of Scuba database machines by decoupling memory lifetime from process lifetime. After a restart, a Scuba machine takes a long time to read its data back from disk. To get around this, the in-memory data is copied to shared memory when a database node shuts down, and copied back when it comes up.
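The idea can be sketched with a toy handoff (hedged illustration, not Facebook's actual mechanism): on Linux, files under /dev/shm live in tmpfs (RAM), so state parked there outlives the process and can be restored without re-reading the dataset from disk.

```python
# Decoupling memory lifetime from process lifetime: park state in shared
# memory (tmpfs) on shutdown, restore it on startup. Paths are illustrative.

import os
import pickle
import tempfile

# /dev/shm is tmpfs on Linux; fall back to a temp dir elsewhere.
SHM_DIR = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
SNAPSHOT = os.path.join(SHM_DIR, "scuba_like_snapshot.pkl")

def shutdown(state):
    """Old process: copy in-memory state to shared memory before exiting."""
    with open(SNAPSHOT, "wb") as f:
        pickle.dump(state, f)

def startup():
    """New process: fast path restores from shared memory, not from disk."""
    with open(SNAPSHOT, "rb") as f:
        return pickle.load(f)

shutdown({"rows": [1, 2, 3]})   # simulated node going down
restored = startup()            # simulated node coming back up
```

The data survives the process restart because tmpfs belongs to the kernel, not the process; it would not survive a machine reboot, which matches the restart-only scope of the trick described in the talk.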

Some pitfalls in building stateful services that McCaffrey lists in her talk include unbounded data structures causing memory issues, memory-management problems such as long garbage-collector pauses, and the cost of reloading state. State reloading occurs both during recovery and when deploying new code, and carries a similar cost in either case, because the first connection, which pulls the data from the database, is always expensive.
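One standard defense against the unbounded-data-structure pitfall is to cap in-memory state with an eviction policy. A minimal LRU cache (a sketch, not from the talk) keeps the hot set resident while bounding memory use:

```python
# A bounded LRU cache: size never exceeds `capacity`, and the least
# recently used entry is evicted when a new one would overflow it.

from collections import OrderedDict

class BoundedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)     # refresh recency
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)         # a hit also refreshes recency
        return self.items[key]

cache = BoundedCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)   # evicts "a"; the cache never exceeds two entries
```

Evicted entries simply fall back to the cold path (a store fetch), so the bound trades a little latency on rare keys for predictable memory use.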
