
Mind Your State for Your State of Mind


Summary

Pat Helland provides a partial taxonomy of diverse storage solutions available over a distributed cluster. Part of this is an exploration of the interactions among different features of a store. The talk then considers how distinct application patterns have grown over time to leverage these stores and the business requirements they meet. It concludes with a set of actionable takeaways.

Bio

Pat Helland currently works as a Software Architect at Salesforce. He has been implementing transaction systems, databases, application platforms, distributed systems, fault-tolerant systems, and messaging systems since 1978.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Helland: Most of my talks, most of my papers come from when I'm just sitting around staring at the ceiling. They're not necessarily directly to do with my day job. It's more driven by annoying the people around me because I'm staring at the ceiling and not paying attention.

This is "Mind your State for your State of Mind." This talk is a paper in the "Communications of the ACM," so if you're interested, just look for the title, look for my name. Go google it and you'll pop it up. ACM's wonderful, they seem to let me write whatever random junk I come up with and they're accepting it. It's awesome, it's really fun.

We're going to do an introduction. We're going to talk about, What's This State Stuff? The Evolution of Durable State Semantics. Session State and Transactions. Identity, Immutability, and Scale. We're going to walk through some patterns I see in applications and how they apply these concepts, and then we'll wrap up.

Introduction

There's interesting trends in storage. I have been getting a paycheck to build database systems, the plumbing, the database management systems and distributed systems since I had hair - a long time. It used to be that the storage was directly attached to the computer, and you had one computer and life was pretty easy. It was good. There were a lot of challenges, but that's what it was.

Then, we started getting these shared appliances called storage area networks, big, fancy things to take care of your data. Then, we started squirting data around the cluster, and then we started saying, "Wait a second. What if I think about my application as data by putting a REST API on top of it," and so you actually write data but that causes some random application behavior. I think it's a fascinating composability we have. Storage has come through huge changes.

Computing has evolved. We used to have a single big momma process called a mainframe region, I remember that. Multiple processes on the same server came later. Then, you'd start doing RPCs, remote procedure calls, across the cluster. I literally remember the first RPC packets being sent in the basement of my college, going across three different computers in a ring. Then, you started getting services and service-oriented architecture, and I participated in a lot of that stuff in the late '90s: what does it mean to have this trust in service-oriented architecture? That evolved later on. Service-oriented architecture controlled the boundary with trust, and in the belly of it, you had state. These microservices don't so often have state and that's giving us headaches. That's giving us challenges, but we're doing it for big and important reasons.

Computing's use of storage has evolved. We used to have direct file I/O and because of that we had to do what I'm going to explain. We'll walk very carefully through careful replacement. What do I mean by that? You had to do it carefully. Careful replacement had two different variations. One is, if I go to change something and it's going to get trashed because I changed it, then how can I deal with that? The other is, if I got to do five things in a row to make a big change, what happens if I get three of them done and then I crash? How do I deal with that?

I worked on some of the very early transaction stuff. I have been working on transaction things since 1978 and I shipped the second distributed two-phase commit, so transactions are in my blood. Fault-tolerant transactions are in my blood, I love them. They provide careful replacement for the app, so it doesn't have to think about a bunch of junk. Later, SANs provided careful replacement for the plumbing, and then came stateful 2-tier and N-tier transactions. I built out the first of that stuff with Microsoft Transaction Server. How do you do stateful N-tier transactions? Then, you get into key value. We did this database stuff, it was really cool, and then we started doing key value for a good reason. Then, we're doing REST PUTs. We invoke the app code. Who knows what happens to that state?

Careful replacement has variations: a write may trash the previous value, or a crash may interrupt a sequence of writes. What are the challenges in modern microservice-based apps? Nowadays, microservices power many scalable apps. There's pools of equivalent services and the incoming work is load-balanced. I love this picture. This is from the SOSP 2007 Dynamo paper. I was not on the team, but I helped a little bit with that project and the paper came out after I left Amazon. Requests come in and they're routed to page rendering components, and then they go to the next level, the request routing, and then they go to the next level and now you get down to the bottom and you have some storage, in this picture, Dynamo instances, and then you get S3 and then you got other data stores. There's this calling of this microservice calls that microservice calls that microservice, which was pretty early in 2007.

Microservices are amazing. They are operationally amazing. They help you do health-mediated deploys or canaries. What the hell is a canary? Back in the day, before these mechanical things happened, coal miners would take a little yellow bird in a cage and crawl down into the coal mine, because if the methane gas killed the bird, they got the heck out of the mine before it killed them, and so it's an early warning system. If I have 100 servers running a particular microservice, I'll roll out the new version of the code to 2 or 5 of them. If it keels over, then I back off. I don't roll it out everywhere, and so it becomes a canary in the coal mine, and microservices make that much easier. Rolling upgrades and supporting fault zones are much easier with microservices, and fault tolerance is much easier when the app semantics allow it.

Durable state is usually not kept in microservices because you can't update the state with these things coming and going. It's really hard to do that when they're coming and going willy-nilly. The latest state is typically kept somewhere else besides the microservices, and so when a microservice needs that state, it goes off and it gets it, and that works when things are changing and moving. Read-through requests to durable state that is not in the microservices - that's the normal thing you have.

What's This State Stuff?

What is this state stuff? Let's talk about durable state and let's talk about session state. We don't talk much about session state anymore, but we should. Durable state is the stuff that gets remembered across requests and persists across failures. If it doesn't persist across failures, then it's not durable. If it doesn't persist across the request, then it's not durable. How do you do it? You could do it in a database. You could do it in file system files. You could do it in key values. You can do it, caches. There's a lot of really innovative ways of capturing that state. You can update it with a single update. You can update it with a transaction. It's not always easy, there are challenges. You can update it with a distributed transaction - even more challenges. You can update it with careful replacement that we're going to talk about more. You can update it with messaging.

A really important question is, can you read your writes consistently? If I go to get the state of something and it's got a key on it or something like that, do I get the answer I expect or am I going to get surprised? Weakly consistent stores and caching each make reading your writes challenging, so it's not always fun when you go and you say, "I want to put X over here," and then you say, "I want to put Y over here," and then you go to read and you get X and you're, "Damn," because it's not right.

Session state is a different thing. Session state is stuff remembered across requests in a session but not remembered across failures. Session state exists within the endpoints associated with the session. You don't typically have to put it away. You don't typically have to write it down, and multioperation transactions are a form of session state. If I see "Begin Transaction" and I talk to something and I say, "Let's do the first step," and he says, "Ok," and you come back, and they say, "Let's do the second step," and he says, "Ok," and then you come back. That relationship is held in session state. Then, you might say, "Let's commit that transaction," and he says, "Ok, I did."

Session state is hard to do when the session smears across services. I had 100 microservices, I go to a microservice. The load balancer says, "You get 37." "Great," and I say, "Let's do the first part of the transaction," and then you come back and say, "Ok, let's do the second part." Now you get 63, and 63 is like, "Huh? I don't know," and so you don't have that session state. Typically, keeping it in a service instance makes it hard to move.

I did a paper in 2005 called "Data on the Inside vs. Data on the Outside" and that's because I saw these trends. I was living in and built this world of databases, of relational, transactional-semantic databases, and I said, "Wait a second. There is stuff that is being squirted out across these boundaries. How does that work? It's leaving the database. What's that data like?" I realized I could taxonomize this. If data's inside the database, you see a classic transactional relation. You see tables, rows, and columns. You have values. Relational relates values, it's cool. It lives in one place in the database and one time in the transaction, and so I wrote a column for ACM Queue called "Escaping the Singularity: It's not your grandmother's database anymore," and that's because in classic relational, you have to make it at one point in time and one point in space, and things get different when you bust out of that. Data on the inside is inside that singularity.

Data on the outside is messages or files or events or key value pairs. It's unlocked data not stored in a classic database. It ends up having identity and optional versioning. I did a paper last year called "Identity by Any Other Name," which is all about how we knit together these loosely coupled systems with identity. Outside data is immutable but may be versioned. Each file or event or message or key has a unique identifier - an ID, a URI, or something else that might identify it; it may be implicit in the session or implicit within the environment.

The Evolution of Durable State Semantics

Let's talk about the evolution of durable state semantics. We used to do careful replacement on disk blocks. Ok, when I was a kid I dropped out of college and got a job. If you did a write to a disk block, it went through a transition. The transition was from the old good value, to garbage, to the new value. If you crashed at the wrong time, you got the garbage. How do you deal with that? Power failures and interruptions might destroy the block. That's a challenge. You would have version one of a block, and then you would modify it to make version two, and a crash in the middle of that transition would leave you with garbage.

Careful replacement for single block writes with mirrors. The trick we did with mirrored data was you would write the new value into some other place, and only when it was safe would you overwrite it, and you would write the tail of the log carefully. If I had mirror one and mirror two at version one, I would trash one block and write the new value there, and only then trash the other and write that. You had to make sure that at all times, by looking between the two mirrors, you could find a good value, so you didn't destroy your transaction log.

Careful replacement for single block writes with non-mirrors: you had to ping-pong the log tail. If I only had one file for the log, I would have to write the partially full tail to the block after the current one, so you would have them both, and then you would go back and write the first one. You had to play these games with careful replacement to make sure that the system didn't trash what you had. We're seeing this pattern move forward. Let's talk about how that works.
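
The ping-pong idea can be sketched in a few lines. This is a toy simulation, not a real log manager; the class and method names are illustrative. The one invariant it demonstrates is that a rewrite of the partially full tail always goes to the block that does not hold the last good copy, so a crash mid-write can only trash the new copy, never the only good one.

```python
# Illustrative sketch of the "ping-pong" log tail for a non-mirrored log:
# never overwrite the only good copy of the tail, because a crash
# mid-write turns the block being written into garbage.
class PingPongLogTail:
    def __init__(self):
        self.blocks = [None, None]  # the pair of tail blocks
        self.current = 0            # which block holds the latest good tail

    def append(self, records):
        target = 1 - self.current            # write to the *other* block
        self.blocks[target] = list(records)  # may be interrupted by a crash...
        self.current = target                # ...the old block is still intact

    def recover(self):
        # After a crash, at least one block still holds a good tail.
        return self.blocks[self.current]

log = PingPongLogTail()
log.append(["rec1"])            # tail written to block 1
log.append(["rec1", "rec2"])    # rewrite of the partially full tail, block 0
```

Each rewrite alternates blocks, so the previous good tail survives until the new one is safely down.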

Careful replacement for record writes - if I wanted to go to a database or something like that and update a record: if I update record X before updating record Y, then I could figure it out, so let's just assume my app had that pattern. If I update X and then I update Y, I can put it back together even if that breaks in the middle and I only get X. If we update X, I can recover from that crash, but if I did it the other way around, if I update Y first and then X, I'm doomed because I can't ever track it back. Imagine, for example, updating the primary record in a database and then updating the secondary indices. I could figure out how to recover from that, because the data I need is in the primary, and I still know I haven't completed the update of the indices. Another example is an application queue. If I write down, "I want to do this stuff," and it is idempotent, I can come back after a crash and redo it even if I was only partway through and I get the right answer, so that's another example of careful replacement.

Transactions made life really wonderful. They were also wonderful for me as a database system implementer. The API was begin, commit, and maybe abort, and then I had to write a million lines of code, because it was a really trivial API with a whole bunch of junk underneath to deal with. Transactions bundle and solve careful record replacement. That X and Y, I don't have to think about it. I say, "Begin, update X, update Y, end," and it's all or nothing. That's amazing. The database ensured it was amazing.

Handle challenges with careful storage replacement. As a transaction system implementer, it was my problem to worry about broken writes to the pages of the disk - my problem, not the app developer's problem. That was awesome, so the writes to storage were really cool, but then you end up needing to do transactions across time. How can I ask a partner computer, a partner company way the heck over there, and send a message, "Please order this stuff," and then later on when the answer came back do the next step of the work? That required multiple transactions, and so I still needed to reason about careful replacement at a transactional level to integrate into the broader world, and so that's a challenge.

Work across space - for example, across trust boundaries. I'm working with another company. How do I make that work? I have to end up doing that by issuing a "Would you please?" and an "Ok, you did," and so there's multiple transactions on my side, and so that leads us to the messaging semantics to integrate loosely coupled systems.

Transactional messaging is pretty cool. A transaction might write down in my local database and my local system, "I want to send this message," and it's going to happen atomically, so I go and I write a bunch of stuff and I write the message with that yellow envelope and that's transaction T1 and then I might atomically consume an incoming message. Over here, there's an incoming message that will atomically consume it and then do a bunch of stuff. It might send other messages, so that's cool.

Exactly once semantics can be supported. A committed desire to send causes one or more sends and I retry until it's acknowledged. I committed that desire, so I'm going to keep sending until finally it comes back and says, "I heard you. Shut up." I've actually trained my wife that if I say, "Ack," it doesn't mean I agree. It doesn't mean I disagree. It means saying it again is not going to help. It's just, "Ack," so that's ok. The message has to be processed at the receiver at most once, so the receiver has to detect duplicates. See those four arrows where only one of them got through - that actually gets to be hard in a scalable system. You need to remember the messages you've processed so you don't process them twice. At sufficient scale, that by itself is very hard, remembering the messages that I've received.
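
The shape of this - at-least-once retry on the sender, duplicate detection on the receiver - can be sketched as a toy simulation. All the names here (`outbox`, `processed_ids`, `deliver`) are illustrative, not a real messaging API.

```python
# Illustrative sketch of transactional messaging: T1 atomically records
# the committed desire to send; the sender retries until acked; the
# receiver remembers processed message ids so retries apply at most once.
import uuid

outbox = []            # committed desires to send (written in transaction T1)
processed_ids = set()  # receiver's memory of message ids it has handled
inbox_effects = []     # effects applied exactly once at the receiver

def send_transactionally(payload):
    # T1: atomically record "I want to send this" with a unique id.
    msg = {"id": str(uuid.uuid4()), "payload": payload}
    outbox.append(msg)
    return msg

def deliver(msg):
    # The network may deliver retries; the receiver must dedupe by id.
    if msg["id"] in processed_ids:
        return "duplicate"
    processed_ids.add(msg["id"])
    inbox_effects.append(msg["payload"])   # T2: consume atomically
    return "ack"

msg = send_transactionally("order 42")
deliver(msg)   # first attempt gets through and is acked
deliver(msg)   # a retry arrives: detected as a duplicate, not reapplied
```

The hard part the talk flags - how long the receiver must remember `processed_ids`, and what happens when the destination splits or moves - is exactly what this toy set hides.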

How do I remember them? I've got to detect duplicates. How long do I remember? Does the destination split? Does the destination move? Read your writes: yes or no? It used to be, back in the day, if you wrote something down, you could read it. Linearizable stores offer "read your writes." Even as the store scales, as soon as you've written to the store, you can read the latest value. Linearizable, we will see, sometimes means, "I might wait a long time for a perfect answer." Does a linearizable store offer fast predictable reads, fast predictable writes, and "read your writes"? In a linearizable store, I always get the perfect answer. To always get the perfect answer, I have to get a quorum of machines in this distributed system to write it down. I've got to get enough done. In most of these systems, you talk to replica 1, you talk to replica 2, you talk to replica 3, and only when all that happens does it say, "Yes." If one of those machines goes slow, your answer goes slow. You can do it, but it might go slow.

Non-linearizable stores do not offer "read your writes." A non-linearizable store has no guarantee that a write will update all the replicas, so I might go read an old value if I hit the replica that didn't get updated. Is that ok? Does that work? I can make it fast, but it might once in a while give you the old answer, so reading and writing will have a very consistent SLA. I skip over sick or dead servers to write. I skip over sick or dead servers to read, but I can't promise you that it's always the latest, perfect answer in that class of store. So, non-linearizable stores have fast and predictable reads and fast and predictable writes, but they can't always read your writes.
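
The contrast between the two read paths can be sketched with versioned replicas. This is a heavily simplified toy (fixed quorums, no failure handling); real quorum protocols pick values by version across any majority. The key property shown: any read majority overlaps any write majority, so a quorum read sees the latest acknowledged write, while a "skip the sick replicas" write may leave a replica that a fast read then hits.

```python
# Illustrative sketch: linearizable quorum read/write vs. a fast
# non-linearizable path. Each replica holds a (version, value) pair.
replicas = [(0, "old"), (0, "old"), (0, "old")]

def quorum_write(version, value, quorum=(0, 1)):
    # Linearizable path: acknowledge only after a majority has the write.
    # If one replica in the quorum is slow, the whole write is slow.
    for i in quorum:
        replicas[i] = (version, value)

def quorum_read(quorum=(1, 2)):
    # Any read majority overlaps any write majority, so the highest
    # version among the votes is the latest acknowledged value.
    return max(replicas[i] for i in quorum)[1]

def sloppy_write(version, value):
    # Non-linearizable path: skip sick or slow replicas, write whoever
    # answers fast. Fast and predictable, but not all replicas updated.
    replicas[0] = (version, value)

def sloppy_read(i):
    # Read whichever replica answered fastest; it may be stale.
    return replicas[i][1]

quorum_write(1, "new")     # stalls until a majority acks
sloppy_write(2, "newer")   # fast, but replicas 1 and 2 never hear of it
```

After the sloppy write, `sloppy_read(2)` still returns the old value - fast, predictable, and not your latest write.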

Cached data offers scalable read throughput with phenomenal SLAs for reads. If I hit a particular replica of the cache and it's slow, I go to the next. I'm all good. It's all fast, but I cannot update it atomically. I can't tell you immediately, "You got the latest answer." A read hits one of the computers and returns one of the values. A scalable cache gives fast, predictable reads, but not fast, predictable writes, and cannot read your writes.

Different stores for different uses. Is it ok to stall on a read? Is it ok to stall on a write? Is it ok to return a stale version? You can't have everything. All of these tricks are useful. It's just knowing and understanding the tricks and the tradeoffs that are involved in a big distributed system. There's a great blog post, "Linearizability versus Serializability," by Peter Bailis. Immutability is a solid rock to stand on. Sometimes, we can store immutable things. If you look for it, many app patterns will create immutable items, and a 128-bit UUID is an example of an identifier that won't collide. Storing immutable stuff can change the behavior of a store because: here's the ID, here's the stuff, there's no version, there's no stale. If I find it, that's fine. If I didn't get it to all the replicas and I go to one replica and it's not there, I go to the next. "There it is." It's not the wrong one, it's not an old one.

If I have a linearizable store with those stalls, and a non-linearizable store with fast, predictable reads and writes, what am I getting here? If I read my writes for immutable data, there is only one correct version. This is one of the powers of immutable data. This is why I wrote the paper "Immutability Changes Everything." There are interesting options for applications with immutable data. Non-linearizable gives really fast and predictable writes and reads - look at that, there's three yeses. All these behaviors are phenomenal if it is immutable data, not if it's changing data. A scalable cache works great too, with lots of fast and predictable reads. Like I say, immutability changes everything.
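
Why immutability turns a non-linearizable store's weakness into a non-issue can be shown in a few lines. This is an illustrative toy (the replica dicts and function names are made up): since an immutable item has exactly one correct value for its ID, a replica either has the right answer or no answer - never a stale one.

```python
# Illustrative sketch: immutable items over a non-linearizable store.
# Writes skip slow replicas; reads just try replicas until one has the
# id. Whatever copy is found is THE value, because it never changes.
import uuid

replicas = [{}, {}, {}]   # writes are fast and may not reach every replica

def put_immutable(value):
    item_id = str(uuid.uuid4())          # an identifier that won't collide
    for rep in replicas[:2]:             # write whoever answers fast;
        rep[item_id] = value             # replica 3 (slow/sick) is skipped
    return item_id

def get(item_id):
    for rep in replicas:                 # try replicas until one has it
        if item_id in rep:
            return rep[item_id]          # found: not wrong, not old
    return None                          # not found anywhere (yet)

blob_id = put_immutable(b"report-v1")
```

There's no version to compare and no stale value to worry about; "read your writes" holds trivially because there is only one write per identity.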

Session State Semantics and Transactions

Let's talk about session state and session semantics. If I have the same process or different process, back in the day, the database was in the same process, literally, physically in the same server and a library call from the app to the database was easy. Sometimes, multiple applications were loaded together. They were all in the same library. I remember that. Then, we started moving to other processes on the same server and the session had session state. I'm talking across the processes. I have to remember what's going on. You see this today in JDBC connections, which is the descendant of what I saw early on. You have cursor state as you traverse it. Each process in the session had information about it, and later still, we started moving the database and the apps across different machines, typically in the same data center.

Stateful sessions were the natural outcome of shared processes. You knew who you were talking to, they knew you. It was all great, we're all in a family. You remember stuff all the time. They worked well for classic service-oriented architecture because when you're talking to the service, you expect a long session with state on each side, and the stateful sessions meant the application could do multiple interactions within a transaction, so I'm going to say, "Begin transaction, talk, answer, talk, answer, commit," and in many circumstances, rich things could work across an N-tier environment, and I architected the first one of those, Microsoft Transaction Server. A client could come in with app-A talking to app-B, talking to DB-1. It could come in and it could talk to two different databases, two different apps, and the client didn't even know there were two databases back there, didn't have a clue, and we would just do distributed transactions to make it work.

Let's look at that with microservices. You go, "How is that going to work?" Microservices stink when it comes to session state, because you may hit the same microservice instance over and over until something changes, and nobody tells you, and now you hit one that's not the same. You tested it, it worked great. Then, it doesn't, because the deployment changed, or things changed. Usually, requests go back to the same microservice. If the individual instance fails, they use another one. No more session state. Session state is needed to create cross-request transactions. Microservice transactions are typically a single request, and so all of that state stuff across multiple requests is hard, and the challenge of two-phase commit is hard. Microservices are in fact worth the restrictions, but those individual single-call transactions are putting us back in the world of careful replacement. I have to carefully replace across this database, and then hit this particular microservice instance, which hits that database, that key value, and I have to think about the careful ordering that I didn't have to think about in a transactional world. On the other hand, I've got phenomenal rollout of the stuff. It works great for operations.

It is not your grandmother's transaction anymore. Transactions only work in a single call to the store. Scalable microservice applications compose: microservices call one another. Scalable stores don't give you transactions across multiple identities or distributed transactions. I can have scalable linearizability with per-identity "read your writes," or scalable non-linearizability without per-identity "read your writes," so it's a fun world.

Identity, Immutability, and Scale

What is identity? Each identity is represented by some name, some number, some string, some URI, and it can represent something that's immutable. If I pick "The New York Times" from October 9th, 2017, San Francisco Bay Edition, it's the same bits, the same paper, the same everything. If I say, Today's New York Times, it's not the same. It's bound to something. Each version of the identity is immutable. A change makes a new version, hence each version is immutable. Creating an identity for the immutable version is really useful. It's really useful to formalize the identity of the immutable thing and then let that be kept in your store, which can now deal with it in different ways. Caching, copying, and referencing are not subject to ambiguity.

Remember we talked about a version here. It may be linear. That is in fact called linearizability. If I can read my writes, I have linear state; it changes in a linear fashion, and so I have a linear version history. It's also called strong consistency. The version history may be a directed acyclic graph, which is a lovely data structure, but it just says, "I pointed to this. You're pointing to this. I don't know about you. We don't know about each other," and so the history is interesting in a directed acyclic graph. It happens all the time if you do not have linearizability.

How do I think about cross-identity relationships? It's a difficult thing to do careful replacement across identities. Remember, we went through the microservices and now we get to update one key atomically - not two, one. How do I do that? Say I want to update identity A so that version W becomes version X, and then update identity B so that version Y becomes version Z. I do the first update, get the answer, and then do the second, and now I have a guarantee of order, so I can assert there's a dependency and a semantic there. Each time window where B is Vz also shows A is Vx, so I can now count on that when I go to read things.

Careful replacement is predictable over linearizable stores, even if they're big and scalable. You never read B at version Z unless you can read A at version X. Careful replacement over a non-linearizable store will behave unpredictably. You write a new version and it sometimes gets wonky. Here, the two replicas on the left are identity A and the two on the right are identity B. I do an update, I do an update, I replicate it, I replicate it. One replica dies, then another dies, and now I have broken my rule. You will have bugs if you try to build careful replacement on top of non-linearizable stores, and this is a challenge we have, because avoiding that puts us back in the "I'm going to stall to get the perfect answer" world. Careful replacement is buggy over non-linearizable stores.
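
The ordering rule can be written down as a tiny sketch over an idealized linearizable store (here just a dict whose writes are visible as soon as they return; the names A, B, Vw-Vz follow the talk's example). The invariant is the one stated above: any reader who sees B at Vz must also be able to read A at Vx.

```python
# Illustrative sketch of careful replacement across two identities over a
# linearizable store: write A's new version first, and write B's new
# version only after A's write has been acknowledged as durable.
store = {"A": "Vw", "B": "Vy"}   # linearizable: a write is visible once acked

def careful_update():
    store["A"] = "Vx"            # step 1: acknowledged before we proceed
    # (a crash here is safe: B still shows Vy, which promises nothing)
    store["B"] = "Vz"            # step 2: only after step 1 is durable

def invariant_holds():
    # Never read B at Vz unless A reads as Vx.
    return store["B"] != "Vz" or store["A"] == "Vx"

careful_update()
```

Over a non-linearizable store, step 1's write might silently miss a replica, so a reader could see B at Vz while reading a stale A - exactly the bug described above, and nothing in the application code can prevent it.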

Some Example Application Patterns

Let's go through some example patterns. Careful replacement over key value, just as we talked about: I've got objects and values that are uniquely identified, the work arrives, it's captured, and new values are written to replace the old. Scalable applications can be built over key value stores with read your writes. This is what you see. It's all good, but again, you may have delays in this, and I want to be really clear about what I'm trying to point out. Each of these stores is linearizable. Each of these stores will update all of the replicas, or formally remove a replica from the cluster, before saying, "Yes, that was a lovely write. That change, I heard it. I'm not going to ever show you the old value." Because of that, I can compose a correct workflow of these things, careful replacement over them.

I could build transactional blob-by-ref. I've got this non-linearizable store and I'm only going to put immutable blobs in it, so immutable blobs are placed into it. What I'm going to do is first say, "I intend to write this blob," in my transactional database, and then I go and I write the blob in three places, and I don't have to have the perfect answer. I can have fast answers that are not necessarily linearizable. Then, I do transaction T2 to say, "That was delicious. It was great, I did it. I remember it and now it's out there." The blob store is implemented with many commodity servers. It's really cool. It's leveraging the immutable nature of the blobs in order to get the non-linearizable store's fast behavior.
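
The two-transaction dance can be sketched as a toy. Everything here (`txn_db`, `blob_replicas`, `save_document`) is an illustrative stand-in: a small transactional store holds the intent and the commit record, while the big cheap store holds the immutable bytes.

```python
# Illustrative sketch of transactional "blob-by-ref": T1 records the
# intent in the transactional store, the immutable blob goes to a fast
# non-linearizable store, and T2 records that the blob is fully out there.
import uuid

txn_db = {}                       # small linearizable/transactional store
blob_replicas = [{}, {}, {}]      # big, cheap, non-linearizable blob store

def save_document(doc_bytes):
    blob_id = str(uuid.uuid4())
    txn_db[blob_id] = "intended"          # T1: "I intend to write this blob"
    for rep in blob_replicas[:2]:         # fast writes; no need to reach all
        rep[blob_id] = doc_bytes          # replicas, the blob is immutable
    txn_db[blob_id] = "committed"         # T2: "it was great, I did it"
    return blob_id

def load_document(blob_id):
    if txn_db.get(blob_id) != "committed":
        return None                       # leftover intent: safe to ignore
    for rep in blob_replicas:
        if blob_id in rep:
            return rep[blob_id]           # any copy found is THE copy
    return None

ref = save_document(b"big-blob")
```

A crash between T1 and T2 leaves only an "intended" record and possibly some orphan blob copies, which a cleaner can garbage-collect; readers never see a half-written document.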

Let's talk about shopping carts here for a second. It turns out that customers don't like being told to wait for the shopping cart. If you show an old version of the shopping cart to a user, they will fix it. If you say, "Wait a minute. It's going to take me 10 minutes to get you the perfect shopping cart the way you saw it last," they will go do the dishes. This is just a fact of how we've measured and seen how people are. It is a better business decision for a shopping cart to once in a great while show you an answer which is not perfectly up to date with the last shopping cart. It's annoying to the customer, but it's a better business decision to do that, so this may be surprising, but this is what's done. Customers are very unhappy if it stalls. The shopping cart should be right now even if it's not right, and so that's a deep tagline for you to take away when you think about the behavior of your distributed stores and your application. Do you want it right or do you want it right now? In very many cases, human beings want it right now. It turns out you get to check the cart before you say "Submit," and then they process it and say, "Here you go."

Let's talk about an e-commerce product catalog. I have a product catalog that is fed by crawls from a backend and I push the products out there. Each product has a unique identifier. The identifier takes me to a partition. If I come in with D, I'm going to go to the leftmost partition. Out of that, I have to pick a replica. How do I pick a replica? Load balancing, random assignment. "That one's too slow," retry to a different one. I have replicas within each partition, and so that ends up being very powerful. The backend's updates come in and they're pub-sub distributed through the caches themselves, and then the user catalog lookups are latency sensitive. The feed in is typically not very latency sensitive. It can stutter, it can jitter, it can take its time. The incoming requests with people behind them - you better be fast, and so you're going to retry. It's pretty interesting stuff. It needs low latency with predictable reads, but not low latency writes, and no read your writes. Lots of scale is needed for these things.
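
The read path just described - identifier to partition, then any healthy replica, retry elsewhere if one is sick - can be sketched as a toy. The partition count, replica layout, and function names are all made up for illustration.

```python
# Illustrative sketch of the catalog read path: the product id picks a
# partition; the reader picks any replica in that partition and skips to
# another if the one it tried is sick or slow.
import random

PARTITIONS = 4
partitions = [[{"ok": True, "data": {}} for _ in range(3)]   # 3 replicas each
              for _ in range(PARTITIONS)]

def partition_for(product_id):
    return hash(product_id) % PARTITIONS      # identifier -> partition

def publish(product_id, details):
    # The backend's pub-sub push: latency-insensitive, it can stutter and
    # jitter, and it updates every replica of the product's partition.
    for replica in partitions[partition_for(product_id)]:
        replica["data"][product_id] = details

def lookup(product_id):
    # Latency-sensitive path: try replicas in random order, skipping
    # any that are sick, and return the first answer found.
    replicas = partitions[partition_for(product_id)]
    for replica in random.sample(replicas, len(replicas)):
        if replica["ok"]:
            return replica["data"].get(product_id)
    return None

publish("D-123", {"name": "widget"})
partitions[partition_for("D-123")][0]["ok"] = False   # one replica goes sick
```

Even with a dead replica, `lookup` stays fast and correct, because any surviving replica of the partition can serve the read.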

Search - you have web crawlers that come in and feed the search indexers, and the search terms are then identified for each document and spread out. Updates to the index are typically not very latency sensitive. However, when you go to Google or Bing or something to search, you are latency sensitive and you get really grumpy if it takes a long time. What's even harder here than the product catalog is that those searches have to hit all of the shards, all of the partitions, and get answers from all of them to coalesce them to figure out what to recommend to you. Any one of hundreds or thousands that is slow will cause you to experience annoyance, so it's very important that each request gets low latency.

One of my favorite papers, any systems person should know this paper well, is "The Tail at Scale" by Jeff Dean and Luiz André Barroso, Communications of the ACM, 2013. Basically, what they say is, we send the request out to all the shards and we wait until the 95th-percentile timing, meaning 19 out of 20 requests happened before this time and 1 out of 20 takes longer. If one of those many hundreds of requests crosses that threshold, retry it to another server within the shard, and now the 99.9th-percentile latency is dramatically lower at the expense of 5% more work, because 5% of the requests are issued twice, and, of course, those requests have to be idempotent. This gives you low-latency reads, not-so-predictable low-latency writes, and not-so-read-your-writes, because it's soft and gushy. Search results from these web crawlers are not transactionally correct and we are super happy with that; lots of scale.

Conclusion

Conclusion, it's about the application pattern. Do I want low-latency, predictable reads? Low-latency, predictable writes? With careful replacement, I should stall. I want to stall; I'm willing to stall in order to not tangle up my work, and I can work across multiple key values. Transactional "Blobs-by-Ref": I'm taking big objects and pushing them out as documents into some store. I don't want to stall writing and reading those. I may have a stall putting the references into the transactional system, but I don't want a second source of stalls by putting the blobs themselves into a transactional or linearizable store.

Shopping carts - I need to read them fast, I need to write them fast. I'm willing to have them go sideways a little bit once in a great while. It's a business decision for the happiness of the customers. Product catalog - similarly, the catalog changes. The manufacturers put in a change. It doesn't necessarily have to get out into the caches immediately; it can take minutes or sometimes hours, but I want to be able to read it fast, because there are people waiting to see it. By the way, sometimes you'll pull up the product catalog and see one set of details, then you'll retry and it will be different, and that's considered acceptable because people know how to handle that. Search, similarly: low-latency reads are super important, but the writes and reading your writes are not. Linearizable and read-your-writes are not always required in modern, scalable applications.

Takeaways

Just so you guys know, whenever I do a talk, the first slide I write is the takeaways, and I will sometimes take days on that one slide. This is what I hope you remember. There are two kinds of state: there's session state and durable state. Stateful sessions remember stuff; stateless sessions don't remember anything across calls. Durable state is the stuff that is remembered even after you crash and restart.

Most scalable computing comprises microservices with stateless interfaces. They need to handle partitioning, failures, and rolling upgrades. Stateful sessions are really hard in microservices. I didn't say impossible, I said "wow" hard. Microservices may call other microservices to read data or to get stuff done. That's the world we're living in, this scalable world. Transactions across stateless calls usually aren't supported in microservice solutions. I didn't say never, I said usually; it's hard. Microservices: there's no server-side session state, there's no transactions across calls, and there's no transactions across objects, because it's hard to make it happen. I didn't say it's impossible, but it's really clunky, and we haven't really figured out how to do it effectively, and I don't predict we will, because the thing that makes them lovely as a lightweight thing doesn't hold sessions super well.

Coordinated changes use the careful replacement technique, which literally goes back to the old days. They were doing this in the '60s; I was doing it in the '70s. Each update provides a new version of the stuff with a single identity: I had an old version X, and now I've got a new version Y, and that is an update. The complex content within the new version may include many things, like outgoing and incoming messages. Transactions can be used for some of this trick as long as they're local to an environment that supports them, but the whole careful replacement knowledge and model is becoming even more important in microservices so that you can make everything work together.
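The "replace version X with version Y under one identity" idea above is essentially a compare-and-swap on a version number. Here is a minimal in-memory sketch of that pattern (the class and method names are invented for illustration; a real store would do this with a conditional write):

```python
import threading

class VersionedStore:
    """Toy key-value store supporting careful replacement: a writer may
    install a whole new value under an identity only if the version it
    read is still the current one."""

    def __init__(self):
        self._data = {}              # identity -> (version, value)
        self._lock = threading.Lock()

    def read(self, key):
        """Return (version, value); missing keys read as version 0."""
        return self._data.get(key, (0, None))

    def replace(self, key, expected_version, new_value):
        """Careful replacement: succeed only if nobody else replaced
        the value since we read expected_version."""
        with self._lock:
            current_version, _ = self._data.get(key, (0, None))
            if current_version != expected_version:
                return False         # lost the race; re-read and retry
            self._data[key] = (current_version + 1, new_value)
            return True
```

A writer that loses the race re-reads, reapplies its change to the new version, and tries the replacement again; that retry loop is what keeps concurrent updates from tangling.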

Different applications demand different behavior from durable state. There is no one perfect answer. What is the solution? What does your app need? What does your business need? Do you want it right, meaning, "I wrote that data. I got that same data back," or do you want it right now, meaning, "I don't want to stall when things in the data center go wonky"? By the way, we're seeing more and more stalls in the data center: as we build them on lighter-weight, more efficient servers, they tend to have what are known as gray failures, and I'm beginning to write papers about that. It's very interesting to think about gray failures, but they interact with this. Things stall more often.

Many app solutions based on object identity may be tolerant of stale versions. Think about that. It's ok, staleness can be great, and immutable objects provide the best of both worlds: they are both right and right now. Not everything can be immutable, but you can many times decompose your solution to have a little bit of change and a lot of immutability, and that's a very interesting thing to think about.

 


 

Recorded at:

Jan 06, 2020
