
Reactive DDD—When Concurrent Waxes Fluent



Vaughn Vernon gives practical guidance on using DDD to model business-driven solutions that result in software that is fluent, type-safe, and with core Reactive properties. Specific attention is given to moving legacy systems that have deep debt to ones that have clear boundaries, deliver explicit and fluent business models, and exploit modern hardware and software architectures.


Vaughn Vernon is a software developer and architect with more than 30 years of experience in a broad range of business domains. He is a leading expert in Domain-Driven Design and a champion of simplicity and reactive systems. He is the author of three books: Implementing Domain-Driven Design, Reactive Messaging Patterns with the Actor Model, and Domain-Driven Design Distilled.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Thank you very much for attending. I know your time is very valuable, and I always value when people decide to take their time and listen to me present. So the talk today is, yes, reactive DDD, or domain-driven design. And so, where do the two ideas, the two concepts, the two approaches to software development overlap? I think that today we're in a bit of a crisis in terms of the way that software is designed and the way it's implemented. And, in my experience, over many years of software development, I've found that it really doesn't have to be the way that things are done today. And so, I want to encourage you to carefully consider the information that I'm going to share with you and give it a try. It's not difficult. It emphasizes simplicity, and sometimes simple doesn't mean easy, but I don't think that it's overly complex to try to use these approaches.


I think where most of us are, or have been for a long time, is in this mode. This is a blocking mode where, as you see, there's a client and a server. Now when I'm discussing these two concepts, I'm not talking about a remote client and a remote server, I'm talking about two objects. And, as Rebecca Wirfs-Brock has pointed out in her writings about responsibility-driven design, we talk about an object that provides a service as a server object, and a client that uses or consumes that service as a client object. And so, what happens when this client requests a service of the server? Typically it blocks. This is, no doubt, what you learned when you started programming; whatever phase of your career you're in, you have probably always been working with blocking.

And this, generally speaking, is not well-suited on the large scale for the kinds of systems and the kinds of infrastructures that we work on these days. Also, this happens when we request a behavior of an object, and sometimes even requesting behavior is something that's rarely done these days. And I'll show you why I say that in a moment. But the HTTP request-response is often a blocking operation where a remote client will make a REST request to a remote-server service and, essentially, the request will block until a response is received. But sometimes, that's quite a delayed or latent process. And then, of course, when we write something to the database or read something from the database, very often our connections to the database are synchronous. And you might add more to that.

Now there is an improvement today in that some of the frameworks, or web servers, and so forth, that are available are providing some asynchronicity, or asynchrony, in their request-response behavior. And also, you can get asynchronous database connections. So it's not, you know, entirely a loss today, but it's still not a widespread situation.

Anemic Domain Model

The other problem that we have today is that software is largely implemented using an anemic domain model. This is where, essentially, a domain object, or what people liberally call a domain object, really has no behavior; it just has data setters on it. So you can, in Java for example, call "set something," "set something else," and that's sort of seen as the way that a service communicates with the domain object. This is problematic and, hopefully, I can show you that, when you consider a behavior-rich domain object, there's actually much less code overall in a behavior-rich domain object than there is in an anemic object. And you can test the behavior-rich object, whereas testing an anemic object is quite difficult.

Imagine that there aren't just these seven attributes on this Java object that's marked as an entity: it's annotated as an entity, and it has an ID column and other columns. So essentially, what we're doing with this object is we're just using it to map data into a relational-database table, into a row. And that is, very often, how software is being designed these days. I would say not 100%, but probably a very high number, somewhere in the 90-percent range, of the time software's being developed this way. Now imagine that this object has 25 string attributes on it. Think of all the possible ways that a client could set data on this object incorrectly. How do you test that the client is not only perhaps setting the correct attributes but also not setting the incorrect attributes? You could probably write a hundred different tests and not be confident that you've covered all the cases, nor is it really necessary if you're using rich behavior.
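The problem being described can be made concrete with a minimal sketch. This is a hypothetical anemic entity (the names are illustrative, not from the slides): nothing but fields and setters, so the object cannot protect its own invariants, and nothing tells the client which combinations of setters make sense.

```java
// A hypothetical anemic entity: nothing but state and setters.
// Nothing stops a client from setting the wrong combination of fields,
// or forgetting one entirely -- the object cannot protect its own invariants.
class AnemicClient {
    private String id;
    private String name;
    private String street;
    private String city;
    private String postalCode;
    private String telephone;
    private String email;

    void setId(String id) { this.id = id; }
    void setName(String name) { this.name = name; }
    void setStreet(String street) { this.street = street; }
    void setCity(String city) { this.city = city; }
    void setPostalCode(String postalCode) { this.postalCode = postalCode; }
    void setTelephone(String telephone) { this.telephone = telephone; }
    void setEmail(String email) { this.email = email; }

    String city() { return city; }
}
```

Note that setting the city alone, with every other field still null, is perfectly legal Java here, which is exactly why so many tests are needed to gain any confidence in how clients use it.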

Message Driven

I think that where we are headed today is a message driven architecture, message driven systems, message driven domain models even. And I think that this is highly necessary because most of the software that we see in the open today, in the wild, is blocking, it's not using processors to the extent that they can be used. And message driven, even at the object level, can help to improve the overall throughput of the system because all cores, on a given server, can be used to a very high percentage of their capacity.

Event Driven

And this even includes event driven. So any kind of event can also serve as a message but this is where we actually create a concept, in our domain model that represents a happening that we care about and that happening is recorded as a fact. And that fact can be saved into a data source, and that fact can then be basically published out, or relayed to, or broadcast to other subsystems because they have some interest in it.

And what you see here is not just an event-driven architecture but it's also a reactive architecture. Because, as you can see, the controller, on the left-hand side, is sending a command message to what, in domain-driven design, we might call an aggregate. It's basically an entity that has a certain transactional boundary. And that command can then be processed and, if accepted, an event is emitted; that event can be persisted and even help to represent the entire state of the aggregate or entity. And then, notice how the commands are actually being queued at the bottom. So this is introducing the idea of an actor; this is using the actor model. So this is where commands 1, 2, 3, and 4 are being sequentially processed by the actor, or this aggregate, this entity, but they're only being processed one at a time in the order in which they occurred. Which means there's a non-blocking and non-locking kind of environment where the actor doesn't have to worry about concurrency violations.
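The queued-command idea can be sketched in plain Java. This is a toy, not a real actor runtime: a hypothetical aggregate whose commands go into a mailbox and are drained one at a time, so the aggregate state is never touched concurrently and needs no locks.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// A toy mailbox: commands are enqueued and drained one at a time,
// so the aggregate never sees concurrent access and needs no locks.
class ProposalActor {
    private final Queue<Runnable> mailbox = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean draining = new AtomicBoolean(false);
    private int submittedCount = 0;   // aggregate state, touched by one command at a time

    void tell(Runnable command) {
        mailbox.add(command);
        drain();
    }

    // In a real actor runtime a scheduler does this; here we drain inline.
    private void drain() {
        if (draining.compareAndSet(false, true)) {
            Runnable next;
            while ((next = mailbox.poll()) != null) next.run();
            draining.set(false);
        }
    }

    void submit() { submittedCount++; }
    int submittedCount() { return submittedCount; }
}
```

Sending four commands with `actor.tell(actor::submit)` processes them strictly in arrival order, which is the property the talk attributes to the aggregate-as-actor.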


Now I want to tell you a story. Is this sort of an odd-looking coffee mug to you? Yes. So in, I think it was, early 1987, I was asked to co-author a book entitled "The Advanced C Programmer's Guide to OS/2," and Microsoft Press was the publisher. If you know anything about OS/2, well, that explains a lot. But how many here have ever wanted to scare Bill Gates? Oh, come on. Yes. I mean, here's what happened: I'm writing this book and I came up with this library, a message-based library that was using the OS/2 IPC facilities. And what I did is I created a character-mode desktop API, basically, that sat on top of the OS/2 API. And it handled full windowing with asynchronous control, and all the windows actually, you know, weren't going through a processing loop but were actually reacting to the fact that a clock updated, or reacting to anything. And without dragging this out further, basically, when Bill Gates found out about what I did, he said, "Shut it down. It might compete with Presentation Manager." So my days with reactive and messaging, and so forth, go quite a ways back. And it proves that you can scare people when you do the things that you want to do.

Reactive DDD

So one of the, maybe, problems that we face with reactive today is that sometimes the reactive platform itself requires us, if we're going to use the reactive platform optimally, to even switch the language that we're using. Can you model with fluency in that reactive system or platform? Do you have type safety in that platform? Is it testable type safety, is it testable model fluency? And so, what I suggest is: don't give up your languages. For example, if you're working in Java on a regular basis, you don't have to give up Java to get reactive benefits.

So when I talk about reactive DDD, this is what I'm referring to. And you probably, you know, could recognize these green blobs as, let's say, microservices. You might immediately question that and say, "Well, there are too many entities in that for it to be a microservice." Well, it depends on your definition of microservice. What I'm talking about here is a microservice as a domain-driven design bounded context, which is not a monolith but is also, generally speaking, not a single entity type. So a relatively small model that is bounded away from other models because it has a specific set of language drivers. As in human language drivers, business language drivers that say, "Okay, the bounded context on the far right and the bounded context on the far left speak different languages." And even if they use the same words, they can have subtle or entirely different meanings for those words, and behaviors for that matter.

So this is what I'm referring to, with reactive DDD. And you see how I have a command model and a query model that are, what we would say, segregated from each other. This is talking about the CQRS pattern. It doesn't mean though that to have a reactive DDD ecosystem with microservices that you must use the CQRS pattern, but we'll find, in a few moments, that this can be very handy to use.


So what is fluent? I don't know how well you can see this definition from the back, but fluent is a way of articulately expressing yourself. And I really have to reiterate that, by setting data on entities, you are not conveying the intention of why you were doing that. So just think about having an anemic entity that has, whatever, 25 different setter methods on it, maybe not so many but, in any case, you set five of those attributes through setter methods. What does that mean? Do you require your client to understand what that means? In essence, what you're doing is putting the burden on the client of conforming to your data model rather than letting the client understand what the business language is.


So when I talk about fluent, I'm talking about potentially creating a protocol, such as this one, a progress protocol. Actually that's a mistake, that should say "proposal," in case you're editing my slides. And the proposal has the protocol of: you can submit a proposal for a client with some expectations. You also have, in the protocol, that the pricing set by the expectations of the client, or defined in the proposal, is either denyPricing, or it could be verifyPricing, so the pricing is accepted. If it's denyPricing, then we're going to provide a suggestedPrice as a Money. So notice how this is actually a fluent model; you are expressing the intent of the operations that are being performed on this domain object.
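Reconstructed from the talk, the protocol on the slide might look roughly like this in Java. The type and method names are reconstructions, so treat them as illustrative; the point is that every operation names a business intention rather than a data mutation.

```java
import java.math.BigDecimal;

// Value types assumed for illustration.
class ClientId { final String id; ClientId(String id) { this.id = id; } }
class Expectations { /* summary, description, keywords, ... */ }
class Money { final BigDecimal amount; Money(BigDecimal amount) { this.amount = amount; } }

// The fluent Proposal protocol: every operation names a business intention.
interface Proposal {
    void submitFor(ClientId client, Expectations expectations);
    void verifyPricing();                    // pricing accepted
    void denyPricing(Money suggestedPrice);  // pricing rejected, with a suggestion
}

// A trivial implementation, just so the protocol can be exercised.
class ProposalEntity implements Proposal {
    private String state = "new";
    public void submitFor(ClientId client, Expectations expectations) { state = "submitted"; }
    public void verifyPricing() { state = "pricingVerified"; }
    public void denyPricing(Money suggestedPrice) { state = "pricingDenied"; }
    String state() { return state; }
}
```

Contrast this with calling five setters: here the interface itself tells the client what the business operations are.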

Ok, so it's fluent. We can say, "proposal.submitFor(client, expectations)." Don't you just like how that rolls off? Maybe if you just said it under your breath, you'd go, "Wow. That says it all, doesn't it? It's a proposal submitted for client expectations." Yes, everybody knows exactly what we're doing. So that's fluent. And notice, you know, I tweeted about this last night, there's no semicolon in this language. That's a trick, there is. It's Java, but I've put the semicolon on column 192. So yes, I'm emulating a semicolon-less language.

Now what's interesting too is, not only can I have fluency in the domain model itself, but what if I could have fluency in the library or the tool set that I'm using? For example, if I have a stage where actors are playing, I could say "stage.actorOf(from(userId))," and then I'm going to take that user and use another actor, the user actor that I just looked up. But I didn't just look it up; it was looked up asynchronously and, therefore, I don't know when that user may or may not be found. But when it is, then I can ask the user to, in essence, take on a new contact, new contact information for that user. And when that is finally done, I will ".andThenConsume," so I can use a RESTful response to "Respond" with "Ok" and the serialized user.

So there's an idea of fluency in the library itself. Oh, and I forgot to mention why is the, ".otherwiseConsume," there? Well, this is in case the user wasn't found, what do you want to do in that case? "OtherwiseConsume." And we then answer a response of not found, in our REST response. So imagine being able to have fluency, both in your API and in your domain model. It just sounds the way things work.
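The chain described here can be imitated with a small hand-rolled future-like type. This is an illustrative sketch, not the actual vlingo API: the outcome arrives later, one consumer handles the found case, and another handles the not-found case.

```java
import java.util.function.Consumer;

// A toy "Completes": the outcome arrives later; consumers register what to do
// when it does, and what to do if nothing is found.
class Completes<T> {
    private T outcome;
    private boolean completed;
    private boolean delivered;
    private Consumer<T> andThen;
    private Runnable otherwise;

    Completes<T> andThenConsume(Consumer<T> consumer) { this.andThen = consumer; tryDeliver(); return this; }
    Completes<T> otherwiseConsume(Runnable handler) { this.otherwise = handler; tryDeliver(); return this; }

    void with(T outcome) { this.outcome = outcome; this.completed = true; tryDeliver(); }
    void withNone() { this.completed = true; tryDeliver(); }

    private void tryDeliver() {
        if (delivered || !completed) return;
        if (outcome != null && andThen != null) { delivered = true; andThen.accept(outcome); }
        else if (outcome == null && otherwise != null) { delivered = true; otherwise.run(); }
    }
}
```

Usage then reads fluently, echoing the shape of the talk's example: register `.andThenConsume(user -> respondOk(user))` and `.otherwiseConsume(() -> respondNotFound())`, and whichever applies fires when the asynchronous lookup completes.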


The question is, "Is this reactive?" Yes, it is reactive, because there's this sort of invisible thing happening behind the scenes. When I, as we would say in Java normally, "invoke this method," when I invoke this "submitFor" method, it is not just an invocation on the proposal actor itself; instead, that "submitFor" invocation is reified into a message that is then delivered asynchronously to the proposal, which is an actor that is receiving messages asynchronously. And so, this is a command kind of message; we're saying, "It's an imperative," we're saying, "Do this." So, in fact, it is reactive, and yet, you don't have to know intuitively, as the client, that you are working in a reactive environment, other than the fact that, as soon as you "submitFor," you get control back. And it means that that actor will not have an immediate response for you. So what does that mean? Well, that's why we have this ".andThenConsume" method, where we can consume the result, if there is a result, afterwards. What we get back is essentially another name for a future, which makes this reactive response asynchronous, and you can deal with it asynchronously.


Type-safe? Well, I think this is type-safe: we're going to use this proposal to "submitFor" a client from a clientId. And it has the expectations of "summary," "description," "keywords," "completedBy," "steps," and a "price." So not only is this fluent and expressive, but it's type-safe. And what we're leveraging here are, what are called, value objects, to express our ubiquitous language of domain-driven design, and even at the creational point we're doing that very fluently. And it's type-safe at every single attribute. So a summary has a specific type. And, if you've ever seen one of these APIs with a service method where you have to pass maybe 5 to 25 string parameters, you know, in one single method invocation, how do you get the order of those parameters correct? I mean, I think it takes a genius to remember just the order that those parameters are in. Or a very tired set of eyes. How do you actually accomplish that? So type-safety is an important thing.
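A minimal sketch of the value-object idea, with hypothetical names (a subset of the attributes mentioned): because each attribute gets its own type, swapping a Summary for a Description is a compile error, which is impossible to get with a row of string parameters.

```java
import java.math.BigDecimal;

// Each attribute gets its own type, so arguments cannot be swapped silently.
final class Summary     { final String value; Summary(String v)     { this.value = v; } }
final class Description { final String value; Description(String v) { this.value = v; } }
final class Keywords    { final String value; Keywords(String v)    { this.value = v; } }
final class Price       { final BigDecimal value; Price(BigDecimal v) { this.value = v; } }

final class Expectations {
    final Summary summary; final Description description;
    final Keywords keywords; final Price price;
    private Expectations(Summary s, Description d, Keywords k, Price p) {
        this.summary = s; this.description = d; this.keywords = k; this.price = p;
    }
    // Fluent, type-safe creation.
    static Expectations of(Summary s, Description d, Keywords k, Price p) {
        return new Expectations(s, d, k, p);
    }
}
```

With 25 raw strings the compiler accepts any ordering; here the signature itself documents and enforces what goes where.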


However, because actors work in a very reactive or asynchronous way, how do we know when the command will be fulfilled? Was it fulfilled? When does the event get emitted? Has it been permanently persisted to a data source? So when we go asynchronous concurrent parallel, we are introducing uncertainty. And the uncertainty is even introduced at the entire system level. So I was talking about the uncertainty that occurs inside a single bounded context, or a microservice, with that example of an actor, but, externally too, how are these events, for example, consumed around the entire system for a full-system solution? There's uncertainty here. How do we deal with that? How do we model it? Well, I'm going to talk about that.


So here we have a proposal, and notice that this proposal implements the Proposal protocol. So this is the protocol that this proposal understands, and I showed you, a few slides ago, what that protocol is. And we happen to be an event-sourced kind of entity, so we're using event sourcing. And if you don't know about that, I won't go into it a lot here, but you can look it up. It's basically where the events that get emitted from this proposal, collectively and in order, represent the entire state of this proposal entity as it has been built up over time. So notice that we have two attributes, client and expectations; these are the ones that have been put into our proposal through the ".submitFor" fluent method. But how is it that we deal with the uncertainty in this proposal? Well, we can use a progress. What is a progress? This was not passed in by the client; rather, it's an internal object that we're going to transition as we know more, as we learn more about what has happened to this entity. So, for example (I hate to roll back, but just to make a point), when the proposal is submitted, eventually we will get, for example, a denied pricing or a verified pricing, because this proposal has some pricing information in it. And that will be verified by another bounded context, because that is a pricing service, and that verification is then, later, communicated. And when it is finally communicated, we're going to transition this progress step by step. So focus in on the progress.

And if we look in at the progress, this is actually a value object, and what we're going to do when the progress is verified for pricing, this is a side-effect-free behavior, which means it's using roughly a functional approach, which says, "We're not going to modify the progress in place. What we're actually going to do is return a new progress that is created with the current state of this progress and, in addition to it, a specification of PricingVerified." Now this proposal entity can know its current progress; it knows when it has completed a certain set of steps or when some are incomplete, and therefore this modeling technique helps us to deal with, or model, the uncertainty. So notice what we're actually doing: we are not trying to model the uncertainty out at the infrastructure level and try to make everything look synchronous, and everything look ordered, and everything look non-duplicated and de-duplicated and so forth. What we're actually doing is saying, "Okay. We work in a distributed environment. We are going to model this for the distributed nature of our service. And as we do, we're going to name something that is not necessarily part of the original idea of a proposal. And yet, we need to." And so, it's not necessarily the natural model or the real-world model, but it is a useful model. And that is the goal of a model; as we know, it's been said time and again that all models are wrong, some are useful, and that's what we're trying to accomplish here.
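A side-effect-free Progress might be sketched like this (step names and method names are reconstructions from the talk): each transition returns a new Progress value rather than mutating the current one.

```java
import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;

// Steps the proposal must pass through; more could be added.
enum Step { SUBMITTED, PRICING_VERIFIED, CLIENT_ACCEPTED }

// A side-effect-free value object: each transition returns a NEW Progress
// rather than mutating this one in place.
final class Progress {
    private final Set<Step> completedSteps;

    Progress() { this(EnumSet.noneOf(Step.class)); }
    private Progress(Set<Step> steps) { this.completedSteps = Collections.unmodifiableSet(steps); }

    Progress with(Step step) {
        EnumSet<Step> next =
            completedSteps.isEmpty() ? EnumSet.noneOf(Step.class) : EnumSet.copyOf(completedSteps);
        next.add(step);
        return new Progress(next);
    }

    Progress withPricingVerified() { return with(Step.PRICING_VERIFIED); }

    boolean hasCompleted(Step step) { return completedSteps.contains(step); }
}
```

Because the original Progress is never changed, the proposal entity can safely record the new value whenever the pricing verification finally arrives, however late that is.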


Okay. So I've been touching a little bit on microservices, and everybody wants to go microservices. But what is a microservice anyway? I have a certain definition that I promote, and I don't stand alone on this idea, although you're free to examine the other approaches and determine what you like. So defining the size of a microservice, though, can be a pretty important thing. So if everybody else wants to go microservices, that means we want to go microservices. And just for what it's worth, this is what the business wants. Right? And it's not a joke. I mean, if you're working for a profitable organization, and even non-profit organizations have to be profitable to exist, they want this. But you know what your job is? Your job is to convince them that this is what they want. Oh come on, you didn't get that? I worked hard for that. Okay, these are microservices. The other is what? A big ball of mud. So what we're going to do is get some definitions here: what is a microservice? Legacy, this is legacy. Why do I say, "This is legacy"? Because it makes money. If it didn't make money, it would be unplugged. Hopefully. The business would know better: "Oh man, we're just dragging this thing around wherever we go. Now we need to go to the cloud, it's not making any money. But let's port it anyway, let's lift and shift anyway." No. So that's what legacy means.

Oh, but this is legacy too. What's the difference? Monolith. I hear people say, "Monolith," and I just wish that they would clarify, because I think monolith is used, generally, with a very negative connotation. Not always, but, generally, what they're talking about when they say, "Monolith," is a big ball of mud. This is something where you touch something over here, and something way over here breaks, and you have no explanation until maybe lots of research is done, and you have no idea. But this is actually a well-modularized monolith. And so, you could do a lot worse than a monolith; this may not scale or perform the way that you want it to, or need it to, I should say, but it's a lot easier to reason about this kind of monolith. And so, you can imagine where you are using packages or namespaces, or whatever sort of language feature you're using, to separate out, within a single JAR file, the different modules. And as I'll show you in a moment, there's a good hint to what these modules might be. But I think that this is probably not what most people are referring to when they talk about a legacy monolith; I think they really mean the big ball of mud.

And again, this is where things are so tangled, so ridiculous. I have a colleague here and friend; we've worked together from time to time, and I tell this story often. His team of architects rolled out this UML diagram, and I'm pretty sure, Tom, that that UML diagram was maybe 20 feet long. I don't know. But you could touch something over here and, as logically shown on the UML diagram, that's real code running that could break something over here, and you had no idea why. And I'm not saying that that was Tom's fault, it wasn't. But that's, I think, you know, what we're often talking about when we say monolith.

A microservice. What is a microservice? Some people say it's 100 lines of code. And, frankly, I think the person who takes credit for the term microservice, and for conceptualizing microservices, refers to a microservice as basically 100 lines of code. But should it be 400 instead, would that be good? Maybe 1,000 lines of code, is that a good microservice? I mean, if you say 400, and it's 450 lines, does that make it a bad microservice? I don't know, but here's what happens with that 100-line microservice: you start off and basically these are just entities. So each microservice has a single entity type in it, or at most a single entity type. And all of these entities, when something happens to them, publish a message of some kind to a topic, let's say that's Kafka, and then any other microservice that is dependent on that message being sent through Kafka is consuming it. And now we have a microservices architecture. The problem with this is not so much right now, today. As I see it, the problem happens over time.


And this is what happens. We start thinking, "Okay now, that service A and service Z, what did they do for each other? Does service A still depend on service Z, or maybe service Z isn't even relevant anymore? Could we unplug it?" Oh man, this is hard. You know.

Now I know that things are improving with service meshes and the kind of logging that's going on. But just ask yourself how long you could survive that kind of situation. I know I wouldn't want to; that's all that I'll say, I can't speak for you. And so, what some have done is they've said, "Hey, I've got the solution. It only costs $400 a month to keep one of those microservices running. Let's just keep it running, we'll never unplug it. That way we don't need to know what depends on it or if it still does anything relevant." And so, this is what we end up with over time. So, you might say, "Well, that was unfair, to draw all of those little microservices as a big ball of mud," but I think, by my definition, this is just a distributed big ball of mud, because you don't understand it in the same way that you don't understand the monolithic big ball of mud. And when you're afraid to unplug something because you don't have any idea if it's still relevant, I don't know, but I think I'd be worried about that. Whether it's $400 a month or not, because this is what it amounts to: four hundred dollars a month, and it keeps growing.

Complex System

And so, what is a complex system? Now I'm not saying you would necessarily create two million or five million lines of code purely through microservices, but if you did, just think about it from this aspect. A two-million-line system: that's 20,000 microservices at $400 a month, which is $8 million a month, $96 million a year. Or, let's say we're up to 5 million lines of code; that's 50,000 microservices. So all that I'm saying here is, before you jump down that path, think about it. And then consider that a bounded context as a microservice may be the first best step for you. This is, again, not a monolith, but it's not as small as a single entity type either. And can we still talk through Kafka topics? Sure, why not. But now, with roughly the same number of entities involved that we had in the first rendition of that distributed big ball of mud, what we're talking about is seven bounded contexts, or microservices, rather than dozens of them. And growing.

Identify Strategic Drivers

So one of the things that we need to accomplish is that we have to try to achieve strategic business advantage. And that is really the big job that DDD tries to solve, or is intended to solve. And if you look here and you go back, in your mind, to that anemic client model, that anemic client model could be replaced with just a few methods: I can set a new address on that model, I can set a new telephone number on that model; it's fluent, it's explicit. The intention is revealed through the interface itself. But then, notice this additional method, "relocateTo." This is also changing the address, but it has a different use case, and the use case is that this client has just purchased something on our ecommerce system, or proposed a job request that some worker is going to consume, and they said, "Oh, I just moved house, I need to change my address," and they change the address. Now all downstream concerns can be aware that this client's address has changed, because this domain event is being sent out to other microservices or bounded contexts who need to know this, who need to consume that nugget of factual knowledge that says, "We need to react to this. This is a reactive system."

Explicit, Testable, Less Code

And notice that this client is now testable. And look, just a couple lines of code: "relocateTo," yes indeed, is setting an address value object, but it's also emitting an event, "ClientRelocated"; this is how the downstream knows. And you can imagine that, in just one or two tests here, "testThatClientRelocates," we can assert that the client relocates in the way that we expect it to, and we can even assert that the domain event was emitted as part of the test acceptance.
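The shape of that code, reconstructed as a minimal sketch with hypothetical names and a list standing in for a real event publisher: the intention-revealing method both sets the value object and records the fact, and both outcomes are trivially assertable.

```java
import java.util.ArrayList;
import java.util.List;

// Value object and domain event, names reconstructed from the talk.
final class PostalAddress { final String value; PostalAddress(String v) { this.value = v; } }
final class ClientRelocated {
    final PostalAddress newAddress;
    ClientRelocated(PostalAddress newAddress) { this.newAddress = newAddress; }
}

class Client {
    private PostalAddress address;
    private final List<Object> emittedEvents = new ArrayList<>();  // stand-in for a real event publisher

    // Just a couple of lines: set the value object AND record the fact,
    // so downstream contexts can react to the relocation.
    void relocateTo(PostalAddress newAddress) {
        this.address = newAddress;
        emittedEvents.add(new ClientRelocated(newAddress));
    }

    PostalAddress address() { return address; }
    List<Object> emittedEvents() { return emittedEvents; }
}
```

A "testThatClientRelocates"-style test then only needs to call `relocateTo` once and assert both the new address and the emitted event, instead of probing combinations of setters.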

Monolith to Microservices

So now, if you go back, in your mind, to that monolith, that was a well-structured, modularized monolith, what if each of those modules were a bounded context? I just want to make that point because we're going into a more complicated, or complex, part of this story, and that's, "How do we get from there to there?" Well, if you have a monolith that is well-modularized as bounded contexts, getting to microservices can be a matter of breaking those apart. They should already be very loosely coupled; as you see from the interfaces between those bounded contexts, or those modules, it's already loosely coupled. And so, what we're going to do is incur some network overhead, latency, and the uncertainty of network partitions, and whatever it happens to be. But think about how much easier that is than this. Now, how do you get from the big ball of mud to one of these? Very, very carefully.

Sometimes there are these unavoidable situations like COBOL. COBOL happens, you know. And man, it happens in a big way. But one of the big problems with COBOL is you can no longer hire COBOL programmers, and companies are trying to hire them back as contractors out of retirement to maintain their systems. So when you're in that situation, or on another sort of very languishing technology or product that you're leveraging for your applications and services, you've got to get out of there. But if you're, say, using Java for the big ball of mud, or another currently well-supported language and platform, you kind of have to tackle this one bite at a time. And one bite at a time means that it's change-driven, value-driven, test-driven. So you don't just dive in and say, "Hey, manager, our team needs like 3 months to turn this monolith into microservices." Now, Andrew just said it took them 18 months to do that at Hulu. So be careful about saying something like 3 months. But whatever number of months it takes, you're probably better off trying to turn the big ball of mud, first, into a modularized monolith, and then taking the steps over here. Because you can get away with that when the company, when the business, says, "This needs to be done."


But another solution to this, when you really have to take the big step of, "Let's get out of here now," is an event-driven approach. And this is where you can strangle the big ball of mud one microservice at a time. There are basically a few approaches to this; one approach is to use triggers. Put triggers in your database so that, whenever a row is written into a table, whether that's created or updated, you can cause a trigger to raise an event. And this is not the most explicit event, because it's a little bit hidden where that happens, but it's an event, and the strangler microservice can now start to consume those events. But notice that this microservice has to talk back, with events, to the big ball of mud because, if the user is using it directly, the big ball of mud needs to know what happened over there, because you can't entirely cut off every single client all at once. It just doesn't work that way. So it's strangling, but it's, you know, one microservice at a time.

Another way to accomplish this is through a product called Debezium; it's an open-source product that works with MySQL, Postgres, and maybe a few other databases. It doesn't currently support Oracle, so there you can use Oracle's GoldenGate. But this is basically a database commit-log tailer that allows you to, in essence, pick off commits and turn those into events. And you accomplish the same thing but without triggers, and that's a lot nicer approach, if you can use it. But I just want to make a statement here: I don't think that publishing events to the outside world long-term through this kind of solution is the right way to go, but it's a tool for the job that probably works, or would work well, with a strangler approach. I don't think that you want to design your new bounded context to publish events out to, you know, a topic or something by using an event log like this.


Restructuring. This is a different approach; it's not really strangling, well, it is, in a way. But what you're going to attempt to do is, potentially, find as many entities as you can that can just represent the things that happen in the domain model. Break those away, restructure them, and now use that database commit log to project into a query model which is used for your user interface. Problem? Yes. Well, at least a challenge, and that is that the command model and the query model are eventually consistent. But it could be that you'll take more of the hit in the UI than in the application. So that's another consideration. And then, as you sort of deconstruct that monolith, little by little, you can talk to the big ball of mud primarily through the command model and the query model, and scale out your microservices a lot better than they were.
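The projection idea can be sketched minimally, with hypothetical event and class names: events picked off the command side are applied to a flat, denormalized read model, and the read model lags until the projection has consumed them, which is exactly the eventual consistency just mentioned.

```java
import java.util.HashMap;
import java.util.Map;

// An event picked off the command model's log (hypothetical shape).
final class ProposalSubmitted {
    final String proposalId; final String summary;
    ProposalSubmitted(String proposalId, String summary) {
        this.proposalId = proposalId; this.summary = summary;
    }
}

// The query model: a flat, denormalized view the UI can read directly.
// It is eventually consistent with the command model -- rows appear only
// after the projection has consumed the corresponding event.
class ProposalListProjection {
    private final Map<String, String> view = new HashMap<>();  // proposalId -> summary

    void when(ProposalSubmitted event) { view.put(event.proposalId, event.summary); }

    String summaryOf(String proposalId) { return view.get(proposalId); }
}
```

Until `when` runs, a query for the proposal returns nothing; that window is the "hit" the UI may have to absorb.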

And ultimately, this is sort of where we want to end up. We want to have the microservices as bounded context. But I just have to say, this is hard. You really can't just jump into this and say, "Oh, we're going to be done in a few months." It's hard work. But I think as somebody said, "Sticking with the other way is even harder."


I just wanted to wrap up with a few thoughts about why reactive, from maybe a completely different viewpoint. Almost nobody wanted to scare Bill Gates, but who here is concerned about the ecology, you know, our environment? Anybody? Yes. Could I just mention cryptocurrency? More hands now. So Dave Farley recently tweeted that most industries would never tolerate a 50%, you know, loss of efficiency for ease of use, and yet software developers do this all the time. And he said, "Anyone who does that is developing weird software." And then our Vlingo platform tweeted Donald Knuth saying, yes, that's right: in fact, if you don't know anything, or enough, about your hardware, any software that you create for it is going to be pretty weird. So have in mind what we're doing to the ecosystem of our Earth with all of these latent and blocking and inefficient pieces of software that we're writing, and realize that we're producing 1,000x carbon-dioxide overhead. And I'm not just here totally to appeal to this side, but there are more factors than just performance and scalability to be aware of.

And so, ultimately, we want to rework into a reactive system; this is what I think makes a lot of sense. And I'm just going to tell you briefly about the platform, the open-source platform, that I'm developing, and we're building a team around this effort; it's called Vlingo. You can say "V-lingo" if you want to, but I say Vlingo, it seems to sound better. But we do support these actors as aggregates, and we do support a reactive HTTP server. Very lightweight; all this stuff is right about a megabyte right now in terms of Java bytecode. And Lattice, which is basically a compute grid that runs on top of clustering within the platform, which is also all reactive. And Streams is being developed and should be released shortly.

So, you know, kick the tires, take a spin, it's at



Recorded at:

Dec 27, 2018