Using SEDA to Ensure Service Availability
What is SEDA 1? No, it’s not the fizzy carbonated stuff that your daddy used to pour in his whisky! The acronym stands for Staged Event-Driven Architecture, a proposed architecture for scaling systems by decomposing their work into stages connected by event queues, and by filtering the events that flow between those stages.
In SEDA, a request to a system or service can be split into a chain of stages, each doing its part of the total workload of the request. Each stage is connected to the others by request queues, and by controlling the rate at which each stage admits requests, the service can perform focused overload management (for example, by filtering only those requests that lead to resource bottlenecks). With this approach, a system can scale very well without the traditional pile of spare CPU capacity kept around to handle request bursts; the rest of the time, most of that pile does more or less nothing but count sheep.
SEDA emerged as part of the Ph.D. thesis of a brilliant young man called Matt Welsh 5, as one answer to the problem of managing the performance of Internet services under extreme overload. We suggest you read Adaptive Overload Control for Busy Internet Servers 3, which forms the foundation of this article.
One way of describing SEDA could be using the metaphor of kids; they have input like milk, bread, and porridge. And after a while, passing different stages, they produce some output... STOP! Maybe we won’t go down that path; it came out totally wrong. Hmmm, I think it’s wiser to turn back to our well-known kindergarten metaphor from article #1 4.
Okay! Imagine that our kindergarten is allowed an excursion to an amusement park. But taking a bunch of kids to an amusement park can turn out to be like smashing a jar full of ants...
Thus some constraints on the visit to the amusement park could be quite proper; one of them being that all the children have the same fixed itinerary through the park. First the merry-go-round, then the roller coaster, the radio cars, the mystery train, and so forth. Escaping the metaphor for a minute, this is quite similar to a typical service request; it has a predetermined route through a number of stages to fulfill the overall task of the service. In our amusement park, the roller coaster, the radio cars, the mystery train, and so on serve the purpose of SEDA stages.
But, as you no doubt have experienced, each of these amusement park stages has a different capacity. Thus we experience bottlenecks. Every mother, father, aunt, uncle, and teacher who has seen a long queue of unsupervised kids knows that this is a place where evil grows in proportion to the queue time of each individual child. Our nicely lined-up little queue will rapidly be transformed into a bunch of English hooligans after a football match where Norway has beaten England 10-1. So, long queues are bad news for children and request stages alike.
Then let’s see how SEDA can rescue us from growing queues. If we look at a typical SEDA stage as shown in Figure 1 below, we notice the following components:
Figure 1: SEDA Stage
- Incoming Event Queue
- Admission Controller
- Dynamically sized Thread Pool
- Event Handler
- Resource Controller
First is the Incoming Event Queue. This is where incoming requests to the service component wait for further processing; it can be compared to the line of kids waiting for a radio-car ride. Associated with the Incoming Event Queue is the Admission Controller, serving as the gatekeeper of the queue.
The threads in the dynamically sized Thread Pool operate by pulling a batch of events off the incoming event queue and invoking the application-supplied Event Handler. The Event Handler processes each batch of events, and dispatches zero or more events by placing them on the event queues of other stages. Applying this to our amusement park, the pool of radio cars acts as the Thread Pool, and the combination of the threads and the Event Handler matches the radio cars driving around with kids, fulfilling the task of giving them bad driving habits.
Finally, each stage is tuned into its ideal operating mode by a dynamic Resource Controller, which adjusts the stage’s parameters appropriately. For example, one such controller adjusts the number of threads executing within each stage based on observations of the stage’s offered load and performance. At the radio cars, the Resource Controller’s role is to tune the number of cars driving by adding cars to, or removing cars from, the pool of radio cars.
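The mechanics of a stage can be sketched in plain Java. This is a minimal, illustrative model built on java.util.concurrent; the class and method names (Stage, accept) are our own invention, not part of SEDA or Mule:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// A minimal SEDA-style stage: a bounded Incoming Event Queue, a simple
// Admission Controller (reject when full), a Thread Pool, and an
// application-supplied Event Handler.
class Stage<E> {
    private final BlockingQueue<E> incomingEventQueue;
    private final ExecutorService threadPool;
    private final Consumer<E> eventHandler;

    Stage(int queueCapacity, int threads, Consumer<E> eventHandler) {
        this.incomingEventQueue = new ArrayBlockingQueue<>(queueCapacity);
        this.threadPool = Executors.newFixedThreadPool(threads);
        this.eventHandler = eventHandler;
        for (int i = 0; i < threads; i++) {
            threadPool.submit(this::runLoop);
        }
    }

    // Admission Controller: offer() rejects the event when the queue is
    // full instead of blocking the caller (SEDA's "enqueue failure").
    boolean accept(E event) {
        return incomingEventQueue.offer(event);
    }

    // Each pool thread repeatedly takes an event and hands it to the handler.
    private void runLoop() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                eventHandler.accept(incomingEventQueue.take());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    void shutdown() {
        threadPool.shutdownNow();
    }
}
```

A dynamic Resource Controller would, in addition, observe the queue length and grow or shrink the thread pool accordingly; we leave that out for brevity.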
So far so good; back to the ticking bomb with its potential hooligans. How can we keep the time wasted waiting in the queue at an acceptable level? We will do this by using the SEDA mechanisms listed above. First we use the Thread Pool mechanism of the radio-car facility by simply adding more radio cars. The man in charge of the radio-car facility (who acts as the counterpart of SEDA’s Resource Controller) is responsible for adding more radio cars when needed. But this will only keep us happy for a while. The pool of radio cars will at some point dry up...
Many would run for the hills at this stage, but we are going to use the second tool in our toolbox, which is the bully guard of the radio-car queue, called the Admission Controller. The same observation holds for both kids and computing tasks: some elements just require more processing than others. So when the queue starts filling up, the resource-demanding troublemakers are simply removed and rejected. In our amusement park, Benny the Gatekeeper serves the role of the Admission Controller, and performs the important service of keeping the queue clean of those kids that try to ruin their surroundings on purpose. You probably know them by sight already! The Admission Controller can either remove them completely, or just assign them a lower priority; then they will be taken care of when the processing capacity is large enough to handle them.
Still, there is some chance that the capacity will hit the roof, so we need to turn back and see if SEDA has any more cards up its sleeve. And (surprise, surprise!) SEDA has a couple more tricks that we can use. These tricks make use of the Resource Controller as well as the Admission Controller, which both provide information about queue and resource status. Thus it is possible to know when the queue has reached its upper limit and all available resources are in use. This information can be provided to the Event Handler, which in turn can apply additional means to handle the load. One of those means is Service Degradation. In our world of amusement parks, service degradation could be reducing the radio-car trip from five minutes to two. Sure, it makes the kids grumpy, but hey! A short radio-car trip is better than no trip! By enabling (or disabling) modifications to service levels, the system can be allowed (or disallowed) to gracefully degrade.
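A service-degradation rule is simple to express in code. The sketch below uses our own illustrative names (RideDurationPolicy, durationFor), not any SEDA or Mule API:

```java
// Service Degradation: shorten the radio-car ride once the observed queue
// length passes a threshold; a short trip beats no trip at all.
class RideDurationPolicy {
    private final int queueThreshold;
    private final int normalSeconds;
    private final int degradedSeconds;

    RideDurationPolicy(int queueThreshold, int normalSeconds, int degradedSeconds) {
        this.queueThreshold = queueThreshold;
        this.normalSeconds = normalSeconds;
        this.degradedSeconds = degradedSeconds;
    }

    // Pick the ride duration from the current queue length.
    int durationFor(int queueLength) {
        return queueLength > queueThreshold ? degradedSeconds : normalSeconds;
    }
}
```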
So far we have talked about how to handle potential overload in a single SEDA stage, but as mentioned above, a service consists of a chain of SEDA stages. It would be quite handy to have a communication mechanism that can signal to the preceding stage in the chain when the queue size hits the roof. SEDA does this by noticing when events are rejected by an Admission Controller, and sending an enqueue failure to the stage upstream. This is like two tin cans connected by a string, used to communicate from the radio-car facility to the merry-go-round. When the merry-go-round gets feedback from the radio cars, it can apply a strategy to handle the situation. It could redirect requests, for instance by starting to send kids to the roller coaster instead of the radio cars; it could offer a ten-minute ride instead of a five-minute one; or, as a worst-case scenario, it could start to reject incoming requests (or kids).
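The upstream reaction to an enqueue failure can be sketched as follows, again in plain Java with names of our own choosing (ParkGuide, dispatch), not a Mule API: try the normal route first, redirect on rejection, and reject outright only as a last resort.

```java
import java.util.concurrent.BlockingQueue;

// Upstream handling of an enqueue failure: redirect first, reject last.
class ParkGuide {
    static String dispatch(BlockingQueue<String> radioCars,
                           BlockingQueue<String> rollerCoaster,
                           String kid) {
        if (radioCars.offer(kid)) {
            return "radio-cars";          // the normal route
        }
        if (rollerCoaster.offer(kid)) {
            return "roller-coaster";      // enqueue failure: redirect
        }
        return "rejected";                // worst case: reject the request
    }
}
```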
By using the mechanisms described in the sections above, we end up with a very flexible and fine-grained way to handle overloads without causing the overall quality of service to suffer (badly) when the system is saturated with service requests.
So, then to the ultimate question; what the heck has this to do with Mule?
Mule and SEDA
In Mule, the environment in which components are executed is specified using the model concept. The model encapsulates and manages the runtime behavior of a server instance, and can be configured to operate in several different modes. As it turns out, a SEDA-based mode is the default. In this mode, Mule treats each component as a stage, with its own thread pool and work queue.
Remembering the SEDA model, a stage consists of the following parts:
- Incoming Event Queue
- Admission Controller
- Dynamically sized Thread Pool
- Event Handler
- Resource Controller
With Mule, the Incoming Event Queue is provided as an inbound router or endpoint, and the Event Handler is the component itself. Thus we're still short of an Admission Controller, a Resource Controller, and a dynamically sized Thread Pool.
The Admission Controller is associated with the Incoming Event Endpoint of the SEDA stage, or component in Mule lingo. The most straightforward way of implementing it is as an Inbound Router, which "can be used to control how and which events are received by a component subscribing on a channel" (Mule Architecture Guide 2).
For example, our good friend Benny the Gatekeeper, the second in command overseeing the radio-car queue, could be implemented as an Inbound Router and configured in the following way:
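A Mule 1.x style fragment along these lines would do the job (the descriptor and router class names below are hypothetical, invented for this example):

```xml
<mule-descriptor name="RadioCars" implementation="org.mulefair.RadioCarRide">
    <inbound-router>
        <!-- Benny decides which events (kids) are admitted to the queue -->
        <router className="org.mulefair.routing.BennyTheGatekeeper"/>
    </inbound-router>
</mule-descriptor>
```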
Benny the Gatekeeper, however, is not the sharpest knife in the drawer, and relying solely on his judgment is more than the amusement park bosses dare. This is why Kitty B has been asked to assist, or rather supervise, Benny, in order to make sure the radio-car queue still earns its reputation for always being perfectly managed.
Mule lets us provide several chained inbound routers, all of which have to accept an entry (kindergarten brat) in order to grant it a place in the queue:
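A sketch of such a chain, with Kitty B supervising Benny (the component and router class names are again hypothetical; ForwardingCatchAllStrategy is the standard Mule class):

```xml
<mule-descriptor name="RadioCars" implementation="org.mulefair.RadioCarRide">
    <inbound-router>
        <!-- Both routers must accept an event before it enters the queue -->
        <router className="org.mulefair.routing.BennyTheGatekeeper"/>
        <router className="org.mulefair.routing.KittyB"/>
        <!-- Events rejected by every router end up here -->
        <catch-all-strategy className="org.mule.routing.ForwardingCatchAllStrategy">
            <endpoint address="vm://RejectedFromRadioCars"/>
        </catch-all-strategy>
    </inbound-router>
</mule-descriptor>
```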
That the kids are still in kindergarten does not mean they're dumb. They've learned that a bar of chocolate can make Benny look the other way and let them pass, even if they're too young or too short to ride the radio cars. Luckily, Kitty B is not a sucker for the brown stuff and manages to keep the brats out. Rejected kids are handled by a catch-all strategy, which kicks in when none of the inbound routers accepts an event on the queue. In the example above, the standard Mule ForwardingCatchAllStrategy forwards the rejected event to the provided endpoint. In the case of the chocolate-bribing kindergarten brats (CBKB), they are shipped off to the RejectedFromRadioCars queue.
Thus, it's pretty safe to say that Mule provides excellent support for introducing your own flexible Admission Controllers.
As mentioned above, SEDA provides the notion of Resource Controllers that can manage the resources used in a stage. If the radio-car queue just keeps on growing, the Resource Controller can simply add more radio-cars to meet the demand.
As it turns out, Mule lets us configure most aspects of how stages are initialized, and this is also true for the number of threads and the size of the object pools. For example, consider the following configuration fragment for the radio-car stage:
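The fragment below is a sketch in the style of Mule 1.x threading and pooling profiles (the attribute values are made up for illustration; consult the Mule Architecture Guide 2 for the exact schema):

```xml
<mule-descriptor name="RadioCars" implementation="org.mulefair.RadioCarRide">
    <!-- Upper bound on the number of threads (radio-cars) for this stage -->
    <threading-profile maxThreadsActive="10" maxThreadsIdle="5"
                       poolExhaustedAction="WAIT"/>
    <!-- Size of the pool of component instances backing the stage -->
    <pooling-profile maxActive="10" exhaustedAction="WHEN_EXHAUSTED_WAIT"/>
</mule-descriptor>
```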
Mule lets us tune the number of threads that can be allocated to a stage. This makes it easy for the radio-car operator to specify how many radio cars should be available for our lovable kindergarten brats to enjoy. However, Mule does not provide out-of-the-box mechanisms for adjusting the number of threads at runtime. Instead, Mule takes it upon its own shoulders to make sure a stage operates with optimal parameters. Since we have to settle for people like Benny the Gatekeeper, delegating the Resource Controller functionality to Mule is always a good choice. However, if more control over your resources is needed, the default SEDA model implementation can simply be overridden to achieve this.
Utilizing Mule Statistics
Mule collects a lot of statistics about its components, queues, and overall environment. These can be utilized by, for example, the Event Handler. Statistics about the size of the queue for each stage are freely available. This enables us to pull Service Degradation out of our toolbox, reducing the duration of a radio-car ride whenever the queue size passes a certain threshold. The kids will not like it, but we end up with an amusement park event that is running optimally, and the best part is that Benny will receive all the heat from the children who were cut short after standing in line for an eternity (a child’s eternity).
The collected statistics could also be utilized by an implementation of an Outbound Router. When the children are done with the radio-car ride, they pass Benny the Gatekeeper on their way out, giving him heat because of the reduced duration. If Benny is wearing his lenses, or in Mule terms, is looking at the component statistics, he knows which amusement park events are not full. With this knowledge he can advise the kids who just gave him heat to steer clear of the queues where progress is painstakingly slow and head for the events that are not full. This is a way of controlling the resources of the whole amusement park environment.
As described in the first section, SEDA is a powerful mechanism that can be used to implement a dynamic system that handles bursts of load without causing resource starvation in the overall application.
In this article we have tried to describe SEDA itself and how it is implemented in Mule, by use of an amusement park example (the link to the complete example source code is listed in the Resources section below).
To summarize the example: we’ve got the following model of the Mule Amusement Park application, keeping in mind that a lot of details have been left out.
Figure 2: Model of Mule Amusement-Park Application
Our kindergarten brats enter the Amusement Park by enrolling in the Mule Fair queue, or channel. Our lovable children are picked up at the gate by a park guide, who is responsible for directing the young’uns towards one of the many fair events, the radio-car ride being the first, and also the most popular. After that, the merry-go-round and the roller coaster follow.
As mentioned before, guarding the gate at the radio-car queue is our most revered friend, Benny the Gatekeeper, who is implemented as a Mule Inbound Router. Benny, the old bastard, can be bribed with chocolate, but this is no shocking news to you. What’s really disturbing is the fact that if a kid comes along whose name contains the letter ‘e’, Benny will simply reject him or her and send them away sobbing to the main gate. Not the most useful Admission Controller you could think of, but still, he’s all we’ve got. The rest of the children are accepted by Benny and have to wait their turn in front of the exciting radio-cars.
Once done with the crazy driving, the children get out of the cars and regroup at the main gate, awaiting the next event. If the queue in front of the radio cars, or any other event, grows beyond its capacity, Benny puts on his Resource Controller hat and reduces the duration of a radio-car ride. If this doesn’t help, he puts an enqueue failure event on the Alarm Queue, signaling the Park Guide at the main gate to stop sending kids his way until the storm settles.
So, folks, this is very much it. Go and play with Mule and SEDA as a way to handle scalability under high traffic loads!
About the Authors
Rune Schumann is currently Senior Software Architect at Bouvet. Rune has worked with Java since 1996, and with large distributed Java systems since 1999. He has worked for several years as senior architect, lead developer, and project manager on SOA systems in both the telecom and retail businesses. He has also given SOA talks at international conferences and at the Norwegian JUG. He is the author of the article "Evolutionary integration with ESBs", published at InfoQ in June 2007.
Rune Peter Bjørnstad is currently working as a consultant at Bouvet. Rune has worked with Java since 1998 and is a board member of javaBin, Norway’s Java User Group.
Resources
1. SEDA Home Page
2. Mule Architecture Guide
3. Adaptive Overload Control for Busy Internet Servers, by Matt Welsh and David Culler
4. Evolutionary integration with ESBs, by Rune Schumann and Kjetil Paulsen
5. Matt Welsh Homepage
The complete source code for the Mule SEDA example in the article.
SEDA is a new strategy for incorporating event-driven architecture for the scalability and availability of services in the context of SOA.
New?? Hmmm, I recall hearing about it in 2003/2004 when I was developing a low-latency, high-throughput server component using the Proactor/Reactor patterns.
SEDA hijacked by SOA... I mean, in the context of SOA it might be new.
Mule was around before that SOA Borg started assimilating everything.
And yes, Mule rocks!
The concepts are very good, but not for the reasons given. It mainly has to do with the scalability limitations of the ACID model, particularly with the "I" at read-committed or above.
We use the queued concepts extensively within our Coherence infrastructure, which allows us to achieve and sustain the maximum theoretical network throughput on Gigabit Ethernet -- and that is using pure Java!
However, the benefits to application developers will be even greater, as these "low level" concepts begin to surface in standards for work flow management, staged transactions, etc.
The term "new" was actually added by the editors...
Yes, I’m totally aware that the SEDA concept is not new. Some of the ideas actually originate with the grandfather of OO, Kristen Nygaard, at the University of Oslo, Norway (great university :-D).
He actually introduced these ideas of processing stages. Øystein Haugen and Birger Møller-Pedersen (the UML 2.0 guys, you know!) wrote a paper about this thingy they called (surprise!) STAGE. This must be way back in the nineties...
So guys, sorry for the editor’s "add-on"...