
Workflow Orchestration Using Spring AOP and AspectJ

Posted by Oleg Zhurakousky on Dec 29, 2008 |

1. Introduction

You need to implement a flow-like process, preferably embedded, and you want it to be configurable, extensible, and easy to manage and maintain. Do you need a full-scale BPM engine, which comes with its own load of abstractions that might seem heavy for the simple flow orchestration you are looking for, or are there light-weight alternatives you can use without committing to a full-scale BPM engine? This article demonstrates how to build and orchestrate a highly configurable and extensible, yet light-weight, embedded process flow using Aspect Oriented Programming (AOP) techniques. The examples are based on Spring AOP and AspectJ; however, other AOP techniques could be used to accomplish the same results.

2. Problem

Before we go any further, we need a better understanding of the actual problem; then we can try to match that understanding to a set of available patterns, tools, and/or technologies and see if we can find a fit. Our problem is the process itself, so let's get a better understanding of it. What is a process? A process is a collection of coordinated activities that lead to accomplishing a set goal. An activity is a unit of instruction execution, and is the building block of a process. Each activity operates on a piece of shared data (the context) to fulfill part of the overall goal of the process. Parts of the process goal that have been fulfilled signify accomplished facts, which are used to coordinate execution of the remaining activities. This essentially redefines the process as nothing more than a pattern of rules operating on a set of facts to coordinate execution of the activities that define the process. In order for the process to coordinate execution of the activities, it must be aware of the following attributes:

  • Activities - activities defining this process
  • Shared data/context - defines mechanism to share data and facts accomplished by the activities
  • Transition rule - defines which activity comes next after the end of previous activity, based on the registered facts
  • Execution Decision - defines mechanism to enforce Transition rule
  • Initial data/context (optional) - initial state of the shared data to be operated on by this process

The diagram below shows the high-level structure of the process:

We can now formalize a process in the following set of requirements:

  • Define mechanism to assemble process as a collection of activities
  • Define individual activities
  • Define placeholder for shared data
  • Define mechanism that coordinates execution of those activities in the scope of the process
  • Define Transition rules and Execution Decision mechanism which enforces Transition rules based on the facts registered by the activities

3. Architecture & Design

We'll begin defining the architecture by addressing the first 4 requirements:

  • Define mechanism to assemble process as a collection of activities
  • Define individual activities
  • Define placeholder for shared data
  • Define mechanism that coordinates execution of those activities in the scope of the process

An activity is a stateless worker that receives a token containing some data (the context). The activity operates on this shared data token, reading from it and writing to it while performing the business logic it defines. The shared data token defines the execution context of a process.

To stay true to the light-weight principles we set earlier, there is no reason not to define our Activities as Plain Old Java Objects (POJOs) implementing Plain Old Java Interfaces (POJIs).

Here is the definition of the Activity interface, with a single process(Object data) method whose input parameter represents a placeholder for the shared data (context):

public interface Activity {
   public void process(Object data);
}

The placeholder for the shared data could be a structured or unstructured (e.g., Map) object; it is entirely up to you. Our Activity interface currently defines it as java.lang.Object for simplicity, but in a real environment it would probably be some type of structured object graph whose structure is known to all participants of the process.
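To make the idea concrete, here is a minimal sketch of what such a structured context might look like. This is not code from the article; the class and method names (ProcessContext, registerFact, etc.) are assumptions for illustration. The context carries the business data the activities operate on, plus the registry of accomplished facts:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical shared-data placeholder: business data plus the fact registry.
public class ProcessContext {
    private final Map<String, Object> data = new HashMap<>();
    private final Set<String> facts = new HashSet<>();

    public Object get(String key)             { return data.get(key); }
    public void put(String key, Object value) { data.put(key, value); }

    // Facts signal accomplishments and drive transition decisions.
    public void registerFact(String fact)     { facts.add(fact); }
    public boolean hasFact(String fact)       { return facts.contains(fact); }
}
```

Each activity would read and write business data through the same token, while the fact registry is interrogated by the transition mechanism discussed later in the article.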

Process
Since a process is a collection of activities, we need a mechanism to assemble and execute such a collection.

There are many ways to accomplish that. One would be to put all activities in some type of ordered collection and then iterate through it, invoking each activity in a pre-defined order. Configurability and extensibility would suffer in this approach, because all aspects of process control and execution would be hard-coded.

We can also look at the process in a somewhat unconventional way and say that a process is a behavior-less abstraction with no concrete implementation. Filtering this abstraction through a chain of Activity Filters, however, will define the nature, state, and behavior of the process.

Let's assume that we have a class called GenericProcess which defines process(..) method:

public class GenericProcess {
   public void process(Object obj){
      System.out.println("executing process");
   }
}

If we were to invoke the process(..) method directly, passing the input object, not much would happen, since the process(..) method doesn't do much and the state of the context would remain unchanged. But if we could somehow introduce an activity before the call to the process(..) method and have this activity modify the input object, the process(..) method would still remain unchanged; yet since the input object is pre-processed by the activity, the overall result of the process context would change.

The technique of applying Intercepting Filters to a target resource is well documented in the Intercepting Filter pattern and is widely used in today's enterprise applications. A typical example is Servlet Filters.

The Intercepting Filter pattern wraps existing application resources with a filter that intercepts the reception of a request and the transmission of a response. An intercepting filter can pre-process or redirect application requests, and can post-process or replace the content of application responses. Intercepting filters can also be stacked one on top of the other to add a chain of separate, declaratively-deployable services to existing resources with no changes to source code - http://java.sun.com/blueprints/patterns/InterceptingFilter.html

Historically this architecture was used to address non-functional concerns such as security, transactions, etc. But the same approach can easily be applied to functional characteristics of the application, by assembling process-like structures from a chain of intercepting filters representing individual Activities.
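Stripped of any framework, the mechanics can be sketched in a few lines of plain Java (the names here are illustrative, not from the article): each filter in an ordered chain decorates the shared context before the empty target process is reached.

```java
import java.util.List;
import java.util.Map;

public class FilterChainSketch {

    // A filter wraps one activity's work on the shared context.
    public interface Filter {
        void apply(Map<String, Object> context);
    }

    // The target "process" is intentionally empty; all behavior comes
    // from the filters applied on the way in.
    public static void process(Map<String, Object> context) {
    }

    // Drive the context through the chain, then hit the empty target.
    public static void run(List<Filter> chain, Map<String, Object> context) {
        for (Filter filter : chain) {
            filter.apply(context);
        }
        process(context);
    }
}
```

Swapping in a different chain yields a different process while the target class stays untouched.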

The figure below shows how a call to a Process is intercepted by the filter chain, where each filter is tied to an individual Activity of the process. This leaves the actual target Process component with nothing to do, making it an empty, reusable target object. Switch the filter chain and you have a different Process.

The only thing left to do is to see if there is a framework out there that can help us assemble something like this in an elegant way.

Proxy-based Spring AOP seems to be the perfect candidate, since it provides a simple structure and, most importantly, the execution mechanism. It allows us to define an ordered collection of intercepting filters representing the Activities of a given process. The incoming process request will be proxied through these filters, decorating the process with the behavior implemented in the Activity Filters.

This leaves us with only one remaining requirement:

  • Define Transition rules and Execution Decision mechanism which enforces Transition rules based on the facts registered by the activities

The beauty of proxy-based filters is the transition mechanism itself. The intercepting filters will be invoked one after another by the proxy mechanism every time we invoke the target object (the process). This comes for free and works perfectly well when each and every Activity must always be invoked. But in reality this is not always the case. One of the statements we made earlier in the problem definition is: "Parts of the process goal that have been fulfilled signify accomplished facts which are used to coordinate execution of remaining activities" - which means that completion of one activity does not necessarily grant transition to another activity. In a real process, transitions must be strictly based on facts accomplished and/or not accomplished by the previous activity. These facts must be registered with the shared data placeholder, so they can be interrogated.

Accomplishing this is as simple as putting an IF statement inside our intercepting filter:


public Object invoke(MethodInvocation invocation) throws Throwable {
   if (factsExist()) {              // placeholder for the fact-rule check
      return invocation.proceed();  // invoke the activity
   }
   return null;                     // otherwise skip the activity
}

But that would create several problems. Before we go into what they are, let's clarify one thing: in the current structure each intercepting filter is closely coupled with a corresponding POJO Activity, and rightfully so. We could easily keep all Activity logic inside the intercepting filter itself. The only thing that stops us from doing so is our desire to keep the Activity a POJO, which means the code in the intercepting filter simply delegates to the Activity callback.

This means that if we put the transition evaluation logic inside the Activity, we would couple two concerns together (activity transition logic and activity business logic), violating the basic architectural principle of separation of concerns and resulting in concern/code coupling. Another issue would be repeating the same transition logic across all intercepting filters - concern/code scattering. Transition logic cross-cuts all intercepting filters, and as you might have guessed, AOP yet again comes to mind as the technology of choice.
All we need is to write an around advice that allows us to intercept the invocation of the target method of the actual filter class, evaluate the input, and make a transition decision by either allowing or disallowing the target method to execute. The only caveat is that our target class happens to be the intercepting filter itself. So essentially we are trying to intercept the interceptor. Spring AOP unfortunately can't help us here, simply because it is proxy-based, and since our intercepting filters are already part of a proxy infrastructure, we cannot proxy the proxy.

But one of the best features of AOP is that it comes in several flavors and implementations (e.g., proxy-based, byte-code weaving, etc.). And although we can't use proxy-based AOP to proxy another proxy, nothing stands in the way of using byte-code weaving AOP, which will instrument the individual intercepting filters of our proxy by weaving (at compile time or load time) our transition evaluation logic into them, thus keeping transition and business logic separate. This can easily be achieved using a framework such as AspectJ. By doing so we are introducing a second AOP layer into our architecture, which has very interesting implications: we are using Spring AOP to address functional concerns, such as instrumenting the process with activities, while AspectJ is used to address non-functional concerns, such as activity navigation and transition.

The diagram below documents the final structure of the process flow, shown as two AOP layers: the Functional AOP Layer is responsible for assembling a process from the set of ordered intercepting filters, while the Non-Functional AOP Layer addresses the issue of transition governance.

To demonstrate this architecture at work, we are going to implement a sample use case - Purchase Item - which defines a simple process flow.

4. Use Case (Purchase Item)

Imagine you are shopping online. You've selected an item, placed it in the shopping cart, gone through the checkout phase, provided your credit card information, and finally submitted the purchase item request. The system will initiate the Purchase Item process.

PREREQUISITE
The process must receive data containing item, billing, and shipping information

MAIN FLOW
1. Validate item availability
2. Get Credit authorization
3. Process shipping

This process currently defines 3 activities as shown in the diagram below:

This diagram also shows the ungoverned activity transition. But in reality, what should happen if the item is not available? Should the "Get Credit Authorization" activity execute? Should "Process Shipping" follow?

Another interesting caveat is the condition where credit authorization cannot be obtained automatically (the authorization network is down) and you or a customer service representative has to call the credit company directly to get an authorization number. Once such an authorization number is obtained and entered into the system, at what point should the process be restarted or continued? From the beginning, or straight into shipping? I would say go into shipping, but how? How can we restart the process from the middle without maintaining and managing a lot of execution control?

The interesting thing is that using AOP we need to maintain neither execution control nor the direction of the process. It is done by the framework itself while proxying the request through the chain of intercepting filter activities. All we need to do is come up with a mechanism that will allow or disallow individual filters to execute based on the registered facts.

"Validate Item availability" will register the fact that item is available. This fact should serve as pre-requisite for "Get Credit Authorization", which will also register the credit authorized fact, which will serve as pre-requisite to "Process Shipping" activity. The existence or lack of a fact could also be used to determine when not to execute a particular activity, which brings us back to "manual credit verification" scenario and how to restart the process from the middle or even better question: How can we restart the process without repeating the activities that were already performed within the context of this process?

Remember, the shared data token (context) also represents the state of the process. This state contains all the facts registered with this process, and these facts are evaluated to make transition decisions. So, in our "manual credit verification" scenario, if we were to resubmit the process as a whole from the very beginning, our transition management mechanism, upon encountering the first activity, "Validate Item Availability", would quickly realize that the item available fact is already registered and this activity does not have to be repeated, so it would skip to the next activity - "Get Credit Authorization". Since the credit authorized fact was also registered (through some type of manual entry), it would skip again to the next activity, "Process Shipping", allowing only this activity to execute and complete the process.

Before we move to the actual example, there is one more important topic to discuss: the order in which activities are defined.
Although it might seem otherwise at first, the order of the activities does not play any role in the transition decisions of the process defined by these activities.

The order of the Activities in the process only represents the equilibrium of the process itself - a strategy based on the likelihood and probability of facts whose existence would create an ideal environment for the next activity to either execute or have its execution precluded. Changes to the order of the activities must never affect the overall process.

Example:
Legend:
  d - depends
  p - produces
Process:
  ProcessContext = A(d-1, 2; p-3) -> B (d-1, 3; p-4, 5) -> C(d-4, 5);

According to the formula above, when a process is started within a given ProcessContext, the first activity to be looked at is A, which depends on facts 1 and 2 existing before it is invoked. Assuming facts 1 and 2 do exist within the ProcessContext, activity A executes and produces fact 3. The next activity in line is B, which depends on facts 1 and 3. Knowing our process, we have determined that the likelihood of fact 3 occurring before activity A is extremely low, while the likelihood of fact 3 existing after activity A is invoked is quite high; hence the ordering in which B follows A.

But what would change if we were to flip the order of activities B and A?
  ProcessContext = B (d-1, 3; p-4, 5) -> A(d-1, 2; p-3) -> C(d-4, 5);

Not much. When the process is invoked, the ProcessContext, which maintains the registry of facts, will quickly determine that not enough facts are registered to permit invocation of activity B, so it skips to the next activity, A. Assuming that facts 1 and 2 do exist, the evaluation of facts determines that there are enough facts registered to permit invocation of activity A, and so on. Activity C is skipped as well, as it is missing prerequisites that are produced by activity B. If the process is resubmitted with the same ProcessContext, activity B will be invoked, since activity A in the previous invocation of the process registered the fact required by activity B to satisfy its precondition. Activity A will be skipped, since the ProcessContext is aware that activity A already did its job. Activity C will also be invoked, since activity B registered enough facts to satisfy the precondition of activity C.

So, as you can see, switching the order of activities will not change the process behavior, but it might affect the process's automation characteristics.
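The order-independence argument can be checked with a short, framework-free simulation. The Activity record and submit method below are my own illustration, not the article's code; activity C is given a product fact "6" purely so the simulation can tell whether it has already run:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class TransitionSimulation {

    // d - the facts an activity depends on; p - the facts it produces.
    public record Activity(String name, Set<String> depends, Set<String> produces) { }

    // One submission of the process: an activity runs only if all of its
    // prerequisite facts are registered and its own products are not yet
    // all present (i.e., it has not already done its job).
    public static List<String> submit(List<Activity> chain, Set<String> facts) {
        List<String> executed = new ArrayList<>();
        for (Activity a : chain) {
            if (facts.containsAll(a.depends()) && !facts.containsAll(a.produces())) {
                facts.addAll(a.produces());
                executed.add(a.name());
            }
        }
        return executed;
    }
}
```

With the flipped ordering B -> A -> C and initial facts {1, 2}, the first submission executes only A; resubmitting the same context then executes B and C, matching the walkthrough above.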

5. Example

The example consists of the following artifacts:

Implementation of the Generic Process, which as you can see contains no significant code, and in fact should never contain any significant code. The only purpose for this class is to serve as a target class to apply intercepting filter chain representing individual activities.

Its corresponding Spring definition:

The rest of the configuration of Purchase Item process consists of 3 parts:

Part 1 (line 14) - the Process Assembly AOP configuration, which consists of a pointcut defining the GenericProcessImpl.execute(..) method as the Join Point. You can also see that we are using the bean(purchaseItem) pointcut expression to qualify which bean we are intercepting. This allows us to define more than one process by creating another instance of GenericProcessImpl with a different bean name and applying a different filter chain.
It also contains references to the Activity filters, implemented as AOP Alliance interceptors. By default, filters are chained in top-to-bottom order; however, to be more explicit, we are also using the order attribute.
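The article's configuration listing is not reproduced here, but a Part 1 sketch might look roughly like the following (the bean and filter names are assumptions based on the description above, not the actual file):

```xml
<!-- Hypothetical sketch of the Process Assembly configuration -->
<aop:config>
   <aop:pointcut id="processExecution"
         expression="execution(* GenericProcessImpl.execute(..)) and bean(purchaseItem)"/>
   <aop:advisor advice-ref="validateItemFilter"    pointcut-ref="processExecution" order="1"/>
   <aop:advisor advice-ref="authorizeCreditFilter" pointcut-ref="processExecution" order="2"/>
   <aop:advisor advice-ref="processShippingFilter" pointcut-ref="processExecution" order="3"/>
</aop:config>
```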

Part 2 (line 30) - configures the activity interceptors by defining three instances of ActivityFilterInterceptor. Each instance is injected with a corresponding POJO Activity bean (defined later) as well as a facts attribute. The facts attribute defines a simple rule mechanism that allows us to specify a condition based on which the underlying activity will be allowed or disallowed to execute. For example, validateItemFilter defines the "!VALIDATED_ITEM" fact rule, which is interpreted as: invoke the activity unless the VALIDATED_ITEM fact is registered within the fact registry. This fact is registered with the fact registry as soon as validateItemActivity executes, which allows this activity to execute if the fact is not yet registered and protects it from executing again if the process is resubmitted with the same execution context where this fact is already registered.
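The "!FACT" rule syntax described above is simple enough to sketch in a few lines (again, an illustration rather than the article's actual code):

```java
import java.util.Set;

public class FactRule {

    // "FACT"  -> run the activity only if FACT is registered.
    // "!FACT" -> run the activity only if FACT is NOT yet registered.
    public static boolean permits(String rule, Set<String> registeredFacts) {
        if (rule.startsWith("!")) {
            return !registeredFacts.contains(rule.substring(1));
        }
        return registeredFacts.contains(rule);
    }
}
```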

Part 3 (line 47) - configures three POJO activities for our process.

ActivityFilterInterceptor - all it does is invoke the underlying POJO Activity and register the facts returned by that activity (line 53), allowing the POJO Activity to remain agnostic as to the location of the Fact Registry or any other infrastructure components of this process (see below). However, as we will see later, the invocation of this interceptor is itself controlled by an AspectJ advice (the second AOP layer) based on the fact rules specified in the configuration of each interceptor, thus controlling the execution of individual activities.

Individual POJO Activities simply return the String array of all facts they want to register which are then registered with Fact Registry by owning interceptor (see above).
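Framework aside, the division of labor can be sketched like this (illustrative names; note that the activities in this part of the example return the facts to register, unlike the minimal void interface shown earlier):

```java
import java.util.Set;

public class ActivityFilterSketch {

    // In this sketch the activity reports the facts it accomplished.
    public interface FactReportingActivity {
        String[] process(Object data);
    }

    // The interceptor's whole job: delegate to the POJO activity, then
    // register whatever facts it returned. The activity itself never
    // touches the fact registry.
    public static void invoke(FactReportingActivity activity, Object data,
                              Set<String> factRegistry) {
        String[] facts = activity.process(data);
        if (facts != null) {
            for (String fact : facts) {
                factRegistry.add(fact);
            }
        }
    }
}
```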

TransitionGovernorAspect - an AspectJ component that intercepts the invocation of each Spring AOP interceptor representing an individual Activity. It does so using an around advice, where it evaluates the fact rules against the registry of current facts, making a decision about proceeding with or skipping the invocation of the underlying activity interceptor. It does so by either invoking the proceed(..) method of its own invocation object (ProceedingJoinPoint thisJoinPoint) or invoking the proceed(..) method of the intercepting filter's invocation object (MethodInvocation proxyMethodInvocation).

Since it is implemented as AspectJ aspect we need to provide configuration in META-INF/aop.xml (see below).
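The actual aop.xml is not reproduced in the article; a sketch of such a file might look like this (the aspect and package names are assumptions):

```xml
<!-- Hypothetical META-INF/aop.xml -->
<aspectj>
   <weaver>
      <!-- only weave the activity filter interceptors -->
      <include within="*..ActivityFilterInterceptor"/>
   </weaver>
   <aspects>
      <aspect name="com.example.process.TransitionGovernorAspect"/>
   </aspects>
</aspectj>
```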

Since we will use load-time weaving, we need to register the weaver in the Spring configuration. We'll do it using the context namespace:

<context:load-time-weaver/>

At this point we are ready to test. As you will see, there is nothing special about the test. Here are the steps:

  • Get ApplicationContext
  • Get GenericProcess object
  • Create the Fact Registry (a simple list)
  • Create Object (Map in our case) which represents the input data as well as execution context
  • Invoke process method

Since we are utilizing AspectJ load-time weaving we need to provide -javaagent option as our VM argument.
The VM argument is:

  -javaagent:lib/spring-agent.jar

There is already a spring-agent.jar in the lib directory.
After execution you should see output similar to this:

As you see from the test, the initial list of facts is empty, but if you were to populate it with existing facts, the process flow would be altered.
Give it a try by uncommenting the following line in your test:

// factRegistry.add("VALIDATED_ITEM");

And your output should change:

6. Conclusion

This approach demonstrates how to use two layers of AOP to assemble, orchestrate, and control a process flow. The first layer, implemented using Spring AOP, assembles the process as a chain of intercepting filters, where each filter is injected with an individual activity. The second layer, implemented using AspectJ, provides orchestration and flow control. Proxying our process through the chain of intercepting filters allows us to define and maintain the direction of the process, and the proxying mechanism also provides an execution environment without requiring a separate engine such as a BPM engine. We do it by using existing technology (Spring AOP), which provides the control and execution mechanism.

The approach is light-weight and embedded. It uses the existing Spring infrastructure and is built on the premise that a process is a collection of orchestrated activities. Each activity is a POJO and is completely unaware of any infrastructure/controller components that manage it. This presents several benefits. Aside from the typical architectural benefits of loose coupling, and with the ever-growing popularity and adoption of technologies such as OSGi, keeping activities and activity invocation control separate opens the door for an activity to be implemented as an OSGi service, allowing each activity to become an independently managed unit (deployed, updated, un-deployed, etc.). Testing is another benefit. Since activities are POJOs, they can be tested as POJOs, outside of the process in which they are applied. They have a very well defined input/output contract (the data each needs and the data it is expected to output), so you can test each activity in isolation.

Separating control logic (intercepting filters) from business logic (POJO Activities) also allows you to plug in a more sophisticated rules façade to process fact rules, and it means that testing the transition control logic does not affect the business logic implemented by the underlying activity.

Activities are independent building blocks and could be re-used as part of some other process. For example "Credit Validation" activity could be easily re-used while assembling some other process which requires credit validation.

7. References and Resources

8. About the Author

Oleg Zhurakousky is an IT professional currently working as Senior Consultant for SpringSource, with 14+ years of experience in software engineering across multiple disciplines including software architecture and design, consulting, business analysis and application development.
After starting his career in the world of COBOL & CICS in the early nineties, he has been focusing on professional Java and Java EE development since 1999. Since 2004 Oleg has been heavily involved in using several open source technologies and platforms (Spring in particular), while working on a number of projects around the world spanning industries such as Telecommunication, Banking, Law Enforcement, US DOD and others.


Small change by Tuomas Kassila

Thanks for your very nice and interesting article! I would change the order of lines 51-54 and 55 (swap them), and then add a new line 56 with a return statement - because if executing the method does not succeed and it throws an exception, there should not be a false fact in the registry!

Re: Small change by Oleg Zhurakousky

Thank you Tuomas.
One thing to remember is that this example was greatly simplified and generalized from the actual implementation that was delivered, but here are a couple of points to keep in mind.
Although at first it might seem like each Activity delivers a single fact, in reality that is not true. Each fact is nothing more than a signal of some accomplishment, therefore if you have a complex activity there could be more than a single fact registered per Activity. However, if you decide that a particular Activity is transactional and must output all of its facts or none, then you should implement it as such; by default I am not implying transactional characteristics on the fact registry, which means that a failed Activity could still register a few facts based on the things it managed to accomplish.

Branching, etc by D S

Your method looks good for linear process models with conditional activities, but would you agree that most process models will also require branches and other process model features?

Re: Branching, etc by Oleg Zhurakousky

I completely agree, and in no way do I think of this particular approach as a replacement for a BPM engine. It is just a quick example of how, using AOP and Spring in general, one can quickly build and orchestrate a light-weight, embedded process flow.
Having said that, I do have to acknowledge that this sample and this article is greatly simplified from what I am currently working on along the lines of Workflow Orchestration and BPM, where branching/forking, asynchronous processes, event handling and cloud-deployable tasks are all part of the implementation.

Tool Support by John Reynolds

Just curious on your take on Tool Support - specifically Process Visualization via BPMN or something similar.

Re: Tool Support by Oleg Zhurakousky

John

That's a good question and I'll be honest, I haven't given it much thought yet. For now it's more about experimenting/prototyping with several different implementations at the conceptual level without set scope or constraint.
However, BPMN (in my view) is a notation mechanism to document and understand "business processes" while formalizing them around basic elements such as Events, Activities, Gateways, Connections, etc. It also recognizes and formalizes various process modeling patterns such as Basic Control, Branching and Synchronization, Iteration, etc. BPMN doesn't mandate an implementation model, though (although one might argue this from a BPEL perspective). In any event, I am currently more interested in experimenting with different (lighter-weight) approaches to implementing such concepts and patterns outside of the current execution models built around BPEL. So, although I don't see any conflict in how a business process is documented (BPMN or flow chart), I am drawing a clear separation between Process Modeling and Process Execution.

Re: Tool Support by Suresh K

Isn't this the Chain of Responsibility pattern?

I am somewhat confused as to whether AOP is really adding any advantage to a simple bean doing the same.
[code]
<bean id="workflowExecutor" class="com..WorkflowExecutor">
   <property name="processors">
      <list>
         <ref bean="bean1"/>
         <ref bean="bean2"/>
         <ref bean="bean3"/>
      </list>
   </property>
</bean>
[/code]
I am not questioning the advantages of AOP, but in this particular case, is it really needed to facilitate the activities?

Scalability by Tom McCuch

Since AOP is bound to a single JVM, have you thought about how this light-weight BPM can scale to support high-concurrency?

Re: Tool Support by Oleg Zhurakousky

Well
Whether AOP adds any advantage is rather subjective and is up to the community to decide.
Having said that. . .
In your example, WorkflowExecutor must maintain the code to build the list of activities (although Spring's property editor will take care of that), then it has to iterate through it and invoke each activity, etc. In other words, you are defining the execution model, which with AOP you don't have to, since AOP already defines the execution model to trigger invocation of every interceptor in the order they were defined. Second, in your example, bean1, 2, and 3 seem to be invoked without any rule, unless the rules are programmatically maintained by WorkflowExecutor. Applying a second layer of AOP allows you to introduce a control and transition governance model separate from the execution model. It also allows you to introduce transition changes without changing individual activities.

Re: Scalability by Oleg Zhurakousky

In this case it's really not about AOP, but rather about the transitioning between the activities, which is strictly governed by the registered facts.
AOP in this particular example is just an enabler to address two concerns:
1. Execution and direction of the activities
2. Activity transition which is based on the facts

Concurrency and parallelism come from the actual implementation of the Activity and the Transitioning interceptor(s).
Let's assume the process is made of 3 activities, A, B, and C. Following the formula described in the article, let's say it looks like this:

ProcessContext = A(d-1, 2; p-3) -> B (d-1, 3; p-4, 5) -> C(d-4, 5);

Let's say activity B is implemented as an asynchronous activity, which means that B and C will be invoked "almost" concurrently. However, C depends on facts that must be registered by activity B, which most likely will not be present in the fact registry when C is invoked, so it will be skipped. Activity B, however, based on its implementation, could easily resubmit the actual process upon completion and fact registration, thus allowing activity C to complete. You could also implement a wait with timeout inside the transitioning interceptor, allowing activity C to wait for the required facts. Don't forget, activities can register more than one fact throughout their execution, which means that while activity B is still executing it might register enough facts to allow activity C to begin its execution before B is actually finished.
As for multiple VMs, activities do not have to be bound to a single JVM and could reside in a distributed environment, as long as they are reachable by some remoting mechanism (e.g., RMI, or going all the way to technologies such as Terracotta, GridGain, etc.).

Re: Scalability by Suresh K

bean1,2,3 are equivalent of your *filter beans.

Nevertheless, I completely agree with your solution.

Ah! Execution Model is handled by AOP in your example. My question was more around the benefit provided by AOP for this particular use case - "light-weight" workflow orchestration solution.

Re: Scalability by Oleg Zhurakousky

There are two layers of AOP.
1. Execution (functional)
2. Transition/orchestration (non-functional)

Read the previous post for more details.

What about debugging? by Ze Ro

We are currently using Ant as a kind of "workflow engine" in a project (don't look at me). The problem is that Ant seems to start new processes for some tasks, which makes debugging of a whole "process" impossible.
I want to re-design the project to a single-process model based on Spring. This article is really great and gave me a lot of ideas, but:

1. What about debugging? Is AspectJ debuggable with load-time weaving?

2. I am not experienced in AOP (I know the principles), so all the configuration is a bit hard to grasp. I need an *easy* way to design new processes with existing Activities. This example does not look easy, and I think I would rather prefer Suresh's version of defining a process, since it's more intuitive and easier to grasp (although from the software architecture point of view I understand the good ideas behind Oleg's article).

Thanks, Roman


InfoQ.com and all content copyright © 2006-2014 C4Media Inc.