Article: Rationalizing the presentation tier

by Niclas Nilsson on Jul 09, 2008 |

The thin-client paradigm that characterizes web applications is a kludge that needs to be repudiated. The old compromises are no longer needed, and it's time to move the presentation tier to where it belongs.

In this article, Ganesh Prasad and Peter Svensson explain why this is the case, and what a modern approach looks like.

From the article:

However, let us be blunt about the impact of these frameworks. Although they bring order and rationality to server-side presentation logic, they only serve to perpetuate a kludge. There is low cohesion within the Presentation Tier, because presentation responsibilities have been split between browser and web server for extraneous reasons that have nothing to do with sound architecture. Simultaneously, there is tight coupling between presentation logic and business logic on the server side. Specifically, current web frameworks create the client on the server from several variants of server-side templates, configuration files, annotations and the like, which increases the complexity of building something that should have been straightforward. Today, not only are web frameworks no longer required, their acceptance as a natural component of every system hobbles us significantly in our effort to build applications better.

To dive in and see examples of a rational presentation tier, here is the full article.


Good to hear by David Karr

It's good to hear someone speaking out about this idea. I've been thinking along these lines for quite a while. Several server-side frameworks take the approach of completely embedding the client-side framework within it, which leaves very little flexibility on the client-side.

However, it's also useful to be pragmatic. There's no reason to go all the way in this direction. It's a good idea for the server-side to consist more of smaller services used by the client, as opposed to having the server drive all of the client-side logic, but there's nothing wrong with, for instance, using the templating features of the server-side framework to give the client side a shortcut to its initial view.

I would also point out that building a "service-oriented architecture" here doesn't mean you need to use SOAP, or even XML. JSON works perfectly fine, and is better in some circumstances (that doesn't mean that you skip authentication and security).
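The payload difference David is pointing at can be sketched side by side. This is an illustrative comparison only; the field names and the SOAP envelope shape are hypothetical, not taken from any particular service.

```javascript
// A hypothetical account-lookup response, first as a SOAP-style XML
// envelope, then as the JSON a lightweight service might return.
const soapBody =
  '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
  '<soap:Body><GetAccountResponse>' +
  '<AccountId>42</AccountId><Balance>310.50</Balance>' +
  '</GetAccountResponse></soap:Body></soap:Envelope>';

const jsonBody = '{"accountId": 42, "balance": 310.50}';

// On the client, consuming the JSON variant is a one-liner; no DOM
// traversal or namespace handling required.
const account = JSON.parse(jsonBody);

console.log(account.accountId); // 42
// The JSON payload is also a fraction of the size of the envelope.
console.log(jsonBody.length < soapBody.length); // true
```

None of this removes the need for authentication and transport security on the JSON endpoint, as the comment notes.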

Also, the Yahoo UI framework works just as well on the client side as the other client side frameworks you mention.

RE: A lean architectural model with seamless integration between... by Yuen Chi Lian

Exactly.



In past projects that I have worked on (note: those are great products), the architects/developers used Velocity, Struts/WebWork, etc. to keep things separated within the same container, but things still turned ugly. After reading more about SOA and Domain-Driven Design, I am applying a more seamless-and-integrated layered architecture in my new project.



I agree with David too, as I am not a fan/believer of SOA+WS. I need something neat and lightweight, so XML-RPC, JSON or even plain text comes first into consideration.



- yc

Nothing new by haiko van der schaaf

A bit of a disappointing article. The trend of moving presentation logic to the client side is seen as a way to separate the presentation layer from business logic. But having the presentation logic on the server side does not imply that a clean separation between business and presentation logic is impossible. For years, people have been designing their web applications in multiple layers to separate the different concerns.

Technologies like Flex and Ajax frameworks simply make it possible to shift the presentation layer to the client.

Physical and Logical Coupling by Julian Jones

The thrust of this article seems to be that, by moving the presentation logic to the client, the effect is to automatically decouple business logic from presentation logic.

I don't think this holds true. There's no reason why server-side presentation logic has to be tightly coupled with server-side business logic. There is also a reverse argument: if you are going to locate the presentation logic on the client, what guarantees are there that the business logic won't leak into that client-side code?

Oversimplification by Claus Augusti

As much as I like reading about the demise of "classic" (read clumsy) web application frameworks I think the article describes a world that we don't live in yet and contains wrong or oversimplified statements that lead to a wrong conclusion.

First and foremost, what would you define as presentation logic? I think there's a great deal of misconception here, commonly defining it as everything that's not business logic. Defining templating as a part of presentation logic, I clearly don't see that happening in the browser in the near future, as JavaScript-based solutions or XSLT in the browser are either too slow or not reliably available on all platforms. Which leads me to "XML as a native data type". Having the clumsy DOM interfaces in modern browsers doesn't mean it's easy to work with XML from a data-centric point of view (any idea why native SOAP never took off in browsers?). I don't see "newer languages" available in the browser; JavaScript is your only choice, and only in JavaScript 1.5 (Firefox 1.5+) can you find E4X implemented, which finally gives you XML as a first-class citizen. As sad as it is, we still need to "minimize our dependence on a whimsical platform". Platforms aren't as bad as they used to be, but developing economically feasible web applications still boils down to finding the least common denominator. Add in poor memory handling, tons of bugs to work around, cross-domain access and many more issues...

Stating that "web frameworks [are] no longer required" is simply wrong. Pushing responsibilities from one layer to another doesn't simplify your development process or architecture. Web frameworks will always be required. Certainly, they will look completely different than today, really embracing the open and distributed nature of the web rather than following old paradigms that have simply been renamed. What we really need are frameworks that no longer make a distinction between client and server technology on the presentation tier. Frameworks that make it easy for us to build mashups from a SOA world. Frameworks that transparently and cleverly distribute work between client and server, adapting to the device and its capabilities. There's still a long way ahead of us.

This is the model followed by WSO2 products by Afkham Azeez

Good to see this article. At WSO2, this is the approach we've been following in our product management consoles. For example, see WSAS and Mashup Server. The management functionality (bizlogic) is provided by a set of Web services, and the front end is based on AJAX + XSLT. The browser client directly talks to the services. This approach really simplifies the development & maintenance.




Afkham Azeez

Re: Good to hear by Peter Svensson

I agree that the basic premise is not affected by how you format the data or how you get the client out in the first place.

However, by using a server-side framework to generate the client, you immediately mix things up again. And if you do not intend to use the templating features at all, you might as well use a separate HTML/JS client with a clean separation from the server.

Also, I agree that YUI has a good data model too. My bad. :)

Cheers,
PS

Re: Nothing new by Peter Svensson

Hi Haiko. The idea is not to separate presentation from business logic, but to put the presentation on the client.

I absolutely agree with you (as we note in the article) that Flex and Ajax are ways to put the presentation on the client.

Cheers,
PS

Re: Physical and Logical Coupling by Peter Svensson

Actually, there is a proven tight coupling between business and presentation logic on the server (as we point out), namely the server-side templating and 'meta-files' that couple template features to business logic.

The reverse argument does not hold true either, unless you explicitly make an error.

If you build a true client with cache and data layers in the page, which consumes services that only deliver data (or the other way around), you have a very clean separation of layers both physically and logically.
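The separation Peter describes can be sketched as a small client-side data layer that is the only thing the view ever talks to. This is a minimal illustration, not any framework's actual API; the transport function is injected so the sketch stays self-contained, and all names are hypothetical.

```javascript
// A minimal client-side data layer: the view asks this object for data,
// the object asks a data-only service, and responses are cached in the
// page. Nothing here knows anything about server-side templates.
function createDataLayer(fetchFn) {
  const cache = new Map();
  return {
    // Return cached data when present; otherwise hit the service once.
    async get(url) {
      if (!cache.has(url)) {
        cache.set(url, await fetchFn(url));
      }
      return cache.get(url);
    },
    // Drop a cached entry, e.g. after the client posts an update.
    invalidate(url) { cache.delete(url); }
  };
}
```

Because the service only ever delivers data and the layer only ever consumes it, the physical boundary (HTTP) and the logical boundary (presentation vs. business logic) coincide, which is the clean separation being claimed.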

Cheers,
PS

Re: Oversimplification by Peter Svensson

Hi Claus :)

Regarding presentation logic: if I take it that you feel presentation logic is part of the templating languages, and given your assertion that this is not taking off in the browser, consider that Dojo has three different client-side templating engines, one being DTL (the Django Template Language). XSLT is not a very good idea to use for dynamic, reactive web-based apps.

The reason for a true separation of the client, with its presentation/view, from a server that produces and consumes 'clean' data only is that you get two things:

1) A contract by protocol between presentation and business logic, making both processes less prone to mixing things up, and providing the possibility of having two different teams code against a common protocol.

2) A simpler server, relieved of any client-side-in-a-can logic, meta-files and annotations.

I would really recommend that you check out the data layers and message buses in Ajax frameworks such as Dojo, YUI, ExtJS and others.

Cheers,
PS

Try reading the original articles by Ganesh Prasad

Perhaps we haven't been able to adequately explain all the concepts we have introduced in this brief article. More detailed explanations can be found here, here and here.

Re: Oversimplification by pims pims

Peter, Ganesh,


After reading your article a couple of times I still feel uncertain about several things.



1 - By offloading the presentation logic to the client, you're unfortunately introducing uncertainty into the execution of the application.



When dealing with 95% of the presentation logic server-side, you're operating in an environment you're more than familiar with, fully aware of its capabilities and limits. You have control over almost everything, which enables you to react quickly if something misbehaves. It is obviously the other way around when you're dealing with the client platform. A consistent VM on the client is still, as of today, a utopia. Of course, some have done the most tedious work by providing libraries that work cross-platform, but at the cost of speed or weight (or both, in the worst case).



2 - By offloading the presentation logic to the client, you're also trading I/O access for HTTP requests. Is it worth it?


You now have to deal with network issues (which were 99% of the time out of the equation on the server), which will certainly lead to more complexity on the client.



Until we can rely on Google Gears (or SQLite) for every client platform out there, I'm not sure about it. Caching extensively is much easier on the server than on the client.



If I got it right, your service-oriented architecture is now only serving up data for consumption on the client. But how much does raw data account for of the total amount of data required to present the information? In my experience, there's almost as much "cosmetic" as data in most pages being served these days.



3- By putting presentation logic on the client, you're also making it accessible to everyone.

No more secret weapon, no more competitive advantage. Once it's on the client, it's not yours anymore. I do believe in sharing knowledge and practical examples of its implementation, but from a business standpoint, this might be a step only a few will take.





I'd really like to see the view out of the server, as it would make my job much easier, but recent experience has shown me that it's not an easy job and is, as of today, not worth the effort & time to get it right across every client platform :)



It's great to see that some people are challenging the current "broken" implementation. As the web goes, nothing will ever be set in stone.

Re: Oversimplification by Peter Svensson

Peter, Ganesh,


After reading your article a couple of times I still feel uncertain about several things.



1 - By offloading the presentation logic to the client, you're unfortunately introducing uncertainty into the execution of the application.



When dealing with 95% of the presentation logic server-side, you're operating in an environment you're more than familiar with, fully aware of its capabilities and limits. You have control over almost everything, which enables you to react quickly if something misbehaves. It is obviously the other way around when you're dealing with the client platform. A consistent VM on the client is still, as of today, a utopia. Of course, some have done the most tedious work by providing libraries that work cross-platform, but at the cost of speed or weight (or both, in the worst case).
-----------------------------------


Hi Pims pims.
I would say that your phrase "more than familiar with" is the problem we're battling here. People tend to argue that they want to do everything on the server simply because that's the only thing they know, a little bit like looking for the keys where the light is rather than where they were dropped :)



Having 'cross-trained' for a couple of years, I can still draw upon my 8+ years of Java/J2EE experience, but I can also contrast that with a not insignificant number of years creating real clients in JavaScript. Many, mostly well-meaning, dismissals of the idea come from people not having the requisite experience, and so the comparison lacks actual experience. I don't mean to nag you (too much), but I feel that you might be arguing on general grounds, rather than from a perspective where you can actually compare the two methods of development side by side.



Also, recent years' development of Ajax frameworks has isolated most cross-browser issues (which you have to battle anyway; only now you have an insulated layer of best-of-breed code). The latest Dojo loader comes in at around 6K, loading the rest of the components and client-side logic on demand (or as you wish).








2 - By offloading the presentation logic to the client, you're also trading I/O access for HTTP requests. Is it worth it?


You now have to deal with network issues (which were 99% of the time out of the equation on the server), which will certainly lead to more complexity on the client.



Until we can rely on Google Gears (or SQLite) for every client platform out there, I'm not sure about it. Caching extensively is much easier on the server than on the client.



If I got it right, your service-oriented architecture is now only serving up data for consumption on the client. But how much does raw data account for of the total amount of data required to present the information? In my experience, there's almost as much "cosmetic" as data in most pages being served these days.
-------------





I'm not really sure I understand what you mean here. I'd say we're trading whole-page reloads for transferring only data, when it is needed. The alternative to loading data asynchronously is doing full page reloads each time.



The amount of latency is the same; the amount of data is not. If the (single-page, download-once, browser-centric) application loads and posts data asynchronously, the user gets a smoother experience. This has been proven so many times that I don't feel the need to do it again.



Also, when it comes to the share of 'raw data' in comparison to the 'cosmetic', that's my point! :) By downloading cosmetics only once, you offload your server tremendously.







3- By putting presentation logic on the client, you're also making it accessible to everyone.

No more secret weapon, no more competitive advantage. Once it's on the client, it's not yours anymore. I do believe in sharing knowledge and practical examples of its implementation, but from a business standpoint, this might be a step only a few will take.
----------





You have a point here. You basically give everyone a simple way of looking at how your app is ticking. This is already happening, as you might have noticed (and used) some sites using client-heavy logic and a lot of Ajax.



I'd say that in certain cases this might be an argument to use a more complex model of programming, but in general my take is that if a competitor really wants to copy your site, he will. Whatever you do.






I'd really like to see the view out of the server, as it would make my job much easier, but recent experience has shown me that it's not an easy job and is, as of today, not worth the effort & time to get it right across every client platform :)
--------------------




I think that just by separating the client from the server and creating 'data' endpoints on the server (or consuming existing WSDL or RESTful ones), you are free to research which client-side solution is best for you. I'm very much into creating things from scratch with Dojo, as I'm involved in the community, but maybe OpenLaszlo, Tibco GI or a Flash-based approach will fit the bill.





It's great to see that some people are challenging the current "broken" implementation. As the web goes, nothing will ever be set in stone.

----------------





I agree completely. Sorry for my snotty tone above, though. Please do comment more on this, or mail us.



Cheers,
PS

Re: Oversimplification by pims pims

Hi Peter,

Thanks for taking the time to reply point by point to my previous comment.



Regarding my first point (controlled vs. uncertain environment): it wasn't meant as a lack of knowledge of the other environment (the client). It was more about the control that you have over the platform. The server is yours. You do whatever you want with it. Upgrade, downgrade, anything. You never have that kind of control on the client. You face obsolete browsers, outdated VM versions, underpowered computers...



As for real-life examples, none of our clients has agreed to rely on Flash Player 9 as the de facto VM on the client. 97% adoption is not enough for them. They want it as close as possible to 100%. That leaves us with poor XML support (no namespaces, no effective native XPath), no native regexp, and sub-optimal sorting functions. All of this needs to be computed on the server, because we can't rely on the client to be up to date. And forcing them to upgrade is really not a viable option.



I/O vs HTTP.



My point is that to display data in its final state (human readable and pleasant to the eye), we need more than access to raw data. We need to load templates. We need to render those templates. We need to change some bits of information on the page, add a couple of things, remove some others. We all know how complicated templates can get sometimes (because the desired output is overly complicated too); they're often divided into several reusable pieces. When dealing with this server-side, loading a template is as fast as reading a file from disk. When loading all this on the client, it's only as fast as the client's network connection. I/O access limitations are really not that common, while limits on concurrent HTTP requests, much more frequent, have a real influence on loading times (and the bad perception they can generate).


Caching power-intensive tasks seems to be easier on the server than on the client (until Gears or SQLite becomes ubiquitous).



Of course, the trade-off is transferring redundant information. By performing the template transformation on the client, the template needs to be transferred only once, whereas it would be part of every request in the "traditional" way. I'm not challenging the advantages that asynchronous loading has brought us, rather pointing out that with gzip compression, sending fully rendered HTML/JS pages to the client is really not a big deal. Dealing with client-side events to trigger other actions, and making sure the browser doesn't freeze while rendering memory-intensive JavaScript-generated HTML, isn't always trivial.



Shifting the view from the server to the client will surely highlight some technical aspects we didn't have to worry about before. Not saying it's a good or bad idea, just trying to point the finger at issues we might now have to deal with.



With no support for RTL languages (as of FP9), some other important limitations, and versions ranging from 6 to 9, the Flash VM isn't the ideal client-side platform. I'll give Dojo a serious try soon, and no love for XSLT ;)

Not just for browser clients by Ganesh Prasad

Peter's doing a good job of addressing each comment individually.



Rather than replicate what he's saying, let me make a general observation: this article seems to have created the erroneous impression that the architecture we're talking about is specific to browsers or to thin clients. Consequently, the maturity or otherwise of browser technology is a critical issue that calls into question its viability.



In fact, what we're proposing is just an *architecture*, and designers are free to use any technology, "rich/fat" or "thin", to implement it. You may find that some rich client technology fits the bill pretty well, in which case you should use it. What about deployment? That's an orthogonal issue unrelated to browser technology. Technologies such as OSGi and Java WebStart (JNLP), to name just two, are means to deploy rich applications to client platforms.



Another important point to keep in mind is that we approached this architecture from more than one angle. In today's world, business logic is increasingly being exposed as services - whether SOAP or REST. We thought it important that application front-ends should be capable of being layered seamlessly on top of a service interface. So rather than see this architecture as just a replacement for web frameworks, try and look at it also as an architecture for service-oriented front-ends.



Regards,

Ganesh

A Point Not Emphasized -- Performance And Money by John Tullis

Two factors should also be emphasized -- better performance and lower costs. The key is that typical web architectures involve shoving big fat CSS style sheets and lots of images down the wire -- the formatting and eye candy very much outweigh the actual data in many cases. Furthermore, the emphasis on POST means these big fat pages are not cached.



So -- by combining AJAX + REST + SOA, we get a situation where the style sheet data gets loaded once. And it was a GET, so it is cached. And the AJAX engine was also a GET, so it is cached. And then, using REST principles, many of the asynchronous requests are GETs, so they are also cached. Combined with the judicious use of forward proxies (for internal corporate applications working across geographically distributed areas), the result is a huge drop in the amount of bytes being shifted across the WAN, and a reduction in the amount of server-side processing.



The use of forward proxies plus GET caching means that many accesses are LAN accesses, not WAN accesses. Fast. Then, only pulling data and doing the formatting locally means little bitty network transmissions instead of big fat ones. Fast. And the reduced load on the servers means less JVM garbage collection, less memory required, fewer CPUs required. And since application server licensing is typically CPU- (or core-) based, this means fewer licenses. So overall, less data center cost. Less SLA cost. A way for IT to cut costs.
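The GET-is-cacheable, POST-is-not distinction that this argument rests on can be sketched as a tiny header-policy function. This is an illustrative sketch only; the paths, max-age values and function name are hypothetical, not a recommendation for any particular server.

```javascript
// Sketch of the caching policy described above: static "eye candy" and
// RESTful data reads go out as cacheable GETs; state-changing calls
// use POST and are never cached. Header values are illustrative.
function cacheHeadersFor(method, path) {
  const isStaticAsset = /\.(css|js|png|gif)$/.test(path);
  if (method === 'GET' && isStaticAsset) {
    // Style sheets, scripts, images: downloaded once, then served from
    // the browser cache or a forward proxy on the LAN.
    return { 'Cache-Control': 'public, max-age=86400' };
  }
  if (method === 'GET') {
    // Data reads: briefly cacheable, revalidated often.
    return { 'Cache-Control': 'public, max-age=60' };
  }
  // POST/PUT/DELETE: responses must not be cached.
  return { 'Cache-Control': 'no-store' };
}
```

The cost argument follows directly: every response the proxy or browser cache absorbs is a request the application server never processes.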



What's not to like? Well, security is an issue, so this architectural model I think is best used for internal corporate applications with trusted users (employees), or for external users when the application cannot do database changes (e.g. read only). Because in the AJAX+REST model the state is maintained in the client -- and thus crackers can cause harm. With this caveat in mind, this architectural style is quite nice.

Re: A Point Not Emphasized -- Performance And Money by Sarma Pisapati

I concur. The major concerns are around security and performance that may increase costs and risks with this kind of architecture.

Re: A Point Not Emphasized -- Performance And Money by John Tullis

Sarma - this architectural style may increase risk (security) while decreasing overall costs and enhancing performance. I think for the correct circumstances the superior performance and reduced costs outweigh the security risks.

System behaviour still has to be consistent by Darrell Freeman

I'm totally in favour of this type of approach, but one of my concerns is that the overall system behaviour must be consistent no matter how many architectural layers or components are used.
I'm in insurance, and the systems are not simple. You cannot totally remove business rules from the interface. If I want to create an interface that is essentially common to, say, an Investor product and a Personal product, then the UI must exhibit different behaviour for each. When I collect data for each product there are differences; when I validate data on the server, it must have consistent behaviour for each product variation as well.
The solution is to create a "product definition" service that is common to whatever component needs to exhibit variant behaviour.
One of the reasons so many large systems find server-side programming easier is that this type of complex variant behaviour is easier if it is implemented in one place.
In order to make this type of architecture more applicable to complex systems, my feeling is that other technologies need to be added to the mix.

Re: Oversimplification by Imran Bohoran

Can't say I disagree with the concept of the "architecture" presented (don't think many of us do). However, applying this with the correct blend of technology and the correct balance of cost vs. benefit is tricky. We talk about technology examples that were born from JavaScript, but some of us have to deal with accessibility issues that can sometimes affect you legally as well. Not saying that they cannot be solved, but there's a cost.
And the fact that we are letting our application's functionality/data render on the client based (for the most part) on its sole capabilities puts us in a spot regarding assurance of the output and performance.
I'm not going to ignore Ganesh's comment on 'Not just for browser clients', hence the agreement on the architecture. But the point is, as we deal with the complexities of accessibility, usability, security, performance, cross-browser compatibility etc. of the current web-enabled era, and with technology options whose pros/cons aren't all that apparent in the eyes of business users all the time, we as architects/developers have to find the correct balance in this architecture and its separations - no simplification indeed (although I don't think the article was intended to simplify it).

This model can improve scalability and cost... by Douglas Stein

Assume you have 50,000 concurrent users. That means 10,000 computers you're not paying for. If you can separate "bulky, static, and cachable" from "lightweight, dynamic, and non-cachable" you have a formula that works.

In many cases, even fairly complex client side logic can be implemented using a finite-state-machine where the states and transitions (vector and adjacency matrix) can be transmitted as data.
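That point can be made concrete with a short sketch: a wizard's flow encoded as a transition table, plus a generic interpreter that drives it. The state and event names are hypothetical; the idea is only that the table itself is plain data and could therefore be transmitted from a service, exactly as described.

```javascript
// Client-side flow logic as a finite-state machine whose transition
// table is plain data (and could be fetched from the server as JSON).
const wizard = {
  start: 'chooseProduct',
  transitions: {
    chooseProduct: { next: 'enterDetails' },
    enterDetails:  { next: 'confirm', back: 'chooseProduct' },
    confirm:       { back: 'enterDetails', submit: 'done' }
  }
};

// Generic interpreter: the same function drives any wizard definition,
// so new flows ship as data, not as new code.
function step(machine, state, event) {
  const target = (machine.transitions[state] || {})[event];
  if (!target) throw new Error(`no transition '${event}' from '${state}'`);
  return target;
}
```

For example, `step(wizard, wizard.start, 'next')` moves the user from the first screen to the details screen; an undefined event simply raises an error instead of leaving the UI in an inconsistent state.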

As in any distributed architecture, you have to make sure that large data flows happen on the lowest-latency pathways. You also have to make sure your service interfaces have the right balance of chunky vs. chatty.

Nonetheless, it's worth the effort to go down this path.

Done That by Peter Mitchell

Back in 1999 we wrote a call centre application for a telecom.
It used an HTA (hypertext application) on the client side which supplied most (all) of the presentation logic. Communication with the web server was a bit clunky given the technology of the time. We used message queues to keep everything asynchronous, and a custom-built call-back component on the client side that was reached directly over TCP/IP.
But the only communication back and forth was XML documents (which no longer conform to the current standards :-), with all rendering being done by VBScript in the client application.
The app had about 20-30 screens which guided the user through a number of fixed 'wizards' to allow them to connect/move/alter PSTN services on behalf of the user. All screen flow was controlled by a combination of client-side and server-side process maps.
It would be so nice to be able to use current technologies to do the same sort of thing. We had to custom-build all the capabilities, while today they would be standard in any number of frameworks.

Re: Oversimplification by Peter Svensson

Hi Peter,

Thanks for taking the time to reply point by point to my previous comment.



Regarding my first point (controlled vs. uncertain environment): it wasn't meant as a lack of knowledge of the other environment (the client). It was more about the control that you have over the platform. The server is yours. You do whatever you want with it. Upgrade, downgrade, anything. You never have that kind of control on the client. You face obsolete browsers, outdated VM versions, underpowered computers...




--------------

Hi again Pims. Yes, I hear you, but again, nobody uses raw JavaScript to tackle cross-browser issues these days, when Dojo, jQuery, Ext, YUI, etc. have solved that well many times over.

What you do when creating modern clients in the web browser (if we assume that this is our target) is leverage the existing wrappers in (for example) Dojo, which detect which platform they have to deal with and give you a standard API for various tasks anyway.


This means that today you can, to a large extent, have control over the browser platform in the way you mean.




----------------------

As for real-life examples, none of our clients has agreed to rely on Flash Player 9 as the de facto VM on the client. 97% adoption is not enough for them. They want it as close as possible to 100%. That leaves us with poor XML support (no namespaces, no effective native XPath), no native regexp, and sub-optimal sorting functions. All of this needs to be computed on the server, because we can't rely on the client to be up to date. And forcing them to upgrade is really not a viable option.



I/O vs HTTP.



My point is that to display data in its final state (human readable and pleasant to the eye), we need more than access to raw data. We need to load templates. We need to render those templates. We need to change some bits of information on the page, add a couple of things, remove some others. We all know how complicated templates can get sometimes (because the desired output is overly complicated too); they're often divided into several reusable pieces. When dealing with this server-side, loading a template is as fast as reading a file from disk. When loading all this on the client, it's only as fast as the client's network connection. I/O access limitations are really not that common, while limits on concurrent HTTP requests, much more frequent, have a real influence on loading times (and the bad perception they can generate).




-----------------
OK, these bits are going to go over to the client one way or the other (most of the template bulk). The whole point (from my perspective) of TSA/SOFEA is to load HTML (and client-side templates) _once_ to the client (the application download), and after that send only raw data to the client, which then adds a few things, removes some others, sorts, manages forms, draws charts, etc. without needing access to the server, since those things are part of the View.
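The download-once idea above can be sketched in a few lines: the template crosses the wire a single time, after which every "page update" is just new raw data pushed through it on the client. The `{{name}}` placeholder syntax and the field names are illustrative, not any particular engine's format.

```javascript
// Template shipped to the client once, as part of the application
// download. After that, the server only ever sends raw data.
const orderRowTemplate = '<tr><td>{{id}}</td><td>{{total}}</td></tr>';

// Minimal client-side renderer: substitute {{key}} placeholders with
// values from a plain data object; unknown keys render as empty.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    data[key] !== undefined ? String(data[key]) : '');
}

// Each asynchronous response is just data through the same template:
const row = render(orderRowTemplate, { id: 7, total: '19.90' });
// row === '<tr><td>7</td><td>19.90</td></tr>'
```

The "cosmetic" bytes are paid for once; only the small data payloads repeat.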




-------------------

Caching power-intensive tasks seems to be easier on the server than on the client (until Gears or SQLite becomes ubiquitous).



Of course, the trade-off is transferring redundant information. By performing the template transformation on the client, the template needs to be transferred only once, whereas it would be part of every request in the "traditional" way. I'm not challenging the advantages that asynchronous loading has brought us, rather pointing out that with gzip compression, sending fully rendered HTML/JS pages to the client is really not a big deal. Dealing with client-side events to trigger other actions, and making sure the browser doesn't freeze while rendering memory-intensive JavaScript-generated HTML, isn't always trivial.





----------
No, precisely. Again, that is why you wouldn't reinvent the wheel but leverage an existing framework for this.
And the idea behind Ajax is just to remove the round-trip to the server for every little action for which data is already downloaded and present.




-----------


Shifting the view from the server to the client will surely highlight some technical aspects we didn't have to worry about before. Not saying it's a good or bad idea, just trying to point the finger at issues we might now have to deal with.



With no support for RTL languages (as of FP9), some other important limitations, and versions ranging from 6 to 9, the Flash VM isn't the ideal client-side platform. I'll give Dojo a serious try soon, and no love for XSLT ;)

Lipstick on CGI by Matthew Quinlan

Great minds think alike..... and apparently, so do ours!


Lipstick on CGI blog post.



Matthew Quinlan

Chief Evangelist

Appcelerator

InfoQ.com and all content copyright © 2006-2013 C4Media Inc.