Rationalizing the Presentation Tier

Posted by Ganesh Prasad, Peter Svensson on Jul 04, 2008

Introduction

We need to overhaul our current view of presentation technologies, because years of conditioning have caused many in the IT industry to view some very aberrant design patterns as normal and natural, which in turn creates significant impediments to building good distributed applications in the modern era. Our position in this paper is that the entire thin client paradigm characterized by web applications (as opposed to static web sites) is actually a "kludge" that needs to be repudiated. To understand why we say this, one needs to travel back in time to the mid-nineties, when the Web began.

History

Two mutually antagonistic developments occurred almost simultaneously as the Web exploded in popularity: (1) the overwhelming importance of the browser as a ubiquitous client-side "application platform" that made applications easy to deploy with a small footprint, and (2) the vendor-based fragmentation that robbed that same platform of much of its potential. By the latter, we're referring to the browser wars between Netscape and Microsoft, which resulted in two families of products that behaved very differently when rendering web pages and executing JavaScript code. It was infuriatingly common for web applications not to work on one or the other browser. Faced on the one hand with increased demand for the Web way of delivering applications to users, and on the other with unreliable platforms on which to deliver them, what could the industry do?

The most commonsensical approach was to rely on the browser for the barest minimum functionality - rendering simply-formatted web pages, following hyperlinks, submitting forms, etc., and moving all other presentation logic to the part of the system that the service provider could control - the Web Server. Many server-side web frameworks then began to emerge to help manage the complexity of combined business logic and presentation logic on a single platform. Struts was perhaps the earliest success among these frameworks. Today, there are literally more than 50 of them, each claiming some particular advantage over its predecessors.

However, let us be blunt about the impact of these frameworks. Although they bring order and rationality to server-side presentation logic, they only serve to perpetuate a kludge. There is low cohesion within the Presentation Tier, because presentation responsibilities have been split between browser and web server for extraneous reasons that have nothing to do with sound architecture. Simultaneously, there is tight coupling between presentation logic and business logic on the server side. Specifically, current web frameworks create the client on the server from several variants of server-side templates, configuration files, annotations and the like, which increases the complexity of building something that should have been straightforward. Today, not only are web frameworks no longer required, but their acceptance as a natural component of every system hobbles us significantly in our effort to build better applications.

A tipping point has been reached

There have been at least three developments in the recent past that help us break with history.

  1. The new-found popularity of an age-old principle, or what is known as Service-Oriented Architecture (SOA), indirectly changes the outlook for the Presentation Tier as well. Define it as we may, SOA rationalizes the way business logic is organized and provides a uniform interface to it. Good architectures are based on discrete layers that encapsulate different aspects of an application, and SOA allows the User Interface (UI) to be architected more elegantly as a true presentation layer. This layer holds no business logic but is a consumer of business services.
  2. The browser fragmentation that characterized the first decade of web applications has largely healed. Virtually all modern browsers conform to industry standards. Our strategy need no longer be one of minimizing our dependence on a whimsical platform. We can now place all presentation logic on the browser platform where it belongs and be reasonably confident that it will work as expected regardless of vendor implementation.
  3. XML has become dominant as the lingua franca for data interchange between systems. From our viewpoint, at a practical level, SOA is mainly about the design of XML document interchange and secondarily about the plumbing required to achieve that interchange. There are better tools available to work with XML, and newer languages (especially scripting languages) have begun to treat XML as a native data type, making the use of XML far less painful. These modern tools make it more natural to build SOA-conforming systems.


All of these developments benefit from a common architectural model - SOFEA (Service-Oriented Front-End Architecture). Interestingly, the older rich client paradigm, which has been overshadowed by the thin client or web model, now points the way towards a better architecture for both.

The basic principle of the new model is the separation of presentation concerns from business logic concerns. The latter align neatly with SOA principles, so the presentation tier needs to be compatible with a service-oriented business tier, across a well-defined Service Interface. If we look at SOA as the exchange of well-designed, structured data between architecturally distinct domains (whether they be application domains like Marketing and Finance or technology domains like architectural "layers"), there is no reason why the presentation tier should be exempt from such principles.

One of the important aspects of such compatibility is respect for data integrity end-to-end across an application. This translates to being able to speak XML in the form of XML document payloads using either of today's popular service paradigms - SOAP/WS-* or REST.
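As a concrete (if simplified) illustration, the presentation tier might package user input as an XML document and hand it to the Service Interface. The function names and the `/services/orders` endpoint below are invented for illustration; a real payload would conform to an agreed schema.

```javascript
// Hypothetical sketch: the presentation tier serializes user input into an
// XML document payload for the Service Interface. buildOrderXml and the
// element names are invented; a real payload would conform to an agreed schema.
function escapeXml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

function buildOrderXml(order) {
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<order>',
    `  <customer>${escapeXml(order.customer)}</customer>`,
    `  <item sku="${escapeXml(order.sku)}" quantity="${order.quantity}"/>`,
    '</order>'
  ].join('\n');
}

// The client would then POST the document to the service interface, e.g.
// fetch('/services/orders', { method: 'POST',
//   headers: { 'Content-Type': 'application/xml' },
//   body: buildOrderXml(order) });
```

The same document shape works whether the transport is a REST POST or a SOAP envelope wrapping the payload.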

Elements of the new paradigm

The diagram above shows the important physical components and logical processes in this architecture. The Application Container is a generic name for the platform on which the client-side application runs; it is important to note that this is never a server-side component. The Application refers to the code that runs on this platform. The Application is not part of the Application Container: it is sourced from somewhere else and is loaded onto the container at some point before it actually runs. The Download Server is the term for the component that serves the Application up to the client platform so it can be locally installed before it is run. The Service Interface is the standardized interface to the Service Tier. It supports a well-understood mechanism for exchanging XML-based documents between the presentation tier of an application and the business logic tier.

There are three basic processes that occur on the client side. Application Download refers to the process of getting an Application onto the Application Container before it can run. Usually this takes place all at once, but it could also be done in lazy fashion, loading screens on demand from the Download Server when required. Even in this lazy-loading design, however, the system does not behave like a conventional web application that uses a server-side web framework to drive the screen flow: the client application that loads screens (or pages) on demand still drives this flow itself. Presentation Flow refers to the logic by which screens (or other channel-specific artifacts) are presented to the user. Data Interchange refers to the exchange of (XML-formatted and Schema-conforming) data between the Presentation and Service tiers.
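The Presentation Flow process described above can be sketched as a small client-side state machine. The screen names and events below are hypothetical; the point is that the client, not a server-side framework, decides which screen comes next.

```javascript
// Minimal sketch of client-driven Presentation Flow: the client application,
// not a server-side framework, decides which screen comes next. Screen names
// and events are hypothetical.
class PresentationFlow {
  constructor(transitions, initialScreen) {
    this.transitions = transitions;   // { screen: { event: nextScreen } }
    this.current = initialScreen;
  }
  fire(event) {
    const next = (this.transitions[this.current] || {})[event];
    if (!next) {
      throw new Error(`No transition for '${event}' from '${this.current}'`);
    }
    this.current = next;
    // At this point the application would render the screen, possibly
    // lazily downloading it from the Download Server first.
    return next;
  }
}

// Example: a simple checkout wizard.
const flow = new PresentationFlow({
  cart:    { checkout: 'address' },
  address: { next: 'payment', back: 'cart' },
  payment: { confirm: 'receipt' }
}, 'cart');
```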

Underlying rationale

The core theme behind this model is the separation of orthogonal concerns. This has always been a requirement, but could never be met because of the client platform constraints we described earlier. Now at last, the situation is conducive to building applications the way they should be built. The presentation tier becomes smaller, more cohesive, more understandable and more maintainable as business logic is stripped from it. The server side benefits similarly as presentation logic is removed from it.

Implications of the paradigm

  • A lean architectural model with seamless integration between presentation tier and business logic tier with no impedance mismatch
  • Rationalization of the role of a "web server" (for the first time)
  • Support for MVC as the most natural design pattern for the presentation tier
  • Assurance of end-to-end data integrity in applications; unification of "thin client" and "rich client" models
  • Support for both SOAP and REST based services
  • The server is no longer burdened with presentation-related logic and can be lighter/thinner
  • Multiple user interfaces for the same set of business services can be built with much less cost
  • The pressure to reuse presentation-tier artifacts is reduced, provided the business tier has been designed well and sufficient reuse is obtained simply by calling the right services from the presentation tier
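The MVC point above can be made concrete: with all presentation logic on the client, model, view and controller can live entirely in the browser. The classes below are a minimal, invented illustration (the view writes to an array standing in for the DOM), not a prescription.

```javascript
// Invented sketch of MVC entirely on the client: the model notifies views of
// changes; a controller would translate user events into model updates. In a
// browser the view would write to the DOM; here it writes to an array.
class Model {
  constructor(data) {
    this.data = data;
    this.listeners = [];
  }
  subscribe(listener) { this.listeners.push(listener); }
  set(key, value) {
    this.data[key] = value;
    this.listeners.forEach(listener => listener(this.data));
  }
}

const rendered = [];                                   // stand-in for the DOM
const greetingView = data => rendered.push(`Hello, ${data.name}!`);

const model = new Model({ name: 'world' });
model.subscribe(greetingView);
model.set('name', 'SOFEA');   // a controller action: the view re-renders automatically
```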


Technology examples

Ajax frameworks (Dojo, jQuery, Ext, etc.), GWT, TIBCO GI, XForms, Mozilla XUL, Microsoft Silverlight/XAML, Java WebStart, JavaFX Script, Adobe Flex, OpenLaszlo, etc.

Conclusion

The contribution of this model is not so much to introduce something new as to show that an old and entrenched compromise need no longer be made. We have tried to show how new technology makes it possible to design and build applications the way they should always have been built. This is a paradigm whose time has finally come.


Good to hear by David Karr

It's good to hear someone speaking out about this idea. I've been thinking along these lines for quite a while. Several server-side frameworks take the approach of completely embedding the client-side framework within them, which leaves very little flexibility on the client side.

However, it's also useful to be pragmatic. There's no reason to go all the way in this direction. It's a good idea for the server-side to consist more of smaller services used by the client, as opposed to having the server drive all of the client-side logic, but there's nothing wrong with, for instance, using the templating features of the server-side framework to give the client side a shortcut to its initial view.

I would also point out that building a "service-oriented architecture" here doesn't mean you need to use SOAP, or even XML. JSON works perfectly fine, and is better in some circumstances (that doesn't mean that you skip authentication and security).
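For instance, a JSON response can be consumed with nothing but the language's native JSON support (the endpoint and field names here are made up):

```javascript
// Invented example: consuming a JSON service response with nothing but the
// language's native JSON support. The endpoint and fields are made up.
function formatAccountSummary(responseBody) {
  const data = JSON.parse(responseBody);      // native in every modern browser
  return `${data.owner}: ${data.balance.toFixed(2)}`;
}

// A client might GET /services/account/42 and feed the response body here:
const summary = formatAccountSummary('{"owner":"Alice","balance":99.5}');
```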

Also, the Yahoo UI framework works just as well on the client side as the other client side frameworks you mention.

RE: A lean architectural model with seamless integration between... by Yuen Chi Lian

Exactly.



From the past projects that I have worked on (note: those are great products), the architects/developers used Velocity, Struts/WebWork, etc. to keep things separated, within the same container, but things still turned ugly. After reading more about SOA and Domain-Driven Design, I am applying a more seamless-and-integrated layered architecture in my new project.



I agree with David too, as I am not a fan/believer of SOA+WS. I need something neat and lightweight, so XML-RPC, JSON or even plain text will come first into consideration.



- yc

Nothing new by haiko van der schaaf

A bit of a disappointing article. The trend of moving presentation logic to the client side is seen as a way to separate the presentation layer from business logic. But having the presentation logic on the server side does not imply that it is impossible to make a clean separation between business and presentation logic. For years people have been designing their web applications in multiple layers to separate the different concerns.

Technologies like Flex and Ajax frameworks just make it possible to shift the presentation layer to the client.

Physical and Logical Coupling by Julian Jones

The thrust of this article seems to be that, by moving the presentation logic to the client, the effect is to automatically decouple business logic from presentation logic.

I don't think this holds true. There's no reason why server-side presentation logic has to be tightly coupled with server-side business logic. There is also a reverse argument: if you are going to locate the presentation logic on the client, what guarantees are there that the business logic won't leak into that client-side code?

Oversimplification by Claus Augusti

As much as I like reading about the demise of "classic" (read clumsy) web application frameworks I think the article describes a world that we don't live in yet and contains wrong or oversimplified statements that lead to a wrong conclusion.

First and foremost, what would you define as presentation logic? I think there's a great deal of misconception here, commonly defining it as everything that's not business logic. Defining templating as a part of presentation logic, I clearly don't see that happening in the browser in the near future, as JavaScript-based solutions or XSLT in the browser are either too slow or not available reliably on all platforms. Which leads me to "XML as a native data type". Having the clumsy DOM interfaces in modern browsers doesn't mean it's easy to work with XML from a data-centric point of view (any idea why native SOAP never took off in browsers?). I don't see "newer languages" available in the browser; JavaScript is your only choice, and only in JavaScript 1.5 (Firefox 1.5+) can you find E4X implemented, which finally gives you XML as a first-class citizen. As sad as it is, we still need to "minimize our dependence on a whimsical platform". Platforms aren't as bad as they used to be, but developing economically feasible web applications still boils down to finding the least common denominator. Add in poor memory handling, tons of bugs to work around, cross-domain access and many more issues...

Stating that "web frameworks [are] no longer required" is simply wrong. Pushing responsibilities from one layer to the other doesn't simplify your development process or architecture. Web frameworks will always be required. Certainly, they will look completely different than today, really embracing the open and distributed nature of the web rather than following old paradigms simply being renamed. What we really need are frameworks that don't make a distinction anymore between client and server technology on the presentation tier. Frameworks that make it easy for us to build mashups from a SOA world. Frameworks that transparently distribute work cleverly between client and server, adapting to the device and its capabilities. There's still a long way ahead of us.

This is the model followed by WSO2 products by Afkham Azeez

Good to see this article. At WSO2, this is the approach we've been following in our product management consoles. For example, see WSAS and Mashup Server. The management functionality (business logic) is provided by a set of Web services, and the front end is based on AJAX + XSLT. The browser client directly talks to the services. This approach really simplifies development & maintenance.




Afkham Azeez

Re: Good to hear by Peter Svensson

I agree that the basic premise is not affected by how the data is formatted or how the client is delivered in the first place.

However, by using a server-side framework to generate the client you immediately mix things up again. And if you do not intend to use the templating features at all, you might as well use a separate HTML/JS client with a clean separation from the server.

Also, I agree that YUI has a good data model too. My bad. :)

Cheers,
PS

Re: Nothing new by Peter Svensson

Hi Haiko. The idea is not to separate presentation from business logic, but to put the presentation on the client.

I absolutely agree with you (as we note in the article) that Flex and Ajax are ways to put the presentation on the client.

Cheers,
PS

Re: Physical and Logical Coupling by Peter Svensson

Actually, there is a proven tight coupling between business and presentation logic on the server (as we point out): the server-side templating and 'meta-files' couple template features to business logic.

The reverse argument does not hold true either, unless you explicitly make an error.

If you build a true client with cache and data layers in the page, which consumes services that only deliver data (or the other way around), you have a very clean separation of layers, both physically and logically.
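As a rough sketch of such a data layer (names invented; a real one, like Dojo's, would be asynchronous and far richer):

```javascript
// Rough sketch of a client-side data layer: presentation code asks the data
// layer, which fronts the service tier and caches responses. fetchFn stands in
// for a real (asynchronous) transport; all names are invented.
class DataLayer {
  constructor(fetchFn) {
    this.fetchFn = fetchFn;
    this.cache = new Map();
  }
  get(url) {
    if (!this.cache.has(url)) {
      this.cache.set(url, this.fetchFn(url));   // only a miss goes to the network
    }
    return this.cache.get(url);
  }
}

let networkCalls = 0;
const services = new DataLayer(url => { networkCalls++; return `payload for ${url}`; });
services.get('/services/customers');
services.get('/services/customers');   // second call served from the cache
```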

Cheers,
PS

Re: Oversimplification by Peter Svensson

Hi Claus :)

Regarding presentation logic: if I take it that you consider presentation logic to include templating, and given your assertion that this is not taking off in the browser, consider that Dojo has three different client-side templating engines, one being DTL (the Django Templating Language). XSLT is not a very good idea to use for dynamic, reactive web-based apps.
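Client-side templating need not be heavyweight, either. A toy substitution engine (nothing like a full DTL, just to show templates can live entirely in the browser) is a few lines:

```javascript
// A toy client-side template engine: ${name} placeholders substituted from a
// data object. Nothing like a full DTL; it only shows templates can run in the browser.
function renderTemplate(template, data) {
  return template.replace(/\$\{(\w+)\}/g, (match, key) =>
    key in data ? String(data[key]) : match);   // unknown keys left untouched
}

const row = renderTemplate('<li>${name}: ${qty}</li>', { name: 'Widget', qty: 3 });
```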

The reason a true separation of client with presentation/view from server which produces and consume 'clean' data only is that you get two things;

1) A contract by protocol between presentation and business logic, making both processes less prone to mixing things up, also providing the possibility of having two different teams coding to a common protocol.

2) A simple server, being relieved of any client-side-in-a-can logic and meta-files and annotations.

I would really recommend checking out the data layers and message buses in Ajax frameworks such as Dojo, YUI, ExtJS and others.

Cheers,
PS

Try reading the original articles by Ganesh Prasad

Perhaps we haven't been able to adequately explain all the concepts we have introduced in this brief article. More detailed explanations can be found here, here and here.

Re: Oversimplification by pims pims

Peter, Ganesh,


After reading your article a couple of times I still feel uncertain about several things.



1 - By offloading the presentation logic to the client you're unfortunately bringing uncertainty in the execution of the application.



When dealing with 95% of the presentation logic server-side, you're operating in an environment you're more than familiar with, fully aware of its capabilities and limits. You have control over almost anything, which enables you to react quickly if something misbehaves. It is obviously the other way around when you're dealing with the client platform. A consistent VM on the client is still, as of today, a utopia. Of course, some have done the most tedious work by providing libraries which work cross-platform, but at the cost of speed or weight (or both, in the worst case).



2 - By offloading the presentation logic to the client, you're also trading I/O access for HTTP requests. Is it worth it?


You now have to deal with network issues (which were 99% of the time out of the equation on the server) and will certainly lead to more complexity on the client.



Until we can rely on Google Gears (or SQLite) for every client platform out there, I'm not sure about it. Caching extensively is far easier on the server than on the client.



If I got it right, your service-oriented architecture is now only serving up data for consumption on the client. But how much does raw data account for of the total amount of data required to present the information? In my experience, there's almost as much "cosmetic" as data in most of the pages being served these days.



3- By putting presentation logic on the client, you're also making it accessible to everyone.

No more secret weapon, no more competitive advantage. Once it's on the client, it's not yours anymore. I do believe in sharing knowledge, and practical examples of its implementation, but from a business standpoint, this might be a step only a few will take.





I'd really like to see the view out of the server, as it would make my job much easier, but recent experience has shown me that it's not an easy job, and is, as of today, not worth the effort & time to get it right across every client platform :)



It's great to see that some people are challenging the current "broken" implementations. As the web goes, nothing will ever be set in stone.

Re: Oversimplification by Peter Svensson

Peter, Ganesh,


After reading your article a couple of times I still feel uncertain about several things.



1 - By offloading the presentation logic to the client you're unfortunately bringing uncertainty in the execution of the application.



When dealing with 95% of the presentation logic server-side, you're operating in an environment you're more than familiar with, fully aware of its capabilities and limits. You have control over almost anything, which enables you to react quickly if something misbehaves. It is obviously the other way around when you're dealing with the client platform. A consistent VM on the client is still, as of today, a utopia. Of course, some have done the most tedious work by providing libraries which work cross-platform, but at the cost of speed or weight (or both, in the worst case).
-----------------------------------


Hi Pims pims.
I would say that your quote "more than familiar with" is the problem we're battling here. People tend to argue that they want to do everything on the server, simply because that's the only thing they know, a little bit like looking for the keys where there's light rather than where they were dropped :)



Having 'cross-trained' for a couple of years, I can still draw upon my 8+ years of Java/J2EE experience, but I can also contrast that with a not insignificant number of years creating real clients in JavaScript. Many, mostly well-meaning, dismissals of the idea come from people not having the requisite experience, and so the comparisons lack actual experience. I don't mean to nag on you (too much), but I feel that you might be arguing on general grounds, rather than from a perspective where you can actually compare the two methods of development side by side.



Also, recent years' development of Ajax frameworks has isolated most cross-browser issues (which you have to battle anyway; only now you have an insulated layer of best-of-breed code). The latest Dojo loader comes in at around 6K, loading the rest of the components and client-side logic on demand (or as you wish).








2 - By offloading the presentation logic to the client, you're also trading I/O access for HTTP requests. Is it worth it?


You now have to deal with network issues (which were 99% of the time out of the equation on the server) and will certainly lead to more complexity on the client.



Until we can rely on Google Gears (or SQLite) for every client platform out there, I'm not sure about it. Caching extensively is far easier on the server than on the client.



If I got it right, your service-oriented architecture is now only serving up data for consumption on the client. But how much does raw data account for of the total amount of data required to present the information? In my experience, there's almost as much "cosmetic" as data in most of the pages being served these days.
-------------





I'm not really sure I understand what you mean here. I'd say we're trading whole-page reloads for transferring only data, when it is needed. The alternative to loading data asynchronously is to do a full page reload each time.



The amount of latency is the same, the amount of data is not. If the (single-page, download-once, browser-centric) application loads and posts data asynchronously, the user gets a smoother experience. This has been proven so many times I don't feel the need to do it again.



Also, when it comes to the share of 'raw data' in comparison to the 'cosmetic', that's my point! :) By downloading cosmetics only once, you offload your server tremendously.







3- By putting presentation logic on the client, you're also making it accessible to everyone.

No more secret weapon, no more competitive advantage. Once it's on the client, it's not yours anymore. I do believe in sharing knowledge, and practical examples of its implementation, but from a business standpoint, this might be a step only a few will take.
----------





You have a point here. You basically give everyone a simple way of looking at how your app is ticking. This is already happening, as you might have noticed (and used) sites using client-heavy logic and a lot of Ajax.



I'd say that in certain cases this might be an argument to use a more complex model of programming, but in general my take is that if a competitor really wants to copy your site, they will. Whatever you do.






I'd really like to see the view out of the server, as it would make my job much easier, but recent experience has shown me that it's not an easy job, and is, as of today, not worth the effort & time to get it right across every client platform :)
--------------------




I think that just by separating the client from the server and creating 'data' endpoints on the server (or consuming existing WSDL or RESTful ones), you are free to research which client-side solution is best for you. I'm very much into creating things from scratch with Dojo, as I'm involved in the community, but maybe OpenLaszlo, TIBCO GI or a Flash-based approach will fit the bill.





It's great to see that some people are challenging the current "broken" implementations. As the web goes, nothing will ever be set in stone.

----------------





I agree completely. Sorry for my snotty tone above, though. Please do comment more on this, or mail us.



Cheers,
PS

Re: Oversimplification by pims pims

Hi Peter,

Thanks for taking the time to reply point by point to my previous comment.



Regarding my first point (controlled vs uncertain environment), it wasn't meant as a lack of knowledge of the other environment (the client). It was more about the control that you have over the platform. The server is yours. You do whatever you want with it. Upgrade, downgrade, anything. You never have that kind of control on the client. You face obsolete browsers, outdated VM versions, underpowered computers...



As for real-life examples, none of our clients has agreed to rely on Flash Player 9 as the de facto VM on the client. 97% adoption is not enough for them. They want it as close as possible to 100%. That leaves us with poor XML support (no namespaces, no effective native XPath), no native regexp, and sub-optimal sorting functions. All of this needs to be computed on the server, because we can't rely on the client to be up to date. And forcing them to upgrade is really not a viable option.



I/O vs HTTP.



My point is that to display data in its final state (human readable and pleasant to the eye), we need more than access to raw data. We need to load templates. We need to render those templates. We need to change some bits of information on the page, to add a couple of things, remove some others. We all know how complicated templates can get sometimes (because the desired output is overly complicated too); they're often divided into several reusable pieces. When dealing with this server-side, loading a template is as fast as reading a file from the disk. When it comes to loading all this on the client, it's as fast as the client's network connection. I/O access limitations are really not that common, while limits on concurrent HTTP requests, much more frequent, have a real influence on loading times (and the bad perception they can generate).


Caching power-intensive tasks seems to be easier on the server than on the client (until Gears or SQLite becomes ubiquitous).



Of course, the trade-off is transferring redundant information. By performing the template transformation on the client, the template needs to be transferred only once, whereas it would be part of every request in a "traditional" approach. I'm not challenging the advantages that asynchronous loading has brought us, rather pointing out that with gzip compression, sending fully rendered HTML/JS pages to the client is really not a big deal. Dealing with client-side events to trigger other actions, and making sure the browser doesn't freeze while rendering memory-intensive JavaScript-generated HTML, isn't always trivial.



Shifting the view from the server to the client will surely highlight some technical aspects we didn't have to worry about before. Not saying it's a good or bad idea, just trying to point the finger at issues we might now have to deal with.



With no support for RTL languages (as of FP9), some other important limitations, and versions ranging from 6 to 9, the Flash VM isn't the ideal client-side platform. I'll give Dojo a serious try soon, and no love for XSLT ;)

Not just for browser clients by Ganesh Prasad

Peter's doing a good job of addressing each comment individually.



Rather than replicate what he's saying, let me make a general observation: this article seems to have created the erroneous impression that the architecture we're talking about is specific to browsers or to thin clients. Consequently, the maturity or otherwise of browser technology is a critical issue that calls into question its viability.



In fact, what we're proposing is just an *architecture*, and designers are free to use any technology, "rich/fat" or "thin", to implement it. You may find that some rich client technology fits the bill pretty well, in which case you should use it. What about deployment? That's an orthogonal issue unrelated to browser technology. Technologies such as OSGi and Java WebStart (JNLP), to name just two, are means to deploy rich applications to client platforms.



Another important point to keep in mind is that we approached this architecture from more than one angle. In today's world, business logic is increasingly being exposed as services - whether SOAP or REST. We thought it important that application front-ends should be capable of being layered seamlessly on top of a service interface. So rather than see this architecture as just a replacement for web frameworks, try and look at it also as an architecture for service-oriented front-ends.



Regards,

Ganesh

A Point Not Emphasized -- Performance And Money by John Tullis

Two factors should also be emphasized -- better performance and lower costs. The key is that typical web architectures involve shoving big fat CSS style sheets and lots of images -- the formatting and eye candy very much outweigh the actual data in many cases. Furthermore, the emphasis on POST means these big fat pages are not cached.



So -- by combining AJAX + REST + SOA we get a situation where the style sheet data gets loaded -- once. And it was a GET, so it is cached. And the AJAX engine was also a GET, so it is cached. And then, using REST principles, many of the asynchronous requests are GETs, so they are also cached. Combined with the judicious use of forward proxies (for internal corporate applications working across geographically distributed areas), the result is a huge drop in the number of bytes being shifted across the WAN, and a reduction in the amount of server-side processing.



The use of the forward proxies plus GET caching means then many accesses are LAN accesses not WAN accesses. Fast. Then only pulling data and doing the formatting locally means little bitty network transmissions instead of big fat ones. Fast. And the reduced load on the servers means less JVM garbage collection, less memory required, fewer CPUs required. And since application server licensing is typically CPU (or core) based, this means fewer licenses. So overall, less data center cost. Less SLA cost. A way for IT to cut costs.
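The GET-caching behaviour described above can be modelled in a few lines. The transport function is a stand-in for the network; a browser or forward proxy applies essentially this rule:

```javascript
// Toy model of the GET-caching rule: a cache (browser or forward proxy) may
// store GET responses but must pass POSTs through every time. The transport
// function is an invented stand-in for the network.
function makeCachingClient(transport) {
  const cache = new Map();
  return (method, url) => {
    if (method === 'GET' && cache.has(url)) return cache.get(url);
    const response = transport(method, url);
    if (method === 'GET') cache.set(url, response);
    return response;
  };
}

let networkHits = 0;
const request = makeCachingClient((method, url) => {
  networkHits++;
  return `${method} ${url} body`;
});
request('GET', '/services/report');
request('GET', '/services/report');    // cache hit, no network traffic
request('POST', '/services/report');   // POST is never cached
```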



What's not to like? Well, security is an issue, so I think this architectural model is best used for internal corporate applications with trusted users (employees), or for external users when the application cannot make database changes (i.e. read-only). Because in the AJAX+REST model the state is maintained in the client -- and thus crackers can cause harm. With this caveat in mind, this architectural style is quite nice.

Re: A Point Not Emphasized -- Performance And Money by Sarma Pisapati

I concur. The major concerns are security and performance, which may increase costs and risks with this kind of architecture.

Re: A Point Not Emphasized -- Performance And Money by John Tullis

Sarma - this architectural style may increase risk (security) while decreasing overall costs and enhancing performance. I think that, in the right circumstances, the superior performance and reduced costs outweigh the security risks.

System behaviour still has to be consistent by Darrell Freeman

I'm totally in favour of this type of approach, but one of my concerns is that the overall system behaviour must be consistent no matter how many architectural layers or components are used.
I'm in Insurance and the systems are not simple. You cannot totally remove business rules from the interface. If I want to create an interface that is essentially common to, say, an Investor product and a Personal product, then the UI must exhibit different behaviour for each. When I collect data for each product there are differences; when I validate data on the server it must behave consistently for each product variation as well.
The solution is to create a "product definition" service that is common to whatever component needs to exhibit variant behaviour.
One of the reasons so many large systems find server-side programming easier is that this type of complex variant behaviour is easier to manage when it is implemented in one place.
In order to make this type of architecture more applicable to complex systems my feeling is that other technologies need to be added to the mix.
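Darrell's "product definition" service can be sketched as data-driven validation: the product names, fields, and rules below are invented for illustration, but the shape shows how one definition could drive both the UI and the server-side validator so the two stay consistent.

```javascript
// Hypothetical "product definition" data: each product variant declares
// its required fields and validation rules once, as data.
const productDefinitions = {
  investor: {
    fields: ['name', 'fundCode'],
    rules: { fundCode: v => /^[A-Z]{3}\d{2}$/.test(v) }
  },
  personal: {
    fields: ['name', 'dateOfBirth'],
    rules: { dateOfBirth: v => !isNaN(Date.parse(v)) }
  }
};

// The same validator can run in the browser (to drive the UI) and on the
// server (to enforce the rules), because the variant behaviour lives in
// the shared definition, not in either tier's code.
function validate(product, data) {
  const def = productDefinitions[product];
  const errors = [];
  for (const field of def.fields) {
    if (!(field in data)) errors.push(`${field} is required`);
  }
  for (const [field, rule] of Object.entries(def.rules)) {
    if (field in data && !rule(data[field])) errors.push(`${field} is invalid`);
  }
  return errors;
}
```

A real insurance product definition would of course carry far more than this (field types, labels, conditional sections), but the principle -- one definition consumed by every component that needs the variant behaviour -- is the same.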

Re: Oversimplification by Imran Bohoran

Can't say I disagree with the concept of the "architecture" presented (don't think many of us do). However, applying it with the correct blend of technology and the correct balance of cost vs. benefit is tricky. We talk about technology examples that were born from JavaScript, but some of us have to deal with accessibility issues that can affect you legally as well. Not saying they cannot be solved, but there's a cost.
And the fact that we are letting our application's functionality/data render on the client based (for the most part) on its sole capabilities puts us in a spot regarding assurance of the output and performance.
I'm not going to ignore Ganesh's comment on 'Not just for browser clients', hence the agreement on the architecture. But the point is, as we deal with the complexities of accessibility, usability, security, performance, cross-browser compatibility etc. of the current web-enabled era, and with technology options whose pros and cons aren't always apparent to business users, we as architects/developers have to find the correct balance in this architecture and its separations - no simplification indeed (although I don't think the article was intended to simplify it).

This model can improve scalability and cost... by Douglas Stein

Assume you have 50,000 concurrent users. That means 10,000 computers you're not paying for. If you can separate "bulky, static, and cacheable" from "lightweight, dynamic, and non-cacheable", you have a formula that works.

In many cases, even fairly complex client side logic can be implemented using a finite-state-machine where the states and transitions (vector and adjacency matrix) can be transmitted as data.
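Douglas's finite-state-machine point can be made concrete with a small sketch. The wizard states and events below are invented for illustration: the point is that the whole flow is plain data the server could ship as JSON, with only a tiny generic interpreter living in client code.

```javascript
// Flow logic as data: states and transitions, not code.
// A server could send this object as JSON alongside the raw business data.
const wizardFsm = {
  start: 'collectName',
  transitions: {
    collectName:    { next: 'collectAddress' },
    collectAddress: { next: 'confirm', back: 'collectName' },
    confirm:        { submit: 'done', back: 'collectAddress' },
    done:           {}
  }
};

// The only client-side code needed: a generic interpreter for any such FSM.
function createMachine(fsm) {
  let state = fsm.start;
  return {
    get state() { return state; },
    fire(event) {
      const target = (fsm.transitions[state] || {})[event];
      if (!target) throw new Error(`no '${event}' transition from '${state}'`);
      state = target;
      return state;
    }
  };
}
```

Changing the screen flow then means changing data on the server, not redeploying client logic.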

As in any distributed architecture, you have to make sure that large data flows happen on the lowest-latency pathways. You also have to make sure your service interfaces have the right balance of chunky vs. chatty.

Nonetheless, it's worth the effort to go down this path.

Done That by Peter Mitchell

Back in 1999 we wrote a call centre application for a Telecom.
It used an HTA (hyper-text application) on the client side which supplied most (all) of the presentation logic. Communication with the web server was a bit clunky given the technology of the time. We used message queues to keep everything asynchronous, and a custom-built call-back component on the client side that was reached directly over TCP/IP.
But the only communication back and forth was XML documents (which no longer conform to the current standards :-) with all rendering being done by VBScript in the client application.
The app had about 20-30 screens which guided the user through a number of fixed 'wizards' to allow them to connect/move/alter PSTN services on behalf of the user. All screen flow was controlled by a combination of client-side and server-side process maps.
It would be so nice to be able to use current technologies to do the same sort of thing. We had to custom build all the capabilities while today they would be standard in any number of frameworks.

Re: Oversimplification by Peter Svensson

Hi Peter,

Thanks for taking the time to reply point by point to my previous comment.



Regarding my first point (controlled vs uncertain environment), it wasn't meant as a lack of knowledge of the other environment (the client). It was more about the control that you have over the platform. The server is yours. You do whatever you want with it. Upgrade, downgrade, anything. You never have that kind of control on the client. You face obsolete browsers, outdated VM versions, underpowered computers...




--------------

Hi again Pims. Yes, I hear you, but again, nobody uses raw JavaScript to tackle cross-browser issues these days, when Dojo, jQuery, Ext, YUI, etc. have solved that problem well many times over.

What you do when creating modern clients in the web browser (if we assume that this is our target) is leverage the existing wrappers in (for example) Dojo, which detect which platform they have to deal with and give you a standard API for the various tasks anyway.


This means that you to a large extent today _can_ have control over the browser platform, in the way you mean.
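The kind of wrapper Peter is describing can be illustrated with the classic event-binding shim (this is a hedged, minimal sketch, not Dojo's actual code): the toolkit branches on what the platform offers, and application code sees one API regardless of browser family.

```javascript
// Illustrative cross-browser wrapper: detect the platform's capability
// once, expose a single API, and application code never branches on
// browser family again.
function addEvent(el, type, handler) {
  if (el.addEventListener) {
    el.addEventListener(type, handler, false);   // W3C DOM (Netscape lineage)
  } else if (el.attachEvent) {
    el.attachEvent('on' + type, handler);        // legacy Internet Explorer
  } else {
    el['on' + type] = handler;                   // last-ditch DOM0 fallback
  }
}
```

Multiply this pattern across events, XHR, DOM queries, and effects, and you have the core of what Dojo, jQuery, and the others give you for free.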




----------------------

As for real-life examples, none of our clients has agreed to rely on Flash Player 9 as the de-facto VM on the client. 97% adoption is not enough for them. They want it as close as possible to 100%. That leaves us with poor XML support (no namespaces, no effective native XPath), no native regexp, and sub-optimal sorting functions. All of this needs to be computed on the server, because we can't rely on the client to be up to date. And forcing clients to upgrade is really not a viable option.



I/O vs HTTP.



My point is that to display data in its final state (human-readable and pleasant to the eye), we need more than access to raw data. We need to load templates. We need to render those templates. We need to change some bits of information on the page, add a couple of things, remove some others. We all know how complicated templates can get sometimes (because the desired output is overly complicated too); they're often divided into several reusable pieces. When dealing with this server side, loading a template is as fast as reading a file from disk. When it comes to loading all this on the client, it's only as fast as the client's network connection. I/O access limitations are really not that common, while limits on concurrent HTTP requests, which are much more frequent, have a real influence on loading times (and the bad perception that can generate).




-----------------
OK, these bits are going to go over to the client one way or the other (most of the template bulk). The whole point (from my perspective) of TSA/SOFEA is to load HTML (and client-side templates) _once_ to the client (application download), and after that send only raw data to the client, which then adds a few things, removes some others, sorts, manages forms, draws charts, etc. without needing access to the server, since those things are part of the View.
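The load-once idea can be sketched in a few lines. The `{{name}}` placeholder syntax and the account data below are made up for illustration; the point is that the template ships with the application download, and every later "request" is just a small data payload rendered locally.

```javascript
// Shipped once, as part of the application download (the View):
const rowTemplate = '<li>{{name}}: {{balance}}</li>';

// Tiny client-side renderer: substitute {{key}} placeholders from a data
// object. Real toolkits do much more, but the traffic pattern is the same.
function render(tpl, data) {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_, key) => String(data[key]));
}

// Each subsequent server round-trip carries only raw data, e.g. this JSON:
const rows = [
  { name: 'Savings', balance: 120 },
  { name: 'Cheque',  balance: 45 }
];

// Rendering happens on the client, with no further server involvement:
const html = rows.map(r => render(rowTemplate, r)).join('');
```

Compare this with the traditional model, where every refresh re-sends the markup surrounding those two small numbers.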




-------------------

Caching power-intensive tasks seems to be easier on the server than on the client (until Gears and SQLite become ubiquitous).



Of course, the trade-off is transferring redundant information. By doing the template transformation on the client, the template needs to be transferred only once, whereas it would be part of every request the "traditional" way. I'm not challenging the advantages that asynchronous loading has brought us, rather pointing out that with gzip compression, sending fully rendered HTML/JS pages to the client is really not a big deal. Dealing with client-side events to trigger other actions, and making sure the browser doesn't freeze while rendering memory-intensive JavaScript-generated HTML, isn't always trivial.





----------
No, precisely. Again, that is why you wouldn't reinvent the wheel but leverage an existing framework for this.
And the idea behind Ajax is just to remove the round-trip time to the server for every little action for which the data is already downloaded and present.




-----------


Shifting the view from the server to the client will surely highlight some technical aspects we didn't have to worry about before. Not saying it's a good or bad idea, just trying to point the finger at issues we might now have to deal with.



With no support for RTL languages (as of FP9), some other important limitations, and versions ranging from 6 to 9, the Flash VM isn't the ideal client-side platform. I'll give Dojo a serious try soon, and no love for XSLT ;)

Lipstick on CGI by Matthew Quinlan

Great minds think alike..... and apparently, so do ours!


Lipstick on CGI blogpost.



Matthew Quinlan

Chief Evangelist

Appcelerator
