
Revisiting the Need for Asynchronous Servlets

by Scott Delap on Jul 26, 2006 |
As web application development shifts from a page-based view to an Ajax-style, data-based view, a few problem areas have begun to crop up. Ajax applications often want a dedicated client/server connection to support features such as server-push data streaming. Techniques such as Comet have been developed to fill this need; however, they use web servers and Java EE constructs such as servlets as building blocks in ways those were never designed to handle. Greg Wilkins, lead developer on the Jetty web container, has been examining the need for an asynchronous servlet API, concluding recently that continuations are the best solution at the moment. In May, Greg examined five problem cases:
  1. Non-blocking input - The ability to receive data from a client without blocking if the data is slow to arrive.
  2. Non-blocking output - The ability to send data to a client without blocking if the client or network is slow.
  3. Delayed request handling - The Comet style of Ajax web application can require that request handling is delayed until either a timeout or an event has occurred.
  4. Delayed response close - The Comet style of Ajax web application can require that a response is held open so that additional data can be sent when asynchronous events occur.
  5. 100 Continue handling - A client may request a handshake from the server before sending a request body.
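To see why cases 3 and 4 are costly under the classic synchronous model, consider a minimal plain-Java sketch (not actual servlet code; the class and method names here are invented stand-ins): a Comet-style handler that waits for an application event pins an entire request thread for the duration of the wait, so thousands of parked clients mean thousands of idle threads.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for a Comet-style request handler under the
// classic synchronous servlet model.
public class BlockingCometHandler {
    private final BlockingQueue<String> events = new LinkedBlockingQueue<String>();

    // Stands in for doGet(): the calling ("request") thread is pinned
    // for the whole wait - exactly problem cases 3 and 4 above.
    public String handleRequest(long timeoutMs) throws InterruptedException {
        String event = events.poll(timeoutMs, TimeUnit.MILLISECONDS);
        return event != null ? event : "timeout";
    }

    public void publish(String event) {
        events.offer(event);
    }

    public static void main(String[] args) throws Exception {
        final BlockingCometHandler handler = new BlockingCometHandler();
        new Thread(new Runnable() {          // asynchronous event source
            public void run() {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
                handler.publish("stock-tick");
            }
        }).start();
        // The request thread does nothing useful while it waits.
        System.out.println(handler.handleRequest(5000));
    }
}
```

The asynchronous proposals below all aim to release that waiting thread back to the container.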
Greg proposed standardizing a coordinator that could be called by the container in response to asynchronous events and would coordinate the call of the synchronous servlet service method. The coordinator would handle the async request before a servlet is invoked, and could be defined and mapped to URL patterns just like filters and servlets. An interesting discussion also emerged on TSS at the time.
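Greg's post does not publish the proposed interface itself, so purely for illustration, a coordinator contract of this shape might look like the following toy sketch - every name here is invented, and the "container" is reduced to a single dispatch method:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: all names are invented for illustration of
// the coordinator idea, not taken from the actual proposal.
public class CoordinatorSketch {

    /** Decides, per asynchronous event, whether the container should
     *  (re)dispatch the request to the ordinary synchronous servlet. */
    interface ServletCoordinator {
        boolean onEvent(String request, String event);
    }

    // Records which requests reached the normal servlet service path.
    static List<String> dispatchLog = new ArrayList<String>();

    /** Toy container: consults the coordinator first, and only calls
     *  the synchronous servlet when the coordinator says so. */
    static void containerEvent(ServletCoordinator c, String request, String event) {
        if (c.onEvent(request, event)) {
            dispatchLog.add("service(" + request + ")"); // normal servlet path
        }
    }

    public static void main(String[] args) {
        // A coordinator that parks the request until real data arrives.
        ServletCoordinator comet = new ServletCoordinator() {
            public boolean onEvent(String request, String event) {
                return !event.equals("timeout-check");
            }
        };
        containerEvent(comet, "/chat", "timeout-check"); // request stays parked
        containerEvent(comet, "/chat", "message-ready"); // now dispatched
        System.out.println(dispatchLog);
    }
}
```

The appeal of this split is that the response itself is still generated by an ordinary servlet, inside the normal filter chain.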

Revisiting the topic in July, Greg reviewed the currently available and proposed solutions:

  • BEA WebLogic - BEA added AbstractAsyncServlet in WebLogic 9.2 to support threadless waiting for events (be they Ajax Comet events or otherwise, e.g. an available JDBC connection from a pool) ... It's not really a servlet - the user cannot implement doGet or service methods to generate content, so there is limited benefit from tools or programmer familiarity.
  • Tomcat 6.x - After my initial blogging on this issue, the Tomcat developers added CometProcessor and CometServlet (unfortunately without engaging in the open discussion I had encouraged). It is essentially the same solution as BEA's, but with a few extras, a few gotchas and the same major issues.
  • Jetty 6 Continuations - The Jetty 6 Continuation mechanism is not an extension to the Servlet API. Instead it is a suspend/resume mechanism that operates within the context of a servlet container to allow threadless waiting and reaction to asynchronous events by retrying requests.
  • ServletCoordinator - My proposed ServletCoordinator suffers from many of these same issues. It does meet one of my main concerns, in that responses are generated by normal servlet code using normal techniques and within the scope of the applicable filter chain.
Based on this review, Greg concluded that continuations are the best API for supporting the majority of the asynchronous use cases within the servlet model.
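The Jetty mechanism works by retrying the request: suspending throws a special exception that the container catches, parking the request without a thread until it is resumed, at which point the servlet is invoked again from the top. The following is a toy plain-Java reconstruction of that retry loop - all class names here are invented (the real API lives in org.mortbay.util.ajax.Continuation), and the event wiring is simplified to a queue:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy reconstruction of Jetty 6's retry-style continuations. All names
// are invented stand-ins; the point is the control flow: suspend()
// aborts the current invocation with an exception, and the "container"
// re-invokes the handler once resume() has been called.
public class RetryContinuationDemo {

    static class RetryRequest extends RuntimeException {}

    static class Continuation {
        private volatile boolean resumed = false;
        private volatile Object attachment;

        void suspend() { if (!resumed) throw new RetryRequest(); }
        void resume(Object event) { attachment = event; resumed = true; }
        Object getObject() { return attachment; }
    }

    interface Handler { String handle(Continuation c); }

    // "Container": invokes the handler; on RetryRequest it parks the
    // request (holding no thread in a real container) until an event
    // arrives, then re-invokes the handler from the top.
    static String dispatch(Handler h, Continuation c, BlockingQueue<Object> wakeups)
            throws InterruptedException {
        while (true) {
            try {
                return h.handle(c);                // completes once resumed
            } catch (RetryRequest retry) {
                wakeups.poll(1, TimeUnit.SECONDS); // stand-in for event wiring
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final Continuation c = new Continuation();
        final BlockingQueue<Object> wakeups = new ArrayBlockingQueue<Object>(1);

        // Asynchronous event source resumes the continuation later.
        new Thread(new Runnable() {
            public void run() {
                c.resume("chat-message");
                wakeups.offer(Boolean.TRUE);
            }
        }).start();

        Handler servlet = new Handler() {
            public String handle(Continuation cont) {
                cont.suspend();                    // first pass: RetryRequest
                return "sent: " + cont.getObject();
            }
        };
        System.out.println(dispatch(servlet, c, wakeups));
    }
}
```

The price of this design is that handler code before suspend() runs more than once, which is why Jetty's documentation steers users toward keeping the pre-suspend section idempotent.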


Does JRockit's thinthreads solve this problem? by Bill Poitras

JRockit has the option of not allocating a native OS thread per Java thread. Would this solve the scalability issue? If so, it would be a good solution, since no code change would be required.

Interesting by karan malhi

This is pretty interesting. However, there should be a way to avoid resubmitting the request after the continuation resumes. Couldn't the previous request object be stored in the Continuation itself (since a continuation represents a snapshot of state)?
Secondly, as I understand it, the servlet container is freed up for more incoming requests, but is basically queuing up requests internally and then responding to the queued requests. Wouldn't "handling the queued request" involve a thread (some system resource)? Maybe I am missing something here, but how is this more scalable? To me it appears that we are focusing on taking care of one side of the load (requests) and letting the other side (responses) take care of itself.
Why couldn't something like this be done in an MVC framework, and why did we need continuations? (After all, the snapshot of state for a particular request could be stored in the HttpSession anyway.)

Re: Does JRockit's thinthreads solve this problem? by Alex Popescu

IMO thin threads may represent an optimization, but not the solution. Indeed, when referring to async tasks, most of the time we are looking for more responsiveness, and according to JRockit, thin threads may offer more responsiveness. But not all async tasks are only about responsiveness, and that's why I said they may be an optimization, but not the solution.

./alex
--
.w( the_mindstorm )p.

Re: Interesting by Alex Popescu

This is pretty interesting. However, there should be a way to avoid resubmitting the request after the continuation resumes. Couldn't the previous request object be stored in the Continuation itself (since a continuation represents a snapshot of state)?


IMO this should be pretty doable, but I am not sure if it covers all usage scenarios.

Secondly, as I understand it, the servlet container is freed up for more incoming requests, but is basically queuing up requests internally and then responding to the queued requests. Wouldn't "handling the queued request" involve a thread (some system resource)? Maybe I am missing something here, but how is this more scalable? To me it appears that we are focusing on taking care of one side of the load (requests) and letting the other side (responses) take care of itself.


If I read it correctly, for this part the non-blocking response should be the answer. Indeed, you would need parallel handling of responses, but with a non-blocking response system you would nevertheless increase the responsiveness of the app.

Why couldn't something like this be done in an MVC framework[...]


I guess because it would be better to have one standard way to deal with it rather than tens of custom solutions, one per framework (I think we have tens of web frameworks around :-]).

./alex
--
.w( the_mindstorm )p.

Re: Interesting by karan malhi


I guess because it would be better to have one standard way to deal with it rather than tens of custom solutions, one per framework


Yes, I agree.


with a non-blocking response system you would nevertheless increase the responsiveness of the app

So does this mean that we would need NIO support in the next version of the Servlet spec?

Re: Interesting by Alex Popescu

with a non-blocking response system you would nevertheless increase the responsiveness of the app

So does this mean that we would need NIO support in the next version of the Servlet spec?


I am not a servlet expert, but I think this will happen.

./alex
--
.w( the_mindstorm )p.

Re: Does JRockit's thinthreads solve this problem? by Mircea Crisan

As far as I know, non-native threads have been implemented in Sun's JDK for a long time.

Re: Does JRockit's thinthreads solve this problem? by Alex Popescu

Are you referring to green threads?

./alex
--
.w( the_mindstorm )p.

AsyncWeb project by Sean Sullivan

There was a presentation about AsyncWeb at the O'Reilly OSCON this week.

asyncweb.safehaus.org/

AsyncWeb (built on top of the excellent MINA network framework) employs non-blocking, selector-driven IO at the transport level, and is asynchronous throughout - from the initial parsing of requests right through to, and including, the services implemented by users.
AsyncWeb breaks away from the blocking request/response architecture found in today's popular HTTP engines. This allows it to be highly scalable and capable of supporting very high throughput - even in high-processing-latency scenarios.


The AsyncWeb API is interesting:

asyncweb.safehaus.org/APIOverview

InfoQ.com and all content copyright © 2006-2013 C4Media Inc.