Udi Dahan on increasing scalability by making things asynchronous

Making things asynchronous is a proven way to increase scalability, and yet many things seem to be naturally synchronous. But does that mean that these problems really are impossible to divide in an asynchronous way, or does it mean that we are simply stuck in a particular way of thinking about these types of problems? Udi Dahan challenges this thinking:

Often during my consulting engagements I run into people who say, “some things just can’t be made asynchronous” even after they agree about the inherent scalability that asynchronous communication patterns bring. One often-cited example is user authentication - taking a username and password combo and authenticating it against some back-end store.

In the article Asynchronous, High-Performance Login for Web Farms, Udi shows how to approach this differently and solve the problem in an asynchronous, more scalable way.
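
A minimal sketch of the shape of that solution, as it is described in the discussion below: each web server holds credentials in a local, in-memory cache, so a login request never waits on the database, and cache updates arrive asynchronously from wherever the user registered. The Java class and method names here are hypothetical, not the article's actual code.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: each web server keeps credentials in process memory
    // so that authenticating a login never requires a synchronous DB round trip.
    public class LocalLoginService {

        // username -> password hash, held locally on this web server
        private final Map<String, String> credentialCache = new ConcurrentHashMap<>();

        // Runs on the request thread: a fast, local lookup instead of a DB call.
        public boolean authenticate(String username, String passwordHash) {
            String cached = credentialCache.get(username);
            return cached != null && cached.equals(passwordHash);
        }

        // Called from an asynchronous update path (e.g. a message pushed from the
        // server that handled the registration), never from the login request.
        public void applyCredentialUpdate(String username, String passwordHash) {
            credentialCache.put(username, passwordHash);
        }
    }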

Community comments

  • That's a good approach

    by Billy Newport,

    The alternative and probably more scalable approach would be a network attached cache like IBM ObjectGrid or one of the gigoherence competitors. The network attached cache can hold millions if not hundreds of millions of such pairs in the collective memory of the web farm and then provide a login service to the farm. You're adding a little latency because of the network hop to fetch the data but the benefit is that you are no longer limited to what fits in a single address space in terms of how much you can store. As the farm scales out, the grid scales out in parallel to keep up. You're now limited to what fits in the memory of the farm, not a single process.
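
    As a rough Java sketch of the grid-backed alternative Billy describes, assuming a hypothetical DistributedCache interface standing in for a product client (ObjectGrid, Coherence, etc.) rather than their real APIs:

        import java.util.Objects;

        // Hypothetical stand-in for a network-attached cache client; one get()
        // is one network hop to whichever grid node owns the key.
        interface DistributedCache {
            String get(String key);
            void put(String key, String value);
        }

        public class GridBackedLoginService {

            private final DistributedCache credentials;

            public GridBackedLoginService(DistributedCache credentials) {
                this.credentials = credentials;
            }

            // Pays a small network hop, but the credential set is limited only by
            // the combined memory of the farm, not a single process's address space.
            public boolean authenticate(String username, String passwordHash) {
                return Objects.equals(credentials.get(username), passwordHash);
            }
        }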

  • Isn't this an article about caching?

    by John DeHope,

    I don't see where there is any asynchronous activity here. The user shows up on the login screen, which blocks until he provides credentials, which are then sent on to the authentication mechanism (cached or DB, either way), which blocks, and then the results are returned and the user is routed to either a welcome screen or an error. There is no asynchronous activity here, is there?

    Now the idea of caching credential data in memory is great, and will certainly speed that up. But how does it make it asynchronous?

    What I thought the article was going to be about was an AJAXy sort of thing, where users are allowed into the app as soon as they have provided their credentials. Initially they have no more access than a guest, but at least the UI has options they can start using right away. Meanwhile, in the background, asynchronously, their credentials are being authenticated (however that happens) and the results are applied against their first activity that requires authentication. So they don't wait for authentication up front, they only wait for it after they have initiated an activity that requires it.

    I enjoyed reading this, but was just confused about exactly what the technique brings to the table.
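
    As a rough sketch of the deferred check John describes, assuming hypothetical names and using Java's CompletableFuture to stand in for the background authentication call:

        import java.util.concurrent.CompletableFuture;

        // Let the user in with guest access immediately; verify credentials in the
        // background and only consult the result at the first secured action.
        public class DeferredLoginSession {

            public interface Authenticator {
                boolean verify(String username, String passwordHash);
            }

            private final CompletableFuture<Boolean> pendingAuth;

            public DeferredLoginSession(String username, String passwordHash,
                                        Authenticator authenticator) {
                // Fire off the check asynchronously; the login request does not block here.
                this.pendingAuth = CompletableFuture.supplyAsync(
                        () -> authenticator.verify(username, passwordHash));
            }

            // Guest-level features are available right away.
            public boolean canUseGuestFeatures() {
                return true;
            }

            // The wait (if any) happens at the first activity that needs authentication.
            public boolean canUseSecuredFeatures() {
                return pendingAuth.join();
            }
        }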

  • Re: Isn't this an article about caching?

    by Julian Browne,

    I agree. I too enjoyed the article. The approach, whilst not new, is elegant and has clearer options for scalability than what you might call "traditional" options. But if you have a user at one end, sending authentication data to an authenticator at the other end, waiting for a yes/no result that is elemental to what can happen next, then that to me is synchronous in nature.

    The Ajax model could work well in handling the to and fro over two asynchronous steps, but you'd need timeouts and retries because ultimately you need that authentication yes/no to get to the secured stuff.

    Nice article though.

  • Re: Isn't this an article about caching?

    by Udi Dahan,

    John,

    Glad you enjoyed it.

    The asynchronous part of the solution deals with registering new users and the "long-running" workflow involved. Keeping the cache updated across the farm is also handled asynchronously/push-based with respect to servers that didn't have the user register there.

    Does that make sense?
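
    A minimal sketch of that push-based update, assuming some one-way messaging infrastructure; the MessageBus interface, topic name, and payload format here are purely hypothetical:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Consumer;

        // Hypothetical one-way messaging abstraction.
        interface MessageBus {
            void publish(String topic, String payload);
            void subscribe(String topic, Consumer<String> handler);
        }

        public class CredentialCacheUpdater {

            private final Map<String, String> localCache = new ConcurrentHashMap<>();
            private final MessageBus bus;

            public CredentialCacheUpdater(MessageBus bus) {
                this.bus = bus;
                // Every web server listens for registrations handled elsewhere in the
                // farm and applies them off the request path.
                bus.subscribe("user-registered", payload -> {
                    String[] parts = payload.split("\\|", 2);
                    localCache.put(parts[0], parts[1]);
                });
            }

            // Called on the server where the user registered; the publish is one-way,
            // so the registration request does not wait for the rest of the farm.
            public void announceRegistration(String username, String passwordHash) {
                bus.publish("user-registered", username + "|" + passwordHash);
            }

            public boolean isKnown(String username, String passwordHash) {
                return passwordHash.equals(localCache.get(username));
            }
        }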

  • Re: Isn't this an article about caching?

    by Udi Dahan,

    Julian,

    Thank you for your kind words.

    From the user perspective, logging in is a blocking process. Asynchrony is not a concept found in human-computer interaction design. Users care about blocking and speed (particularly for blocking processes).

    The speed that is achieved is by having the entire login process occur on the web server. From an overall system perspective, the load on the DB decreases thus increasing the scalability of other aspects of the system.

    Does that make it any clearer?

  • Re: That's a good approach

    by Udi Dahan,

    Billy,

    The choice of technology you bring up is an interesting one. I could definitely see changing the implementation of the Cache object on the web server to store its data in a distributed, in-memory cache rather than just locally in memory. However, the overall solution could still look the same.

    You do bring up an interesting architectural trade-off - an extra network hop vs using less memory. I'll definitely be looking at that in greater detail in my consulting engagements.

  • Re: That's a good approach

    by Christian Schneider,

    Of course there is asynchrony in the example. But it is only used to implement a distributed cache. I would really not try to implement such a scheme myself. It is much better to simply use a mature cache like Billy mentioned.

    By using this you can even simplify the registration of users. When a new user registers, you simply add them to the distributed cache, so all machines know about the user. This way you do not need explicit message passing.

  • Re: That's a good approach

    by Udi Dahan,

    Christian,

    "only" :-)

    I agree that this isn't necessarily the end of the line for even the architectural analysis. Given that we have such a cache that efficiently utilizes memory, we'd want an intelligent load/data partitioning scheme such that requests go to servers that have the data needed by that request locally (in the distributed cache), so that we can save the extra network hop (which is critical in some environments).

    I posted on this topic a while ago here,
    and more recently here.
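
    One simple (hypothetical) way to picture that kind of partitioning is to route each login by a hash of the username, so the request lands on the server whose local cache already holds that user; the linked posts may well describe a more sophisticated scheme:

        import java.util.List;

        // Routes a login request to the web server that owns that user's slice of
        // the cache, avoiding the extra network hop for the credential lookup.
        public class UserAffinityRouter {

            private final List<String> serverAddresses; // e.g. ["web1", "web2", "web3"]

            public UserAffinityRouter(List<String> serverAddresses) {
                this.serverAddresses = serverAddresses;
            }

            public String serverFor(String username) {
                int index = Math.floorMod(username.hashCode(), serverAddresses.size());
                return serverAddresses.get(index);
            }
        }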

  • Re: That's a good approach

    by John DeHope,

    I'm really not trying to be argumentative here. But I believe words have meaning, and that their meanings are important, especially in technology.

    "The asynchronous part of the solution deals with registering new users and the 'long-running' workflow involved."

    I don't see how the login process is long-running or asynchronous. Can you explain how it is? In my mind it blocks for less time, since it is going to a cache, but it still blocks.

    "Keeping the cache updated across the farm is also handled asynchronously/push-based with respect to servers that didn't have the user register there."

    I doubt it, but I can't be sure. In every caching example I've seen, going to the cache is a blocking synchronous action. If the cache is fresh, you at least have to block the caller long enough to retrieve the data from the cache. And if the cache is stale, the caller will block for as long as it takes to refresh the cache.

    "From the user perspective, logging in is a blocking process."

    Right, and that's why I am confused that the article is about "asynchrony" when from the user's perspective it is not.

    "Asynchrony is not a concept found in human-computer interaction design. Users care about blocking and speed (particularly for blocking processes)."

    I'm not sure those two sentences jibe. Users definitely understand asynchrony, and they like it! When I rename a folder in Lotus Notes, and the entire application blocks, instead of just the one folder I am renaming, I very much understand that is asynchrony at play. Like you said, users care about blocking, and blocking is just another way of saying "not asynchronous".

    "The speed that is achieved is by having the entire login process occur on the web server. From an overall system perspective, the load on the DB decreases thus increasing the scalability of other aspects of the system."

    There is no question that there is a speed increase here, and that the DB load is decreased. But speed and asynchrony are two completely different things. A process can be slow, and asynchronous, such as the delivery of snail mail. A process can be fast and asynchronous, such as sending somebody an IM. A process can be slow and synchronous, such as driving to work in traffic. A process can also be fast and synchronous, such as flying to work in your private jet.

    I am going to go re-read the article just to make sure I am not missing something here.

  • Re: That's a good approach

    by Cameron Purdy,

    "The alternative and probably more scalable approach would be a network attached cache like IBM ObjectGrid or one of the gigoherence competitors."

    Cute ;-)

    Peace,

    Cameron Purdy
    Oracle Coherence: Data Grid for Java and .NET

  • Re: That's a good approach

    by Udi Dahan,

    John,

    I don't view your comment as argumentative - just a friendly technological discussion. Please keep it coming :-)

    The long-running part is user registration - starting with the submission on the site, followed up with clicking on the email validation link. The asynchronous part is the use of one-way messaging.

    Actually, at this point I'd like to invoke one of my betters - Pat Helland: "Asynchronous and Synchronous are subjective terms":

    blogs.msdn.com/pathelland/archive/2007/08/23/as...

    You're correct, though, about the login process blocking on the local cache - and, indeed, for much less time than when working with the DB.


    "And if the cache is stale, the caller will block for as long as it takes to refresh the cache."

    That's one of the differences that this solution embodies. The local cache does not deal with refreshing itself - so the calling thread will still not block while something (the cache) is going to the DB.


    "When I rename a folder in Lotus Notes, and the entire application blocks, instead of just the one folder I am renaming, I very much understand that is asynchrony at play."

    I would submit that you're not representative of most users (neither am I). From my experience, there are also quite a few programmers who don't understand asynchrony. Anyway, at least we agree that the solution leads to a faster login process and that the end user would benefit from that.

    I will wrap up by saying that the title could have been made more precise, but I found that it was long enough already. You have my apologies for the lack of clarity.
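
    To make the one-way-messaging point above concrete, here is a rough sketch of the two registration steps as fire-and-forget sends; the Bus interface and message types are hypothetical, not the article's actual code:

        // Both the signup submission and the later email-validation click just send a
        // one-way message and return, so neither web request blocks on back-end work.
        public class RegistrationEndpoints {

            public interface Bus {
                void send(Object message); // fire-and-forget: no reply is awaited
            }

            public record RegisterUserRequested(String username, String passwordHash, String email) {}
            public record EmailValidationClicked(String validationToken) {}

            private final Bus bus;

            public RegistrationEndpoints(Bus bus) {
                this.bus = bus;
            }

            // Step 1: the signup form posts here; we acknowledge immediately.
            public void submitRegistration(String username, String passwordHash, String email) {
                bus.send(new RegisterUserRequested(username, passwordHash, email));
            }

            // Step 2: the user clicks the validation link, possibly much later; only
            // after this is handled do the credentials get pushed out to the local caches.
            public void confirmEmail(String validationToken) {
                bus.send(new EmailValidationClicked(validationToken));
            }
        }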
