Google GData/Atom Publishing Protocol too limited for Microsoft

Dare Obasanjo writes about the limitations of the Google Data API (GData, Google's implementation of the Atom Publishing Protocol with some extensions) as a general-purpose protocol and explains why Microsoft will not support or standardize on GData.

Shortly after praising "the simplicity and uniform interface that is GData", Dare Obasanjo states that "GData/APP Fails as a General Purpose Editing Protocol for the Web". According to Obasanjo, "certain limitations in the Atom Publishing Protocol become quite obvious when you get outside of blog editing scenarios for which the protocol was originally designed". Microsoft will therefore not provide support for GData but will more "likely standardize on a different RESTful protocol", which Obasanjo does not name or discuss; he promises to do so in a future post.

He points out the following limitations:

  1. Mismatch with data models that aren't microcontent: The Atom data model fits very well for representing authored content or microcontent on the Web such as blog posts, lists of links, podcasts, online photo albums and calendar events. [...] Most of the elements that constitute an Atom entry don't make much sense when representing [a business object with a very specific structure]. Secondly, one would have to create a large number of proprietary extension elements to annotate the atom:entry element to hold all the [...] specific fields for the [business object]. It's like trying to fit a square peg in a round hole. If you force it hard enough, you can make it fit but it will look damned ugly. [...]
  2. Lack of support for granular updates to fields of an item: [...] each client is responsible for ensuring that it doesn't lose any XML that was in the original atom:entry element it downloaded. The second problem is more serious and should be of concern to anyone who's read Editing the Web: Detecting the Lost Update Problem Using Unreserved Checkout. The problem is that there is data loss if the entry has changed between the time the client downloaded it and when it tries to PUT its changes. Even if the client does a HEAD request and compares ETags just before PUTing its changes, there's always the possibility of a race condition where an update occurs after the HEAD request. After a certain point, it is probably reasonable to just go with "most recent update wins" which is the simplest conflict resolution algorithm in existence. Unfortunately, this approach fails because the Atom Publishing Protocol makes client applications responsible for all the content within the atom:entry even if they are only interested in one field. [...]
  3. Poor support for hierarchy: The problem with the Atom data model is that it doesn't directly support nesting or hierarchies. You can have a collection of media resources or entry resources, but the entry resources cannot themselves contain entry resources. This means that if you want to represent an item that has children, they must be referenced via a link instead of included inline. This makes sense when you consider the blog syndication and blog editing background of Atom, since it isn't a good idea to include all comments to a post directly as children of an item in the feed or when editing the post. On the other hand, when you have a direct parent<->child hierarchical relationship, where the child is an addressable resource in its own right, it is cumbersome for clients to always have to make two or more calls to get all the data they need.
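The lost-update scenario in point 2 can be made concrete. The sketch below is an in-memory simulation (no real HTTP; the Resource class and its methods are invented for illustration) of why an unconditional PUT silently overwrites a concurrent edit, and how a conditional PUT with an If-Match ETag at least detects the conflict; as Obasanjo notes, though, the client must still resubmit the entire entry:

```python
# Minimal in-memory sketch of the lost-update problem and conditional PUT.
# Resource, its methods, and the stored fields are illustrative only.

class Resource:
    def __init__(self, body):
        self.body = body
        self.version = 0          # stands in for the server-side ETag

    def etag(self):
        return f'"{self.version}"'

    def put(self, body, if_match=None):
        """Unconditional PUT when if_match is None; otherwise the write
        succeeds only if the client's ETag still matches (HTTP If-Match)."""
        if if_match is not None and if_match != self.etag():
            return 412            # Precondition Failed: someone updated first
        self.body = body
        self.version += 1
        return 200

entry = Resource({"title": "hello", "summary": "draft"})

# Client A reads the entry; client B updates it in the meantime.
a_etag = entry.etag()
entry.put({"title": "hello", "summary": "B's edit"})   # B's unconditional PUT

# An unconditional PUT from A would now silently discard B's edit,
# but a conditional PUT with A's stale ETag is rejected instead:
status = entry.put({"title": "A's title", "summary": "draft"}, if_match=a_etag)
print(status)  # 412 -- A must re-fetch, merge, and retry
```

Because the check and the write happen atomically on the server, If-Match closes the race window that a separate HEAD-then-PUT sequence leaves open.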

Bill de hÓra replies to the post and provides some possible solutions for the limitations pointed out by Dare Obasanjo. He also adds two issues to the list that "strike [him] as much more substantial":

  1. Update resumption: some clients need the ability to upload data in segments. Aside from a poor user experience and general bandwidth costs, this is important for certain billing models; otherwise consumers have to pay for every failed attempt to upload a photo. APP doesn't state support for this at all; it might be doable using HTTP more generally, but to get decent client support you'd want it documented in an RFC at least.
  2. Batch and multi-part uploads: This feature was considered and let go by the atom-syntax working group. The reason was that processing around batching (aka "boxcarring") can get surprisingly complicated. That is, it's deceptively simple to just say "send a bunch of entries". Still, it would be good to look at this at some point in the future.
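Segmented uploads are not specified by APP, as de hÓra points out. As a purely hypothetical sketch (the UploadSession class and its return codes are invented for illustration, loosely modeled on HTTP range semantics), a server-side upload session might track a byte offset so a client can resume after a failed attempt instead of re-sending, and re-paying for, the whole transfer:

```python
# Hypothetical sketch of a resumable segmented upload; not part of APP.

class UploadSession:
    def __init__(self, total_size):
        self.total_size = total_size
        self.received = bytearray()

    def offset(self):
        """How many bytes the server already has; a client resumes from here."""
        return len(self.received)

    def append(self, start, chunk):
        """Accept a segment only if it begins exactly at the current offset,
        so a retried or duplicated segment cannot corrupt the upload."""
        if start != self.offset():
            return 416            # segment out of position: client must resync
        self.received.extend(chunk)
        # 201 once complete, 308 ("resume incomplete") while partial
        return 201 if self.offset() == self.total_size else 308

data = b"an-entire-photo-payload"
session = UploadSession(len(data))

session.append(0, data[:10])      # first segment arrives fine
session.append(0, data[:10])      # a network retry of the same segment -> 416
resume_from = session.offset()    # client asks where to resume (10)
status = session.append(resume_from, data[10:])
print(status)  # 201 -- upload complete, nothing was transferred twice
```

The key design choice is that the client queries the server for the committed offset rather than guessing, so a dropped connection mid-segment costs at most one segment.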

James Snell and Joe Gregorio disagree emphatically, arguing that the weaknesses pointed out by Dare Obasanjo are not weaknesses at all. James calls Dare's post "silly", and Joe asks:

I am led to wonder about the timing of his complaints as the APP is close to getting an RFC number. What spurred this sudden bout of sour grapes?

Google has definitely provided a simple yet powerful API for accessing their services. They state that they do not want to solve 100% of all use cases but rather provide an obviously simple and uniform API for the majority (80%). It remains to be seen what RESTful protocol Microsoft will come up with to provide a General Purpose Editing Protocol for the Web.
