Is The Internet More Fundamental Than REST?

by Mark Little on Dec 19, 2007 |
Just when some people thought the debate had reached an impasse, or was simply over, Ganesh Prasad has tried to add fuel to the discussion by suggesting that there is something more fundamental (and better) than REST. As he says, the debate has been going round in circles for a while:
Though I like REST and consider it a very elegant model for SOA, it's a little tiresome to hear day in and day out that it's so much more elegant than the SOAP-based Web Services model. In fact, I'm getting so tired of this shrill posturing that I'm going to stick it to the RESTafarians right now, in their own style.
He then outlines some of the core principles that RESTafarians use to back up their arguments:
REST exploits the features of that most scalable application there is - the web, right? REST doesn't fight the features of the web, it works with them, right? HTTP is not just a transport protocol, it's an application protocol, right? URIs are the most natural way to identify resources, HTTP's standard verbs are the most natural way to expose what can be done with resources, and hyperlinks are the most natural way to tie all these together to expose the whole system as a State Machine, right??
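The "uniform interface" claim above is easy to picture in code. The following is an illustrative toy, not from the article: a hypothetical in-memory "server" that dispatches HTTP's standard verbs against resource URIs, with hypermedia links embedded in the representations (all URIs and field names are invented).

```python
# Illustrative toy, not from the article: a hypothetical in-memory "server"
# showing the uniform interface - a small, fixed set of verbs applied to
# arbitrarily many resources identified by URIs.
store = {}

def handle(verb, uri, body=None):
    """Dispatch one of HTTP's standard verbs against a resource URI."""
    if verb == "PUT":
        store[uri] = body
        return 204, None
    if verb == "GET":
        return (200, store[uri]) if uri in store else (404, None)
    if verb == "DELETE":
        return 204, store.pop(uri, None)
    return 405, None  # verb outside the uniform interface

# Hypermedia: a representation carries links to related resources, so a
# client drives the application's state machine by following them.
handle("PUT", "/orders/1", {"status": "open", "links": {"customer": "/customers/42"}})
```

The state machine the RESTafarians speak of is just this: each retrieved representation tells the client which URIs it can move to next.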
However, in his view, with the Internet being older and more extensible, it is also more scalable and resilient than REST. Working with the Internet model ("it's about message passing between nodes using a network that is smart enough to provide resilience, but nothing more") pushes you towards:
... the Internet philosophy is "Dumb network, smart endpoints" (remembering of course, that the network isn't really "dumb", just deliberately constrained in its smartness to do nothing more than resilient packet routing). And the protocol used by the Internet for packet routing is, of course, Internet Protocol (IP).
Adding a new capability is as simple as:
Just create your own end-to-end protocol and communicate between nodes (endpoints) using IP packets to carry your message payload. You don't need to consult or negotiate with anyone "in the middle". There is no middle.
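An end-to-end protocol is, at bottom, just a framing convention that only the two endpoints understand. A minimal sketch of such a protocol, built with Python's standard `struct` module (the header fields - version, message type, length - are invented for illustration):

```python
import struct

# Sketch of an invented end-to-end protocol: a tiny header (version,
# message type, payload length) that only the two endpoints interpret.
# The IP network in between would simply route the packets carrying it.
HDR = struct.Struct("!BBH")  # version, msg_type, payload length (network byte order)

def encode(version, msg_type, payload):
    """Frame a payload for transmission to the peer endpoint."""
    return HDR.pack(version, msg_type, len(payload)) + payload

def decode(datagram):
    """Unframe a received datagram; only an endpoint ever runs this."""
    version, msg_type, length = HDR.unpack_from(datagram)
    return version, msg_type, datagram[HDR.size:HDR.size + length]
```

Nothing in the middle needs to change for this protocol to exist - which is exactly Ganesh's point.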
The distinction between the low-level communication protocol and the higher-level "application" stack is made clear when Ganesh discusses the role of TCP in the Internet (UDP is once again consigned to the bin of history):
TCP is interpreted at the endpoints. That's why the networking software on computers is called the TCP stack. Each computer connected to an IP network is an endpoint, identified by nothing more than - that's right - an IP address. Processing of what's inside an IP message takes place at these endpoints. The Internet (that is, the IP network) is completely unaware of concepts such as sockets and port numbers. Those are concepts created and used by a protocol above it. The extensibility and layered architecture of the Internet enables the operation of a more sophisticated networking abstraction without in any way tampering with the fundamental resilient packet-routing capability of the core Internet.
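A toy demultiplexer makes the point concrete: ports are bookkeeping at the endpoint, invisible to the routing layer below. This sketch is purely illustrative; the handler and port numbers are hypothetical.

```python
# Illustrative toy: ports are an endpoint concept. This hypothetical
# demultiplexer hands each arriving payload to whatever is listening on
# that port; the IP layer below never looks at the port field.
handlers = {}

def listen(port, handler):
    """Register a process as the listener on a port at this endpoint."""
    handlers[port] = handler

def deliver(port, payload):
    """What a transport layer does once IP has delivered a packet to this host."""
    if port in handlers:
        return handlers[port](payload)
    return None  # no listener; a real stack would signal an error instead

listen(80, lambda data: "http got %d bytes" % len(data))
```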
Of course this discussion is similar in many ways to what REST, SOA, WS-* and message-oriented architects have been discussing as some of the core principles for achieving loose coupling and scalability: hide as much as you can behind the service endpoints. As an example of how resilient and flexible the Internet is in the face of changing requirements, Ganesh references IPsec and asks (rhetorically) whether the addition of that protocol required a major overhaul of the Internet:
... they just created another end-to-end protocol called ESP (Encapsulating Security Payload) and endpoints that understood it. Then they "layered" it between TCP and IP.
With further references to how the Internet "has beaten the Telco network even in telephony", Ganesh goes on to argue that REST (and the Web) isn't superior to other forms of distributed computing infrastructure.
The Internet is a platform for decentralised innovation. (Heck, the very web that you RESTafarians wave in people's faces is an example of the innovation that the Internet enables. HTTP is an end-to-end protocol, remember?)
But where does this leave WS-*? So far there has been a discussion about the Internet versus REST (really the Web), but nothing about Web Services or SOAP. However, after pushing SOAP-RPC into the pit of oblivion, Ganesh makes it clear that his definition of a SOAP message is "SOAP message with WS-Addressing headers", because the use of WS-Addressing gives you a message that can be independently routed, like an IP packet. It is this parallel that he then builds on, providing some nice diagrams to help illustrate the duality.
Now imagine a messaging infrastructure that knows how to route SOAP messages and nothing more. ... How do we innovate and build up higher levels of capability? Why, follow the Internet model, of course. Create protocols for each of them and embed all their messages into the SOAP message itself (specifically into the SOAP headers). There's no need to layer them as in the TCP stack, because reliable delivery, security, transactions, etc., are all orthogonal concerns. They can all co-exist at the same level within the SOAP header block ...
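A hedged sketch of what such a message might look like, built with Python's standard `xml.etree.ElementTree`. The endpoint URIs and message ID are invented for illustration; only the SOAP 1.2 and WS-Addressing namespaces are real.

```python
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
WSA = "http://www.w3.org/2005/08/addressing"

env = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(env, f"{{{SOAP}}}Header")
body = ET.SubElement(env, f"{{{SOAP}}}Body")

# WS-Addressing headers make the message independently routable, like an
# IP packet: the destination travels inside the message itself, not in
# the connection that happens to carry it.
ET.SubElement(header, f"{{{WSA}}}To").text = "http://example.org/orders"
ET.SubElement(header, f"{{{WSA}}}Action").text = "http://example.org/SubmitOrder"
ET.SubElement(header, f"{{{WSA}}}MessageID").text = "urn:uuid:1234"

# Security, reliability and transaction headers would sit alongside these,
# all at the same level in the Header block - orthogonal, not layered.
xml_bytes = ET.tostring(env)
```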
Given this approach, we end up with what the entry calls "plumbing required for a loosely-coupled service ecosystem". Obviously this isn't sufficient for building applications based on SOA (as with some others, Ganesh does seem to equate SOA with Web Services), but XML comes to the rescue:
So now define document contracts using an XML schema definition language and a vocabulary of your choice (for Banking, Insurance or Airlines), stick those conforming XML documents into the body of the SOAP messages we've been talking about, and ensure that your message producers and consumers have dependencies only on those document contracts. Ta-da! Loosely coupled components! Service-Oriented Architecture!
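A minimal sketch of that producer/consumer arrangement, assuming a hypothetical banking vocabulary (the `urn:example:banking` namespace and its element names are invented for illustration):

```python
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
BANK = "urn:example:banking"  # hypothetical vocabulary namespace

# Producer side: build a domain document conforming to an agreed contract
# and place it inside the SOAP Body.
env = ET.Element(f"{{{SOAP}}}Envelope")
body = ET.SubElement(env, f"{{{SOAP}}}Body")
transfer = ET.SubElement(body, f"{{{BANK}}}Transfer")
ET.SubElement(transfer, f"{{{BANK}}}Amount").text = "100.00"
ET.SubElement(transfer, f"{{{BANK}}}Currency").text = "EUR"

# Consumer side: depends only on the document contract, not on how (or by
# whom) the message was produced - the loose coupling Ganesh describes.
doc = env.find(f"{{{SOAP}}}Body/{{{BANK}}}Transfer")
amount = doc.find(f"{{{BANK}}}Amount").text
```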
If only it were that easy in the real world of standards, but you get the general gist. And if it is that easy, the obvious question is: why aren't we there yet? Ganesh believes six reasons lie behind the current state of affairs:
  1. The legacy of SOAP-RPC: "There's a school of thought that says the use of WSDL implies RPC even today, but I'm content with the wrapped document/literal style that takes us away from blatant RPC encoding".
  2. The lack of standards around bindings for SOAP other than HTTP.
  3. The need to tunnel through firewalls, which is related to the default SOAP/HTTP binding. "We need to define a port for SOAP itself, and get the firewalls to open up that port. No more flying HTTP Airlines just because we like the Port 80 decor."
  4. "Vendors of centralised message broker software have been pretending to embrace the SOAP/WS-* vision ("We're on the standards committees!") but in reality they're still peddling middleware, not endpointware."
  5. The dominance of closed-source WS-* stacks in the industry.
  6. The fact that many of the critical WS-* specifications (transactions, security, reliability etc.) have only recently come to fruition (despite being used and in development for 5 years).
Finishing up, he believes that there are only really two valid approaches to SOA, which are REST and "SOAP-messaging-using-Smart-Endpoints" and that REST isn't the superior approach.


Ignores Metcalfe/Reed's Laws by John Heintz

The argument Ganesh puts forward (IP messages on a resilient network) is a great one. It is absolutely true that REST (and any further constrained architecture) has key trade-offs and limitations.

What this leaves out is the disconnectedness of IP-only services. Both Metcalfe's and Reed's Laws point out that the value of a network of systems/services isn't in the point-to-point connections alone, but in how many possible and realized groups of communicating nodes there are.

REST imposes constraints (uniform interface and linked hypermedia data) exactly to enable connections between any newly added services.

Re: Ignores Metcalfe/Reed's Laws by Stefan Tilkov

One idea that I'm actually quite happy with would be SOAP over TCP (which is yet unstandardized) -- this would clearly establish that it's not abusing HTTP.

Re: Ignores Metcalfe/Reed's Laws by Udi Dahan

+1 on the disconnected aspects. Going to UDP and RTP seems a lot more interesting than TCP, in terms of standardization.

In terms of the REST vs MEST, I have a blog post up where I relay a conversation I had with Benjamin Carlyle on the topic. Take a look and tell me what you think:

udidahan.weblogs.us/2007/05/01/does-rest-simpli...

Re: Ignores Metcalfe/Reed's Laws by Dan Diephouse

Practicality gets in the way, though, due to this lovely thing known as firewalls. (Which actually causes problems for well-done RESTful applications as well.)

I really don't think SOAP is quite as evil as people make it out to be. Use SOAP if you need a transport-neutral, message-oriented solution. Otherwise leave it alone. If there weren't firewalls, I highly doubt we'd be using SOAP/HTTP at all. But for now, we're forced to tunnel through it.

Then there is another practical aspect in that it is just too damn easy to take a POJO and do RPC with it. While that is insanely evil, I find myself not caring a lot of the time (my evil quotient is out of control). For instance, I developed an application where I had control over both endpoints - one was Java and one was a mobile .NET app. It was wayyy easier to develop a SOAP/RPCish app than a RESTful one thanks to the miracle of wsdl2java. I really wouldn't have received too many benefits if I had gone down a more RESTful route. Caching was a non-issue as it was a pretty low-bandwidth application. Linkability was a non-issue because no one was ever going to link to this stuff - it was for the mobile devices only. Etc.

OK, I'm rambling and I'm not sure these thoughts are that connected. Cheers :-)

Re: Ignores Metcalfe/Reed's Laws by Paul Beckford

I agree that the focus on HTTP probably comes down to firewalls, but I also think the concept of a uniform connector and self describing data is a sound one if you want to maximise connectedness.

Mr Reed has gone on to work on Croquet. Croquet takes another slant and relies on object replication and synchronised message broadcasting. This is done on a peer-to-peer basis. It still has a uniform connector, URIs and replicated code on demand as "self-describing data", but is more scalable than the web because it doesn't rely on central servers:

www.opencroquet.org/

Tradeoffs by Kirstan Vandersluis

Excellent article by Ganesh, and entertaining too! I think John Heintz raises an important point: that REST has key trade-offs and limitations. I believe the WS-* standards are better thought out and more extensible, as Ganesh explains in good detail. The trade-off I see in the REST vs. WS-* debate is that REST is much more expedient for a developer. At the implementation level, the HTTP client libraries and server-side support required for REST are ubiquitous, mature, and easily accessed by any programmer. WS-* implementations are much murkier in terms of understandability and availability. Until programmers can download an Apache version of WS-* client libraries, where the complexity is abstracted away, I believe REST will continue to garner favor from programmers.

Correction: SOA and Web Services by Ganesh Prasad

Mark, you wrote:

"as with some others, Ganesh does seem to equate SOA with Web Services"

Not true. You may recall that towards the end of my post, I said, "In short, I think there are only two valid approaches to building Service-Oriented Architectures today - (1) REST and (2) SOAP-messaging-using-Smart-Endpoints"

And that's just the technology aspect. SOA has business aspects as well, such as Business Modelling and Business Process Re-engineering.

Ganesh

Re: Correction: SOA and Web Services by Mark Little

Ganesh, you misunderstood me, but your follow up just emphasizes what I was trying to say anyway:

"In short, I think there are only two valid approaches to building Service-Oriented Architectures today - (1) REST and (2) SOAP-messaging-using-Smart-Endpoints"

SOA is not a technology. It does not come in a shrink-wrapped box. It's as much about the people who deploy applications, develop them, lay down the principles behind how the businesses work, etc. Therefore, SOA does not have to be tied to SOAP any more than REST has to be tied to HTTP. It is just as easy to write SOA-"compliant" applications in Web Services as it is to write traditional CORBA/DCOM-style applications in it. There are many good examples of SOA that were developed way before SOAP was even a glimmer in Don Box's eye. People are still developing SOA today without a single piece of SOAP in their bathtubs.

Even *more* extensible! by Mark Nottingham

When I first read the linked article, I couldn't decide if it was merely insulting to the intelligence, or actual sophistry.

Of course TCP is more "fundamental" than HTTP. So is IP. Ethernet even more so.

Of course it's more extensible and flexible. So is IP. Ethernet even more so.

Strangely, I don't see WS-* advocates running around saying that we should be holding Ethernet up as a paragon of virtue, since it's obviously so much better than HTTP. Not that someone won't try...

HTTP and REST will certainly constrain their users, and that's the point; any good architecture will, just as using any framework or toolkit will constrain what you can do with it. The point is to offer enough value in the tools that you give people for a broad enough selection of use cases that they'll be able to leverage it and avoid re-inventing the wheel time and again; it's not to take over the world. This is what standards do. This is indeed what WS-* does; it just picked different tradeoffs, and evolved under different conditions.

I would absolutely love to impose mandatory IETF participation for all who espouse WS-*; just about every insight that they have (e.g., message-orientation, end-to-end, self description) is a reflection of what the IETF has been doing for decades, and isn't revolutionary; it's just common sense.

/rant

P.S. I weary of seeing people continue to hold extensibility up as the yardstick of quality. Yes, extensibility is necessary in distributed systems, to ensure that you don't paint yourself into a corner, but it's hardly the point of them.


InfoQ.com and all content copyright © 2006-2013 C4Media Inc.