Is The Internet More Fundamental Than REST?

Just when some people thought the debate had reached an impasse or was over, Ganesh Prasad has tried to add fuel to the discussion by suggesting that there is something more fundamental (and better) than REST. As he says, the debate has been going round in circles for a while:
Though I like REST and consider it a very elegant model for SOA, it's a little tiresome to hear day in and day out that it's so much more elegant than the SOAP-based Web Services model. In fact, I'm getting so tired of this shrill posturing that I'm going to stick it to the RESTafarians right now, in their own style.
He then outlines some of the core principles that RESTafarians use to back up their arguments:
REST exploits the features of that most scalable application there is - the web, right? REST doesn't fight the features of the web, it works with them, right? HTTP is not just a transport protocol, it's an application protocol, right? URIs are the most natural way to identify resources, HTTP's standard verbs are the most natural way to expose what can be done with resources, and hyperlinks are the most natural way to tie all these together to expose the whole system as a State Machine, right??
However, in his view, with the Internet being older and more extensible, it is also more scalable and resilient than REST. Working with the Internet model ("it's about message passing between nodes using a network that is smart enough to provide resilience, but nothing more") pushes you towards:
... the Internet philosophy is "Dumb network, smart endpoints" (remembering of course, that the network isn't really "dumb", just deliberately constrained in its smartness to do nothing more than resilient packet routing). And the protocol used by the Internet for packet routing is, of course, Internet Protocol (IP).
Adding a new capability is as simple as:
Just create your own end-to-end protocol and communicate between nodes (endpoints) using IP packets to carry your message payload. You don't need to consult or negotiate with anyone "in the middle". There is no middle.
The distinction between the low-level communication protocol and the higher-level "application" stack is made clear when Ganesh discusses the role of TCP in the Internet (UDP is once again consigned to the bin of history):
TCP is interpreted at the endpoints. That's why the networking software on computers is called the TCP stack. Each computer connected to an IP network is an endpoint, identified by nothing more than - that's right - an IP address. Processing of what's inside an IP message takes place at these endpoints. The Internet (that is, the IP network) is completely unaware of concepts such as sockets and port numbers. Those are concepts created and used by a protocol above it. The extensibility and layered architecture of the Internet enables the operation of a more sophisticated networking abstraction without in any way tampering with the fundamental resilient packet-routing capability of the core Internet.
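The point that sockets and ports live entirely at the endpoints can be seen in a few lines of Python. This is a minimal loopback sketch, not anything from the article: two UDP sockets share one IP address, and it is the endpoint's own protocol stack, not the network, that tells them apart by port number.

```python
import socket

# Two "applications" on the same endpoint (one IP address), distinguished
# only by port numbers. Ports are interpreted by the UDP/TCP stack at the
# endpoint; IP routing between hosts never sees them.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = receiver.getsockname()            # (ip, port) identifies this socket

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello endpoint", addr)   # routed by IP address, demultiplexed by port

data, _ = receiver.recvfrom(1024)
print(data)                              # b'hello endpoint'
sender.close()
receiver.close()
```

Swapping UDP for TCP, or for a protocol of your own, changes nothing for the routers in between, which is exactly the layering the quote describes.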
Of course this discussion is similar in many ways to what REST, SOA, WS-* and message-oriented architects have been discussing as some of the core principles for achieving loose coupling and scalability: hide as much as you can behind the service endpoints. As an example of how resilient and flexible the Internet is in the face of changing requirements, Ganesh references IPsec and asks (rhetorically) whether the addition of that protocol required a major overhaul of the Internet:
... they just created another end-to-end protocol called ESP (Encapsulating Security Payload) and endpoints that understood it. Then they "layered" it between TCP and IP.
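The "layering" he describes is just nested encapsulation. The toy sketch below (invented header strings, not real packet formats) shows how ESP slots in between TCP and IP: each layer wraps the payload of the one above, and routers only ever parse the outermost IP header, which is why no network change was needed.

```python
# Toy encapsulation sketch -- the header strings are illustrative,
# not real TCP/ESP/IP wire formats.
def wrap(header: str, payload: bytes) -> bytes:
    """Prepend a layer's header to the payload of the layer above it."""
    return header.encode() + b"|" + payload

app_data    = b"GET /resource"
tcp_segment = wrap("TCP dst-port=80", app_data)       # endpoint concept: ports
esp_packet  = wrap("ESP spi=42", tcp_segment)          # new end-to-end protocol
ip_packet   = wrap("IP dst=192.0.2.1", esp_packet)     # all the network ever reads

print(ip_packet)
```

Only the endpoints that understand ESP need to change; the packet-routing core keeps reading the outer IP header exactly as before.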
With further references to how the Internet "has beaten the Telco network even in telephony", Ganesh goes on to illustrate how REST (and the Web) isn't superior to other forms of distributed computing infrastructure.
The Internet is a platform for decentralised innovation. (Heck, the very web that you RESTafarians wave in people's faces is an example of the innovation that the Internet enables. HTTP is an end-to-end protocol, remember?)
But where does this leave WS-*? So far there has been a discussion about the Internet versus REST (really the Web), but nothing about Web Services or SOAP. However, after pushing SOAP-RPC into the pit of oblivion, Ganesh makes it clear that his definition of a SOAP message is "SOAP message with WS-Addressing headers", because the use of WS-Addressing gives you a message that can be independently routed, like an IP packet. It is this parallel that he then builds on, providing some nice diagrams to help illustrate the duality.
Now imagine a messaging infrastructure that knows how to route SOAP messages and nothing more. ... How do we innovate and build up higher levels of capability? Why, follow the Internet model, of course. Create protocols for each of them and embed all their messages into the SOAP message itself (specifically into the SOAP headers). There's no need to layer them as in the TCP stack, because reliable delivery, security, transactions, etc., are all orthogonal concerns. They can all co-exist at the same level within the SOAP header block ...
Given this approach, we end up with what the entry calls "plumbing required for a loosely-coupled service ecosystem". Obviously this isn't sufficient for building applications based on SOA (as with some others, Ganesh does seem to equate SOA with Web Services), but XML comes to the rescue:
So now define document contracts using an XML schema definition language and a vocabulary of your choice (for Banking, Insurance or Airlines), stick those conforming XML documents into the body of the SOAP messages we've been talking about, and ensure that your message producers and consumers have dependencies only on those document contracts. Ta-da! Loosely coupled components! Service-Oriented Architecture!
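The shape of such a message can be sketched concretely. The following Python fragment builds a SOAP 1.2 envelope with WS-Addressing headers (the combination Ganesh treats as the analogue of a routable IP packet) and puts a domain document in the body; the banking vocabulary and endpoint URLs are invented for illustration, not taken from any real contract.

```python
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"   # SOAP 1.2 namespace
WSA = "http://www.w3.org/2005/08/addressing"       # WS-Addressing 1.0 namespace
BANK = "http://example.org/banking"                # hypothetical domain vocabulary

env = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(env, f"{{{SOAP}}}Header")
body = ET.SubElement(env, f"{{{SOAP}}}Body")

# WS-Addressing headers make the message independently routable,
# carrying its destination with it like an IP packet does.
ET.SubElement(header, f"{{{WSA}}}To").text = "http://example.org/accounts"
ET.SubElement(header, f"{{{WSA}}}Action").text = "http://example.org/OpenAccount"

# The body holds a document conforming to the agreed (here, made-up) contract;
# producers and consumers depend only on this document, not on each other.
doc = ET.SubElement(body, f"{{{BANK}}}OpenAccountRequest")
ET.SubElement(doc, f"{{{BANK}}}Customer").text = "Jane Doe"

print(ET.tostring(env, encoding="unicode"))
```

Orthogonal concerns such as reliability or security would, on this model, simply add further header blocks alongside the WS-Addressing ones rather than forming a layered stack.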
If only it were that easy in the real world of standards, but you get the general gist. If it really is that simple, though, the obvious question is: why aren't we there yet? Ganesh believes six reasons lie behind the current state of affairs:
  1. The legacy of SOAP-RPC: "There's a school of thought that says the use of WSDL implies RPC even today, but I'm content with the wrapped document/literal style that takes us away from blatant RPC encoding".
  2. The lack of standards around bindings for SOAP other than HTTP.
  3. The need to tunnel through firewalls, which is related to the default SOAP/HTTP binding. "We need to define a port for SOAP itself, and get the firewalls to open up that port. No more flying HTTP Airlines just because we like the Port 80 decor."
  4. "Vendors of centralised message broker software have been pretending to embrace the SOAP/WS-* vision ("We're on the standards committees!") but in reality they're still peddling middleware, not endpointware."
  5. The dominance of closed-source WS-* stacks in the industry.
  6. The fact that many of the critical WS-* specifications (transactions, security, reliability etc.) have only recently come to fruition (despite being used and in development for 5 years).
Finishing up, he believes that there are only really two valid approaches to SOA, which are REST and "SOAP-messaging-using-Smart-Endpoints" and that REST isn't the superior approach.