How REST replaced SOAP on the Web: What it means to you

| Posted by Ross Mason on Oct 20, 2011. Estimated reading time: 7 minutes |


It's been slowly creeping up on us, creating exciting new possibilities for our applications: APIs are changing the face of the Web. Since 2005, Programmable Web has been tracking the SOAP and REST APIs available to the public. In 2005 it tracked 105 APIs, the prominent ones being from Amazon, Google, Salesforce and eBay. By 2008 this number had grown six-fold to 601 APIs, with social and traditional media seeing value in opening up their data to third parties. By the end of 2010, developers had over 2,500 APIs at their disposal. While forward-thinking companies like Zappos published REST APIs, we also saw government and brick-and-mortar retail join the fray: Tesco allows developers to order groceries, Instagram created a Twitter for pictures, Face offers facial recognition as a service, and you can create telephony applications with a few calls to Twilio. In 2011 we're seeing the number of public APIs climb towards 5,000; there has never been a better time to build killer applications.


source: Programmable Web

The SOA Holy Grail

Anyone working on enterprise systems in the last 10 years will remember that the initial tenets of Service Oriented Architecture were to decouple applications and to provide well-defined service interfaces that can be reused by applications and composed into business processes. The idea of reuse and composition made SOA an attractive proposition that sent thousands of organizations on a very challenging treasure hunt. We have since read SOA's obituary and its resurrection, with many stories of woe peppered with some success, but very few achieving the holy grail of SOA. Meanwhile, the Web has essentially become a service-oriented platform, where information and functionality are available through an API; the Web succeeded where the enterprise largely failed.
This success can be attributed to the fact that the Web has taken a decentralized approach and adopted less stringent technologies to become service oriented. Many early APIs were written using SOAP, but REST is now the dominant force (though some APIs are more REST than others). The publication of REST APIs has been increasing rapidly.

source: Programmable Web

Some offer both SOAP and REST APIs, but this practice has been on the decline and REST is now preferred for most new APIs.

source: Programmable Web

XML or JSON

One of the reasons REST has been favored on the Web is client accessibility. SOAP was defined with the enterprise in mind, and while the protocol is platform agnostic, SOAP XML is verbose and often painful to use in web technologies like JavaScript and Ruby. JSON is favored for its more compact representation; it is easier to read and is the native data format of JavaScript. Interestingly, newer APIs that support only JSON are on the rise: 45% of APIs now support JSON, with many new APIs offering JSON as the only data format.

source: Programmable Web
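The verbosity gap is easy to see side by side. The snippet below compares a SOAP envelope with a JSON document carrying the same hypothetical "get user" payload; neither is taken from a real API, they simply illustrate the difference in weight:

```python
import json

# A hypothetical "get user" request wrapped in a SOAP 1.2 envelope.
soap_payload = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
    '<soap:Body><getUser><id>42</id></getUser></soap:Body>'
    '</soap:Envelope>'
)

# The same information expressed as JSON.
json_payload = json.dumps({"getUser": {"id": 42}})

print(len(soap_payload), "bytes of SOAP vs", len(json_payload), "bytes of JSON")
```

Even before adding WS-* headers, the SOAP version is several times the size of the JSON one, and the JSON parses directly into native structures in JavaScript, Ruby or Python.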

REST Implementation Woes

The biggest challenge with the shift to APIs is the nature of REST. Representational State Transfer is an architectural style for defining an interaction model over HTTP, where HTTP verbs map to service operations for doing things like listing all users, updating an account or deleting an order entry. These principles came from Roy Fielding's doctoral dissertation, published in 2000. Since then, REST in all its forms has taken a firm grip on software development. The biggest challenge is that the original dissertation only provided a set of constraints; it did nothing to prescribe a URL scheme, versioning, authentication and authorization, error handling (HTTP codes are not sufficient) or even the correct way to pass parameters to a RESTful resource. RESTafarians may shut me down at this point since there are strong opinions here, but observing the wildly different opinions out there, there is no agreed way to do REST.
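The verb-to-operation mapping described above can be sketched as a small routing table. The resource paths and operation names here are illustrative conventions, not taken from any particular API:

```python
# Conventional mapping of (HTTP verb, resource path) to service operations.
routes = {
    ("GET",    "/users"):    "list all users",
    ("PUT",    "/users/42"): "update an account",
    ("DELETE", "/orders/7"): "delete an order entry",
}

def dispatch(verb, path):
    """Resolve an incoming request to the operation it denotes."""
    return routes.get((verb, path), "404 not found")

print(dispatch("DELETE", "/orders/7"))  # → delete an order entry
```

Note that the constraints say nothing about what the paths should look like or how parameters, versions and credentials travel, which is exactly where real-world APIs diverge.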

The right way to build a RESTful architecture is not well defined, and anyone who has worked with third-party REST APIs will have seen that there are many interpretations of REST, which makes working with different APIs difficult. These include:

• Inconsistent authentication, even between APIs from the same provider
• Inconsistent use of HTTP verbs; REST falls foul of human error and misunderstanding like everything else
• Inconsistent use of HTTP return codes and no well-defined scheme for handling errors; some APIs have great error handling, others have almost none, and HTTP return codes alone are not enough
• Varied URI schemes that don't fit the REST model
• No agreed way to do versioning, yet it is so important given that a REST API is the gateway to an application
• Hashing and signing parameters for a REST API call is often annoying and fiddly, and never consistent between different APIs
• Request and result messages are often not consistent between services, which makes queries and object binding more challenging; there are no clear guidelines for creating DTOs (data transfer objects)
• Unlike SOAP web services, there is usually no contract, which means REST services are not self-describing and things can change on you without warning
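Two of the inconsistencies above, versioning and authentication, can be made concrete with a sketch. Everything here (provider names, header names, URL schemes) is hypothetical; the point is only that a client ends up special-casing each provider:

```python
# Two imaginary providers exposing the "same" call with different
# versioning and authentication conventions.
def build_request(provider, token):
    if provider == "alpha":
        # Version in the URL path, credentials as a query parameter.
        return {"url": "https://api.alpha.example/v2/users?access_token=" + token,
                "headers": {}}
    if provider == "beta":
        # Version and credentials both carried in custom headers.
        return {"url": "https://api.beta.example/users",
                "headers": {"X-API-Version": "2",
                            "Authorization": "Bearer " + token}}
    raise ValueError("unknown provider: " + provider)

print(build_request("alpha", "t0k3n")["url"])
```

Multiply this branching by every provider you consume, then add per-provider error formats and signing schemes, and the integration code quickly swamps the application code.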

The Developer API challenge

Any developer who has worked with public APIs knows that you can code to these APIs directly, but it can be awkward and frustrating. This is fine if you are just integrating with Facebook, but when you start composing your applications from multiple APIs you need a new approach. Many API vendors have made developers' lives easier by providing clients in PHP, Ruby, and Java. However, your mileage may vary with these clients, since many are not maintained.

Embracing APIs

With public APIs doubling every year, developers cannot ignore the value of these new capabilities. Developers need help to iron out some of the REST wrinkles that make one API very different from another. Often the API clients don't meet expectations, which has led to frameworks such as API Smith emerging, as well as API services such as Gnip that provide a consistent interface to a wealth of social media APIs. Additionally, companies such as Apigee and Mashery offer tools for exploring APIs.

Another Way

Our approach to embracing APIs is to provide a consistent interaction model.  Mule Cloud Connect provides such a model.  APIs are represented as simple objects with meta-data configured using annotations that handle the URI scheme, authentication, signing, hashing, session management and even streaming.
These Cloud Connectors can then be orchestrated in flows to allow you to do things like filter social media data to send updates to your phone, provide automated voice access to your billing account or back up your CRM data to a database.
Of course, you need a place to run these flows. Mule iON is an integration Platform as a Service (iPaaS) that allows developers to compose API functionality and publish the results to their own applications. Mule iON acts as the integration layer for your application, removing the integration logic from the application code.

The value of iPaaS is that developers can decouple integration logic from their application. If you integrate with multiple APIs you'll need custom code in your application for each API, and if any one of those APIs changes, you'll need to update the application code too. With an iPaaS, all that logic can be delegated to an integration layer that is much better suited to working with different APIs, handling things like security, retries, session management, throttling and errors. The iPaaS can then publish the results (a mashup) of data or functionality to mobile or web applications, whether traditionally hosted or running on a PaaS such as Heroku, force.com, CloudBees or Azure.

The proliferation of APIs on the Web means developers have a treasure trove of new functionality and data to incorporate into their applications. New applications increasingly make use of these APIs to provide more context and a more interesting user experience. However, due to the lack of standardization of APIs, working with different providers is a real challenge. Developers should be looking for ways to keep their application code clean by removing integration logic and delegating it to a platform tailored to make integration much easier.

Ross Mason is the CTO and Founder of MuleSoft. He founded the open source Mule® project in 2003. Frustrated by integration "donkey work," he set out to create a new platform that emphasized ease of development and re-use of components. He started the Mule project to bring a modern approach, one of assembly, rather than repetitive coding, to developers worldwide. Now, with the MuleSoft team, Ross is taking these founding principles of dead-simple integration to the cloud with Mule iON, the world's first integration platform as a service (iPaaS). Ross holds a BS (Hons) in Computer Science from Bristol, UK and has been named in InformationWeek's Top 10 Innovators & Influencers and InfoWorld's Top 25 CTOs. Twitter: @rossmason, @mulejockey


eMule won't replace missing standardization

While eMule is surely a good tool, I just can't see how it provides any solution to the problem of incompatible interfaces.

You always have two interfaces:
[desired interface] <------ [actual interface]

Now, if you're in charge of both, it's easy: you shape your actual interface according to your desire, hence
actual interface = desired interface.

Also, if you use a single API, like Facebook apps, and you're quite confident it won't change, or that a change would mean extensive rework anyway, you can shape your desires so that

desired interface = actual interface

(Note, in our little calculus, equality is not symmetrical; I should rather have used a Pascal-ish := let-it-be operator)

Otherwise, you need to take a glue:

[desired interface] <-- implements -- [Glue] <-- uses -- [actual interface]

OK, let's say eMule is this glue. What will happen?

[desired interface] <-- [Glue] -- [emule's actual interface] <-- [emule's desired interface] <-- [Glue] <-- [actual interface]

So, you have two glues!

Of course, eMule wants that
[your desired interface] = [eMule's actual interface]

Which makes sense until you start to realize you still have to write the glue between eMule and the service you use.

Of course, eMule can start to provide glues for the most used APIs, like Spring does with their Spring Social. It's funny to see the officially same OAuth protocol implemented basically separately for each provider in it, as there's so much variation...

Still, you'll have a separate application (not a library!) running, which may or may not cause you headaches. Maybe you want to concentrate all the 3rd-party API traffic through one component; maybe you don't want a single point of failure and would prefer each of your application servers to request data separately instead.

The only way to solve this problem is to make a standard built on REST. There's simply no other solution.

Re: eMule won't replace missing standardization

eMule? Clearly someone spends a little too much time indoors on the file sharing networks.

Re: eMule won't replace missing standardization

Ignoring the fact that eMule is a totally different tool, I'm going to assume you're talking about Mule ESB.

While I totally agree that in the situation you mentioned (two connected systems) it may not make sense to implement an ESB, what if there are 40 systems, each requiring connectivity with the others? That is hundreds of custom-coded connections, numerous protocols and file formats... and what if they go down? If one service were changed by its owner, you would need to rewrite up to 40 connectors. If, in this situation, you were plugged into an ESB, you would only need to write the glue between the service and the bus.

Imagine the time (and money) you would save bringing an ESB into this environment... Precious time that could be spent downloading on eMule :-)
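The arithmetic behind the "40 systems" scenario is easy to check: point-to-point integration needs a connection for every pair of systems, while a bus topology needs only one connection per system. A quick sketch:

```python
# Point-to-point integration between n systems needs a connection for
# every pair: n * (n - 1) / 2. A bus topology needs one per system.
n = 40
p2p_connections = n * (n - 1) // 2
bus_connections = n

print(p2p_connections)  # → 780
print(bus_connections)  # → 40
```

So the "hundreds of custom coded connections" above is, in the worst case, 780 versus 40 adapters onto the bus.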

Re: eMule won't replace missing standardization

Imagine the money I'd be saving if I got Mule ESB via eMule ;-)

Re: eMule won't replace missing standardization

Thanks for the comprehensive comment. I will make one correction upfront: eMule is a P2P file-sharing client; Mule is an integration platform.

I'm assuming you haven't used Mule before, but Mule is the 'glue'; it doesn't add more integration points. One of the benefits of Mule is that it doesn't mess with the message coming in; it just enables the user to change it if needed. So it really is:

[desired interface] <-- implements -- [Mule] <-- uses -- [actual interface]

You might ask why you need Mule to do this for you, when you could replace [Mule] with custom code. That would be called point-to-point (or P2P) integration, and the challenges of this approach are that you:

• Couple your applications to each other and/or external services. This is bad because when one system changes you have to change all the points where that system is integrated

• Integration platforms are built to make it easier to consume data and functionality from other systems. Mule provides a lot of support for handling different protocols, message formats, transactions, retries, error handling, monitoring, etc. You don't get that writing the code by yourself, and over time you'll be wishing you had the features of a platform like Mule

• Using Mule as the glue between different applications and APIs gives you a clean separation of concerns; in fact, Mule decouples transport, protocol and your logic from each other so that you can effect changes easily. Decoupling your application becomes a lot more important as you start consuming data and services from more applications, since any one of those application interfaces can (and does) change, so having a layer in between responsible for handling that change, as well as making it easier to orchestrate between systems, becomes very important.

For a quick introduction to decoupling and the benefits, I did a short video on ZDNet about SOA, the principles apply just as well for this discussion.

Cheers,

@rossmason


Re: eMule won't replace missing standardization

Couldn't help but laugh :-D

Of course Mule iON is a service, and like other platforms you just pay for what you use. Take a look at http://muleion.com.

Cheers,

@rossmason

Re: eMule won't replace missing standardization

I couldn't help but laugh :-D

Of course Mule iON is a platform that uses Mule under the covers (100% compatible). Like other platforms, there is a free tier and after that you pay for what you use. No hardware to install, no tuning, and no software to leech from eMule.

Cheers,

@rossmason

Nice overview, but REST is the answer, not proprietary middleware!

Anyone who knows anything about REST knows that this article (a) identifies a true issue, but (b) proposes an unsatisfactory solution.

The true issue is that the so-called "REST" APIs being discussed here are all incompatible because they're nothing to do with REST at all! They're just poor-man's Web Services: HTTP tunnelling.

Consider how the Web works: there's no "API" for each site you visit, there's just HTTP and HTML.

So a better solution is for all those "REST" APIs to be replaced by actual REST interfaces; to be made to work like the Web does, in other words!

First we need to define the JSON syntaxes for all the common data types. I suggest starting with: contacts based on vCard, events based on iCalendar, news based on Atom, publication info based on Dublin Core, etc.

This is the approach I'm proposing in the form of "OTS" or "Object Type Specifications". OTS will define these JSON formats and describe expected HTTP interactions in the same way that AtomPub does for Atom XML.

While waiting for API designers to get a clue, there is indeed value in providing a REST adaptor layer - servers that offer a REST interface to these HTTP APIs.

But we need to start thinking like the Web, and can start now!

If you've got an API you want to convert to REST using OTS, get in touch. See the OTS site for more info. It's early days for this project, so I'd be very happy to be contacted by potential contributors!

Duncan Cragg

Re: Nice overview, but REST is the answer, not proprietary middleware!

Anyone who knows anything about REST knows that this article (a) identifies a true issue, but (b) proposes an unsatisfactory solution.

The true issue is that the so-called "REST" APIs being discussed here are all incompatible because they're nothing to do with REST at all! They're just poor-man's Web Services: HTTP tunnelling.

Welcome to the Web! I understand the utopia dreamed up by Sir Tim and friends, but in reality that isn't what the Web is any longer. The Web is now a mix of content readable by humans and 4,000+ machine interfaces to applications that all do things differently. There is no longer a free flow of information; APIs are the gatekeepers.

Consider how the Web works: there's no "API" for each site you visit, there's just HTTP and HTML.

So a better solution is for all those "REST" APIs to be replaced by actual REST interfaces; to be made to work like the Web does, in other words!

There is unlikely to be a mass rip-and-replace on the Web. Even if the major APIs from SaaS, social media and infrastructure providers changed, there would be thousands of others that will not change. The problem with APIs is that once they are published they are very hard to change without breaking the clients. Since more and more companies are starting to see their API as the main traffic source, the likelihood of switching is minuscule.

First we need to define the JSON syntaxes for all the common data types. I suggest starting with: contacts based on vCard, events based on iCalendar, news based on Atom, publication info based on Dublin Core, etc.

I would love to see a common JSON data model, and like the idea of leveraging existing well known constructs like vCard and iCal. But having witnessed EDI, HL7 and others, the idea of one consistent model seems a distant and somewhat unobtainable goal.

This is the approach I'm proposing in the form of "OTS" or "Object Type Specifications". OTS will define these JSON formats and describe expected HTTP interactions in the same way that AtomPub does for Atom XML.

Of course, OTS is as proprietary as anything else. At least with Mule you have adoption, with many of the F2000 companies using the platform, and being open source everyone has access and visibility. Also, Mule adopts open standards. And if OTS became more than just a one-man band and got some real traction, we might consider adopting it as a de facto standard (one that becomes common because it gets a critical mass of use).

While waiting for API designers to get a clue, there is indeed value in providing a REST adaptor layer - servers that offer a REST interface to these HTTP APIs.

Are you basically saying almost every API out there sucks and the developers are clueless?

I'm saying that for any system there is going to be complexity, misalignment and sometimes blatant disregard for the end user, and with a platform as big as the Web, this holds even more true. I propose an integration layer to take this complexity out of your applications, to allow you to better control change (good or bad) and take advantage of all the great stuff behind APIs without tearing your hair out. As more REST APIs emerge you still need a platform to help compose these APIs into something more meaningful; REST doesn't help with that.

Cheers,

@rossmason

Re: Nice overview, but REST is the answer, not proprietary middleware!

Are you basically saying almost every API out there sucks and the developers are clueless?

The APIs are fit for their perceived, narrow purpose, and their designers are more or less clueless about how to design for usability, mashability and scale. Which is not too surprising - like you said, they don't have a REST model to work from, or any reason to do things like the Web.

So what is actually needed is for some bold people to propose something like OTS - a simple JSON/REST approach. And to implement something like a data browser, or shared client code at least, that delivers benefits of scale and mashability to site developers from exposing their data in a common way.

As more REST APIs emerge you still need a platform to help compose these APIs together into something more meaningful, REST doesn't help with that.

That's exactly what REST helps with - the Web is the visible evidence of REST's success. I could rewrite your sentence to show what I mean:

As more Web sites emerge you still need a platform to help compose these sites together into something more meaningful, HTTP and HTML don't help with that.

I don't think anyone would argue that you "need a platform" when put like that! Like I said, you may need API-to-REST adaptor servers for a while.

We need REST because those "emerging APIs" aren't REST - they're not even particularly of the Web. They're HTTP APIs.

Actually, it would be interesting to compare the growth of the Web with your API growth graphs.. :-)

Re: Nice overview, but REST is the answer, not proprietary middleware!

Mostly agree. As I have said before the very term REST API is an oxymoron. The whole point of REST is to get rid of APIs.

Firstly, while 'Web APIs' have replaced SOAP on the web, there is no consistent way of describing such services with metadata. The only one that comes close is OData (www.odata.org).

Just recently I had to access the 'REST' API of an enterprise micro-blogging service and was dumbfounded that none of the JSON structures were documented at all (some were quite complex). I spent almost three days trying to understand the various JSON formats! Perhaps the authors thought that because JSON is a standard media type, no explanation is needed :).

Secondly, 'proper' REST has not proven to be successful in practice (yet). I for one am not willing to put my eggs in the REST basket unless it is shown to work.

Re: Nice overview, but REST is the answer, not proprietary middleware!

Welcome to the world of NoContracts, touted by so many a few years back. In this video the Google Discovery API team defines REST as:

an API is a collection of:
- resources
- methods on resources
- parameters for each method

Examples of "methods" on resources?

So there is the URL "shortener" API; here are the "methods":
• get to retrieve the full URL from the shortened version (so far so good)
• insert to shorten a URL
• list (as a verb) to retrieve the URLs shortened by a user

>> Secondly, 'proper' REST has not proven to be successful in practice (yet).

No one is looking for a resource-oriented programming model (that was the bait, a diamond in the rough that the industry had conveniently missed). REST is just an excuse to do whatever you want to do, starting at CRUD. Even Google can define REST to be whatever they want it to be.

Re: eMule won't replace missing standardization

Hi,

So, first, sorry everyone for the mistake - the word Mule seemingly tripped a synapse in my brain which made me think eMule is a shortening for ESB Mule :).

Of course, I'm aware of the product, although none of my teams ever used it in production, despite evaluating the ESB concept (and the Mule software itself) multiple times. It's not Mule's fault; Mule is a pretty good ESB.

Let me explain what's my problem with this:

[desired interface] <-- implements -- [Mule] <-- uses -- [actual interface]

My problem is that while it looks perfect, the truth is:

[desired interface] <-- implements -- [custom glue written with Mule] <-- uses -- [actual interface]

The problem with an ESB is that sometimes it doesn't bring value to the architecture; that is, if I remove the "written with Mule" part, I get this:
[desired interface] <-- implements -- [custom glue] <-- uses -- [actual interface]

And you're back where you started.

Of course, putting all your 3rd party communication into one place is a good feature. But this one place can be vendor/communication-libraries/3rd_party/[servicename].jar if you manage to deploy your systems in sync to all your application servers.

Of course, when a change comes from a 3rd party, you either have to deploy that change to all of your application servers at once (changing this library and restarting them), or you can restart only a single application, the ESB server, changing a lib (or a script, or anything) there.

Which makes sense? Depends on context.

If your application is so reliant on 3rd parties that your application is useless without them (like, farmville w/o facebook), it can run on the appservers as well. If the 3rd parties just provide secondary functionality, it's better perhaps if it's a single component.

Also, network-wise, if your apps make a lot of traffic with the 3rd parties, you don't want them to be concentrated on a single point of failure.

The use of an ESB is not always a rational decision.

It has to be understood: an ESB makes a change in your deployment diagrams, but won't make a change in your logical component diagrams. You'll still need the glue; you'll still have a difference between the 3rd party's interface and your desired interface.

The only way to eliminate the glue is to make the actual and desired interfaces meet. As long as the actual interfaces are based on random understandings of Fielding's dissertation, extended with technical constraints, ease of use and actual beliefs, you still have to write glue.

You can pack a lot of glues (connectors) together commercially, you can provide them as Mule iON or Spring Social, you can give them away for free or for money, but it won't help you with the remaining 3,500 services.

Only if the remaining 3,500 services provide a rather standard interface, usable out of the box and virtually unchanged (e.g. using versioned clients), will it all work seamlessly; but then, for a lot of cases, the last thing you'd need is an ESB.

Re: eMule won't replace missing standardization

Hi,

So, first, sorry everyone for the mistake - the word Mule seemingly tripped a synapse in my brain which made me think eMule is a shortening for ESB Mule :).

You are not the first

Of course, I'm aware of the product, although none of my teams ever used it in production, despite evaluating the ESB concept (and the Mule software itself) multiple times. It's not Mule's fault; Mule is a pretty good ESB.

I'd argue it's the best, for lots of reasons, but I am biased :-)

Let me explain what's my problem with this:

[desired interface] <-- implements -- [Mule] <-- uses -- [actual interface]

My problem is that while it looks perfect, the truth is:

[desired interface] <-- implements -- [custom glue written with Mule] <-- uses -- [actual interface]

The problem with an ESB is that sometimes it doesn't bring value to the architecture; that is, if I remove the "written with Mule" part, I get this:

[desired interface] <-- implements -- [custom glue] <-- uses -- [actual interface]

And you're back where you started.

With that mode of thinking applied to the database tier everyone should be using raw SQL calls rather than using an ORM layer.
Mule is the integration layer that provides a lot of benefits for working with lots of different and incompatible application interfaces. You can code to these directly yourself, but then you are coding directly to the interfaces, and if you want error handling, retries, monitoring, security, transactions, etc., you need to build it all yourself. Mule does this for you; it's a similar concept to what ORM tools provide for managing CRUD operations against your database while giving the developer an easier object model to work with.

Of course, putting all your 3rd party communication into one place is a good feature. But this one place can be vendor/communication-libraries/3rd_party/[servicename].jar if you manage to deploy your systems in sync to all your application servers.

Of course, when a change comes from a 3rd party, you either have to deploy that change to all of your application servers at once (changing this library and restarting them), or you can restart only a single application, the ESB server, changing a lib (or a script, or anything) there.

Which makes sense? Depends on context.

In almost all situations it's better to isolate change in your architecture: putting your integration logic in a separate tier and decoupling your application from the systems it integrates with makes a lot of sense. The point of this article is that it's becoming common to integrate with lots of application interfaces, and the logic for this should be modeled in a separate tier, away from your application logic.

If your application is so reliant on 3rd parties that your application is useless without them (like, farmville w/o facebook), it can run on the appservers as well. If the 3rd parties just provide secondary functionality, it's better perhaps if it's a single component.

Also, network-wise, if your apps make a lot of traffic with the 3rd parties, you don't want them to be concentrated on a single point of failure.

Integration platforms like Mule are built to handle failures, since they often run in mission-critical environments where message loss is not an option. Of course, there are architecture decisions to be made here.

The use of an ESB is not always a rational decision.

Totally agree, and is the same for any technology. The decision should always be driven by the needs of the application.

It has to be understood: an ESB makes a change in your deployment diagrams, but won't make a change in your logical component diagrams. You'll still need the glue; you'll still have a difference between the 3rd party's interface and your desired interface.

The only way to eliminate the glue is to make the actual and desired interfaces meet. As long as the actual interfaces are based on random understandings of Fielding's dissertation, extended with technical constraints, ease of use and actual beliefs, you still have to write glue.

Yes, but it's the way you choose to implement the glue that will determine how brittle or hard to maintain your application will be.

You can pack a lot of glues (connectors) together commercially, you can provide them as Mule iON or Spring Social, you can give them away for free or for money, but it won't help you with the remaining 3,500 services.

Mule iON and others still help with generic cases as well. You still get the benefits of cross-cutting concerns such as error handling, data handling/marshalling, retries, idempotency, monitoring, alerts, connectivity, protocol support, etc

Only if the remaining 3,500 services provide a rather standard interface, usable out of the box and virtually unchanged (e.g. using versioned clients), will it all work seamlessly; but then, for a lot of cases, the last thing you'd need is an ESB.

The fact that there is no uniformity is the exact reason you need an integration platform. If all applications and services were compatible and aligned, there wouldn't be a 214bn integration space.

Cheers,

@rossmason


Re: eMule won't replace missing standardization

I just blogged about the topic of this thread; it boils down to loose coupling.

Cheers,

@rossmason