
Resource-Oriented Architecture: The Rest of REST

Posted by Brian Sletten on Dec 03, 2009

Series Introduction

Think for a moment, if you can, back to a time before the Web. Imagine trying to explain the impending changes to your hapless contemporaries. It is likely they would simply not be able to fathom the impacts that the Web's emergence would have on nearly every aspect of their lives. In retrospect, it feels like a tsunami caught us off-guard and forever altered the landscape around us. The reality is more pedestrian, however. It was a deliberate series of technical choices that built upon each other that yielded the results we have experienced.

Now, pause and reflect upon the idea that you are probably in a similar position to those incredulous pre-Web types you were just trying to enlighten. Unless you have been paying close attention, you are about to be caught off-guard again as it feels like a new wave crashes upon our economic, social, technological and organizational landscapes. While the resulting changes will feel like they occur overnight, the reality is that they have been in the works for years and are just now producing tangible results. This new wave is about a Web that has evolved beyond documents into Webs of Data, both personal and private. We will no longer focus on information containers, but on information itself and how it is connected.

This wave has been in the works for years and is again being driven by the deliberate adoption of specific choices and technologies. These choices are combining to solve the problems caused by the inexorable march of technological change, business flux, new and varied data sources and the ubiquitous, expensive and failure-prone integration efforts that have cost millions and delivered insufficient value. Web Services and Service-Oriented Architectures (SOA) were supposed to be part of the answer, but the elegance of their visions has been forever stained by the inelegance of their technical solutions.

The beauty is that we are not starting from scratch. We are building upon the technology we have in place to grow these data webs organically. We can wrap our databases, libraries, services and other content sources with a new set of abstractions that will help us off the treadmill we have been on. We are integrating the public Web of Data with our own, privately held data. The incremental adoption of these technologies is yielding new capabilities that will, in turn, unlock further capabilities.

This is the first article in a new series to highlight the evolution of information-oriented systems that got us to where we are and provide a roadmap to where we are going. Despite what it may seem on the surface, these choices are neither ad hoc nor esoteric, but rather foundational decisions based on a long tradition of academia and applied engineering.

We will start by revisiting the REpresentational State Transfer (REST) architectural style. Oft quoted and even more often misunderstood, this manner of building networked software systems allows us to merge our documents, data and information-oriented services into a rich, logical ecosystem of named resources. From there, we will introduce the vision of the Semantic Web and walk through its core technologies represented by a flexible and extensible data model and the ability to query it. We will see how to incorporate relational data, content from documents, spreadsheets, RSS feeds, etc. into a rich web of reusable content.

After we present the basics, we will walk through a variety of successful efforts building on these technologies and then return to reclaiming the vision promised to us by proponents of Web Services technologies. We will describe a process where we can achieve something of a Unified Theory of Information Systems; one that not only handles, but embraces the kind of technical and social change that has been painful and intractable to manage in the past.

There has been too much hype surrounding the Semantic Web, but there has also been a steady stream of quiet successes. This series will be a pragmatic guide into both new and familiar territory. We will connect the technologies in deeper ways than perhaps you have seen before. We will highlight events and actions by companies, government organizations and standards bodies that indicate that this is happening and it will change everything. We will show how subtle shifts in perspective, and the adoption of standards designed to facilitate change, can often make a very large difference in your system implementation.

The first step is to embrace a common naming scheme for all aspects of our infrastructure. A Service-Only Architecture usually ignores the data that flows through it. At the end of the day, our organizations care about information first and foremost. REST and the Web Architecture put this priority up front and lay the foundation for the remainder of our discussion.

The Rest of REST

It has become fashionable to talk about REpresentational State Transfer (REST) as something of a weapon in the War on Complexity. The enemies in this war, according to some, are SOAP and the Web Services technology stack that surrounds it. This Us vs. Them rhetoric brings passion to the table but rarely meaningful dialogue, so people remain confused as to the underlying message and why it is important. The goal is not to replace SOAP; the goal is to build better systems.

REST is not even a direct replacement for SOAP. It is not some kind of technology of convenience: a simple solution for invoking Web Services through URLs. The management of information resources is not the same thing as invoking arbitrary behavior. This confusion leads people to build "RESTful" solutions that are neither RESTful nor good solutions.

REST derives its benefits as much from its restrictions as it does from its resultant flexibility. If you read Dr. Roy Fielding's thesis (which you are encouraged to do), you will learn that the intent was to describe how the combination of specific architectural constraints yields a set of properties that we find desirable in networked software systems. The adoption of a uniform interface, the familiar Uniform Resource Locator (URL), contributes to the definition of REST, but is insufficient to define it. Likewise, interfaces that simply expose arbitrary services via URLs will not yield the same benefits we have seen so successfully in the explosion of the Web. It takes a richer series of interactions and system partitioning to get the full results.

Most people understand that REST involves requesting and supplying application state of information resources through URLs via a small number of verbs. You retrieve information by issuing GET requests to URLs, you create or update via POST and PUT, and remove information via DELETE requests.

This summary is not incorrect, but it leaves too much out. The omissions yield degrees of freedom that unfortunately often allow people to make the wrong decisions. In this gap, people create URLs out of verbs, which eliminates the benefit of having names for "things". They think REST is just about CRUD operations. They create magical, unrelated URLs that clients must know up front how to parse, losing the discoverability of the hypertext engine. Perhaps most unforgivably, they create URLs tied solely to particular data formats, making premature decisions for clients about the shape of the information.

Understanding the full implications of REST will help you avoid these problems; it will help you to develop powerful, flexible and scalable systems. But it is also the beginning of a new understanding of information and how it is used. Upon this foundation of Web architecture, the application of the remaining technologies of the Semantic Web will yield unprecedented power in how we interact with each other as individuals, governments, organizations and beyond. This is why we begin with a deeper dive into the parts of REST that many people do not understand and therefore do not discuss. These topics include the implications of:

  • URLs as identifiers
  • Freedom of Form
  • Logically-connected, Late-binding Systems
  • Hypermedia as the Engine of Application State (HATEOAS)

URLs as Identifiers

We have already established that most people know about URLs and REST. It seems clear that they understand that a URL is used for invoking a service, but it is not clear that they get the larger sense of a URL as a name for information. Names are how we identify people, places, things and concepts. If we lack the ability to identify, we lack the ability to signify. Imagine Abbott and Costello's famous "Who's on First?" skit on a daily basis. Having names gives us the ability to disambiguate and identify something we care about within a context. Having a name and a common context allows us to make reference to named things outside of that context.

The Uniform Resource Identifier (URI) is the parent scheme; it encompasses more specific schemes that differ in whether they include resolution information. Librarians and other long-term data stewards like names that will not change. A Uniform Resource Name (URN) is a URI that has no location information in it; nothing but name is involved. The good news is that these names will never break. The bad news is that there is no resolution process for them. An example of a URN is the ISBN of a book:

urn:isbn:0307346617

In order to find more information about this book, you would have to find a service that allows you to look up information based on the ISBN number.
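The structure of such a name is mechanical enough to take apart in a couple of lines. As a small sketch in Python:

```python
# Pulling apart the URN from above into its scheme, namespace identifier
# and namespace-specific string. Note there is nothing here to resolve;
# a lookup service is still needed to get at the book itself.
scheme, nid, nss = "urn:isbn:0307346617".split(":", 2)
print(scheme, nid, nss)  # urn isbn 0307346617
```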

If nothing about the context of our systems and information ever changed, we would probably always want to include resolution information in our resource names so we could resolve them. But anyone who has been handed a broken link knows we want longer-lived names for really important stuff. Looking at our history of using URLs, we have done some silly things when we created ones such as:

http://someserver.com/cgi-bin/foo/bar.pl
http://someserver.com/ActionServlet?blah=blah
http://someserver.com/foo/bar.php

The problem with these URLs is that the technology used to produce a result is irrelevant to the consumer of information. There is no good reason to create URLs like that. The focus should be on the information, not the technology. Implementation technologies change over time, and when you abandon one, any system that links to the Perl-, Servlet- or PHP-based URL will break. We will address some infrastructure to solve this problem in future articles; for now, we will just try to make careful choices in the names we give our information resources.

Despite being fragile, the URL scheme does allow us to disambiguate information references in a global context.

http://company1.com/customer/123456

is distinct and distinguishable from

http://company2.com/customer/123456

in ways that a decontextualized identifier like '123456' is not.

To ground the concept into a larger information systems framework, you can think of a URL as a primary key that is not specific to a particular database. We can make references to an item via its URL in dozens of different databases, documents, applications, etc. and know that we are referring to the same thing because we have a unique name in a global context. We will use this property in future discussions to describe and connect RESTful systems to other content and metadata.
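As a sketch of this idea, the records below are entirely hypothetical, but they show how the URL, acting as a database-independent primary key, lets us merge facts about the same customer from different sources without colliding with company2's own customer 123456:

```python
# Two hypothetical data sources that both describe customers. The full URL,
# unlike the bare identifier '123456', is unambiguous in a global context.
crm_records = {
    "http://company1.com/customer/123456": {"name": "Acme Corp"},
    "http://company2.com/customer/123456": {"name": "Bolt Ltd"},
}
billing_records = {
    "http://company1.com/customer/123456": {"balance": 1500},
}

# Merge everything we know, keyed by the globally unique name.
merged = {}
for source in (crm_records, billing_records):
    for url, fields in source.items():
        merged.setdefault(url, {}).update(fields)

print(merged["http://company1.com/customer/123456"])
# {'name': 'Acme Corp', 'balance': 1500}
```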

The next aspect of URLs that bears discussion is their universal applicability. We have a common naming scheme that allows us to identify:

  • documents (reports, blogs, announcements)
  • data (results, instance information, metadata)
  • services (REST!)
  • concepts (people, organizations, domain-specific terms)

We do not need to come up with a different mechanism to refer to each different category of things. A careful application of some specific guidelines allows us to blur the distinctions between these things which brings us to the last point for now about URLs. Not only are these names useful in order to refer to information we care about, but systems that receive these references can simply ask for them. The 'L' in URL (locator) gives us the capacity to resolve the thing, not knowing anything else about it. We can usually invoke the same basic operations on everything we can name. Issuing a GET request to a URL representing a document, some data, a service to produce that data or an abstract, non-network-addressable concept all work fundamentally the same way. For those things we have the permission to manipulate, we can also create, modify or delete them using similar means.
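To make the uniform interface concrete, here is a minimal sketch using only Python's standard library; the resource path and its content are invented for illustration. The same small set of verbs works against anything we can name: GET resolves the name, DELETE removes the resource, and afterwards the name simply stops resolving:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy in-memory store of named information resources, keyed by path.
resources = {"/report/sales/2009/qtr/3": b"Q3 sales data"}

class ResourceHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = resources.get(self.path)
        if body is None:
            self.send_response(404)
            self.send_header("Content-Length", "0")
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_DELETE(self):
        if resources.pop(self.path, None) is None:
            self.send_response(404)
            self.send_header("Content-Length", "0")
        else:
            self.send_response(204)  # removed; nothing to return
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ResourceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/report/sales/2009/qtr/3")
r1 = conn.getresponse()
body = r1.read()                  # 200: the name resolves to a representation
conn.request("DELETE", "/report/sales/2009/qtr/3")
r2 = conn.getresponse()
r2.read()                         # 204: the resource is gone
conn.request("GET", "/report/sales/2009/qtr/3")
r3 = conn.getresponse()
r3.read()                         # 404: the name no longer resolves
server.shutdown()
```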

Freedom of Form

Our experience of the Web has been somewhat passive with respect to the shape of information. When we click on a link, we expect the content to come back in a particular form, usually HTML. That is fine for many types of information, but the architecture supports a much more conversational style allowing clients to request information in a preferred form.

To understand why this is useful, consider a company's sales report. It is easy to imagine this being useful to executives, sales people, other employees, clients and investors as an indication of how a company is performing. A possible name for such a report could include the year and the quarter in the URL:

http://company1.com/report/sales/2009/qtr/3

We might contrast this with a sales report for the month of March:

http://company1.com/report/sales/2009/month/3

Both are good, logical names that are unlikely to break over time. It is a compelling vision that people could simply type such a URL into a browser and get the information they seek rendered as HTML. The reports could be bookmarked, e-mailed, linked to, etc.; all the things we love about the Web.

The problem is that the information is locked into its rendered form (until we introduce technologies like GRDDL and RDFa later in this series!). We used to try to scrape content from pages, but gave up in disgust. As the page layout changes, our scripts break.

If you were a programmer for this company and wanted to get to the information directly, you might like to request it as XML. You could get back raw, structured data that you could validate against a schema. HTTP and REST make this trivial as long as the server knows how to respond. By passing an "Accept: application/xml" header with your request, you could indicate a preference (or requirement) for XML. On success, you will get back a byte stream with a MIME type indicating that your request has been honored. On failure, the server will indicate via a 406 (Not Acceptable) response that it cannot help you. In that case, you might want to contact the department responsible for this information and request that they add the support you need; something they can do without breaking any existing clients. If you were a business analyst, you might think that XML has sharp points and can hurt you, so you might like to request the report as a spreadsheet, a format that is easily incorporated into your existing workflows, tools and processes.

The point is that the logical name for the report is easily converted into various forms at the point it is requested. It is equally easy to run systems that accept modifications back in the various forms. Clients have no visibility into how the information is actually stored; they just know that it works for them. This freedom is wholly underused by people building RESTful systems. When they stand up a service and decide that it will only return XML, they miss the potential value REST has to an organization.
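The server-side half of that conversation can be sketched as a small dispatch on the Accept header. This is deliberately simplified; production implementations also honor q-values and wildcards such as text/*:

```python
# A deliberately simplified Accept-header matcher. Real content negotiation
# also weighs q-values and partial wildcards such as text/*.
def negotiate(accept_header, available):
    for item in accept_header.split(","):
        media_type = item.split(";")[0].strip()  # drop parameters like ;q=0.9
        if media_type == "*/*":
            return available[0]  # fall back to the server's preferred form
        if media_type in available:
            return media_type
    return None  # no acceptable form: respond 406 Not Acceptable

# Hypothetical forms in which the sales report can be rendered.
formats = ["text/html", "application/xml", "application/vnd.ms-excel"]
chosen_xml = negotiate("application/xml", formats)
chosen_html = negotiate("text/html,application/xhtml+xml", formats)
chosen_none = negotiate("application/json", formats)  # would yield a 406
```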

Because many developers are either unaware of content negotiation or find it difficult to test in a browser, they define different URLs for the different formats:

http://company1.com/report/sales/2009/qtr/3/report.html
http://company1.com/report/sales/2009/qtr/3/report.xml
http://company1.com/report/sales/2009/qtr/3/report.xls

This developer convenience becomes a limitation once you escape the confines of a particular use. In essence, we now have three information resources, not one that can be rendered in different forms. Not only does this fork the identity in the global context, it also prematurely commits other clients to a particular form. If you pass a reference to such a URL as part of a workflow or orchestration, you are robbing the downstream clients of the freedom to choose the form of the data.

There are several ways to test a proper RESTful service without using a browser. For example, using the popular curl program:

curl -H "Accept: application/xml" -O http://company1.com/report/sales/2009/qtr/3

Any reasonable HTTP client will provide similar capabilities.
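The same request can be expressed with Python's standard urllib; here we only build the request object and inspect it, since the URL is illustrative and not resolvable:

```python
import urllib.request

# Build (but do not send) the equivalent of the curl command above.
req = urllib.request.Request(
    "http://company1.com/report/sales/2009/qtr/3",
    headers={"Accept": "application/xml"},
)
accept = req.get_header("Accept")
method = req.get_method()  # GET is the default when no body is supplied
```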

The benefits of supporting a rich ecosystem of negotiable data forms may not be immediately obvious, but once you wrap your head around it, you will see it as a linchpin of long-lived, flexible systems that favor the client, not the developer.

Logically-Connected, Late-Binding Systems

Once you commit to good, logical names for your information resources, you will discover some additional benefits that fall out of these decisions. Named references can safely and efficiently be passed back as results without returning actual data. This has strong implications for large and sensitive data sets, but it also makes possible technical and architectural migration.

For the same reasons pointers are useful in languages like C and C++, URLs as references to data are more compact and efficient to hand off to potential consumers of information. Large data sets such as financial transactions, satellite imagery, etc. can be referenced in workflows without requiring all participants to suffer the burden of handling the large content volume.

Any orchestration that touches actual data must consider the security implications of passing it on to other systems. It quickly becomes untenable to provide perfect knowledge of who is allowed to do what at every step of a process. If a reference is passed from step to step, it is up to the information source to enforce access. Some steps may not require access to the sensitive information and could therefore be excluded from receiving it when they resolve the reference.

This means the late-binding resolution can factor in the full context of the request. A particular user accessing a resource from one application might have a business need to see sensitive information. The same person using a different application might not have a business justification for access to the same data. A RESTful service could inspect session tokens and the like to enforce this access policy declaratively. This level of specificity is required to prevent internal fraud, often the biggest risk in systems that deal with sensitive content. The details of such a system are going to be implementation-specific and are largely orthogonal to the process of naming and resolving logically-named content.

Dependency on a logical connection allows clients to be protected against implementation changes. When popular websites shift from one technology to another, they are usually successful at hiding these changes from their users. RESTful services do the same thing. This gives us the freedom to wrap legacy systems with logical interfaces and leave them in place until there is a business reason to invest in a new implementation. When that happens, clients can be protected from being affected.

In addition to mediating technology changes, RESTful systems allow you to embrace a variant of Postel's Law: be conservative in what you do; be liberal in what you accept from others. You can maintain strict content validation of what you accept and return. However, if you have an existing client base that is providing you content in a given form, you are free to allow other clients to provide different forms, different schemas, etc. without affecting the existing clients. Systems that closely associate a contract with an endpoint tend not to have this freedom, which makes them more brittle and quickly fragmented.

Hypermedia As the Engine of Application State (HATEOAS)

As systems come across references to information resources, many people think there needs to be some sort of description language to indicate what is possible or should be done with it. The reality is that a well-considered RESTful system usually does not require this concept. This is difficult for SOAP developers to accept, but it has to do with the constraints of the architectural style. Because we treat information resources as things to manipulate through a uniform interface (the URL!) and restrict our efforts to a small set of verbs, there really is no need to describe the service.

If you find yourself confused on this point, it is probably an architectural smell that you are conflating manipulating resources with invoking arbitrary behavior. The REST verbs provide the full set of operations to apply to an information resource. Certainly, you need to know what information is being returned so you know how to process it, but that is what MIME types are for. While it is usually preferable to reuse known types (application/xml, image/png, etc.), many developers do not realize that they can create their own application-specific data types if necessary.

In the larger arc of this article series, we will address the problems of finding and binding arbitrary resources using rich metadata. For now, we will simply keep in mind Roy's underscoring of the importance of "hypermedia as the engine of application state" (referred to as "HATEOAS" by RESTafarians). This is perhaps the most misunderstood portion of the thesis. To get its full implication, we need to revisit how the Web works.

You type a URL into the browser and it issues an HTTP GET request for that resource. Invariably, the server responds with a bytestream, a response code (usually 200 on success) and a MIME type indicating that the response is HTML. The browser decides it knows how to handle this type and parses the result into a document model of some sort. Within that model, it finds references to other resources: links, images, scripts, style sheets, etc. It treats each one differently, but it discovers them in the process of resolving the original resource. There is no service description; the browser, as a client, simply knows how to parse the result.

The same mechanism should be employed for RESTful services. The URLs themselves should not be "magical". A client should not be required to know how to parse a URL or have any special knowledge of what one level in the hierarchy means over another. RESTful clients should retrieve a resource, investigate the returned MIME type and parse the result with a parser appropriate to that type.

For example, a client might receive a reference to the main RESTful service for the reporting service we described above:

http://company1.com/report/

If requested from a browser, it could return an HTML document that has references to:

http://company1.com/report/sales

which the user could click through to find a list of years to browse. The point is that the browser has no special knowledge of the URL structure, but it knows how to parse the result and present the content to the user in a way she can explore.

The same can be true of other MIME type responses. For example, requesting the 2009 quarterly reports as XML:

http://company1.com/report/sales/2009/qtr

could yield:

<reports>
    <description>2009 Quarterly Reports</description>
    <report name="First Quarter" src="http://company1.com/report/sales/2009/qtr/1"/>
    <report name="Second Quarter" src="http://company1.com/report/sales/2009/qtr/2"/>
    <report name="Third Quarter" src="http://company1.com/report/sales/2009/qtr/3"/>
</reports>

You can think of the URL as a vector through an information space. Each level points you closer to the ultimate resource. Different paths can yield the same results. The client will have to know how to parse these results, but by giving the response an identifiable type, we can trigger the appropriate parser. The structure can be spidered by descending through the references, or presented to a user to browse through some kind of interface. A RESTful interface becomes a way for clients to ask for information based on what they know. They start from a known or discovered point and browse the information like you browse the Web.

This is what HATEOAS refers to. The application state is transferred and discovered within the hypertext responses. Just like the browser needs to know about HTML, images, sound files, etc., a RESTful client will need to know how to parse the results of resolving a resource reference. However, the entire process is simple, constrained, scalable and flexible -- exactly the properties we want from a networked software system.

Many people build "RESTful" systems that require the clients to know beforehand what each level in a URL means. Should the information get reorganized on the server side, clients of those systems will break. Clients that truly embody HATEOAS are more loosely coupled from the servers they communicate with.
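To sketch what such a loosely-coupled client looks like, the snippet below parses the quarterly-report listing shown earlier (with its hypothetical URLs) and discovers the next links from the representation itself, rather than from prior knowledge of the URL hierarchy:

```python
import xml.etree.ElementTree as ET

# The representation a GET on the /qtr resource might return (hypothetical URLs).
listing = """<reports>
    <description>2009 Quarterly Reports</description>
    <report name="First Quarter" src="http://company1.com/report/sales/2009/qtr/1"/>
    <report name="Second Quarter" src="http://company1.com/report/sales/2009/qtr/2"/>
    <report name="Third Quarter" src="http://company1.com/report/sales/2009/qtr/3"/>
</reports>"""

root = ET.fromstring(listing)
# The client keys off the returned media type to pick this parser, then
# follows whatever links the server handed back.
links = {r.get("name"): r.get("src") for r in root.findall("report")}
print(links["Third Quarter"])  # http://company1.com/report/sales/2009/qtr/3
```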

Looking Forward

We struggle daily to solve the problems of rapidly changing domains, technologies, customer demands and actionable knowledge. We spend too much time writing software to link what we learn to what we know. Objects and databases have not kept pace with the changes we experience. We need a new way of looking at the information we produce and consume that is extensible and less fragile than the solutions of the past. We need technology to help us form consensus. We should not have to achieve consensus in the form of common models before we can use our technologies.

In this article, we have introduced the series and have begun to look at how REST and Web technologies can serve as the basis of a new information-oriented architecture. We have established a naming scheme that allows us to unify references to all manner of content, services and documents. Clients can leverage the freedom to negotiate information into the form they want. As they resolve references, they can discover new content connected through new relationships.

This architectural style and the technologies surrounding the Semantic Web combine nicely to create powerful, scalable, flexible software systems. Their capacity to create Webs of Data will have as much impact on our lives as the Web has already had. This will be an information systems revolution that will turn much of what we know on its head. It will not only reduce the cost of data integration, but it will enable new business capabilities we can only begin to imagine.

We are moving into a world where information can be connected and used regardless of whether it is contained in documents, databases or is returned as the results of a RESTful service. We will be able to discover content and connect it to what we already know. We will be able to surface the data currently hidden behind databases, spreadsheets, reports and other silos. Not only will we gain access to this information, we will be able to consume it in the ways we want to.

This is one of the main, modest goals of the Semantic Web. Achieving it, as we are now able to do, is starting to change everything.

Community comments

Re: The Rest of REST by chris barrow

A clear and concise introductory article on REST. Well worth the read.

Should 'REST' clients be made to understand the structure of URIs? by Faisal Waris

From what I have read, REST applications start with a root URI and then everything happens through HATEOAS after that. Hypermedia URIs are created by the server (or by the client in some generic manner).

Asking service clients to understand the structure of URIs in order to access specific resources means one is exposing some kind of an information model and implicitly defining a contract. I don't believe this is REST.

If there is an information model that clients should understand, then we can be explicit about it as in the Open Data Protocol or SOA.

Re: Should 'REST' clients be made to understand the structure of URIs? by Faisal Waris

Sorry you have already pointed this out.

REST applications should be HATEOAS-driven, but that is not the case in the industry.

I have not come across any significant examples of proper REST applications. Maybe proper REST is too hard or not practical.

REST has been around for a while but it seems to be vastly misunderstood and misapplied.

Maybe we do need to be explicit about the information models that services expose (such as in the Open Data Protocol / SOA) as the practical alternative for now.

Re: Should 'REST' clients be made to understand the structure of URIs? by Jamie Kirkpatrick

@Faisal:

Agree with everything you say, and the issue as I see it is that really no-one out there is giving definitive practical examples of fully RESTful applications (or more interestingly clients). I have yet to see anyone explain clearly and concisely what the correct balance of media-types / relationships is (some people say you should be using custom media-types for example, and others say that this is a bad idea!), and additionally how those can be conveyed to a client in a way that is not "out-of-band". Lots of ideas floating around but no concrete standards out there when it comes to RESTful services.

People hold up browsers as an example of a RESTful client but this misses the point for me: browsers only have to deal with a specific, well known set of media-types and relationships and those have been communicated "out-of-band" over many years to the point where there are standards and anyone can write a compliant browser. Show me some way of doing this for my own apps and we can talk...

Re: Should 'REST' clients be made to understand the structure of URIs? by Jean-Jacques Dubray

>> no-one out there is giving definitive practical examples of fully RESTful applications (or more
>> interestingly clients).

simply because REST is not a programming model and will never be. These examples don't exist and most likely will never exist. REST as a programming model is pure fiction.

>> Show me some way of doing this for my own apps and we can talk...
You bet

That being said, there are good things you can do with HTTP, and REST, as defined by Roy, introduces very interesting patterns. Understanding how to take advantage of HTTP and these patterns as you build your connected systems is key. However, REST as a key building block of information systems, or even "the" key building block, is a myth, and that myth is gone today. I can safely predict no one will be able to build a significant information system (e.g. ERP, PLM, Policy & Claims Management, ...) in a RESTful way.

Re: Should 'REST' clients be made to understand the structure of URIs? by Kingsley Idehen

You don't need to build RESTful ERP, PLM, or any other traditional Line-Of-Business Apps. REST is about building loosely coupled network interactions with purveyors of information (documents or resources).

The whole Linked Data meme is about the application of REST to the age-old issue of platform independent Data Access; basically doing this job better than ODBC, JDBC, OLE-DB, ADO.NET etc..

Fixing Data Access is a big deal since without fixing this you can never fix the bigger problem of Data Integration.


Kingsley Idehen
www.openlinksw.com/blog/~kidehen

Re: Should 'REST' clients be made to understand the structure of URIs? by Jean-Jacques Dubray

>> def: Purvey = to provide, furnish, or supply (esp. food or provisions) usually as a business or
>> service.

If you want to limit REST to "supply" information, I am fine. Though you run into the problem that REST couples access with identity.

Now, if you claim that REST can fix Data Access, I think you are seriously mistaken. Don't you need the beginning of a query language? (i.e. not something ad hoc)

Re: Should 'REST' clients be made to understand the structure of URIs? by Faisal Waris

Do we need a new name for what is actually being done?

Clearly most are not doing REST as Roy Fielding intended.

Yet I have seen countless implementations using HTTP for data access being referred to as REST when none of the REST principles are in use (e.g. contract/RPC style of interface, no standard media types, no HATEOAS, no 'discovery' of media types on the client, etc.).

Some use Atom as the envelope format (maybe Atom is the new SOAP) but then have specific structure within Atom that clients need to understand.

Yet somehow they all claim the benefits of REST.

Terms such as Web-Oriented Architecture (WOA) and Resource-Oriented Architecture (ROA) are in use, but there is no consensus on what exactly they mean.
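For concreteness, here is a small sketch of the contrast described above, using hypothetical payloads (the order resource and link relations are invented for illustration, not from any real API). An early-bound, RPC-style response forces the client to know URI structure out-of-band, while a hypermedia response carries its available transitions as link relations the client can discover:

```python
# Hypothetical order resource, early-bound/RPC style: the client must already
# know (out-of-band) how to construct any further URIs for this order.
rpc_style = {
    "orderId": 42,
    "status": "pending",
}

# Hypermedia style (HATEOAS / the hypermedia constraint): the representation
# itself advertises the available transitions, so the client follows links
# rather than constructing URIs.
hypermedia_style = {
    "orderId": 42,
    "status": "pending",
    "links": [
        {"rel": "self",    "href": "/orders/42"},
        {"rel": "cancel",  "href": "/orders/42/cancellation"},
        {"rel": "payment", "href": "/orders/42/payment"},
    ],
}

def next_actions(representation):
    """Discover what the server currently allows, from the payload alone."""
    return [link["rel"] for link in representation.get("links", [])
            if link["rel"] != "self"]
```

With the hypermedia payload, `next_actions(hypermedia_style)` yields `["cancel", "payment"]`; with the RPC-style payload it yields nothing, because the representation tells the client nothing about what it may do next.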

Re: Should 'REST' clients be made to understand the structure of URIs? by William Martinez

Nice to see we're all coming to the same conclusion: the REST style is not actually being implemented.
I mentioned before that many people twist their apps into horrible semantics just to comply with the REST constraints. That is, the goal is to obtain the "REST brand" tag, not to improve the architecture.

At the beginning, I mentioned that the so-called RESTful APIs were just Web APIs (WAPIs, please coin the term). Then there were REST services (as opposed to SOAP services), which led our friend Jean-Jacques to confusion in his posts.
Then we have ROA and WOA. Names that make no sense at all, and let me explain why:
1. "Oriented Architecture" means an architecture that follows the rules and behavior of a metaphor. In SOA, the metaphor is a business service.
2. What is the metaphor in ROA and WOA? How does a Resource behave? What are the rules upon it? Is the Web a metaphor? The same questions apply to the Web.
3. If the metaphor does not help or guide the definition and semantics of an architecture, then we are just making up names for whatever we use in development, to coin a term and make money talking about it. So I can come up with some of my own: Java-Oriented Architecture, Rails-Oriented Architecture, RPC-Oriented Architecture.

Cheers.

William Martinez Pomares.

Useful start to good discussion by Lawrence Winkler

This article has the beginnings of a useful discussion of SOA, ROA and the architecture required to handle the distributed nature of the Web. Fielding's thesis and RESTful services are the initial and practical place to go. However, as important as Fielding's thesis is, I think it needs to be placed in a more theoretical context, then brought back into the practical world with a better understanding.

It is far beyond my meager understanding, but my hunch is that the theoretical basis for the underlying discussion is Robin Milner's pi-calculus, of which RESTful services and Resource-Oriented Architecture are but a special case and a practical implementation.

In any case, that is how I will be reading this discussion. In answering whether clients should understand the structure of URIs, I have a feeling the answer is 'yes' in the pi-calculus framework, with part of the URI being a pi-calculus 'link', another part of the URI a value sent across the link, the GET/PUT/POST/DELETE verb a second value sent across the link, and the content type a third. The example of responses returning other URIs, which can be used to get/find other resources, fits nicely with the pi-calculus idea of transmitting links between agents.

HATEOAS is out, Hypermedia Constraint is in by Nicholas Gall

If you're following the discussions of REST at the REST-DISCUSS Yahoo Group, then you're aware that Roy et al. hate the term HATEOAS and prefer "the hypermedia constraint."

Re: Should 'REST' clients be made to understand the structure of URIs? by Faisal Waris


I mentioned the so-called RESTful APIs were just Web APIs (WAPIs, please coin the term)...

William Martinez Pomares


I like this term. Let's see if it flies in our organization. It is simple and understandable, and it reflects what people are actually doing. It does not require a theoretical understanding first (which apparently does not produce practical results anyway).

From an Enterprise Architecture perspective, it allows us to provide understandable guidance on when to use a WAPI versus other approaches.

Re: Should 'REST' clients be made to understand the structure of URIs? by Dilip Krishnan


Yet, I have seen countless implementations using HTTP for data access being referred to as REST when none of the REST principles are in use

Those are called websites :-)

Some use atom as the envelope format (maybe atom is the new SOAP) but then have specific structure within atom that clients need to understand.

Atompub is a RESTful protocol, and the Atom envelope is just the encoding. It is by no means a new SOAP, because the semantics of an RPC-like operation are not embedded in the payload.

Re: Should 'REST' clients be made to understand the structure of URIs? by Faisal Waris


Those are called websites...


Actually, I was referring to APIs such as Facebook, Flickr, Yahoo, etc., which claim to be RESTful but are not. They are just early-bound Web APIs.


Atompub is a RESTful protocol, and the Atom envelope is just the encoding. It is by no means a new SOAP, because the semantics of an RPC-like operation are not embedded in the payload.


Document-oriented SOAP is RESTful (as stated by one of the REST sites itself). SOAP with Web Services is early-bound and not RESTful.

If Atom is used as a container for application-specific data structures - which clients need to understand beforehand - then we are back to early binding.

Re: Should 'REST' clients be made to understand the structure of URIs? by Darren Cruse

@Jean-Jacques Dubray and Jamie Kirkpatrick - not meaning to start a flame war but I was reading some of these comments wondering how many people here have tried NetKernel?

Putting aside theoretical considerations of what is or isn't RESTful, from a practical standpoint I feel NetKernel is bringing something very interesting to the table. I feel my experience using it so far contradicts your statements that:

>> no-one out there is giving definitive practical examples of fully RESTful applications (or more interestingly clients).

and

>> simply because REST is not a programming model and will never be. These examples don't exist
>> and most likely will never exist. REST as a programming model is pure fiction.

Again, not trying to cause trouble, but I'm sincerely curious whether you've tried it, and if you have, would you mind replying here with what you found lacking about it?

Is using an "extension" on the URI really so bad? by Darren Cruse

In some of the RESTful stuff I've been working on, I'd debated with myself whether to follow the advice given in this article and other things I've read: that ending your URI with "foo.html" or "bar.xml" is a bad idea and content negotiation is a better way.

But I ultimately decided I like the convenience and simplicity of having the extension right there in the URI.

e.g. I'd been doing some experiments with NetKernel, and certainly its URI grammar capability makes it quite easy to have something like "foo.html" or "bar.xml" - which most people would assume was an actual file name - be dynamically generated content. I.e., just because we're *used to* the idea that a URI of that kind exposes the underlying technology doesn't mean it really does...

Isn't the extension approach just as valid a way of specifying the desired format as the more complicated alternatives?

And part of what I like about the RESTful approach is the simplicity of "mocking" dynamically generated resources using static files, and I really like the way URIs with extensions map so nicely onto simple files on the file system when doing this kind of thing.

In fact, I was looking at this as useful for more than mocking (i.e. for testing); I was looking at it as a completely legitimate implementation strategy for certain use cases, i.e. where I liked the idea of seamlessly switching or intermingling dynamically generated resources with simple files on the file system (with extensions on them, like we're all used to).

Did I convince anybody? :) Is it really so bad?
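The two approaches aren't mutually exclusive, either. Here's a minimal sketch of a resolver that honors an extension when present and falls back to naive Accept-header negotiation otherwise (the media-type table is a made-up example, not any framework's actual mapping, and real negotiation would also honor q-values):

```python
# Hypothetical extension-to-media-type table for illustration.
EXTENSION_TYPES = {".html": "text/html", ".xml": "application/xml",
                   ".json": "application/json"}

def choose_media_type(path, accept_header, default="text/html"):
    # 1. An explicit extension wins: simple, cache-friendly, and easy to
    #    mock with static files on disk.
    for ext, media_type in EXTENSION_TYPES.items():
        if path.endswith(ext):
            return media_type
    # 2. Otherwise, naive content negotiation: take the first supported
    #    type in the Accept header (q-values omitted for brevity).
    for candidate in accept_header.split(","):
        candidate = candidate.split(";")[0].strip()
        if candidate in EXTENSION_TYPES.values():
            return candidate
    return default

choose_media_type("/reports/q3.xml", "text/html")          # -> "application/xml"
choose_media_type("/reports/q3", "application/json, */*")  # -> "application/json"
```

A static file named `q3.xml` and a dynamically generated resource at the same URI are indistinguishable to a client of this resolver, which is exactly the mocking/intermingling strategy described above.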

Re: Should 'REST' clients be made to understand the structure of URIs? by Jean-Jacques Dubray

Darren:

sorry, I just saw your questions today. NetKernel looks like a general CRUD mechanism. It seems to me that Web developers are rediscovering CRUD. Ever since the 70s, probably even before, CRUD was seen as the enemy. Now you see a bunch of CRUD-oriented frameworks popping up as if somebody had discovered a new El Dorado. They claim developers can CRUD with ease and deliver value to their customers. This is generally true as long as what you are CRUDing has no actions. The problem with CRUD is that the logic involved in transitioning from one state to another is pushed to the client/consumer side.

Roy understood actions and created the "hypermedia constraint" as a lightweight, document-oriented action mechanism. Hardly anyone in the REST community is using "HC" (much nicer than HATEOAS). RESTfulie is the first REST framework to tackle the problem, but the first thing they had to do was create a "contract". So we are back where we started. Actually, we are behind where we started, since these contracts are not bidirectional and are vastly incomplete, dealing only with the HC aspect.

The day people understand that the "I" in IT stands for "Information", the day so-called "Computer Scientists" understand that information exists outside computers and that there are semantics associated with information that have nothing to do with Computer Science, that day we will have made an enormous step forward.

Until then, they will all marvel at CRUD.

Happy CRUDing :-))
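The point about where transition logic lives can be sketched concretely (the order lifecycle and link relations below are invented for illustration): with the hypermedia constraint, the server owns the state machine and advertises only the transitions that are legal in the current state, so the client never has to encode that logic itself - which is precisely what plain CRUD pushes onto the consumer.

```python
# Hypothetical order lifecycle: the server owns the state machine and
# exposes only the currently legal transitions as links.
TRANSITIONS = {
    "pending": ["pay", "cancel"],
    "paid":    ["ship", "refund"],
    "shipped": [],  # terminal: nothing further the client may do
}

def represent(order_id, state):
    """Build a representation whose links encode the legal next steps."""
    return {
        "order": order_id,
        "state": state,
        "links": [{"rel": rel, "href": f"/orders/{order_id}/{rel}"}
                  for rel in TRANSITIONS[state]],
    }

# A client that only follows advertised links can never attempt an
# illegal transition - the transition logic stays on the server side.
```

For a paid order, `represent(7, "paid")` advertises only "ship" and "refund"; a shipped order advertises nothing, and a link-following client simply stops.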

Re: Should 'REST' clients be made to understand the structure of URIs? by Andrés Ferrando

There's something I can't fully understand: do you expect that a system can "browse" another system using only URIs, without having a common model? I mean, not using the same model, but without even understanding the other model?
If you propose HTML as an example of this, with a person being the one who navigates, I think there's a mistake in the analogy: HTML is a common model (yes, for formatting, but if a person is using the browser, he needs it to facilitate the task), and that person is able to understand the proposed model on the fly and then find what he's looking for.
I can't see how a system would achieve this, or whether this is what you expect and call "real REST".

Andrés. www.pruna.com.ar/

Re: Should 'REST' clients be made to understand the structure of URIs? by Darren Cruse

Jean-Jacques:

I'm not familiar with RESTfulie; I just googled it and will take a closer look. The honest truth is I'm not even that well read when it comes to some of the distinctions being made between CRUD, HATEOAS, HC, etc.

From a purely pragmatic standpoint I'll just say that my experience building some things with NetKernel has been very positive compared to systems I've built using other (mostly java based) systems.

Mainly by that I'm referring to the ability to easily move software components to different nodes on the network, to easily change from HTTP to JMS or other protocols for accessing those components, and the built-in dependency caching of resources to help performance and scalability.

It is true that the REST verbs are basically CRUD operations on resources, but the manner in which the software is abstracted away from the details of how those resources are generated, where they live on the network, and whether they are generated now or were previously generated and cached, is very powerful.

I know this isn't a very deep analysis of the theoretical concerns underlying these things; I'm not sure how things like HATEOAS or HC play into this (I guess I need to re-read this article! :).

But I do think the NetKernel guys are onto some good ideas, and I'll take a look at RESTfulie and see if I can learn a thing or two from it too. :)
