Roy Fielding on Versioning, Hypermedia, and REST

Designing, implementing, and maintaining APIs for the Web is more than a challenge; for many companies, it is an imperative. This series takes the reader on a journey from determining the business case for APIs to a design methodology, meeting implementation challenges, and taking the long view on maintaining public APIs on the Web over time. Along the way there are interviews with influential individuals and even a suggested reading list on APIs and related topics.

This InfoQ article is part of the series “Web APIs From Start to Finish”. You can subscribe to receive notifications via RSS.

 

Roy T. Fielding is a Senior Principal Scientist at Adobe and a major force in the world of networked software. While a graduate student at the University of California, Irvine (UCI), Fielding worked on a class project to create a maintenance robot for the Web called MOMSpider. Along the way, he created the libwww-perl library and derived some of the underlying principles behind the architecture of the WWW, which Fielding originally called the HTTP Object Model. A few years later, when working on his Ph.D. dissertation, he renamed his model to Representational State Transfer, or REST, "to evoke an image of how a well-designed Web application behaves".

Fielding’s contributions to open standards are extensive. His name appears on more than a dozen RFC specifications including HTTP, URI Templates, and others. Fielding is also one of the editors for the W3C’s Do Not Track standards effort. As a founding member of the Apache HTTP Server project, he helped create the world's most popular Web server software, wrote the Apache License, incorporated the Apache Software Foundation, and served as its first chairman.

Recently Roy took some time while traveling between standards meetings to answer a series of questions on a topic that often starts debates: versioning on the Web. He also talked about why hypermedia is a requirement in his REST style, the process of designing network software that can adapt over time, and the challenge of thinking at the scale of decades.

InfoQ: Back in August of 2013, you gave a talk for the Adobe Evolve conference and, in that talk, you offered advice on how to approach "versioning" APIs on the web. It was a single word: "DON’T". What kind of reaction have you seen from that guidance?

Roy: I think everyone in attendance had a positive reaction, since most are our customers and familiar with the design rationale behind the Adobe Experience Manager products. Of course, I wasn’t reading the slides for that audience; I was explaining the rationale behind the conclusions seen in them.  

The Internet reaction to the published slides was a little more mixed, with some folks misunderstanding what I meant by versioning and others misunderstanding the point about changing the hostname/branding. By versioning, I meant sticking client-visible interface numbers inside various names so that the client labels every interaction as belonging to a given version of that API.
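
To make that concrete, here is a minimal sketch (hypothetical host and paths, using Python's requests library) of what "interface numbers inside names" looks like from the client's side, next to a client that relies only on a stable entry point:

    import requests

    # Version baked into the names: every request this client makes is labelled
    # as belonging to "v2" of the API, so the label travels with each interaction.
    versioned = requests.get("https://api.example.com/v2/orders/1234")

    # No interface version in the names: the client starts from a stable entry
    # point and lets the representations it receives drive what happens next.
    unversioned = requests.get("https://api.example.com/orders/1234")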

Unfortunately, versioning interface names only manages change for the API owner’s sake. That is a myopic view of interface design: one where the owner’s desire for control ignores the customer’s need for continuity.

InfoQ: So, what happens when you version an API?

Roy: Either (a) the version is eventually changed and all of the components written to the prior version need to be restarted, redeployed, or abandoned because they cannot adapt to the benefits of that newer system, or (b) the version is never changed and is just a permanent lead weight making every API call less efficient.

A lot of developers throw up their hands in disgust at this point and claim that I just don’t understand their problem. Their systems are important. They are going to change. New features are going to be provided. Data is going to be rearranged. They need some way to control how old clients can coexist with new ones.

Naturally, that is where I have to explain why "hypermedia as the engine of application state" is a REST constraint. Not an option. Not an ideal. Hypermedia is a constraint. As in, you either do it or you aren’t doing REST. You can’t have evolvability if clients have their controls baked into their design at deployment. Controls have to be learned on the fly. That’s what hypermedia enables.
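
As a rough illustration of controls being learned on the fly rather than baked in at deployment, consider a client that hard-codes only the entry point and the link relations it understands; the URIs and the available transitions come from each response. The host and the JSON link structure below are assumptions for the sketch, not a prescribed media type:

    import requests

    # Fetch the entry point; everything else is discovered from representations.
    entry = requests.get("https://api.example.com/",
                         headers={"Accept": "application/json"}).json()

    # Follow the "orders" control only if the server currently advertises it.
    orders_link = next((link for link in entry.get("links", [])
                        if link.get("rel") == "orders"), None)
    if orders_link is not None:
        orders = requests.get(orders_link["href"]).json()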

But that alone is still not enough for evolvability. Hypermedia allows application controls to be supplied on demand, but we also need to be able to adapt the clients' understanding of representations (understanding of media types and their expected processing). That is where code-on-demand shines.

InfoQ: So, one of the reasons Hypermedia is a requirement in the REST style is to deal with change over time, right?

Roy: Anticipating change is one of the central themes of REST. It makes sense that experienced developers are going to think about all of the ways that their API might change in the future, and to think that versioning the interface is paving the way for those changes. That led to a never-ending debate about where and how to version the API.

The techniques that developers learn from managing in-house software, where they might reasonably believe they have control over deployment of both clients and servers, simply don’t apply to network-based software intended to cross organizational boundaries. This is precisely the problem that REST is trying to solve: how to evolve a system gracefully without the need to break or replace already deployed components.

Hence, my slides try to restore focus to where it belongs: evolvability. In other words, don’t build an API to be RESTful — build it to have the properties you want. REST is useful because it induces certain properties that are known to benefit multi-org systems, like evolvability. Evolvability means that the system doesn’t have to be restarted or redeployed in order to adapt to change.

InfoQ: Does that mean as long as I use the REST style I am free and clear of versioning issues?

Roy: No. It is always possible for some unexpected reason to come along that requires a completely different API, especially when the semantics of the interface change or security issues require the abandonment of previously deployed software. My point was that there is no need to anticipate such world-breaking changes with a version ID. We have the hostname for that. What you are creating is not a new version of the API, but a new system with a new brand.

On the Web, we call that a new website. Websites don’t come with version numbers attached because they never need to. Neither should a RESTful API. A RESTful API (done right) is just a website for clients with a limited vocabulary.

InfoQ: One of the things you talk about when referring to your architectural style REST is that it was designed to support "software engineering on the scale of decades." What does "scale of decades" mean, in tangible terms?

Roy: REST was originally created to solve my problem: how do I improve HTTP without breaking the Web. It was an important problem to solve when I started rewriting the HTTP standard in 1994-95. I was a post-Masters Ph.D. student in software engineering, trying not to screw up what was clearly becoming the printing press of our age, which means I had to define a system that could withstand decades of change produced by people spread all over the world. How many software systems built in 1994 still work today? I meant it literally: decades of use while the system continued to evolve, in independent and orthogonal directions, without ever needing to be shut down or redeployed. Two decades, so far.

InfoQ: You yourself have acknowledged that this is a level of engineering in which most architects, designers, and developers don’t operate. So why talk about this level of engineering scale?

Roy: I talk about it because the initial reaction to using REST for machine-to-machine interaction is almost always of the form "we don’t see a reason to bother with hypermedia — it just slows down the interactions, as opposed to the client knowing directly what to send." The rationale behind decoupling for evolvability is simply not apparent to developers who think they are working towards a modest goal, like "works next week" or "we’ll fix it in the next release".

If developers can conceive of their systems being used for a much longer time, then they can escape their own preconceptions about how it will need to change over time. We can then work back from decades to years (how long until you don’t know your users?) or even months (how long until you’ve lost control over client deployment?).

InfoQ: The HTTP application-level protocol is often cited as an example of successful engineering at the scale of decades. Yet, HTTP has gone through more than one version, and the early versions of HTTP got a number of things wrong, including the Host header problem, absolute-time caching directives, and others. Does that run counter to your "DON’T" guidance for Web APIs?

Roy: No, HTTP doesn’t version the interface names — there are no numbers on the methods or URIs. That doesn’t mean other aspects of the communication aren’t versioned. We do want change, since otherwise we would not be able to improve over time, and part of change is being able to declare in what language the data is spoken. We just don’t want breaking change. Hence, versioning is used in ways that are informative rather than contractual.
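
One way to see this, sketched below with Python's standard http.client against a hypothetical host, is that the protocol version travels on the request/status line while the names the client actually depends on, the method and the URI, carry no version at all:

    import http.client

    conn = http.client.HTTPSConnection("api.example.com")  # hypothetical host
    conn.request("GET", "/orders/1234", headers={"Accept": "application/json"})
    resp = conn.getresponse()

    # The version is a property of the protocol exchange, not of the API's names:
    # resp.version is 11 for HTTP/1.1 and 10 for HTTP/1.0.
    print(resp.version, resp.status)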

BTW, it is more accurate to say that HTTP got almost everything right, but that the world simply changed around (and because of) it. Host would have been a stupid idea in 1992 because nobody needed multiple domains per IP until the Web made being on the Internet a business imperative. Persistent connections would have been a terrible idea up until Mosaic added embedded images to HTML. And absolute times for expiration made more sense when people hosting mirrors looked at those fields, not caches, and the norm was to expire in weeks rather than seconds.

InfoQ: Then what lessons can we draw from the fact that HTTP and even HTML have changed over time?

Roy: What we learned from HTTP and HTML was the need to define how the protocol/language can be expected to change over time, and what recipients ought to do when they receive a change they do not yet understand. HTTP was able to improve over time because we required new syntax to be ignorable and semantics to be changed only when accompanied by a version that indicates such understanding.
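
A toy sketch of that rule, with invented header names: a recipient keeps processing a message that carries extensions it has never seen, rather than treating them as errors:

    def extract_known_headers(headers):
        """Keep the headers this recipient understands; silently skip the rest."""
        known = {"content-type", "cache-control", "etag"}
        return {name.lower(): value for name, value in headers.items()
                if name.lower() in known}

    # "X-Experimental-Hint" is unknown here; it is ignored, not rejected.
    print(extract_known_headers({"Content-Type": "application/json",
                                 "X-Experimental-Hint": "on"}))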

InfoQ: It seems Web developers struggle to handle change more today than in the past. Are we running into new problems, or just seeing more of the same?

Roy: I think there are just more opportunities to struggle now than there were in the past. It has become so easy for people to create systems with astonishing reach, whereas it used to take years to get a company just to deploy a server outside its own network. It’s a good problem to have, most of the time.

Software developers have always struggled with temporal thinking.

InfoQ: Finally, in addition to "DON’T", what advice would you pass on to Web API designers, architects, and developers to help them deal with the problem of change over time?

Roy: Heh, I didn’t say DON’T change over time — just don’t use deliberately breaking names in an API.

I find it impossible to give out generic advice, since almost anything I could say would have to be specific to the context and type of system being built. REST is still my advice on how to build an application for the Web in a fashion that is known to work well over time and known to create more Web as a result (more addressable resources).

About the Interviewee

Roy T. Fielding is a Senior Principal Scientist at Adobe and a major force in the world of networked software. While a graduate student at the University of California, Irvine (UCI), Fielding worked on a class project to create a maintenance robot for the Web called MOMSpider. Along the way, he created the libwww-perl library and derived some of the underlying principles behind the architecture of the WWW, which Fielding originally called the HTTP Object Model. A few years later, when working on his Ph.D. dissertation, he renamed his model to Representational State Transfer, or REST, "to evoke an image of how a well-designed Web application behaves".

 


Community comments

  • REST for humans and machines

    by Abel Avram,

    There is a major difference between creating a new version of a website for humans and one for machines. When adding features to a website or redesigning its interface, humans can easily adapt to the new look or learn to use the new features, so one does not have to keep two versions of the website around. The old one is simply discarded.

    It is not the same with machines accessing a domain's API. A client (a machine) does not know how to automatically adapt to a new API. It cannot understand it, nor its new features, no matter how RESTful they are. For example, if the server introduces a new operation, let's call it TRADE, the client has no idea how to "trade" in spite of the URL provided and obtained in a REST manner. It's not enough to get the URL. One needs to know what to do with the "trade" data. The client has to be enhanced to understand the new API.
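
    A small sketch of this point (payload and link relations invented for illustration): an older client simply ignores link relations it was never programmed to understand, so a new "trade" control neither breaks it nor becomes usable without a code change:

        # Relations this client was built to understand.
        known_rels = {"self", "buy", "sell"}

        response = {
            "links": [
                {"rel": "buy", "href": "/orders/buy"},
                {"rel": "sell", "href": "/orders/sell"},
                {"rel": "trade", "href": "/orders/trade"},  # newly introduced
            ]
        }

        # "trade" is advertised but invisible to this client: nothing breaks, yet
        # nothing new is usable until a programmer teaches it what "trade" means.
        usable = [link for link in response["links"] if link["rel"] in known_rels]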

    What is one to do, then, when evolving an API? Stop the world and force all clients to change so they can deal with the new API, while throwing away the old one? No. The server needs to deal with "legacy" clients and also accommodate those which made the transition to the new API.

    I suppose that is the reason for which many companies are using versioning in their APIs as a simple solution for dealing with change.

  • Re: REST for humans and machines

    by Matt Briggs,

    REST is a term that is so misused at this point that most people aren't aware of its core principles, like HATEOAS, which is what the majority of this interview was directly talking about. REST directly addresses the problem of an API needing to change without breaking all of its clients, by using hypermedia as the vehicle for state: en.wikipedia.org/wiki/HATEOAS

  • Open Closed Principle

    by Paul Beckford,

    If you get your design model right, then yes, your distributed system will naturally be open to extension and closed to (breaking) modifications. That way existing clients won't break and new clients can take advantage of new extensions....

    Where I feel Roy is being a bit disingenuous is in saying that existing clients should be able to take care of new extensions (features) too. I don't believe HATEOAS gives you this.

    As the previous commentator says, this is true for human clients who can make sense of the new controls and states exposed by HATEOAS, but for a machine this "making sense" happens at design time in the head of the programmer, not at runtime.

    So a self-describing API is a great thing for a designer, and late-binding helps old clients "bind" to newer versions of the server, minimising what passes as a "breaking" change. All very nice.

    But if older clients want to take advantage of new features then they will need to be modified.

    "Hypermedia allows application controls to be supplied on demand, but we also need to be able to adapt the clients' understanding of representations (understanding of media types and their expected processing). That is where code-on-demand shines."

    I think the examples of "code-on-demand" shining are rather limited :) Engineering the server so that it can *always* modify the client "on-the-fly" seems to break the design tenet of encapsulation for me... Sure, if you know that your client is a web browser (a standardised Agent acting on behalf of a human) then you can supply a bunch of javascript-on-demand to handle your new "controls", but what if it isn't? What if your client is acting on behalf of another system? Are you going to cater for all possibilities and dish out the appropriate code?

    Personally, I try to stick to the REST constraints as much as possible when using HTTP over the web, but like any model it clearly has limits :)

    It would be good if we heard more about what those limits are from Roy. That way people would be better informed and able to decide for themselves whether REST is the right solution for them!

    Where I agree with Roy is that if you are willing to think differently, the limits of REST turn out to be less than most people naturally assume.

    Paul.

  • Glad to see these comments

    by Jean-Jacques Dubray,

    I am glad to see people finally coming to their senses and expressing their concerns about the limits of server-to-server interactions. This has been one of my arguments since the very early days of "REST". HATEOAS is the UDDI of REST: it looks nice on paper, but in practice it does not solve any real/pressing problem (again, in the server-to-server context).

    At least we never hear any more arguments around the "Uniform Interface" or "Human Readable Documentations".

    Distributed System Interfaces are intentional in nature. Thinking that translating intents into a noun+HTTP verb has any specific value is a bit of a stretch, especially considering that REST couples access with identity. The real value behind that translation is the nature of the interaction (idempotency, with/without side effects, ...). Without User-Assistance or Shared Understanding between the client and the server, these properties are wishful thinking at best.

    Perhaps it's time to bury the nonsense that has been spread for nearly a decade that resulted in an ODBC-like world where any decent service has to ship a client library to make the so-called "REST" API easy to consume.

    The way forward is paved with APIs, Actions and Intents. The Web itself, yes, is RESTful.

  • Re: Glad to see these comments

    by Abel Avram,

    We are too time- and resource-constrained to build super intelligent systems that know how to deal with new features and data by themselves. We act more on an as-needed basis. We don't implement "trade" until we need to trade. It is even considered an anti-pattern: over-architecting. Maybe the day will come when such smart systems will be needed, but not in the foreseeable future.

  • Re: REST for humans and machines

    by Abel Avram,

    HATEOAS relieves the client from dealing with hard-coded links but it does not solve the problem of new features.

    The client is broken when, for example, the server introduces a new operation. As I said in my previous comment, if the server introduces "trade", it is of no use to dynamically provide the link to it because the client does not know what to do with "trade" data which needs to be processed and stored. The client (a machine in my discussion) needs to be enhanced to be able to deal with trade operations.

  • Re: REST for humans and machines

    by Jean-Jacques Dubray,

    That's not entirely true. When you understand that actions are connected through a lifecycle (state machine), as the REST community is slowly warming up to the concept, you can add new (optional) states and transitions without breaking existing clients. The Web is architected on the lifecycle of (Web) pages (Create, Update, Delete); not every type shares the same lifecycle :-)
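
    A toy example of that claim, with a made-up lifecycle: adding an optional state and transition leaves every path an existing client already takes untouched:

        # Hypothetical resource lifecycle; "trade"/"traded" were added later as optional.
        lifecycle = {
            "created": {"submit": "submitted"},
            "submitted": {"approve": "approved", "trade": "traded"},
            "approved": {},
            "traded": {},
        }

        def step(state, action):
            # Transitions a client does not know about are simply never taken.
            return lifecycle.get(state, {}).get(action, state)

        assert step("created", "submit") == "submitted"    # old path, unchanged
        assert step("submitted", "approve") == "approved"  # old path, unchanged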

    That being said, I'd like to suggest a re-interpretation of the Web architecture in the light of the STAR Architecture.

    A STAR Architecture is an architecture where States, Types, Actions and Relationships are supported by discrete semantics. For instance, SQL does not have any specific semantics to support relationships; the semantics are implied as a property of a type (a column of a table) or a type (a table) itself. States are again reified as properties. Similarly, in OO, States are reified as class properties and relationships as type composition mechanisms. Computer Science has anchored software semantics in the Action/Type area, at the expense of State/Relationships. This is a monumental issue because developers have to constantly (en)code these semantics as actions and types. This is a process that's error prone, varies from one developer to the next and makes it very hard to maintain the resulting "code".

    Now, the Web didn't go to the full extent of relationship semantics (a link to a page cannot be navigated upward, a major oversight in the HTTP protocol and Web page lifecycle, which lack a "link" action and a "linked" state). But the Web made it trivial for anyone (my mother could do it) to "architect" STAR information systems, no (en)coding required. That is the true power of the Web. You can marvel at protocols or DNSes all you want; if the Web had been built on Actions and Types like everything else, there wouldn't be too much Web to surf on.

    HTTP+HTML, and specifically hypertext, were the key enablers of the STAR architecture.

  • Re: REST for humans and machines

    by Paul Beckford,

    I think you have misunderstood. Abel is spot on. Old clients cannot make sense of new "Actions" without being modified by a programmer.

    The whole notion that the web was designed for applications is a mistake. Applications do stuff; they exhibit behaviour... The web doesn't do behaviour well, never has. Behaviour was an afterthought, starting with CGI and Perl :)

    Alan Kay described the web as what happens when you allow physicists to design software systems. What Alan suggests is that state should be transferred along with the program that makes sense of it. So state and behaviour should be transferred together... an idea he got from the US navy, and a concept we know today as object orientation.

    Would a web based on this concept be more robust when it comes to distributing applications (state and behaviour)? I think so. People have already prototyped such a web so you can decide for yourself:

    en.wikipedia.org/wiki/Croquet_Project

    Roy's work, as he states in this piece, is a post-rationalisation *after the fact*. When it comes to making the most of the existing, ubiquitous web, designed for sharing hyperlinked documents, REST does very well.

    But that is all it does :)

    Paul.

  • Re: REST for humans and machines

    by Jean-Jacques Dubray,

    Paul,

    Here are the articles/posts I wrote on versioning, in the wake of pioneers like John Evdemon and David Orchard:
    - The cost of versioning an API
    - Contract Versioning, Compatibility and Composability
    - State Machines, Contracts and Versioning
    - REST Versioning

    The last link shows the complete view of "REST" versioning.

    The fact of the matter is that you could break the client without changing the message types or operations, simply by changing the state machine behind the contract. In that respect, I was quite pleased to finally see the use of state machines to inform the design of APIs (thank you Mike); this works really well, and I have applied it for over a decade. The BPMN specification even references two of my papers which speak about it.

    Yes, indeed, HATEOAS would be able to catch some of those situations (but not all), as the client would not be able to invoke the expected course of action (so to speak ;-). One situation where HATEOAS would be of no help is temporal constraints on the interface (e.g. you have to invoke that operation within 2 min).

    Let's stop pretending that magic bullets exist. As Abel pointed out, everything is hard work. Yes, some approaches are a bit easier than others, but again, hypermedia does not solve any particular problem without Human Assistance or Shared Understanding (aka out-of-band). Anyone who pretends otherwise is ... as the saying goes.

  • Re: REST for humans and machines

    by Jean-Jacques Dubray,

    Your message is awaiting moderation. Thank you for participating in the discussion.

    Now, I'd like to respond specifically on this:

    Alan Kay described the web as what happens when you allow physicists to design software systems. What Alan suggests is that state should be transferred along with the program that makes sense of it.


    I feel quite strongly about it because, as I mentioned, when Computer Scientists design programming languages they are bound to the "physical" view (Actions and Types) at the expense of the conceptual view (States, as in state machines rather than types, and Relationships).

    I am actually not surprised that Tim Berners-Lee, as a physicist, went well past that myopic view and delivered an architecture where States, Types, Actions and Relationships are well articulated.

    So I find Alan's view of the Web (which I hadn't come across before) quite ironic, especially in the context of the second part: "portable" objects. Can you explain to me how the location of a particular object would solve any semantic problem? Do you see a difference between interacting with an object locally or remotely from a semantic perspective? I mean, really? When RESTafarians like Tilkov, Vinoski or Burke took over Roy's REST, they saw a better version of distributed objects; it was the next-gen CORBA in their eyes.

    The fundamental problem of our industry is that the principles set forth by Alan's generation have never been questioned, ever. The problem is that an Object-Oriented view of the world does not work; it would never have produced the Web. The Web, though, can easily be constrained to be object-oriented.

    Here is an attempt at raising some questions: "Revisiting Liskov's assumptions".

    The reality is that Software Engineering has made virtually no progress since those old days, once you factor in the extended Moore's law (CPU, Storage, Network). Worse, each time something new breaks in, Computer "Scientists" rush to lock all the doors that were opened and make sure we never escape the Abstract Data Type vortex.

    So again, I respectfully suggest a variant on the interpretation of the Architecture of the Web that goes well beyond its protocol, a view that can inform the design of programming languages at large, a view that is much better aligned with the way the world works than Abstract Data Types.

    Abstract Data Types would not have been able to create the Web, not in a million years. Perhaps it's time to understand that and start rebuilding Software Engineering.

  • Re: REST for humans and machines

    by Paul Beckford,

    "Let's stop pretending that magic bullets exist. As Abel pointed out, everything is hard work, Yes some approaches are a bit easier than other, but again, hypermedia does not solve any particular problem without Human Assistance or Shared Understanding (aka out-of-band). Anyone who pretends otherwise is ... as the saying goes."

    Oh. So we all agree :)

    Good.

  • Re: REST for humans and machines

    by Ryan R,

    I think I am with Paul on this one:

    The web doesn't do behaviour well, never has. Behaviour was an afterthought, starting with CGI and Perl :)


    The REST API versioning "debate" is just one of many debates about web architecture that is purely a consequence of how we, as an industry, have chosen to use (abuse?) the web and http.

  • solving different problems

    by Paul Topping,

    It seems like Fielding is talking about a different problem than most folks dealing with versioning of REST interfaces. The idea that my interfaces can stay the same for even a single decade is unimaginable. The semantics of the problem to be solved are expected to change much faster than that.

    This leads me to believe that he's talking about versioning as solving a different problem than the one everyone else is trying to solve. He's trying to maintain an API that doesn't change what it does in the face of changes imposed by the world outside. Virtually everyone else (me included) is talking about changes in what the API does: changes in requirements, new features, fixing design flaws, etc. Fielding suggests that we should be talking about a new API with a new hostname in the face of these kinds of changes.

    I find changing hostname with every API improvement (or set of API improvements) to be impractical. The hostname reflects part of my API's brand. If I want to keep my API's branding, I am forced to choose between api.foov1.com and api.foo.com/v1. The latter seems better to me.

  • Re: solving different problems

    by Mike Glendinning,

    Yes, indeed. As I have written before, REST is fundamentally a system of intermediaries, not endpoints, as hinted by the terms "user *agent*" and "origin server".

    REST strips off a weak set of semantics at the level of intermediaries, largely about presentation (e.g. HTML), and this "uniform interface" can be versioned and evolved at a very slow rate. This has benefits in allowing us to ignore many of the technical issues of distributed systems plumbing and concentrate on the behaviour of the actual application.

    As you say, the full semantics of endpoints such as "resources" and "applications" will generally need to change much more quickly. Although hypermedia can help here, the full REST answer to this is "code on demand". In effect, the client application can be re-programmed by the server on each request.
