SOA Practitioners Should Define Standards First
Over the years we have had a number of articles and presentations on standards-related activities as they relate to SOA, Web Services and REST. Most agree that standards are important to prevent vendor lock-in and to help ensure interoperability between heterogeneous implementations. However, Steve Jones has recently raised an important issue concerning when standards should be adopted in the SOA lifecycle. As he says, it "stuns" him that developers, business stake-holders etc. do not define the standards they need to use at the very start. As far as technical standards are concerned (e.g., WS-*), Steve believes that there are few good excuses for not defining these at the start. And although the article is applicable to a range of technologies, Steve concentrates on REST initially, where he states:
Saying "but it's REST" and claiming that everything will be dynamic is a cop-out and it really just means you are lazy.
He then goes on to illustrate what this means for developers using REST.
1. Agree how you are going to publish the specifications to the resources, how will you say what a "GET" does and what a "POST" does
2. Create some exemplar "services"/resources with the level of documentation required for people to use them
3. Agree a process around Mocking/Proxying to enable people to test and verify their solutions without waiting for the final solution
4. Agree the test process against the resources and how you will verify that they meet the fixed requirements of the system at that point in time
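The first three steps above could be captured in something as lightweight as a machine-readable resource specification paired with a mock. The sketch below is purely illustrative (the `/orders` resource, its fields, and the `mock_orders` helper are all invented, not anything Steve proposes), but it shows how documenting what a "GET" and a "POST" mean, and mocking against that contract, might look in Python:

```python
# Hypothetical resource specification: states what GET and POST mean
# for an /orders resource, so consumers do not have to guess (steps 1-2).
ORDERS_SPEC = {
    "path": "/orders",
    "GET": {
        "description": "List orders, newest first",
        "returns": "JSON array of order objects",
    },
    "POST": {
        "description": "Create a new order",
        "accepts": {"item": "str", "quantity": "int"},
        "returns": "JSON order object with a server-assigned id",
    },
}

def mock_orders(method, body=None):
    """Mock implementation (step 3): lets consumers test against the
    published contract before the real resource exists."""
    if method == "GET":
        return 200, [{"id": 1, "item": "widget", "quantity": 2}]
    if method == "POST":
        # Reject bodies that do not match the documented schema.
        if not isinstance(body, dict) or set(body) != set(ORDERS_SPEC["POST"]["accepts"]):
            return 400, {"error": "body must match the published spec"}
        return 201, dict(body, id=2)
    return 405, {"error": "method not documented in the spec"}
```

Consumers can then build and test against `mock_orders` while the real resource is still being implemented, which is exactly the verification point step 4 drives at.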
Steve's points concerning REST, illustrated by a personal example ("Some muppet tried to tell me last year that as it was REST that the resource was correct as it was in itself it was the specification of what it should do ...") are similar to those Bill Burke raised when announcing the REST-* effort in 2009:
Much of REST has been described with using the Human Web as an example. By "Human Web", I mean browsers and the humans using these browsers. How machine-based clients interact with a REST architecture is, IMO, very much in its infancy. Enterprise IT is used to using specific sets of middleware technologies to implement their distributed applications. The advent of REST gives us a chance to rethink how traditional Enterprise IT development intersects with middleware.
So the lack of agreed approaches (standards?) in REST may be a reason for the kind of careful consideration and iterative approach that Steve mentions, and one he has discussed in an earlier entry. However, as Steve points out in this entry, if you've chosen Web Services as your implementation approach then there really are very few good reasons to ignore the standards that have evolved over the past decade. WS-I Basic Profile 1.1 and SOAP 1.1 are a must, and the version of WSDL (1.1 or 2.0) will depend upon the interaction pattern for your services:
Now if you want call-backs it's into WSDL 2.0 and there are technical advantages to that, but you can get hit by some really gnarly XML marshalling and header clashes that exist when going between non-WS-I compliant platforms. You could choose to define your own local version of WS-I compliance based around WSDL 2.0 but most of the time you are better off investing in some decent design and simple approaches like having standard matched schemas for certain process elements and passing the calling service name which can then be resolved via a registry to determine the right call-back service.
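Steve's alternative to WSDL 2.0 call-backs, passing the calling service's name and resolving it through a registry, can be sketched very simply. The registry contents and service names below are invented for illustration:

```python
# Hypothetical service registry mapping a caller's service name to the
# endpoint that should receive its asynchronous reply, avoiding the
# need for WSDL 2.0-style call-back declarations.
CALLBACK_REGISTRY = {
    "OrderService": "https://orders.example.com/callback",
    "BillingService": "https://billing.example.com/callback",
}

def resolve_callback(calling_service):
    """Resolve the service name passed with the request to the
    call-back endpoint registered for it."""
    try:
        return CALLBACK_REGISTRY[calling_service]
    except KeyError:
        raise LookupError(f"no call-back endpoint registered for {calling_service!r}")
```

The design point is that the binding between caller and call-back lives in one governed place (the registry) rather than being baked into each interface description.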
As for the various standards within the WS-* stack, there are only a few which Steve believes are core: WS-Security and WS-ReliableMessaging (presumably WS-Addressing is implicit here, since the OASIS standards require it these days). As he points out though, even deciding on these standards is just the first step, because when choosing an implementation it is just as important to understand which version of the standard it supports, if any. Some Web Services stacks that claim compliance are actually based on pre-standardization releases of specifications, or target older standards, possibly making them less useful to users. Finally Steve has something to say to those who complain about the performance of HTTP:
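Checking a stack's claimed support against the exact standard versions you require can be reduced to a simple compatibility table. The stack names and version strings below are made up for the sake of the sketch:

```python
# Hypothetical feature matrix: which spec versions each candidate
# Web Services stack claims to support. Note that "StackB" ships a
# pre-standardization draft of WS-ReliableMessaging.
STACK_SUPPORT = {
    "StackA": {"WS-Security": "1.1", "WS-ReliableMessaging": "1.2"},
    "StackB": {"WS-Security": "1.0", "WS-ReliableMessaging": "draft-2005"},
}

# The versions your project has standardized on.
REQUIRED = {"WS-Security": "1.1", "WS-ReliableMessaging": "1.2"}

def missing_support(stack):
    """Return the required specs (and versions) that the given stack
    does not support at exactly the required version."""
    supported = STACK_SUPPORT.get(stack, {})
    return {spec: ver for spec, ver in REQUIRED.items()
            if supported.get(spec) != ver}
```

A non-empty result flags exactly the gap Steve warns about: a stack that "supports WS-ReliableMessaging" on paper but only a draft release in practice.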
The other piece is to agree on your standard transport mechanism being HTTP. Seriously, it's 2010 and it's about time that people stopped muttering "performance" and proposing an alternative solution of messaging. If you have real performance issues then go tailored and go binary, but 99.999% of the time this would be pointless and you are better off using HTTP/S.
In conclusion, Steve raises some important points: standards are good, but simply paying lip-service to them does not benefit the SOA developer or business stake-holder. Choosing the right standards at the start of the SOA lifecycle is an important first step and one that is still overlooked by many practitioners today, with resultant problems.
Middleware and Programming Models
>> intersects with middleware
This is much needed; however, how would one think that starting from a "solution" (i.e. REST) to solve an arbitrary problem is the optimal approach? This is an extremely complex problem. This kind of approach eventually always lands (even with REST or REST-*) in an extension of monolithic programming models. The first thing people like Bill do is to bind whatever concept du jour to an OO method. In case you have not noticed, that has been tried for 20+ years or so, and has failed to provide any significant solution to information system construction.
The day people start by analyzing the problem and then looking for a solution, we will have made a big step forward.
That being said, defining "standards" should stay as close as possible to the overarching programming model used in "Enterprise IT Developments". Monolithic programming models are all but spent; they have become an unfit scripting capability within Enterprise IT developments.