Harvesting Service Orientation
To get optimal results, it's worthwhile to look back every now and then and ask: "What did we learn?" In our days of Red Queen talk, the paradox is that effective learning, and thus taking some time every now and then to harvest from past seeding, is more necessary than ever for on-time, quality results. Today's conundrum is the same as it has long been; today's answer is: services. So I'll point my arrows at services.
Models and systems
Models are simplified representations of everyday reality. Systems are a special kind of model, in which the universe, or a part of it, is modeled in the form of elements and relations between those elements. Within systems we can define subsystems, subsets of the elements sharing certain characteristics, and aspect-systems, subsets of the relations sharing certain characteristics. On top of that, the whole thing is recursive: a set of systems is a system in its own right. Founded in the 1940s, systems theory offers a vast amount of conceptual groundwork, ready to aid twenty-first-century problem analysts and solvers in their quest to gain insight into the complexities under investigation [1, 2, 3].
The reason we construct models and systems is to get a better understanding of certain aspects and parts of the universe around us or within us. That understanding is gained at the expense of insight into, and understanding of, the parts and aspects we left out, the ones we abstracted away from. Choosing the scope of our model follows from our objectives, from what we want to learn or to communicate: the making of models is a purposeful undertaking. Mathematical models differ in an important respect from other models: mathematics creates, by defining axioms, a universe of its own. Within a mathematical model things are perfectly clear, even when doing fuzzy logic. In the set of natural numbers, every number is either odd or even, and there is not one even number that is more or less even than any other even number. How different from the universe as we know it!
Everything is vague
In the real world, everything is vague to a degree you do not realize till you have tried to make it precise, and everything precise is so remote from everything that we normally think, that you cannot for a moment suppose that is what we really mean when we say what we think [4].
Being so familiar with the mathematical abstractions we are used to working with, we tend to forget this. And when we forget it, we mistake our model for the real world. Mathematics is great. Thanks to mathematics, lots of real-world problems get solved. But every engineer, enthusiastically using mathematical models of the real world, knows the model is just that: a model. Because the real thing has its equivocalities, its aberrations from what, in terms of the model, is normal. It is this two-facedness of models that led one of my teachers to the maxim "Don't believe the model. Don't ignore the model." Don't mistake the model for the real thing; be aware that things are left out of the model: surprise ahead. Having said that: use the model in every way you can. It will help you survive in a world that's too complex for a mortal brain.
Models adding to complexity?
Models, once described, become a part of the universe. That's somewhat counterintuitive and unwelcome: starting out to make a model with the intention of getting a grip on complexity, we end up with a universe that is even more complex. There are three things to be said about this. First: it all depends on your viewpoint. From an overall viewpoint it's true that complexity is raised. But from a local viewpoint a well-formed model will surely help in getting a grip on complexity. It's an example of the law that there is no such thing as a free lunch: the cost of the local gain is a global loss, just as it often is. Second: adding the model to the universe brings forth ambiguity. For the car-object instantiated from the car-class certainly differs from the car coming off the GM production line. So we should carefully choose our words when both the real world and the model are in the scope of the conversation we're having. Third: professionals have a responsibility to the company they're working for, and to society as a whole: they should not add to complexity without a very good reason.
Services
Services, service orientation, service-oriented architectures. Something old, something new, something borrowed, something blue. Old wine or real innovation? In my opinion: both. There is a lot to be gained if we stay away from the either-or fallacy in the services area. Instead of searching for arguments that it's CORBA revisited or what have you, think broader. Because there is a lot of old, well-understood and practically applied theory that can help us harvest the profits of the innovation part of the services world. Just for the sake of this article, let's build a model to get a firmer grip on the notion "service". We'll undo the model at the end of the article, so there will be no harm done to overall complexity.
The model I propose consists of four components: a memory, a processor, a connection between the memory and the processor, and an actuator. The memory represents all the internal and machine-readable memory in the universe, and the same goes, mutatis mutandis, for the processor and the connection, the last one thus including things like the internet. The actuator stands for all the machines and human beings in the world that, for the sake of simplicity, can interact directly with the memory, i.e. they can read from and write to memory. Since the combination of processor, connection and memory is itself a machine, the model can be used to describe recursive processes.
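To make the four components a bit more tangible, here is a minimal sketch in Python. All the class and method names are my own illustrative assumptions; the article defines the model only informally.

```python
class Memory:
    """All machine-readable memory in the universe, as one big store."""
    def __init__(self):
        self.cells = {}

    def read(self, address):
        return self.cells.get(address)

    def write(self, address, value):
        self.cells[address] = value


class Connection:
    """The link between processor and memory, networks included."""
    def __init__(self, memory):
        self.memory = memory

    def fetch(self, address):
        return self.memory.read(address)

    def store(self, address, value):
        self.memory.write(address, value)


class Processor:
    """Executes code against memory, via the connection."""
    def __init__(self, connection):
        self.connection = connection

    def execute(self, func, in_addr, out_addr):
        # Read input through the connection, run the code,
        # write the result back through the connection.
        result = func(self.connection.fetch(in_addr))
        self.connection.store(out_addr, result)


class Actuator:
    """A human or machine that reads from and writes to memory directly."""
    def __init__(self, memory):
        self.memory = memory

    def put(self, address, value):
        self.memory.write(address, value)

    def get(self, address):
        return self.memory.read(address)


# The actuator writes a request, the processor executes, and the
# actuator reads the result back.
memory = Memory()
actuator = Actuator(memory)
processor = Processor(Connection(memory))

actuator.put("request", 21)
processor.execute(lambda x: x * 2, "request", "response")
print(actuator.get("response"))  # → 42
```

Note that the combination of `Processor`, `Connection` and `Memory` could itself be wrapped as an actuator, which is exactly the recursion the model allows.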
Gaining some insight from our model
I leave it to the reader to do some detail design on this model. You might, for example, position executable programs in memory or in the processor, with certain consequences for the description of the execution process. But even in this loosely described model, it's clear that a call to a subroutine in the 1950s assembler era is identical to a PERFORM in time-enduring COBOL and to a web service in the 2008 Web 2.0+ era.
What is happening? What is the process? Optionally, some data is gathered. Then there is a request to a different part of the universe to do something, often with the gathered data supplied as parameters. This is followed, either directly or after some time has passed, by execution of code, as part of which a request might be sent to yet another part of the universe, and so on. When execution is done, a message might be sent to the requester that the work is done, and this might be accompanied by some data.
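That request/execute/respond shape can be sketched as ordinary function calls; the names and data below are purely illustrative assumptions of mine. Whether the "call" is a branch to subroutine, a COBOL PERFORM, or an HTTP request, the shape is the same: gather data, request, execute, optionally call onward, report back.

```python
def format_address(parts):
    """A 'service' further down the chain, called by another service."""
    return ", ".join(parts)

def customer_lookup(customer_id, directory):
    """A 'service': executes code and, in turn, requests something
    from yet another part of the universe."""
    record = directory[customer_id]                    # execute
    address = format_address(record["address_parts"])  # call onward
    return {"status": "done", "address": address}      # report back

# The requester gathers some data, sends the request with that data
# as parameters, and receives a completion message plus some data.
directory = {42: {"address_parts": ["1 Main St", "Springfield"]}}
reply = customer_lookup(42, directory)
print(reply["status"])   # → done
print(reply["address"])  # → 1 Main St, Springfield
```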
So what's up? My first observation is that in all this, you can't point your finger at the service. In all the currents flowing in our simple model, it's hard to point out what that service really is. To a large degree it's the name by which we know it. For the rest there are process steps: calling, executing code, changing the values stored in certain memory locations, taking action when a value in a certain memory location reaches or exceeds a certain value, and so on. Might we conclude that a service is a reification, that by naming it we have given thing-like existence to an abstraction? Or that a service is an emergent property, an epiphenomenon, coming into existence because of all these activities happening? I don't think playing with words has much value, so I guess we should be happy with the ability to name a service, no further questions asked.
Now what has changed since the branch-to-subroutine era? Since Darwin we know there are no essentials in the real world, but IT systems come so close to their mathematical models that within certain constraints it's fair to ask: what essential differences are there in contemporary applications, compared to their legacy counterparts, that make the services notion worthwhile? The answer to this question is simple: complexity. In the assembler days a cross-reference listing was about all you needed to have an overview of your, well, services and where and by whom they were used. In the 2008 web service version of the story, you really need models and systems to guide the development and use of services. Services are complex on all levels. If you need examples of devils hiding in details, go investigate service orientation being put to practical use. Making services available in a form the business can use to create business processes is complex: it requires service directories that state the goal of each service in business speak. At the technical level there is a lot of complexity following from all the different platform and network technologies that have to work together seamlessly in a services environment. At the design and build level these business-oriented and technology-oriented elements are to be woven together. Once deployed, service levels must be met. And all this in a world that just loves a daily dose of change in all those areas. So building to last is no good; services have to be built to change. And I did not even dare to mention information security!
This would all be quite irrelevant if services were of marginal interest. In fact, it's just the opposite: they bring with them the promise of agility, so your business loves them!
So, in order to move fast, we have to stand on giants' shoulders. The notions of coupling and cohesion, structured analysis and design, encapsulation, the use of patterns: it's all there for the taking [5, 6, 7].
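As a small reminder of what those classics buy us, here is an illustrative sketch (my own example, not taken from the cited works): high cohesion means everything behind an interface belongs together; low coupling means callers depend only on that narrow interface, never on the internals.

```python
class AccountStore:
    """High cohesion: everything about keeping balances lives here."""
    def __init__(self):
        self._balances = {}  # encapsulated: callers never touch this

    def deposit(self, account, amount):
        self._balances[account] = self._balances.get(account, 0) + amount

    def balance(self, account):
        return self._balances.get(account, 0)

def pay_salary(store, account, amount):
    """Low coupling: depends only on the narrow deposit() interface,
    not on how balances happen to be stored."""
    store.deposit(account, amount)

store = AccountStore()
pay_salary(store, "alice", 3000)
print(store.balance("alice"))  # → 3000
```

Swap the dict for a database and `pay_salary` does not change a character; that is the old theory paying off in a services world.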
Words that are used in combination with service orientation are, among others: architecture, maturity model, roadmap and governance. Is there any complexity reduction to be done in this area? I think there is.
Maturity models are about best practices. That means they are about real, existing processes which, under certain conditions, yield high-quality results. Maturity models can be used to guide organizations to higher performance levels by directing their attention to process areas that have proven to be of importance. A maturity model used in this way acts as a roadmap: it guides you on the trip from here to there. But another kind of roadmap is out there, in which the destination is not known in the sense that we can point our finger at a really existing destination. Instead, the destination is a situation that is perceivable and declared desirable. The next step in a situation like that is a planning process to fill the gap between the situation at hand and Nirvana by means of intermediate steps. Nothing wrong with this kind of migration, on the contrary. But what should not be done is calling the resulting model a maturity model. When you do that, you're stretching the meaning of the notion maturity model to an extent that makes it meaningless. When it is done this way, there's only global complexity added, without a countervailing local reduction in complexity.
Is service orientation something completely different? From certain perspectives, certainly. Does it need its own maturity model, governance and so forth, just based on its being different? That depends. The development of a comprehensive maturity model for something as broad as service orientation, potentially ranging in scope from business processes to XML and semantic interfaces, is not a trivial task. If possible, we should use elements readily available, to get such a maturity model ready in time for planned use. Applications as far apart as Photoshop and Eclipse have the same mechanism for doing just that: the plug-in. Especially in a service oriented context, evaluating existing models for reuse should be the first step. When taking the plug-in perspective, lots of models like CMMI, ITIL and COBIT can contribute to service orientation implementation speed and business success.
So, once more, "keep it simple" is the maxim. Don't add models unless not adding them would be worse, adhering to Einstein's plea to express everything as simply as possible, but no simpler than that. Stick to the established meaning of words: maturity models are about best practices, roadmaps are used to get to known destinations in Rand McNally fashion, and migration plans are for reaching Nirvana. And last but not least: reuse, reuse, reuse.
Last word on words: ABC
There is an infamous trio known as ABC: academics, business and consultants. The academics forge new theories. The consultants translate these, in their own words, into palatable morsels for the business. And the business pays the consultants, and gives the academics the opportunity to wander around in their organizations, gathering material to base new theories on. It is clear that, in order to earn some money in this area, you should not use words with a clear, given meaning. Instead, you play the redefinition game. In marketing speak: you differentiate. But with the same effect: lots of word games, less work done. Could a process like this one explain why we see so many new models, and so few initiatives to make models fit for change, or even fit for use, and to keep them up to date?
End of file
Keep in mind that models are there to abstract from aspects or parts that are of less concern to you and your objectives. The models that are made to communicate a consultant's story are not necessarily helping in solving your problem. As Mark Anthony Luhrmann said: "Be careful whose advice you buy, but, be patient with those who supply it. Advice is a form of nostalgia, dispensing it is a way of fishing the past from the disposal, wiping it off, painting over the ugly parts and recycling it for more than it’s worth."
If this article were a service, the request might be: "How can I harvest service orientation", and the data sent back would read: "Try to make clear what's new. Invest in mastering the new. Don't forget that there is a lot you, and others before you, have learnt about things more or less like it. Reuse that. The new parts are complex. Reuse is hard. Let the force be with you."
[1] Kenneth E. Boulding, 1956. General Systems Theory: The Skeleton of Science. In: Management Science, 2, 3 (Apr. 1956), pp. 197-208; reprinted in General Systems, Yearbook of the Society for General Systems Research, vol. 1, 1956.
[2] W. Ross Ashby, 1956. An Introduction to Cybernetics. London: Chapman & Hall.
[3] Ludwig von Bertalanffy, 1968. General System Theory: Foundations, Development, Applications. New York: George Braziller; revised edition 1976, ISBN-13: 978-0807604533.
[4] Bertrand Russell, 1918. Lecture: The Philosophy of Logical Atomism. In: Bertrand Russell, David Pears (ed.), 1985. The Philosophy of Logical Atomism. Open Court Classics. ISBN-13: 978-0875484433.
[5] Edward Yourdon and Larry L. Constantine, 1979. Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design. Prentice Hall. ISBN-13: 978-0138544713; facsimile edition 1986.
[6] Glenford Myers, 1979. Reliable Software Through Composite Design. Van Nostrand Reinhold. ISBN-13: 978-0442256203.
[7] Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides (a.k.a. the Gang of Four), 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley. ISBN-13: 978-0201633610.
The concept of services is important to business and to society in general. It marks a step toward the boundless flow of information, the vision of The Open Group, and as such also toward the universal access to all human knowledge envisioned by Brewster Kahle's Internet Archive. Both visions are inspiring!