SOA & The Tarpit of Irrelevancy


In his new three-part series of posts, ThoughtWorks architect Neal Ford blames vendors' marketing for the way SOA is perceived today.

The first installment is about how the need for SOA arose: tactics vs. strategy.

According to Neal, when a company is small, its IT needs are also small. As the business needs more software, it hands requirements to the developers, who write the code. There is a limited number of applications, which do not require a comprehensive strategy for application interoperability - there is no time for that anyway. It is all about delivering software in order to stay in business. One day the company becomes an enterprise and realizes that its IT systems are a mess: there is a lot of code duplication, significant overlap of data in the databases, etc. Neal argues that this typically happens:

... because of 2 reasons: first, you took the path of least resistance when you were a company (before you became an Enterprise) because, if you had taken the time to build a comprehensive strategy, you'd have never survived as a company. Second, and more important, what is strategic to the business is always tactical to IT.

The issue is that IT can never move as fast as the business, which means that IT always has to respond tactically to the decisions and initiatives brought forth from the business.

No matter how much effort you put into a comprehensive, beautiful, well-designed enterprise architecture, it'll be blown out of the water the first time the business makes a decision unlike the ones that came before. The myth of SOA sold by the big vendors is that you can create this massively strategic cathedral of enterprise architecture, but it always falls down in the real world because the COO (and CEO) can override the CIO (and his sidekick, the CTO). If you can convince your organization to allow IT to set the strategy for what capabilities the business will have long-term, you should. However, your more agile competitors are going to eat your lunch while you build your cathedral.

The second installment discusses standards-based vs. standardized approaches to SOA.

Neal notes that one of the main marketing mantras touted by software vendors is the "standards-based" implementation of ESBs, which are presented as a solution to all SOA problems. The issue here is that:

ESBs are standards-based but not standardized. This distinction is important. All the existing ESBs use standards in every nook and cranny, but it's all held together by highly proprietary glue. The glue shows up in the administration tools, the way their BPEL designer works (along with their custom BPEL meta-data), how you configure the thing, how you handle routing, etc. The list goes on and on. These vendors will never allow the kind of standardization imposed by Sun in J2EE. The last thing the vendors want is to see their (crazy money making) babies turned into commodity software again. They'll make noise about creating a true standard, but it won't happen. They want to be more like the database vendors, not the application server vendors.

And that is true not only for the commercial but also for the open source offerings, which sell consultancy, training and support around their products (for example, JBoss, Mule, Fuse, etc.). All of them have a motivation to keep users locked into their proprietary glue and tooling.
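As a concrete (and purely illustrative) sketch of what such glue can look like, consider a simple content-based routing rule written in Apache Camel's Java DSL, the routing engine behind Fuse. The endpoints, XPath condition and route below are assumptions made for illustration, not examples from Neal's posts; an equivalent route in another ESB would have to be rebuilt in that vendor's own designer or configuration format.

    import org.apache.camel.builder.RouteBuilder;

    public class OrderRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // The endpoints speak standard protocols (file, JMS), but the
            // routing DSL itself belongs to this particular ESB - the
            // "proprietary glue" that does not port to another product.
            from("file:inbox?noop=true")
                .choice()
                    .when(xpath("/order[@priority='high']"))
                        .to("jms:queue:urgentOrders")
                    .otherwise()
                        .to("jms:queue:orders");
        }
    }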

Finally, the third installment discusses SOA Tools & Anti-Behavior.

Neal also notes the danger of getting excited over flashy, simple SOA demos, often sold by vendors as a solution to all SOA problems:

... the favorite demo-ware application is their [vendor's] BPEL (Business Process Execution Language) designer. This designer allows you to wire together services by drawing lines between boxes. The lines can include transformations and other sexiness. And it demos great. "Look, just draw a couple of lines here and here, click on the Run button and voila! Instant SOA".

Management often gets excited by these demos and commands development to start implementing SOA using the new tool of choice. The issue is that these tools work great on really small "toy" demos, where it is easy to see all the connections between things. But as things get complicated, they start suffering from the hairball effect: all the lines run together, and one can no longer create a diagram that makes sense to anyone. Neal attributes this to the fact that graphical languages (with automatic generation) are, in general, not a good solution for building complex systems, and especially not for business process definitions. He points out the following issues with such languages:

  • reuse: you can't really reuse portions of your workflow because there is no method or subroutine functionality (you might get lucky with a sub-workflow). Mostly, people achieve "reuse" by copy and pasting, which you never do in code.
  • refactoring: no refactoring, making it harder to identify common workflow chunks for reuse. When you don't have refactoring, you don't watch for opportunities for refactoring as much.
  • limited programmability: you don't get if statements and for loops, you get whatever this particular BPEL designer supports. You get flow-chartly looking stand-ins for real decision statements, but they are much more brittle than the facilities offered in modern languages.
  • testing: you can't write unit, functional, or integration tests for these workflows. The only real testing option is user acceptance, meaning that the entire universe must be up and running. If you have no unit testing, you also don't have mock objects or other testing techniques common in code.
  • hard to diff: let's say you fought the beast and got a non-trivial workflow up and running, and everything is great. In six months, you change it in non-trivial ways, and all is good. Then it comes time to see what's different. BPEL tools don't have diff facilities, so you can either visually diff the diagrams (yuck) or diff two 10,000-line XML documents (double yuck). BPEL relies on either heavy-weight diagramming tools or raw XML, and nothing in between.

Neal considers these tools to be "doodleware tools" - good for creating pretty pictures but collapsing at scale. His solution to this problem is: "Keep behavior in code".
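To make the contrast concrete, here is a minimal sketch of what "behavior in code" might look like for a hypothetical order-processing workflow. The service interfaces, names and logic are invented for illustration and do not come from Neal's posts; the point is simply that an ordinary method gives you real control flow, reuse, refactoring, diffs and unit-style testing with stubs - none of which requires the "entire universe" to be running.

    // OrderWorkflowExample.java - illustrative only; the service names and
    // logic are hypothetical, not taken from Neal's posts.
    public class OrderWorkflowExample {

        interface CreditService {
            boolean reserveFunds(String customerId, double amount);
        }

        interface ShippingService {
            String schedule(String orderId);
        }

        static class OrderWorkflow {
            private final CreditService credit;
            private final ShippingService shipping;

            OrderWorkflow(CreditService credit, ShippingService shipping) {
                this.credit = credit;
                this.shipping = shipping;
            }

            // Real if/else control flow, reusable as an ordinary method,
            // refactorable and diff-able like any other source file.
            String process(String customerId, String orderId, double amount) {
                if (!credit.reserveFunds(customerId, amount)) {
                    return "REJECTED";
                }
                return shipping.schedule(orderId);
            }
        }

        public static void main(String[] args) {
            // Unit-style check with hand-rolled stubs - no BPEL engine or
            // running ESB is needed to exercise the workflow.
            OrderWorkflow wf = new OrderWorkflow(
                    (customerId, amount) -> amount < 1000,  // stub credit check
                    orderId -> "TRACK-" + orderId);         // stub shipping call
            System.out.println(wf.process("c1", "o1", 250.0));   // TRACK-o1
            System.out.println(wf.process("c1", "o2", 5000.0));  // REJECTED
        }
    }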

This series of posts provides an interesting point of view on the state of SOA/BPM software and its vendors. Many people argue that large vendors are out there to sell their products and, as a result, often exaggerate their capabilities. It is also often claimed that SOA technologies are positioned as a solution to many existing business problems. On the other hand, many practitioners believe that higher-level (domain-specific) tools and languages are essential for improving developers' productivity. Some view the whole history of software engineering as a drive toward designing software systems that are easier to use (virtually no one develops in assembly any more), and believe that this is, for the most part, the merit of large software vendors' products. What do you think? Is Neal right in his critique, or is it simply the responsibility of software/SOA architects to decide on the proper approaches, tooling and middleware for their implementations? Is blaming software vendors (and/or management) for one's bad decisions simply a poor excuse?
