
Debate: Agile Transition Success Rates, Help or Harm?

Many in the Agile community have chimed in on a recent popular debate regarding the success rates of Agile transitions. Responding to Niraj Khanna's question, 'Does anybody have any metrics regarding Agile transition success rates?', industry experts such as Kent Beck, Ron Jeffries, Alistair Cockburn, Chet Hendrickson, and many more debated the value and risk of establishing such statistics.

Nearly 250 posts have been registered in response to Niraj Khanna's question on the extremeprogramming Yahoo list:
I have been searching for the success or failure rate of agile transition attempts, but have come up relatively empty. I would define success as:
a) The transition met the goals/improvements the customer or company was looking for. The goals/improvements could be anything from bottom line profit to productivity to fewer defects and shorter project timelines of delivery.
b) If the improvements the agile coach (3rd party or internal) promised to the company came to fruition.
c) If there were any pleasant corollary improvements that weren't expected.

Failure would be simply defined as: the time and money spent on the transition were not worth the result. In the worst cases, the development organization actually lost quality, speed, or even customers due to the transition attempt.
Much of the early discussion centered on whether a search for such metrics might turn up less-than-flattering results. This debate was partly rooted in the suggestion that the agile community may fear what such results would show, as described by Chris Wheeler:
For years, the concept of using metrics to communicate to executives (or to anyone) was resisted by the Agile community ... I don't think we lack the knowledge so much as we are afraid that when we look at the numbers they won't paint the same great story that, well, stories show.
Many quickly advanced beyond this suggestion, agreeing that fear of the numbers is not a problem. Helping to put an objective and positive lid on the "are we afraid of these numbers" discussion, Kent Beck said the following:
Absolute openness about the costs, pains, risks, and rewards of XP is the right thing to do. It's a leadership position and it has a chance of working. I'm afraid that continuing to try to tell an attractive story will lead this community to become a cult of irrelevant technical competence.
Much of the debate then focused on questioning the verifiability of such statistics, with many taking the stance that attempts to gather them would have a low likelihood of being reliable, as described by Steve Gordon:
I am against processing those case studies into numbers, because those numbers would only be valid if the case studies we have are a random sample. They are not a random sample, so those numbers do not represent reality whether or not those numbers serve our interests.
Max Guernsey added warning that statistics, even if accurate, may be harmful due to likely misinterpretation:
Statistics that come from "historical surveys" are really not very effective tools because the filter of interpretation plays too large a role. They are given the same kind of credibility as real science but they are missing one key element: verifiability. There is always an element of the interpreter's fantasy mixed in with the "results"...
Chris Wheeler largely led the counter to these points, asserting that such data could be utilized effectively if used not as a scientific measurement but rather as a helpful tool for executives and managers to get a good "gut feel":
None of this says 'Yes, you will be 100% successful, or 75% successful'. It will say '80% of companies in the auto industry spent between $1M and $1.5M to transition. 50% of Agile programs paid all transition costs after 3 years, 40% after 4 years, and 10% after 5 years. After 5 years, 60% of those companies are still using agile, 20% are using some agile techniques, and 20% have abandoned agile all together.'

Now, imagine I, as a CIO, had that information available to me! I could use that information to support a decision in the context of my own business. Perhaps 3 years is too long to realize returns, or maybe it's about right and I'm willing to live with the risk of going longer than 3 years. Or maybe I'm not willing to live with that risk - perhaps 3 years is way too long to get my 1.5M back, and maybe it's about right. Maybe $1M is about right, but I can't risk seeing my entire investment dissolve after 5 years.
The thread's original poster, Niraj Khanna, added this to support Wheeler's stance:
I think that a broad indicator like a success rate would serve some meaningful purpose, both for sales and for those of us involved in transitions. I like to think of it in parallel to what the Dow Jones composite index is to the stock market. The indicator itself doesn't tell you why the market dropped or rose 200 pts. The purpose it serves is merely informing investors how the broader market did that day. It's up to the individual investor to find out why.
One subject nearly all of the thread's contributors seemed to agree on was the value of collecting real stories of success and failure, asserting that this information, above all else, can give newcomers useful insight into what they might expect when deciding to take on a transition to agile. Perhaps most newsworthy of all was Kent Beck's initiation of a new wiki site dedicated to collecting such stories from the community:
In the spirit of data over speculation and the spirit of crowd-sourcing over heroic action, I've created a wiki to gather information about transitions: Please consider taking ten minutes to record your organization's experience there.
Given the number of people on this list, we could, should we choose to, quickly have hundreds of stories from which to learn. This learning is a prerequisite to being able to say, twenty years from now, that software development is an exemplar of quality and value.
Ron Jeffries agreed, simply stating:
That would be most excellent.
As you might guess, these are only select highlights of the lively discussion still occurring in this XP Yahoo list thread. If you're interested in more, check out the continuing thread for yourself here.