Evidence of Success of Agile Projects
Early results of a study on the effects of agile development practices show improvements in productivity and quality. Michael Mah, managing partner of QSM Associates, Inc., explored the results in “Columbus Discovering Agile” at ProjectsAtWork:
Early results from the Columbus-area participants show that a typical business system comprising 50,000 lines of code is completed 31% faster than the industry average in the QSM industry database of completed projects (4.4 months vs. 6.4 industry average). Even more remarkable is the defect rate, which is 75% lower than the industry norm.
The results come from the Columbus Agile Productivity Benchmark Project, which analyzes the agile practices of a programming community in Columbus, Ohio. The project is conducted by QSM Associates in tandem with the Central Ohio Agile Association (COHAA) and the Columbus Executive Agile SIG. It provides factual information to participants, helping them answer questions about development project schedules and budgets:
Survey participants are able to see their own results contrasted with the industry at large; and, the Columbus community also compared the regional results in the aggregate with worldwide data.
It is important to point out that not all participants in studies like this, even those committed to agile development, will achieve such strong results, because not all participants will have adopted all of the best practices that lead to success.
The study of agile practices provides insight into the results of outsourcing, and emphasizes the importance of measurement in outsourcing:
Chief among the results is the fact that programming teams that are co-located tend to be more effective than those where expertise is geographically divided. This is one of the facts that have led to the reassessment of outsourcing software development.
(…) Recognizing one of the main reasons that outsourcing can be so risky — the lack of assurance or common expectations on quality and productivity — can help make outsourcing more successful. That is, measurement/benchmarking helps both sides set — and thus agree on — more realistic expectations.
Results of agile projects have also been published by David F. Rico in his paper “The Business Value of Using Agile Project Management for New Products and Services”.
An early study of agile project management showed 10% to 20% improvements in revenues, quality, and cycle time, and 54% reductions in costs. Another early study showed 50% to 60% reductions in time to market and costs, along with 10 times higher development flexibility.
Rico's paper includes data from several studies on the results of agile project management:
Agile project management benefits come from many factors that are too numerous to mention here. The primary drivers are increased productivity and quality. Productivity comes from its streamlined nature and quality from its uncompromising discipline. However, its real power comes from its adaptability to change, collaborative nature, and focus on bottom line business results for the marketplace.
In the book “The Economics of Software Quality”, Capers Jones and Olivier Bonsignour investigated the effects of agile practices on software quality, covering:
- Agile Embedded Users: Having user representatives in teams
- Scrum Sessions: Sprints, stand-ups and Test Driven Development
- Agile Testing: Sprints with black box test cases
The goal of their book is:
(…) To quantify the factors that influence software quality and provide readers with enough information for them to predict and measure quality levels of their projects and applications.
The article “Agile Practices with the Highest Return on Investment” on InfoQ gives several examples to calculate the ROI from agile, and discusses the benefits of agile practices.
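The InfoQ article's worked examples are not reproduced here, but the underlying arithmetic is the standard ROI formula. As a minimal sketch with entirely hypothetical figures (the practice named and the dollar amounts are assumptions, not data from the article):

```python
# Hypothetical figures for illustration only; not taken from the InfoQ article.
def roi(benefits: float, costs: float) -> float:
    """Return on investment as a fraction: (benefits - costs) / costs."""
    return (benefits - costs) / costs

# Suppose adopting a practice such as continuous integration costs $20,000
# (tooling, training) and saves $50,000 in reduced rework over a year
# (assumed numbers).
print(f"ROI: {roi(50_000, 20_000):.0%}")  # prints "ROI: 150%"
```

A positive ROI alone does not settle which practice to adopt first; the article's point is that comparing ROI across practices helps prioritize them.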
Studies like “Columbus Discovering Agile” help to objectively benchmark performance, and to decide on goals and directions:
The study […] is providing the Columbus Agile community with valuable information on factual patterns on productivity and quality, instead of just anecdotal claims. Moreover, the data helps answer questions about addressing development projects schedules and budgets.
To read the full article at Projects at Work, a registration is required (free of charge).
Great, someone is finally looking at this :-)...
Additionally, the Columbus study is still ongoing, so research findings are preliminary. Though what they are actually stating isn't as obvious as this article would have us believe (i.e. is this a comparison of Columbus agile against industry agile, Columbus agile against traditional methods, or what?).
A few things stand out from David F. Rico's paper, including that it isn't a quality paper in itself. It draws on three other papers, two of which were written by the author himself. There is no direct description of the research method, which methodologies were chosen as 'controls' from traditional methods (at the same level of granularity), what confounding factors existed, or what was done to eliminate their effects.
For example, if we are comparing agile with 'waterfall', PRINCE2 or no project management methods at all, then this may have a different result versus managing iterative methods such as RUP (even if you use the same PM methods). This would then rope successfully managed RUP processes in with the rubbish in the 7,500. So of course agile would show a better ROI and NPV as the majority of projects with ad hoc methods or no methods will perform badly and will drag the rest of the traditional approaches with it.
Additionally, parts of this study were comparisons of 23 agile projects versus 7,500 traditional projects. 23 agile projects are not enough on their own to ensure statistical significance within their result set, so the p-value (the probability of observing a result at least this extreme if the null hypothesis were true) is likely to be high, reducing the significance of this research.
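The sample-size concern can be made concrete. A minimal sketch, using assumed productivity spreads rather than the study's actual data, of how the standard error of a sample mean shrinks with sample size:

```python
import math

def standard_error(sample_std: float, n: int) -> float:
    """Standard error of the mean: s / sqrt(n)."""
    return sample_std / math.sqrt(n)

# Assume both groups show the same spread in productivity scores (std = 10.0;
# an invented figure, not from the benchmark).
se_agile = standard_error(10.0, 23)          # ~2.09 for the 23 agile projects
se_traditional = standard_error(10.0, 7500)  # ~0.12 for the 7,500 traditional projects
print(f"agile SE: {se_agile:.2f}, traditional SE: {se_traditional:.2f}")
```

With a standard error roughly 18 times larger on the agile side, the confidence interval around the agile mean dominates any comparison, which is the commenter's point about significance.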
Also, the small agile sample relative to the much larger combined sample means that this study can be subject to problems caused by Simpson's paradox.
I am glad there is a body wanting to look more into this, but I think this article significantly misstates the purpose of this particular research, which I don't think is statistically significant anyway.
Craig Motlin Sep 01, 2014