Microsoft Research has released a summary of the results of empirical studies examining software engineering myths. The work, conducted by Nachi Nagappan, measures the impact on quality that common software engineering practices actually have. The analysis reveals:
- More code coverage in testing doesn't necessarily correlate with fewer post-release fixes; the researchers cite many other factors that come into play.
- TDD improves quality but takes longer (pdf): "What the research team found was that the TDD teams produced code that was 60 to 90 percent better in terms of defect density than non-TDD teams. They also discovered that TDD teams took longer to complete their projects—15 to 35 percent longer."
- The use of assertions and code verification decreases bugs (see the sketch after this list). Further, "Software engineers who were able to make productive use of assertions in their code base tended to be well-trained and experienced, a factor that contributed to the end results."
- Organizational structure has a profound impact on quality: "Organizational metrics, which are not related to the code, can predict software failure-proneness with a precision and recall of 85 percent." (A short sketch below shows how precision and recall are computed.)
- Whether a team is distributed or collocated has a negligible impact on quality.
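To make the assertions bullet concrete, here is a minimal sketch of precondition and postcondition checks; the function and its contract are invented for illustration and are not taken from the study.

```python
# Minimal sketch of "productive use of assertions"; the function and its
# contract are invented for illustration, not taken from the MSR study.

def allocate_buffer(requested_size: int, max_size: int) -> bytearray:
    # Precondition: callers must ask for a sane, bounded size.
    assert requested_size > 0, "requested_size must be positive"
    assert requested_size <= max_size, "requested_size exceeds max_size"

    buffer = bytearray(requested_size)

    # Postcondition: the buffer handed back matches what was requested.
    assert len(buffer) == requested_size
    return buffer


if __name__ == "__main__":
    buf = allocate_buffer(16, max_size=1024)
    print(len(buf))  # prints 16
```

The point is that a failed assertion surfaces a contract violation at the point of the mistake rather than as a post-release defect, which is one plausible reading of the finding.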
These research findings are now being used by Microsoft development groups, including for risk analysis and bug triaging on major projects such as Windows Vista SP2.
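For readers unfamiliar with the terms in the organizational-structure bullet, the following toy sketch shows how precision and recall are computed for a failure-proneness predictor. The numbers are invented for illustration and do not come from the paper.

```python
# Toy numbers, not from the paper, to show what "precision and recall of
# 85 percent" means for a failure-proneness predictor.

def precision_recall(true_positives, false_positives, false_negatives):
    # Precision: of the modules flagged as failure-prone, how many really were.
    precision = true_positives / (true_positives + false_positives)
    # Recall: of the truly failure-prone modules, how many were flagged.
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall


if __name__ == "__main__":
    # Hypothetical run of the predictor over a set of modules.
    p, r = precision_recall(true_positives=85, false_positives=15, false_negatives=15)
    print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.85, recall=0.85
```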
Community comments
No citation about the code coverage section
by Simon Kirk,
I'm wondering why there's no citation on the assertion that "More code coverage in testing doesn't necessarily correlate with a decrease in the number of post-release fixes required, citing many other factors that come into play"?
Re: No citation about the code coverage section
by Gavin Terrill,
Yes, not sure about that. I did find this link on the Microsoft Research site - it looks like something might be available through IEEE.
Re: No citation about the code coverage section
by Jim Leonardo,
I believe you'll find that in the links on the right of the MS article. This was an older paper they had put out (about 1 1/2 yrs ago).
research.microsoft.com/en-us/projects/esm/nagap...
15 - 35% longer for TDD is misleading
by Curt Hibbs,
I wondered how TDD could report 60 to 90% fewer defects but still take 15 to 35% longer, since fewer defects means less rework. It didn't add up, logically.
So I followed the links to the original paper and found my answer. The 15 - 35% is only the original development time, and does not include the increased maintenance cost of the non-TDD approach. To quote the paper:
Re: 15 - 35% longer for TDD is misleading
by Curt Hibbs,
I guess I'd like to state this explicitly:
It is incorrect to assert that TDD took 15 to 35% longer. You cannot ignore the rework caused by defects. It's too bad they were not able to quantify this, because I'd be willing to bet that this was more than just "offset by reduced maintenance costs" -- that, in fact, there would have been a net increase in productivity for TDD.
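A back-of-the-envelope sketch of the argument in the two comments above: the numbers are purely invented (the study does not quantify rework), but they show how a 25% longer initial phase can still produce lower total effort once defect rework is counted.

```python
# Purely invented numbers (the study does not quantify rework) showing how a
# longer initial phase can still win once defect rework is counted.

def total_effort(initial_dev, defects, rework_per_defect):
    # Total effort = initial development + effort spent fixing post-release defects.
    return initial_dev + defects * rework_per_defect


if __name__ == "__main__":
    # Baseline: 100 effort-units of development, 40 post-release defects (assumed).
    non_tdd = total_effort(initial_dev=100, defects=40, rework_per_defect=1.0)

    # TDD: assume 25% more up-front effort and ~75% fewer defects, within the
    # 60-90% defect-density reduction the study reports.
    tdd = total_effort(initial_dev=125, defects=10, rework_per_defect=1.0)

    print(non_tdd, tdd)  # 140.0 vs 135.0 -- TDD comes out ahead in this scenario
```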
Re: 15 - 35% longer for TDD is misleading
by Sebastian Kübeck,
Quoting Kent Beck: “If I don’t need to make it work, I can go a lot faster.”
Re: 15 - 35% longer for TDD is misleading
by Jordi Bieger,
What is the difference between "more reworking needs to be done" and "increased maintenance costs"? Anyway, I was wondering the exact opposite of you:
What would have happened to the quality if the non-TDD people had taken that extra 15-35% of time to increase the quality of their code? This would be really interesting to see, because as the study was done, the conclusion that spending relatively more time on (initial) development leads to better quality is just as valid as the conclusion that TDD does. It seems intuitive that taking that much extra time for refactoring (or simply not hurrying) would (also) improve quality.
Re: 15 - 35% longer for TDD is misleading
by R Smith,
The most likely outcome of simply allocating 15-35% more time to development will not be an increase in quality, but an increase in features. To increase quality, it is the process that needs to be changed, not just the time.
Developer testing and peer review are two of the most effective practices to increase quality. Both of these can provide significant quality gains with some increase in initial development time.
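As a minimal illustration of the developer testing R Smith mentions, here is an invented example (not from the study) of a unit test that, in a test-first workflow, would be written before the function it exercises.

```python
# Invented example, not from the study: in a test-first workflow the tests
# below are written before shipping_cost is implemented.
import unittest


def shipping_cost(weight_kg: float) -> float:
    # Flat rate up to 1 kg, then a per-kilogram surcharge (invented pricing rule).
    if weight_kg <= 1.0:
        return 5.0
    return 5.0 + (weight_kg - 1.0) * 2.0


class ShippingCostTest(unittest.TestCase):
    def test_flat_rate_up_to_one_kg(self):
        self.assertEqual(shipping_cost(0.5), 5.0)

    def test_surcharge_above_one_kg(self):
        self.assertEqual(shipping_cost(2.0), 7.0)


if __name__ == "__main__":
    unittest.main()
```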
Re: 15 - 35% longer for TDD is misleading
by Kevin E. Schlabach,
This makes sense coming from Microsoft... their products are clearly delivered before the bugs are found and removed. They never account for that in their "total lifecycle" and it appears their whole project is just the development time.
Thanks for calling this out because I was definitely thinking it!
Re: 15 - 35% longer for TDD is misleading
by William Martinez,
Wait, not that simple.
1. That increase is "subjectively estimated by management", meaning it is a guess.
2. It is an increase in creating the feature.
3. Does this mean the TDD test code is counted in the KLOC?
4. If 3, then the density should of course be lower!
5. Does this mean non-TDD means no unit-test code at all? Because no TDD may mean writing the unit tests later, not up front.
6. If 5, then density is also affected. Meaning TDD is either slowing down production without increasing the KLOC, or increasing the KLOC and slowing production too.
I guess the values are too up in the air to base a decision on.
William Martinez Pomares
Some facts about TDD and the research
by Luca Minudel,
TDD is supposed to improve code quality (code that is easy to understand, change and evolve; higher team velocity; easier effort estimation).
Functional tests, pair programming and code reviews are better at decreasing the number of bugs. This is a well-known fact and TDD enthusiasts know it. So this research, if I understand it right, confirms this.
The research also states that with TDD it takes longer (15 to 35 percent longer) to get to the first release, and that with TDD you reduce post-release maintenance costs significantly. When you consider the first release together with post-release maintenance, TDD takes less time and saves costs significantly. This is also a well-known fact, and the research confirms it.
The research surfaces a decision that managers have to make: where should they take the hit?
Imho, for a company that uses its own software the decision is clear: use TDD. A software house, on the other hand, can put the maintenance costs on customers (e.g. with maintenance contracts) and decide to have a cheaper, lower-quality first release: it can skip TDD.
Read the linked "Exploding Software-Engineering Myths" post and the linked pdf docs if you want to double-check this.
Re: 15 - 35% longer for TDD is misleading
by William Cherry,
I've always loved this quote!
Impact of org. structure
by Baljeet Sandhu,
From the paper it's not very clear (to me at least) what the recommendations are. I am still grappling with the formula they provide in section 5.3. I agree that org. structure impacts software quality, but the question then is: what do the authors recommend as a good org. structure conducive to software dev.?
Baljeet
Re: 15 - 35% longer for TDD is misleading
by J. B. Rainsberger,
I raise you Jerry Weinberg, paraphrasing: Yes, your program runs faster than mine, but mine computes the right answer.