
Agile and the Crutches of False Confidence



False confidence is often grounded in wishful thinking: a state in which projected reality and actual reality may differ considerably, yet which, for a limited time, gives a feeling of having everything under control. Agile development offers many situations in which a team holds onto the crutch of false confidence, only to fall later.

Mike Griffiths quoted a session by Malcolm Gladwell that related the level of false confidence to the amount of information presented. In the example Gladwell cited, psychiatrists were given information about patients. With one paragraph of information, their confidence level was about 25% and the accuracy of their assessments was also about 25%. When they were presented with an increasing volume of data, eventually amounting to 10 pages of information, their accuracy rose only marginally, to 29%, yet their confidence climbed to 90%.

Matt suggested that some companies are in the business of manufacturing confidence: they use specs, documents, and process as things to lean on. These crutches give them the false confidence that nothing will go terribly wrong.

The problem is when you build confidence with documents and all that, you are nailing yourself down to assumptions that are probably wrong (assumptions always seem to fall by the wayside once things get real). Yeah, you may feel better that you have a recipe written down. But if it’s a recipe for failure, what’s the point?

The same goes for testing. J.B. Rainsberger argued that "Integration Tests are a Scam": depending on the kind of integration tests written, a team can get a feeling of false confidence. According to Mark Needham, this holds true for unit tests as well:

It's important to ensure that our unit tests are actually testing something useful otherwise the cost of writing and maintaining them will outweigh the benefits that we derive from doing so.
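As a minimal illustration of that point (the function, names, and values here are invented for the example, not taken from the article), consider two unit tests: one that merely asserts a mock returns what it was stubbed to return, and one that pins down real behaviour. Both make the test suite greener, but only the second would ever catch a regression.

```python
import unittest
from unittest.mock import Mock

# Hypothetical domain function, used purely for illustration.
def apply_discount(price, rate):
    """Return price reduced by the given fractional rate."""
    return round(price * (1 - rate), 2)

class FalseConfidenceTest(unittest.TestCase):
    def test_exercises_nothing_useful(self):
        # This test only checks that the mock returns what we stubbed
        # it to return. None of our own logic runs, yet the test
        # passes and pads the green bar.
        pricer = Mock()
        pricer.apply_discount.return_value = 90.0
        self.assertEqual(pricer.apply_discount(100.0, 0.1), 90.0)

class UsefulTest(unittest.TestCase):
    def test_discount_is_applied_to_price(self):
        # This test pins down actual behaviour: change the discount
        # logic and it fails.
        self.assertEqual(apply_discount(100.0, 0.1), 90.0)

if __name__ == "__main__":
    unittest.main()
```

Both tests pass today; the difference only shows up when the code changes, which is exactly when false confidence becomes expensive.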

Likewise, Doug Rathbone mentioned that many teams are satisfied simply having an automated build in place. The key, however, is not merely to have an automated build but to be able to automatically take that build and deploy it.

If you cannot deploy from that build in an automated fashion you are simply moving the dependency on human error further down the chain of production, while at the same time giving yourself false confidence in your ability to ship a project with ease.
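The idea can be sketched as a tiny pipeline runner in which deployment is just another step in the same automated chain as the build, rather than a manual hand-off afterwards. The commands below (`make build`, `make test`, `./deploy.sh`) are placeholders, not anything prescribed by the article; substitute your own build system.

```python
import subprocess

# Hypothetical pipeline: build and deploy belong to the same
# automated chain, so a green build that cannot be deployed
# automatically never counts as "done".
PIPELINE = [
    ["make", "build"],
    ["make", "test"],
    ["./deploy.sh", "staging"],
]

def run_pipeline(steps, runner=None):
    """Run each step in order, stopping at the first failure.

    Returns the list of steps that completed successfully, so the
    caller can tell exactly how far the chain got.
    """
    if runner is None:
        # Default: execute the command, raising on a non-zero exit.
        runner = lambda cmd: subprocess.run(cmd, check=True)
    executed = []
    for step in steps:
        try:
            runner(step)
        except Exception:
            break  # stop the chain at the first failing step
        executed.append(step)
    return executed
```

The injectable `runner` keeps the sketch testable without actually shelling out; in real use the default `subprocess.run(..., check=True)` path executes the commands.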

Another phenomenon that gives stakeholders false confidence is the notion of code freeze. Jonathan Leffler asked an intriguing question about the value of the false confidence provided by pretending to be in a code freeze:

I suspect that calling these situations a "Code Freeze" is some sort of willful Double Think to provide false confidence to stakeholders. Or we are pretending to be in a "Code Freeze" situation because, according to Scrum, after every sprint we should have a shippable piece of software, and the expectation is that we are following Scrum. So we must call it what Scrum expects instead of what it really is.

Another huge area of false-confidence building relates to specifications. According to Mike,

Specifications are another area prone to mis-calibration errors. When we spend a lot of time gathering specifications, validating them, and elaborating alternative flows and exceptions, we build a sense of confidence in them.

What other crutches do you see in your projects which help in building false confidence but do not add value?


Community comments

  • Design and Code "Walkthroughs"

    by Brent Arias,


    To answer the last question posed by the article: I feel that design and code "walkthroughs" inflate a false sense of confidence in code quality. The problem seems to arise from confusing a walkthrough with a peer review. Indeed, I've watched software shops give themselves a vote of confidence for having completed the former while labeling it the latter. "After all," they say, "peer reviews have well-known ROI."

  • External audits for checklist sake

    by Gerard Janssen,


    Not a problem unique to agile projects: we have done audits on government-funded projects, assessing code quality, architecture, etc. (remember, Vikas? :) And of course we had plenty of findings. However, these were not properly addressed by the project. The audit was just something that had to be performed in order for the project to be allowed to proceed to the next stage. Checklist management, really.

    Our suggestion to hold a workshop to see whether any structural improvements could be identified was ignored.

    So, checklist crutches: we have done what we are supposed to....

  • Re: External audits for checklist sake

    by Vikas Hazrati,


    >we have done audits on government funded projects, assessing code quality, architecture etc. (remember Vikas? :)

    Of course, Gerard, that is an excellent point you bring up. I remember preparing an exhaustive report on how things could be improved, only to be told by management: thanks for the report, the audit phase is done, and we start the next round of development next week ;)
    (I wonder if they threw the report away as soon as I left, since the audit had already been signed off!)

