100% Test Coverage?

How much testing is enough? The answer varies depending on whom you ask. On one end of the spectrum, some say you should strive to achieve 100% test coverage. Others say it doesn't matter, that you should just rely on the quality of the tests, and that measuring test coverage does not tell you anything about the quality of the tests and the code being tested.

Tim Ottinger from ObjectMentor wrote that if you are practicing true TDD, you should naturally have very high test coverage, since you only write production code to satisfy a failing test. The subtle implication is that TDD rarely affects test coverage of existing code.

I’m not saying that code coverage should be low, only that as we move incrementally, each test we write in isolation should have little effect on our code coverage numbers… a thought that intrigues me.

Andy Glover showed, by example, how test coverage metrics can mislead us into a false sense of security. Test coverage metrics can tell you what code is not tested, but cannot accurately tell you what code is tested. Similarly, Tobias Schlitt argued that code coverage metrics are important because they tell us what parts are not covered.

Surely, a high code coverage rate for a test suite never indicates, by itself, that code is well tested (if you have not written the code and tests yourself). But the other way around works: a low code coverage rate definitely means that the test suite is not sufficient. But let me dig a bit deeper into code coverage and what it gives you.
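Glover's point is easy to reproduce. The sketch below is a hypothetical illustration (not the example from his post): a test executes every line of a buggy function, so a line-coverage tool reports 100%, yet nothing is verified.

```python
# Hypothetical illustration: line coverage counts execution, not verification.

def discount(price, percent):
    # Intentional bug: adds the discount instead of subtracting it.
    return price + price * percent / 100

def test_discount_no_assertions():
    # Runs every line of discount(), so a line-coverage tool reports
    # 100% coverage -- yet the bug above goes completely undetected.
    discount(100, 10)

test_discount_no_assertions()
# A real assertion would expose the defect: discount(100, 10) returns
# 110.0, not the expected 90 -- full coverage, zero verification.
```

This is why coverage reports are trustworthy mainly in the negative direction: an uncovered line is certainly untested, but a covered line may still be unchecked.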
Testivus, the great testing master, explained it best, telling us that "it depends". For someone who is new to testing:
Right now he has a lot of code and no tests. He has a long way to go; focusing on code coverage at this time would be depressing and quite useless. He’s better off just getting used to writing and running some tests. He can worry about coverage later.
For the experienced developer:
... the amount of testing necessary depends on a number of factors, and she knows those factors better than I do – it’s her code after all. There is no single, simple, answer, and she’s smart enough to handle the truth and work with that.
Finally, for those who simply want answers:
The third programmer wants only simple answers – even when there are no simple answers … and then does not follow them anyway.
Testing, as described in these blogs, focuses on the quality-validation benefits of tests. From this perspective, we should be aware that test-coverage metrics can tell us what is missing much better than they can tell us what is done well.


Community comments

  • Fallacy

    by Cedric Beust,

    Trying to achieve 100% test coverage is not just silly, it's dangerous. Here's why.

    --
    Cedric

  • Re: Fallacy

    by Tom Adams,

Having 100% coverage isn't dangerous; what's dangerous is falsely assuming that 100% coverage means your application will work. That assumption is exactly what the example shows to be false. An interaction style of testing can often make this problem worse, as you mock out the calls you expect to receive from collaborators.

All it calls for is some common sense: use your atomic/unit tests to provide rapid feedback and drive design, and higher-level (integration/functional/acceptance) tests to verify that your application is behaving as a whole. How much effort you put into each of these levels depends on the application you're developing and the risk involved.

    Tom

  • Re: Fallacy

    by Kishore Senji,

I thought that is what the post concluded at the very end: 100% code coverage does not necessarily mean that the code is well tested, and conversely, anything less definitely means either the code is not well tested or there is code bloat or unused code.

In my opinion, although 100% code coverage does not mean well-tested code, it does give us one important piece of information: there is no redundant or unreachable code. I agree that we still have to make sure branch coverage is very high as well, that we cover all the corner cases, and that we have functional as well as integration tests using something like Cactus. Typically the person who wrote the class is also best placed to write the test cases, as they know the corner cases best.
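The line-versus-branch distinction raised here can be shown with a minimal sketch (illustrative names, not code from the discussion): a single test can execute every line while still skipping an entire branch.

```python
# Hypothetical sketch: 100% line coverage does not imply 100% branch coverage.

def clamp(x, limit):
    if x > limit:
        x = limit
    return x

def test_clamp_upper():
    # This one test executes every line of clamp(), so line coverage
    # reports 100% -- but the implicit "x <= limit" branch, where the
    # if-body is skipped, is never taken. A branch-coverage tool
    # (e.g. coverage.py with --branch) would flag the missing path.
    assert clamp(5, 3) == 3

test_clamp_upper()
```

Adding a second case such as `assert clamp(2, 3) == 2` would exercise the untaken branch, which is exactly the kind of gap branch-coverage metrics exist to reveal.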

    Thanks for your link. Nice post there.

  • Re: Fallacy

    by Amr Elssamadisy,

    Thank you for your thoughtful comments. Actually, my personal experience is that test coverage metrics can be detrimental - even if we know what they mean. By measuring coverage metrics and focusing on how much is covered, a system that rewards quantity over quality begins to form.

Then, of course, there is the fact that good tests change the way a developer approaches and solves a problem: they encourage loosely coupled code and cohesive classes. This is probably more important than the validation benefits of tests, and it shows up with test-first development.

  • I'd prefer we never mentioned coverage at all.

    by Bruce Rennie,

    A couple of questions:

1. If your coverage is low, but there are few reported bugs, your customers are happy, releases are solid, money is flowing, etc., do you start writing new tests?

2. If your coverage is high, but there are many reported defects, your customers are unhappy, the team is demoralized, etc., do you point to the coverage numbers and say "but it's all tested"?

    Coverage can't actually help me make those macro level decisions. All coverage can do is tell me where I should invest my testing dollar IF (and only if) I've already decided I have a problem.

    The only reason we talk about coverage as much as we do is that it's become relatively easy to measure.
