Opinion: Code Coverage Stats Misleading

John Casey, a key contributor to the Apache Software Foundation's Maven Project, recently spent some time refactoring Maven's assembly plugin. He thought he'd use coverage reporting to mark his testing progress, and to make sure he didn't break anything as he went. At the very least, it was a learning experience.

He constructed a completely new suite of tests focused on small, specific units of the new plugin's implementation. His refactoring went well, as did the testing. He had nearly perfect coverage numbers on the parts he considered critical, and felt fairly confident that the new plugin would almost drop in as a replacement for the old incarnation. That's when things started to fall apart.

You can read the details on Casey's blog, along with his reasons for saying: "when you're seeking confidence through testing, perhaps the worst thing you can do is to look at a test coverage report."

His conclusion: coverage reporting is dangerous. It has a tendency to distract from the use cases that should drive the software development process. It also tends to give the impression that when the code is covered, it is tested and ready for real life. However, it won't tell you that you've forgotten to test the contract on null handling. It won't tell you that you were supposed to check that String for a leading slash, and trim it off if necessary. To achieve real confidence in your tests, you would have to achieve multiple coverage for most of your code, in order to test each line under various conditions... And, since there is no hard-and-fast rule about how many test passes it will take to test any given line of code adequately, the concept of a coverage report is itself fatally flawed.
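
To make the failure mode concrete, here is a minimal, hypothetical sketch (not Casey's actual plugin code) of how a perfect line-coverage number can hide exactly the gaps he describes:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PathUtilsTest {

        // Hypothetical contract: null input should come back as null, and a
        // path without a leading slash should be returned unchanged.
        static String trimLeadingSlash(String path) {
            return path.startsWith("/") ? path.substring(1) : path;
        }

        @Test
        public void trimsLeadingSlash() {
            assertEquals("dir/file", trimLeadingSlash("/dir/file"));
        }
        // This one test executes every line of trimLeadingSlash, so line
        // coverage reads 100%. Yet trimLeadingSlash(null) throws a
        // NullPointerException in violation of the contract, and the
        // no-leading-slash case is never checked at all.
    }

Nothing in the coverage number distinguishes a verified contract from a merely executed line.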

Community comments

  • Look at what isn't covered, not what is

    by Luke Redpath,

    I agree that using code coverage as some kind of measure of how well a piece of software is tested is flawed.

    However, whilst there isn't as much value in seeing what *is* covered, I still think it's useful for highlighting what *isn't* covered. Something with 97% code coverage or more isn't necessarily complete, or correct, but it's likely to be more reliable than code with low coverage.

  • Re: Look at what isn't covered, not what is

    by Deborah (Hartmann) Preuss,

    Umm... isn't the author's point that so-called "97%" code coverage may in fact indicate 10% coverage, if each method has one test but actually needs 10 to cover basic alternate cases?

    I'm thinking: wouldn't it be great if we could indicate risk and complexity, and weight those coverage stats?
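
    A hypothetical sketch of that weighting idea (not a feature of any existing coverage tool): weight each method's branch coverage by its cyclomatic complexity, so an untested complex method drags the score down far more than an untested getter.

        import java.util.List;

        public class WeightedCoverage {

            // Per-method stats, as a coverage tool might export them
            // (hypothetical names, not any real tool's API).
            record MethodStats(int coveredBranches, int totalBranches, int complexity) {}

            // Complexity-weighted branch coverage, in [0, 1].
            static double of(List<MethodStats> methods) {
                double covered = 0, total = 0;
                for (MethodStats m : methods) {
                    covered += m.coveredBranches() * m.complexity();
                    total += m.totalBranches() * m.complexity();
                }
                return total == 0 ? 1.0 : covered / total;
            }
        }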

  • Re: Look at what isn't covered, not what is

    by Scott Battaglia,

    While code coverage can't tell you you've covered every possible scenario in terms of input, I've found it to be an indispensable tool in making sure I've taken every *path* in my code.
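
    A small hypothetical sketch of what that looks like in practice; the value is in the line the report flags as unexecuted, not in the percentage:

        static int classify(int x) {
            if (x < 0) return -1;
            if (x == 0) return 0;   // a coverage report flags this line as never run
            return 1;
        }

        // Tests calling classify(-5) and classify(7) exercise two of the three
        // paths; the report's red mark on the x == 0 line is a direct prompt
        // to add the missing boundary test.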

  • Re: Look at what isn't covered, not what is

    by Steve Bate,

    I haven't been able to read the original blog because the site is down, but from this article it appears that some of the confusion is related to the different potential meanings of "coverage". Do you mean 97% code coverage might mean 10% requirements coverage (as in not testing a null handling contract)? If so, that's very true. However, the conclusion that coverage reporting is dangerous is not true. If interpreted properly, it can be a useful tool for identifying holes in testing. I agree that high coverage alone would not give me confidence in the quality of my tests.

  • Re: Look at what isn't covered, not what is

    by Deborah (Hartmann) Preuss,

    Do you mean 97% code coverage might mean 10% requirements coverage (as in not testing a null handling contract)?
    Yes. I'm working with acceptance testing right now, so that would be my concern. When people other than developers look at these reports, they are unfortunately misinterpreted, because the number looks simple and solid but is really in need of interpretation.

    If developers have norms about applying good practices for test coverage, they will test well method by method, and then 97% has more meaning for them. If testing is spotty (no agreement among developers on what constitutes adequate coverage of a method), then that number is called into question. If there are no norms... it's a crapshoot.

    Does this sound about right?

  • Re: Look at what isn't covered, not what is

    by Scott Battaglia,

    When people other than developers look at these reports, they are unfortunately misinterpreted, because the number looks simple and solid but is really in need of interpretation.


    We consider those test results internal to our development team and actually never show them to management or clients. We may show them to some of our more "technical" managers, but that's about it.

  • Re: Look at what isn't covered, not what is

    by Deborah (Hartmann) Preuss,

    A great idea:

    We consider those test results internal ... never show them to management or clients.
    In the case I experienced, an external QA group was monitoring "coverage" using such figures. Definitely a bad idea.

    The context of a metric is very important - in the local context it carries with it implicit information that is lost when it's communicated outside.

  • Very strange article

    by Kyrill Alyoshin,

    This is getting a bit silly. The only reason I use code coverage is to look at what is not covered. And the value of code coverage tools there is tremendous. I'd say unit testing and code coverage go hand-in-hand.

  • Re: Very strange article

    by Paul Oldfield,

    Agreed. Code not covered is the interesting information. The question arises: why isn't it covered? Either there's a missing test, or there's extra code.

  • Re: Look at what isn't covered, not what is

    by Paul Oldfield,

    Yes. I'm working with acceptance testing right now, so that would be my concern. When people other than developers look at these reports, they are unfortunately misinterpreted, because the number looks simple and solid but is really in need of interpretation.


    Test coverage will mean different things depending on which set of tests is run. For acceptance tests, any uncovered code is potentially superfluous. That used to be important when paying by LoC or some derived measure. Of course we wouldn't do that now, would we? Or it might mean we're missing an acceptance test.

    If developers have norms about applying good practices for test coverage, they will test well method by method, and then 97% has more meaning for them. If testing is spotty (no agreement among developers on what constitutes adequate coverage of a method) then this is called into question. If there are no norms... it's a crapshoot.


    For unit testing, coverage can't tell us much that's useful, though it's likely to finger the guy who doesn't write unit tests, or who needs help with them. It might also help to identify 'cruft' left unconnected by poor refactorings.

    In either case, it's the code not covered that is the interesting information. Looking at what code is covered really doesn't tell us much about how good our testing is, though this is a fairly common beginner's misconception.

  • So: poorly named artifact?

    by Deborah (Hartmann) Preuss,

    Paul, it sounds like the moniker "Code Coverage Report" can be one source of confusion: these stats are really intended to indicate code "uncoverage" :-D

    I'm a stickler for well-named metrics for EXACTLY that reason. Once the thing is out there, people take it at exactly face value - better make sure things are well named!

  • Re: So: poorly named artifact?

    by Paul Oldfield,

    Well, it's already named to show code 'covered' not code 'tested'. Off the top of my head, how about "Unexercised Code Report"?

  • Limitations of branch and statement coverage

    by Les Walker,

    The limitations of the branch and statement coverage metrics provided by tools like Cobertura and Clover are pretty well known. My -- highly unmathematical -- rule of thumb is that after you get 60% branch or statement coverage, you really need to stop using those metrics and switch to path-based metrics.

    There's good information in Casey's last comment (commenting on your own blog entry?) about testing at different levels. Another rule of thumb that I use is that developers should write white-box tests motivated towards code and domain coverage. QA engineers should write black-box tests motivated towards use-case and feature coverage. Using code coverage metrics to gauge the thoroughness of black-box testing, and vice versa, is a highly dubious practice IMO.
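
    To illustrate the branch-versus-path gap Les describes above, a hypothetical sketch: two independent branches create four execution paths, but full branch coverage needs only two tests.

        static int combine(boolean a, boolean b) {
            int x = 0;
            if (a) x += 1;   // branch 1
            if (b) x += 2;   // branch 2
            return x;
        }

        // combine(true, true) and combine(false, false) cover both outcomes of
        // each branch (100% branch coverage in Cobertura or Clover terms), yet
        // the paths (true, false) and (false, true) never execute. A path-based
        // metric would report only 2 of 4 paths exercised.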
