
Faster Test Runs With Clover's Test Optimization


The recent release of Clover 2.4 highlights a new "Test Optimization" feature that promises to speed up CI builds and let developers spend less time waiting for their tests to run. The feature leverages "per-test" coverage data to selectively run only the tests impacted by a given set of code changes.

Atlassian has just released version 2.4 of its popular code coverage analysis tool Clover, adding a new feature dubbed "Test Optimization". From Atlassian:

Clover has the ability to optimize test runs, greatly reducing the time it takes to test a code change. Typically, the full suite of tests is run whenever a code change is made. With Test Optimization enabled, Clover automatically determines the optimal subset of tests to run based on the specific changes made. Testing only what you need provides quicker feedback without compromising test quality.

Reducing the time it takes to understand the impact a code change has on the regression test suite could offer many teams a significant productivity improvement. It is important to note that many people will argue this is exactly why teams must strive to keep their unit tests lightning quick, and that is absolutely true. Even when a team has made each unit test lightning quick, though, the aggregate run of the application's entire test suite may still take longer than is optimal.

A logical approach to improving this is to selectively run only the tests affected by a given code change. Doing this manually not only takes a decent amount of work, but it also leads teams to "miss tests" rather frequently, ultimately losing the benefit of the optimized test run. This new Clover feature offers teams a way to take this approach without the manual effort, and with a lower risk of missing a test that should have been run.

Brendan Humphreys describes how Clover does this:

As a code coverage tool, Clover measures per-test code coverage - that is, it measures which tests hit what code. Armed with this information, Clover can determine exactly which tests are applicable to a given source file. Clover uses this information combined with information about which source files have been modified to build a subset of tests applicable to a set of changed source files. This set is then passed to the test runner, along with any tests that failed in the previous build, and any tests that were added since the last build.
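
To make that selection step concrete, here is a minimal sketch of the idea in Java. The class, method names, and data structures are illustrative assumptions, not Clover's actual API: a map from each test to the source files it exercised is checked against the changed files, and the result is unioned with the previous build's failures and any newly added tests.

    import java.util.*;

    /**
     * Illustrative sketch of coverage-based test selection (not Clover's API).
     * perTestCoverage maps each test to the set of source files it executed,
     * as recorded during a previous instrumented run.
     */
    public class TestSelector {

        public static Set<String> selectTests(Map<String, Set<String>> perTestCoverage,
                                              Set<String> changedFiles,
                                              Set<String> previousFailures,
                                              Set<String> newTests) {
            Set<String> selected = new LinkedHashSet<String>();
            // Any test whose recorded coverage touches a changed file is in scope.
            for (Map.Entry<String, Set<String>> entry : perTestCoverage.entrySet()) {
                if (!Collections.disjoint(entry.getValue(), changedFiles)) {
                    selected.add(entry.getKey());
                }
            }
            // Per the description above, also rerun the last build's failures
            // and any tests added since the last build (no coverage data yet).
            selected.addAll(previousFailures);
            selected.addAll(newTests);
            return selected;
        }
    }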

According to Humphreys, Test Optimization also makes it possible to be more strategic about the order in which the tests are run, which he claims can improve test run effectiveness. About these strategies (an illustrative sketch of the failfast idea follows the list):

The set of tests composed by Clover can also be ordered using a number of strategies:
  • Failfast - Clover runs the tests in order of likelihood of failure, so any failure will happen as fast as possible.
  • Random - Running tests in random order is a good way to flush out inter-test dependencies.
  • Normal - no reordering is performed. Tests are run in the order they were given to the test runner.
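
As an illustration of the failfast idea only (an assumed sketch, not Clover's implementation), a runner could sort the selected tests by their historical failure rate before handing them to JUnit:

    import java.util.*;

    /** Illustrative failfast ordering (not Clover's implementation). */
    public class FailfastOrdering {

        /**
         * Sorts tests so that those with the highest historical failure
         * rate run first. Collections.sort is stable, so ties keep the
         * order in which they were given to the runner.
         */
        public static List<String> order(List<String> tests,
                                         final Map<String, Double> failureRate) {
            List<String> ordered = new ArrayList<String>(tests);
            Collections.sort(ordered, new Comparator<String>() {
                public int compare(String a, String b) {
                    double rateA = failureRate.containsKey(a) ? failureRate.get(a) : 0.0;
                    double rateB = failureRate.containsKey(b) ? failureRate.get(b) : 0.0;
                    return Double.compare(rateB, rateA); // descending: likeliest failures first
                }
            });
            return ordered;
        }
    }

In the same terms, random ordering is simply a Collections.shuffle over the list, and normal ordering leaves the list untouched.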

Humphreys goes on to describe the results of a 10-day trial by their FishEye team, where he states that their "test execution time was reduced by a factor of four".

Take a moment to read up on this new Clover release, particularly Humphreys' take, to see if this can help your team.


Community comments

  • Small limitation

    by Eric Torreborre,

    That's indeed a pretty good idea, but if I look at our own code base, we would still miss some tests because many key classes are instantiated through reflection.

    Eric.

  • Re: Small limitation

    by Nick Pellow,

    Hi Eric,

    I'm not entirely sure I understand why you think this; Clover will still optimize your tests even if you use reflection to instantiate objects. Clover's per-test coverage data is used to calculate which tests to run for any given source code modification.

    Why don't you give it a go and see what savings can be made?

    Cheers,
    Nick Pellow
    (Atlassian Clover)

  • Re: Small limitation

    by Mike Bria,

    Right. My understanding of how the "what code gets run by what tests" analysis works is that it's not done by static analysis of the code, but rather by watching the tests run and recording what ends up being executed.

    If that's the case, shouldn't be a problem.

    Granted, caveat, I'm not a Clover person, so I could very well have the wrong assumption (but I rather doubt it!)

    All that said, on another note, forgetting Clover/coverage etc - how do you write microtests (aka "TDD-style unit tests") for these reflection-driven classes?

  • Other related projects

    by Mark Levison,

    I started a conversation around this on the JUnit mailing list and several related tools were mentioned. Kent Beck mentioned his latest project JunitMax (no references that I can find on the web) and Sebastian Bergmann raised Google Testar (code.google.com/p/google-testar/).

    It would be interesting to hear how this new feature of Clover differs from Testar.

  • Re: Small limitation

    by Brendan Humphreys,

    Right. My understanding of how the "what code gets run by what tests" analysis works is that it's not done by static analysis of the code, but rather by watching the tests run and recording what ends up being executed.

    If that's the case, shouldn't be a problem.


    This is correct. Clover records per-test coverage, and so will faithfully detect dynamic/reflection-based invocations.
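
    A minimal illustration of the scenario (the Greeter class and this test are hypothetical): because the reflectively created object's code actually executes under the test, runtime-recorded per-test coverage attributes it to that test, where static analysis would see no link at all.

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        /** Hypothetical example: the subject is created via reflection, not 'new'. */
        public class ReflectiveInstantiationTest {

            @Test
            public void createsGreeterReflectively() throws Exception {
                // Static analysis sees no direct reference to Greeter here,
                // but the class still runs, so runtime coverage records the hit.
                Class<?> clazz = Class.forName("com.example.Greeter");
                Object greeter = clazz.newInstance();
                Object result = clazz.getMethod("greet", String.class)
                                     .invoke(greeter, "world");
                assertEquals("hello, world", result);
            }
        }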

    Cheers,
    -Brendan
    (Atlassian)

  • Re: Other related projects

    by Brendan Humphreys,

    I started a conversation around this on the JUnit mailing list and several related tools were mentioned. Kent Beck mentioned his latest project JunitMax (no references that I can find on the web) and Sebastian Bergmann raised Google Testar (code.google.com/p/google-testar/).

    It would be interesting to hear how this new feature of Clover differs from Testar.


    There are some other tools that do something similar to Clover's test optimization. I'll summarize them here:

    Testar: an open source project from Google that does selective testing. It uses runtime bytecode instrumentation via a Java agent to record method-level coverage, and uses this to select tests. The big disadvantage of a Java agent approach is that you need control over the Java execution environment (to specify the -javaagent command line option), which is not an option for many developers. Another disadvantage of Testar is that it is a stand-alone test runner; Clover's test optimization integrates directly with the Ant and Maven2 test runners, so it is easy to incorporate into existing builds.

    JUnitMax: an as-yet-unreleased tool by Kent Beck that supports (amongst other things) test reordering, in order to encourage fast failures. I don't think it does selective testing at this stage.

    JTestMe: an alpha project (no download?) on Codehaus that supports selective testing. It uses AspectJ to record method-level coverage, and uses this to generate a list of tests applicable to a given change. Again, I think the disadvantage here is the lack of integration with existing test runners. The dependency on AspectJ might also present problems.

    Infinitest: a standalone Swing app that uses static analysis to determine the optimal subset of tests to run for a given change. It is aimed at the developer desktop as an interactive tool. Its use of static analysis means it will not work with dynamic/reflection-based invocations, and its stand-alone nature means it does not integrate with existing test runners.

    Clover's advantages over these tools:

    1. Easy integration into existing Ant- or Maven2-based builds. This allows test optimization to be used on the command line on a developer's desktop as well as in a Continuous Integration environment.

    2. The coverage-based approach faithfully supports dynamic/reflection-based invocations.

    3. Selective testing and test re-ordering are supported out of the box.

    Hope this helps.

    Cheers,
    -Brendan
    (Atlassian)

  • Re: Small limitation

    by Eric Torreborre,

    Ok, I get it now.

    Thanks for the clarification.

    Back to my previous post: Clover Test Optimisation is a very good idea indeed, with no limitation!

    Eric.
