J.B. Rainsberger: "Integration Tests Are A Scam"

Well-known agilist and TDD expert J.B. Rainsberger has begun a series of posts to explain why his experience has led him to the thought-provoking conclusion that "integration tests are a scam".

J.B. kicks off Part 1 of the series by first explaining exactly what he means when he says "integration test":

I should clarify what I mean by integration tests, because, like any term in software, we probably don’t agree on a meaning for it.

I use the term integration test to mean any test whose result (pass or fail) depends on the correctness of the implementation of more than one piece of non-trivial behavior.

I, too, would prefer a more rigorous definition, but this one works well for most code bases most of the time. I have a simple point: I generally don’t want to rely on tests that might fail for a variety of reasons. Those tests create more problems than they solve.
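
To make the distinction concrete, here is a minimal sketch of a focused object test versus an integration test under that definition. The InvoiceService and TaxCalculator names are hypothetical illustrations, not examples from Rainsberger's posts; the JUnit 4 test assumes Java 8 or later.

    // Illustrative only: these class names are hypothetical, not from the posts.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    interface TaxCalculator {
        double taxFor(double amount);
    }

    class InvoiceService {
        private final TaxCalculator taxCalculator;

        InvoiceService(TaxCalculator taxCalculator) {
            this.taxCalculator = taxCalculator;
        }

        double totalWithTax(double amount) {
            return amount + taxCalculator.taxFor(amount);
        }
    }

    public class InvoiceServiceTest {
        // A focused object test: the only non-trivial behavior being verified is
        // InvoiceService's own arithmetic, because the collaborator is a stub.
        @Test
        public void addsTaxFromCollaboratorToTheAmount() {
            TaxCalculator fivePercent = amount -> amount * 0.05;
            InvoiceService service = new InvoiceService(fivePercent);
            assertEquals(105.0, service.totalWithTax(100.0), 0.001);
        }
    }

A test that instead wired InvoiceService to a real TaxCalculator backed by real tax tables (and perhaps a database) would count as an integration test under the definition above: it could fail because of a defect in either object, or in the wiring between them.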

He then continues by presenting one explanation of how programmers often end up with [too] many integration tests. He paints the scenario of a programmer (or team) finding a particular defect that seems testable only via an integration test, and then concluding they had "better write integration tests everywhere" to guard against other such defects.

Establishing his stance that acting on such a conclusion is a "really bad idea", J.B. goes on to take the reader through a relatively rigorous, mathematically based examination of what it might take to thoroughly test a medium-sized web application using integration tests. Using the resulting numbers ("At least 10,000 [tests]. Maybe a million. One million."), he describes how much of the project's time is lost writing and running these tests, and how most teams will ultimately react to this in a progressively destructive fashion.
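
The combinatorial flavor of that argument is easy to sketch with assumed, purely illustrative numbers: if a request flows through three layers and each layer contains roughly 20 relevant branches, exercising every path through the integrated whole requires on the order of 20 × 20 × 20 = 8,000 tests for that one cluster of behavior alone, and a handful of such clusters quickly pushes the total into the tens of thousands and beyond; testing each layer in isolation, by contrast, needs only on the order of 20 + 20 + 20 = 60 focused tests.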

Soon after, J.B. followed up with Part 2 in the series, Some Hidden Costs of Integration Tests, in which he tells an entertaining "Tale of Two Test Suites" to make his point. In this tale, one test suite (presumably composed primarily of focused object tests) takes 6 seconds to run and the other (presumably composed more of integration tests) takes a full minute.

The programmer(s) working with the 6-second suite waste virtually no time between hitting the "run" button and knowing the outcome of their changes:

What do you do for 6 seconds? You predict the outcome of the test run: they will all pass, or the new test will fail because you’ve just written it, or the new test might pass because you think you wrote too much code to pass a test 10 minutes ago. In that span of time, you have your result: the tests all pass, so now you refactor.

In contrast, the programmer(s) working with the 1-minute test suite embark on a fractal-like journey of ever-compounding distractions after hitting "run". In this case, a serious combination of costs is incurred:

I need to point out the dual cost here. The first, we can easily see and measure: the time we spend waiting for the tests plus the time the computer waits for us, because we find it hard to stare at the test runner for 60 seconds and react to it immediately after it finishes. I don’t care much about that cost. I care about the visible but highly unquantifiable cost of losing focus.
...
When I write a [TDD-driven focused object] test, I clarify my immediate goal, focus on making it pass, then focus on integrating that work more appropriately into the design. I get to do this in short cycles that demand sustained focus and allow brief recovery. This cycle of focus and recovery builds rhythm and this rhythm builds momentum. This helps lead to the commonly-cited and powerful state of flow. A 6-second test run provides a moment to recover from exertion; whereas a 1-minute test run disrupts flow.
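
The visible half of that dual cost is easy to put rough numbers on, assuming (purely for illustration) a programmer who runs the tests 60 times in a working day: a 1-minute suite consumes 60 × 60 s = 3,600 s, a full hour of raw waiting, while a 6-second suite consumes 60 × 6 s = 360 s, about six minutes. And that is before accounting for the harder-to-quantify cost of lost focus that Rainsberger cares most about.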

Tying back to the conclusion of Part 1 of the series, where the hypothetical programmers with the integration-based test suite choose to worry only about "the most important [x]% of the tests", J.B. closes Part 2 by considering the options available to those who now find themselves living in the "1-minute test suite" world.

Both articles contain significantly more detail (in the posts themselves and their related comments) than is summarized briefly here, and they are well worth reading in full. Additionally, these appear to be only the start of what looks to be an interesting ongoing series of posts by Rainsberger. Readers are encouraged to follow along and add their wooden nickel to the discussion.
