The opinions expressed in this article are those of Liam O'Connor and are not necessarily those of his employer (NICTA).
If you're anything like me, you've probably been exposed to an enormous number of articles advocating Test-Driven Development (TDD) or other development practices involving extensive testing, at both the unit and integration levels. I believe that many advocates of these practices lack the experience in real-world projects to make their arguments credible. In fact, these extremely rigorous testing practices often don't work at all when scaled to larger projects.
In this article, I'll explain some of the common misconceptions about testing. If you write your tests with these in mind, I hope that it will help you and your team to decide when it is appropriate to test, and when it isn't.
Misconception 1: Tests show my code is correct!
While this misconception appears true intuitively, you cannot actually rely on tests to establish any form of rigorous correctness standards. When you write a test, you have tested one possible scenario in your program. Many units in your program may have an infinite (or intractably large) number of possible scenarios to test. It is not feasible to test them all - so the typical response is to test some failure cases, edge-cases and maybe a couple of "regular" cases just to make sure everything is all right.
This is barely a drop in the ocean if your goal is correctness. It's fairly easy to develop a suite of tests that always passes, despite the presence of bugs. There are some bugs that are essentially impossible to detect via tests - race conditions and other errors involving concurrency are classic examples where, even if you had control over the scheduler, the number of possible interleavings grows so rapidly that it quickly becomes impossible to test reliably.
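To see how quickly interleavings explode, here is a back-of-the-envelope sketch (my Python illustration, not from the article) counting the distinct interleavings of several threads each executing a fixed number of atomic steps:

```python
from math import factorial

def interleavings(steps_per_thread: int, threads: int = 2) -> int:
    """Count the distinct instruction interleavings of `threads`
    concurrent threads, each running `steps_per_thread` atomic steps:
    the multinomial coefficient (n*t)! / (n!)^t."""
    n, t = steps_per_thread, threads
    return factorial(n * t) // (factorial(n) ** t)

# Two threads of 10 steps each already admit 184,756 interleavings;
# at 20 steps each the count exceeds 10^11.
print(interleavings(10))   # 184756
```

Even granting full control over the scheduler, no test suite can enumerate a space that grows this fast.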
So, tests do not show correctness for all but the most trivial of cases, where the test can fully specify the behavior of the unit. In these trivial cases, often it is not worth writing the tests in the first place; these cases are trivial precisely because the code they are testing is trivial! The only thing that is achieved by writing tests for trivial pieces of code is increased maintenance overhead and added workload for testing machines.
Seeing as tests are just code, you can also have bugs in your tests. If the person writing the test is the same person who wrote the code, they may implement a unit incorrectly and then write a test that locks in that incorrect behavior. The root of the problem is the developer misunderstanding the specification, not a minor mistake in implementation.
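As a hedged illustration of this failure mode (my example, not the article's): suppose the spec says "round halves away from zero", the developer reaches for Python's built-in `round` (which rounds halves to even), and then writes the test to match what the code does rather than what the spec says:

```python
def round_price(x: float) -> int:
    # Spec: "round halves away from zero" -- but the developer reached
    # for Python's built-in round(), which rounds halves to even.
    return round(x)

# The same developer writes the test against the code's actual
# behaviour, so the suite passes with the misunderstanding locked in:
assert round_price(2.5) == 2   # passes, but the spec demands 3
assert round_price(3.5) == 4   # passes; here code and spec agree
```

A green suite here certifies the misunderstanding, not the specification.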
If you really need correctness, then formally verify your code (the tools for verification are much better these days than in the past). If you don't need correctness, write tests. Always keep in mind that the tests you are writing merely serve as a smoke alarm for a fire, but can't detect a whole variety of other problems.
Misconception 2: Tests are executable specifications!
This is false for a few reasons. Let's look at my dictionary's definition of specification:
A set of requirements defining an exact description of an object or a process.
Therefore, if my code conforms to specifications, it should be completely correct, as the specification exactly defines the behavior of the code. If my tests are specifications, they must therefore establish correctness. As we've already discussed, they do no such thing, hence they're not specifications.
Somewhat less pedantically, if we assume that a developer could infer from reading test cases the desired behavior of a function, then we introduce a whole lot of imprecision; if test cases are not extensive enough, we could end up inferring the wrong thing, sometimes only subtly different from the desired behavior.
In addition, test-cases are not checked for consistency. This means that your tests could actually "specify" an undesirable behavior as a result of developer error or misunderstanding. This could lead to contradictions in your tests, and therefore your specification.
Randomized testing software such as QuickCheck allows you to write tests simply as Boolean properties that should hold; the test cases are generated for you by the software. Such tools bring tests much closer to executable specifications; however, the properties are still not checked for consistency.
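As an illustration of the idea (a hand-rolled sketch in Python, not QuickCheck itself; all names here are invented), a property checker can be little more than a loop over randomly generated inputs:

```python
import random

def check_property(prop, gen, trials=1000, seed=0):
    """Minimal QuickCheck-style driver: generate random inputs with
    `gen` and return the first input for which `prop` fails."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = gen(rng)
        if not prop(case):
            return case        # counterexample found
    return None                # property held on every trial

# Property: reversing a list twice gives back the original list.
involutive = lambda xs: list(reversed(list(reversed(xs)))) == xs
gen_list = lambda rng: [rng.randint(-100, 100)
                        for _ in range(rng.randint(0, 20))]
assert check_property(involutive, gen_list) is None

# A false property the tool can refute: "reverse is a no-op".
broken = lambda xs: list(reversed(xs)) == xs
assert check_property(broken, gen_list) is not None
```

Note that nothing in this loop checks that two such properties are mutually consistent; that limitation is exactly the one the paragraph above describes.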
Misconception 3: Tests lead to good design!
While making a bad design testable does have the potential to improve it, testing is not a replacement for good design practices. Writing huge suites of tests against the interfaces of a system increases the "work investment" that developers have put into those interfaces. The problem arises when these interfaces turn out to be suboptimal: developers have already written huge numbers of tests against them, and changing the interfaces would mean changing all the tests as well. Tests are tightly coupled to those interfaces, so most of those tests will have to be thrown away and rewritten. Seeing as most developers grow attached to their work, this can lead to suboptimal design decisions hanging around, even when they aren't the best fit, well into the lifetime of the project.
The solution here is to start testing only after you've written a series of prototypes. Don't bother testing something you are likely to refactor heavily very soon. All that does is increase the workload on developers and testing machines, and cause developers more pain when they have to throw away hours of work because requirements or interfaces change. If you start testing too early, your tests can actually lead to bad design, as developers will be reluctant to do any major refactoring.
In addition, making code testable is hard. People often resort to questionable design decisions just to make testing easier: exposing abstraction leaks, tying mocks too closely to the implementations of interfaces, or writing test cases that contain so much code that they almost require tests themselves (mocks and stubs often suffer from this problem).
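To make the mock problem concrete, here is a small sketch (using Python's `unittest.mock`; `total_due` and the repository API are invented for illustration) of a test assertion pinned to an implementation detail rather than to observable behaviour:

```python
from unittest.mock import Mock

def total_due(repo) -> int:
    # Implementation detail: fetch all invoices in a single call.
    return sum(inv["amount"] for inv in repo.fetch_all())

repo = Mock()
repo.fetch_all.return_value = [{"amount": 40}, {"amount": 2}]
assert total_due(repo) == 42

# Brittle assertion: it pins *how* the repository is used, not *what*
# the function computes. Refactor total_due to call a paging API in a
# loop and this line fails even though the behaviour is unchanged.
repo.fetch_all.assert_called_once_with()
```

The observable-behaviour assertion (`== 42`) survives refactoring; the call-shape assertion does not.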
Misconception 4: Tests make it easier to change code!
Tests do not always make changing code easier. If you are changing the underlying implementation of an interface, tests can help catch regressions or undesirable behavior in your new implementation. If you are changing the higher-level structure of the program, however, the opposite is generally the case. Tests are often tightly coupled to higher-level interfaces, and changing those interfaces means rewriting the tests. In that case, you've made your life harder: you have to rewrite the tests, which adds work, and the old tests do nothing to ensure you haven't introduced a regression, so they haven't helped at all.
So, don't write tests?
I am not saying that you should not write tests. Tests are a valuable way to improve confidence and prevent regressions in software. They do not, however, uniformly lead to good design, correctness, technical specifications, or effortless refactoring for the reasons outlined above. Using tests in excess makes development *harder*, not easier.
Similarly, not verifying code at all makes quality assurance impossible, but rapid prototyping easy. Testing introduces a trade-off between quality assurance and flexibility, so a suitable compromise must be struck.
About the Author
Liam O'Connor formerly worked for Google, and teaches at the University of New South Wales. He recently began working for NICTA, Australia's leading ICT research institution, on the l4.verified project: the formal verification of an operating system kernel.
Community comments
It is not all about tests
by Mileta Cekovic,
Agree with most of the points, though the article title should be more precise: Unit Testing Misconceptions.
Today, when everybody is talking more about 'all beloved tests' than about code (even people who do not write them), any article that tries to make the perception of unit tests more realistic and point out that there are disadvantages to consider too is welcome.
Not TDD!
by Daniel Sobral,
This article is full of misconceptions when it comes to TDD. For instance:
If the person wrote the unit first, it is not TDD. In TDD, you first write a test for a single requirement, then you write just as much code as needed to make that test pass, and then repeat. If you are not doing this, you may be testing, but you are certainly not doing TDD - and quite a few statements of the article simply do not apply to TDD.
I'm not here to defend TDD, but criticizing it based on practices that do not apply is not helpful.
I think he missed the point
by Henrique Rangel,
I disagree in some parts, to mention a few:
Tests give you more confidence to refactor; without that confidence, refactoring that should happen often doesn't.
You have to destroy, delete, or change code anyway when requirements change, or your codebase will rot. You don't have to feel pain when doing that.
The tests by themselves do not lead to better design, code that is easy to change, or anything else. It's the whole that counts: the good practices, the developer, all that and more.
Go read Growing Object-Oriented Software Guided by Tests
Rant from a tester unwilling to change
by Jurgen Huls,
Not much more to say than that; the author's disgust with TDD comes across in the first paragraph. Then follows a series of weak arguments, my favourite being:
The point behind these testing practices isn't to aim for 100% correctness; I think the author has completely missed this. Executable specifications are about linking the tests more closely with what the customer is asking for. They are a collaboration exercise to improve communication between customer and coder. Test-Driven Development has a number of benefits: getting coders thinking more about testing, reducing bugs, driving better code design, and enabling refactoring with confidence. I would like the author to tell me where TDD claims to aim for correctness. Do your homework before you dismiss new ideas.
Serious misconceptions on testing and design
by Jerome St-Pierre,
This article is based on serious misconceptions. Unit testing is not magical and like everything else it must be done with care. I don't know to what kind of unit testing this author has been exposed to get this perception, but it was obviously not professional. Liam O'Connor doesn't seem to be an artist.
« Misconception 2: Tests are executable specifications! » Of course we start by testing trivial cases, but this discipline also involves writing a failing test before fixing a bug. When doing so, with expressive code and test names, tests become pure technical specifications. This is even Sarbanes-Oxley compliant!
« Misconception 3: Tests lead to good design! » When done correctly it sure does, because it forces decoupling. It helps you think about the design from the client's perspective, and thus also avoid writing useless code and exposing useless members. If a system requires changing so many interfaces at once, it means there was a lack of separation of concerns. If programmers are creating leaks in abstractions to make code testable (I have often seen that), it is because they lack knowledge or experience.
« If you don't wait for testing, your tests can actually lead to bad design, as developers will be reluctant to do any major refactoring. » This couldn't be further from the truth. In my 5 years of TDD experience, the issue has always been refactoring modules without test coverage, not the other way around!
By the way, who is still using hand-written mocks? Since we got frameworks like Moq in .NET, writing tests is much faster and there are no more mocks or stubs to maintain...
I think there is not much design happening without tests once a system has grown up, because programmers are too afraid to break something, so they keep patching and hacking until it grows out of control after years (inevitable on legacy code), and the natural emergent design we can achieve with TDD becomes impossible. TDD helps me strive for simplicity, but the tests themselves need to stay clean and simple as well, because yes, they also must be maintained; never thrown away, though, as the author suggests!
I have seen the difference between projects built with and without TDD, and I am no longer interested in writing software without it. TDD helps me stay sane and professional. In the past, I have seen way too much rotten code that turned my daily work into a nightmare...
/shudder
by Joe Eames,
By that same logic Agile is silly because it just means more meetings every 2 weeks, and I shouldn't talk to my wife because communication sometimes results in arguments. Anything that isn't 100% effective shouldn't be done, and the minor drawbacks with something are far more important than the benefits.
It only takes about a week of doing TDD to realize that it's not perfect, but it's SOOO much better than the alternative.
"Tests do not always make changing code easier,"
True. They only make changing code easier 99% of the time. The other 1% of the time they don't make it harder, they just don't make it easier.
"In addition, making code testable is hard"
True. So is any technique for writing good, maintainable code. I really don't think I'm going to start purposefully writing poor code because it's easier.
"In that case, you've made your life harder - you will have to rewrite the tests, adding more work,"
Any kind of testing, automated or manual, is also more work. Should we not do any testing to see if our code works because testing is "more work"? /boggle.
-1 for academia.
These are the conclusions you could draw from observing misapplied testing
by Przemyslaw Pokrywka,
I would agree with one point: without a set of prototypes, one often does not know *what* should be built at all. Religiously applying TDD to each of the prototypes leads to a waste of time, because almost all of them will be deleted anyway. See the experience of lean startups in this regard. Once the *what* (the requirements) for the software is set, though, TDD is the way to go, with no excuse. One risk, of course, is that quick-and-dirty prototypes often turn into the core architecture of the software, and this is something one should be aware of.
The other points are not valid, though I understand that observing misapplied testing could lead one to such conclusions.
1. "Tests show my code is correct"
Obviously they don't always show it - but they are meant to be able to show it. In a minority of cases, perhaps, but better in those than in none. By assumption, the code of the tests should also be much simpler than the production code, so just looking at it should reveal bugs or divergence from requirements. So while tests cannot show the correctness of the whole software, they can show correctness in some cases (hopefully the most important ones) at least.
Note that when you fail to keep your tests simpler than your code, the tests will probably not give you much gain, and they will certainly cost you in terms of maintenance.
2. "Tests are executable specification"
This is a misunderstanding, because being an executable specification is not an inherent property of unit tests, but an ideal one should strive for. It is within the reach of developers, though, and you should try to make your tests like that - then they'll provide much more value than a specification written down in a document somewhere. You'll be able not only to read it, but to verify it instantly.
3. "Tests lead to good design"
Of course they do. TDD does the most, but testing after writing the code also helps the design. First, you can explore the design of your API by driving it from the tests, and you often end up with an API that is very friendly to clients. Second, testing amplifies the pain resulting from bad design, because bad design makes it harder to write tests. Of course, testing is not sufficient to do design well - you need to learn design first - but when you already know how to design software well, tests constantly remind you to apply the correct approach. Otherwise, maintaining them will be a pain.
4. "Tests make it easier to change code"
To be more precise: "to change code without fear of breaking something". That rule applies to good tests - tests without tight coupling (which is as bad in tests as in production code; tests are not an exception), and tests that check important requirements rather than mere design decisions. With tests like those (which should be your goal), changing code is actually easier, because you are confident you will not break something by accident.
Yet the very sad fact is that there are a lot of inexperienced people who write tightly coupled tests and tests that check very low-level implementation decisions. In such settings it is certain that the tests will bring you more maintenance burden than gain.
Concluding, the arguments of testing proponents are still perfectly valid, and they do come from deep experience. The new thing you see these days in the field is that there are a lot of people with testing slogans in their mouths but without an understanding of what it all really means.
The effect is that you come to negate the pro-testing arguments. In fact, it is only these sloppy and clueless practices that make testing look bad. The right solution is to teach people how to do it right: how to avoid tight coupling, how to write tests in a readable, simple, and concise way, and how to test the important things while avoiding testing implementation details.
This article does not cover the psychological side of TDD!!!
by Anton Antonov,
TDD is a wonderful method for disciplining our thinking when we implement a piece of code, and for breaking our code into smaller pieces. The smaller pieces give us a better chance of reusing them in new ways more easily. I don't think that software development is only code coverage and formal methods to assert code quality. From my point of view, TDD is the human way to break down the huge complexity of software development. Every piece of code is implemented by people, not machines!
www.ajantonov.com
Good points to bear in mind about unit testing
by Stephen Levitt,
Unit testing, and TDD in particular, is not a panacea. Having used TDD, I wouldn't choose to develop code any other way, but it is important to bear in mind the false assumptions that sometimes end up being part of the "TDD mindset".
Valid observations
by Patrik Helsing,
Having more than 10 years of experience with large-scale projects where a test-driven approach has been used, I can only confirm your observations of common misconceptions. As a good engineer you should always be aware of the pitfalls of the techniques you are using. A real engineer never adopts any one technique "religiously", but uses a wide spectrum from his/her toolkit. Thanks for sharing your insights.
Don't waste your time
by André Thiago Souza da Silva,
Please, don't waste your time reading this article. It is full of misconceptions, and the author doesn't seem to know much about unit tests and TDD.
Re: Rant from a tester unwilling to change
by Anthony Topper,
The author never says the point of testing is to "aim for 100% correctness". In fact he says, "I'll explain some of the common misconceptions about testing". So when you say, "The point behind these testing practices isn't to aim for 100% correctness", you are agreeing with Mr. Liam O'Connor. But for some reason you've decided to be combative. Tsk, tsk.
Re: Don't waste your time
by Anthony Topper,
Of course the article is full of misconceptions. It's titled, "Testing Misconceptions".
Loss of time
by Tiago Costa Silva,
I love this theme, but the article doesn't bring anything useful for those who want to improve their testing skills. For me this article would be fine on a personal blog, but it lacks the credibility expected on InfoQ.
Large scale?
by Nick Watts,
I'm new to TDD (a few months) and so far struggling to catch the fever. I noticed the remark that TDD doesn't scale well to large projects, but there isn't really any elaboration. I was wondering what experience led him to this conclusion. Anybody have any guesses? If Liam (the author) is reading these comments, care to weigh in?
Re: Large scale?
by Patrik Helsing,
Hi Nick,
I don't know what Liam says about this, but here are my personal reflections.
The tests you write will be an investment in the interfaces of your code, both the interface of the object/unit/system under test, and the interface dependencies of that object/unit/system (since you are probably mocking/stubbing those interfaces to verify the interactions). This is all well and good. The problem begins when you need to change those interfaces. You want your design to be as maintainable as possible, meaning it should be easy to change things. If you have large test investments in (poor) interfaces, then those tests make them harder to change. So there is a trade-off to be made.
Or you can of course write excellent interfaces from the beginning...! (Why haven't I thought of that before. ;-)
Re: Large scale?
by James Kingsbery,
What I have found is actually the opposite... TDD really helps when it comes to larger projects; or rather, the lack of TDD becomes very noticeable in larger projects that don't have tests. Large systems without tests are very difficult to understand, and it is difficult to make changes and to know what you broke. Even if a change you make requires changing dozens of tests, that is a far more effective use of time than trying to work out what you might have broken.