In addition to being a software industry best practice, unit testing is promoted by agile methodologies as a pillar for sustainable software production. According to the most recent annual Agile survey, 70% of the participants said they unit test their code.
Unit testing goes hand in hand with other agile practices, so starting to write tests is a stepping-stone for organizations wanting to go agile. The road is long, but it's worth taking. In this article, I'll cover what to expect and which steps to take when starting out, in order to make unit testing part of everyday development life.
There's an implicit assumption about effective unit tests - they are automated. Without automation, productivity tumbles, and unit testing cannot be sustained as a long-term habit. Relying on manual testing (done by either testers or developers) doesn't stick; under pressure, no one remembers to run all the tests or to cover all the scenarios. Automation is our friend, and all unit test frameworks have embraced it, along with integration into other automated systems.
Unit testing is crucial for modern development
With tests around our code, we have a built-in safety net. If we change our code and break something, the tests let us know. The bigger the safety net, the more confident we are in knowing the code works and our own ability to change it when needed.
The major benefit of unit tests over other types of tests is quick feedback. Running suites of hundreds of tests in a matter of seconds helps the development flow. We’re forming a cadence of adding some code, adding a test, seeing the tests pass and moving forward. Moving in small steps, knowing everything is working, also means debugging time drops immensely. It’s no wonder we feel more productive with tests - there’s less time spent on bugs, while a lot more time is spent on pushing features out.
The wall of dependencies
Adding tests to a greenfield project is considerably easier - after all, the code is not there to get in the way. However, this situation is definitely not the norm. Most of us work on legacy code, which is not easily testable. Sometimes we can't even run the code - it may require data or configuration that exists only on production servers. We may need to create different setups for different scenarios, and this can require a lot of effort. In many cases, we have to change the code in order to test it, which creates a chicken-and-egg problem: we're writing tests to gain the confidence to change our code without breaking it - yet how can we change it safely before we have tests?
Code testability is a function of language and tools. With dynamic languages, like Ruby, code is considered testable as is: we can change the behavior of the code's dependencies from inside the tests, without touching the production code. Statically typed languages like C# or Java make this more difficult.
Here's an example: an expiration checker method (C#) that checks for expiration against a constant date:
public class ExpirationChecker
{
    private readonly DateTime expirationDate = new DateTime(2012, 1, 1);

    public bool IsExpired()
    {
        if (DateTime.Now > expirationDate)
        {
            return true;
        }
        return false;
    }
}
In this example, the method IsExpired has a hard dependency on when the test runs, since the property DateTime.Now returns the actual time. The method has two cases, returning a different value based on that date. Changing the computer clock is out of the question - we want to test both scenarios on any computer, whenever we want, and without any side effects.
A possible solution to test both cases is to change the code. For example, we can modify our code to this:
public bool IsExpired(DateTime now)
{
    if (now > expirationDate)
    {
        return true;
    }
    return false;
}
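For illustration, a test for this modified version might look like the following (a minimal sketch; the MSTest attributes match the article's other examples, but the test name is our own choice):

[TestMethod]
public void IsExpired_WithDateBeforeExpiration_ReturnsFalse()
{
    ExpirationChecker checker = new ExpirationChecker();

    // The test decides what "now" is - no dependency on the real clock
    bool result = checker.IsExpired(new DateTime(2000, 1, 1));

    Assert.IsFalse(result);
}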
Here the test can inject a different, controllable DateTime value than the one in the production code. If we can’t change the code, we can use a mocking framework, like Typemock Isolator, that can mock static properties and methods. This allows writing the following test for the original code:
[TestMethod]
public void IsExpired_BeforeExpirationDate_ReturnFalse()
{
    Isolate.WhenCalled(() => DateTime.Now)
        .WillReturn(new DateTime(2000, 1, 1));

    ExpirationChecker checker = new ExpirationChecker();
    var result = checker.IsExpired();

    Assert.IsFalse(result);
}
Existing legacy code is not as simple to change, since we don't have tests for it. When we start testing legacy code, the truth is revealed: the uglier the code, the harder it is to test. Tools can alleviate some of the pain, but we need to work hard for our safety net.
And it’s not just dependencies...
Another issue we quickly encounter is test maintenance: tests are coupled to the code they test. With that coupling, there's a chance that changing production code will break the tests. When tests break due to code changes, we need to go back and fix them. The fear of maintaining two code bases discourages many developers from even starting out with unit testing. The real maintenance cost depends on both tooling and skill.
Writing good tests is a skill acquired through practice. The more tests we write, the better we become at it, while the tests improve and require less maintenance. With tests around, we’ll have the opportunity to refactor our code, which, in turn, will make for shorter, more readable and robust tests.
Tools can greatly affect how easy or hard the experience is. At the basic level we’ll need a test framework and a mocking framework. In the .NET space, there is a wide selection of both.
Guidelines for writing our first tests
When we start out, we usually experiment with different tools to understand how they work. We usually do not do this on our real work code. But, the moment soon arrives when we need to write actual tests for our code. When that time comes, here are a few tips:
- Where to start: As a rule of thumb, we write tests for code we’re working on, whether it is a bug fix or a new feature. For bug fixes, write a test that checks for the fix. For features, check the correct behavior.
- Scaffoldings: It is prudent to first add tests that make sure the current implementation is working according to our knowledge or expectation. We do this prior to adding new code, because we want a safety net around our existing code before we change it. These tests are called "characterization tests", a term from Michael Feathers’ excellent book, Working Effectively with Legacy Code.
- Naming: The most important property of a test is its name. Usually once a test passes, we don’t look at it again. But when it fails, what we’ll see is the name. So pick a good one, describing the scenario and the expected result from the code. A good name will help us identify bugs in the test as well!
- Reviewing: To increase our chances of successful adoption, we should partner with a co-worker when writing our first tests. Both will learn from the experience, and as with any code, we’ll get instant review on the test. It’s better to have agreement on what to test, and how to name it, since this will be the base template for the rest of the team.
- AAA: Modern tests are structured in the AAA pattern: Arrange (the test setup), Act (calling the code under test) and Assert (the test pass criteria). If we use Test Driven Development (TDD), we write the whole test first, and then add the code. For legacy code, we might need another approach. Once we have a scenario and a name to test, write the Act and Assert parts first. We'll then keep building the Arrange part as we learn which dependencies we need to prepare or fake, and continue until we have a passing test. (A short sketch of an AAA-structured test follows this list.)
- Refactoring: Once we have tests in place we can refactor the code. Refactoring, as with testing, is an acquired skill. We’ll refactor not just the tested code, but also the tests themselves. We don’t apply the DRY (Don’t Repeat Yourself) principle to tests though. When tests break, we want to fix the problem as quickly as possible, and it’s better to have all the test code in one place, rather than scattered around in different files.
- Readability: Tests should be readable, preferably by a human. Review the test code with a partner to see if they can make sense of the purpose of the test. Review other tests to see how well their names and content differentiate them from their neighbors. Once tests fail, they will need fixing, and it is better to review them before that happens.
- Organization: Once we have more tests, organization comes in handy. Tests can differ in many ways, but the most apparent is how quickly they run. Some run within milliseconds, others require seconds or minutes. As we work, we want the quickest feedback possible - this is how we keep the cadence I talked about earlier. To do this, we should separate the tests so the quick ones can be run apart from the slow ones, as shown in the second sketch below. This can be done manually (and diligently); in .NET, Typemock Isolator also includes a runner that does the separation automatically.
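To tie a few of these tips together - characterization tests, naming, and the AAA structure - here is a sketch of a characterization test for the ExpirationChecker from earlier, with each AAA section marked. It reuses the Typemock Isolator API shown above; the scenario, date and test name are our own:

[TestMethod]
public void IsExpired_AfterExpirationDate_ReturnsTrue()
{
    // Arrange: fake the current date to fall after the 2012 expiration date
    Isolate.WhenCalled(() => DateTime.Now)
        .WillReturn(new DateTime(2013, 6, 1));
    ExpirationChecker checker = new ExpirationChecker();

    // Act: call the code under test
    bool result = checker.IsExpired();

    // Assert: the pass criterion matches the expectation stated in the test name
    Assert.IsTrue(result);
}

The name describes both the scenario (after the expiration date) and the expected result (returns true), so a failure report is meaningful on its own.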
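For the organization tip, one option in MSTest is to tag the slower tests with the TestCategory attribute, so a test runner or build script can filter them out of the quick feedback run. A sketch follows; the category name and the second test are hypothetical, used only for illustration:

[TestClass]
public class ExpirationCheckerTests
{
    // Fast test: no category, part of the suite we run after every change
    // (same characterization test as in the previous sketch)
    [TestMethod]
    public void IsExpired_AfterExpirationDate_ReturnsTrue()
    {
        Isolate.WhenCalled(() => DateTime.Now)
            .WillReturn(new DateTime(2013, 6, 1));

        Assert.IsTrue(new ExpirationChecker().IsExpired());
    }

    // Slower test: tagged so it can be excluded from the quick run and
    // executed in a separate, slower pass
    [TestMethod]
    [TestCategory("Slow")]
    public void ExpirationDate_CheckedAgainstDeployedConfiguration_Matches()
    {
        // Body omitted - imagine a test that touches the file system or a database
    }
}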
Summary
Taking the first steps in unit testing is challenging. The experience depends on many things - language, tools, existing code, dependencies, and skill. With a little bit of thinking, a lot of discipline, and practice, you'll get to unit testing nirvana. I did.
About the Author
Gil Zilberfeld is the Product Manager at Typemock. With over 15 years of experience in software development, Gil has worked on many aspects of software development, from coding to team management and process implementation. Gil presents, blogs (www.gilzilberfeld.com) and talks about unit testing, and encourages developers, from beginners to the experienced, to implement unit testing as a core practice in their projects. He can be reached at gilz@typemock.com.
Community comments
Why not DRY?
by Rafał Wicha
A really nice article. A proper approach to testing is a concept that could fill a few fat books, and you've covered a strong foundation in a few paragraphs. But... I cannot agree that you shouldn't follow DRY in tests. Of course, there are situations when applying this rule isn't a good idea, but your advice is to not apply it at all. I also can't agree with your explanation. If a test fails, we should fix the code rather than the test (I assume that if we already applied DRY, we did it during refactoring, so the tests were passing). What I want to say is that DRY in tests is sometimes fine, sometimes not. You'll figure out when, as you get more and more experience in testing.
Re: Why not DRY?
by Jonathan Allen
I have to agree with you. As far as I'm concerned test code should be written to nearly the same standards as production code. While I'm not a fan of dogmatically following so-called principles, it’s hard to make a good argument against DRY. Especially for unit tests where you are going to see the same setup over and over again.
Perils of Not Unit Testing
by Ilias Tsagklis
Excellent article as always by Gil. You may also check The Perils of Not Unit Testing on the same subject.
I question Unit test utility
by Serge Bureau
For dynamic languages it is a must, because errors will mainly pop up at run time.
For compiled languages, and in particular strongly typed ones, it seems much less useful.
Don't get me wrong, I am for automatic tests, but mostly functional tests.
When did a unit test last show me an error? Maybe once in 3 years.
Just about all the bugs are functional or multi-threading issues, ...
So the return on investment for true unit tests is not good at all (except for dynamic languages).
In fact, didn't the popularity of unit tests come with the popularity of dynamic languages?
Refactoring code should not imply refactoring tests
by Alex Worden
If your tests break because you refactored code, then your tests are too fragile. The ONLY point of writing automated tests is to allow refactoring or extending your code while maintaining the existing functionality. If I never changed my code, I would only ever have to test it once, so writing automated tests would be a waste of effort.
If tests break (due to false negatives) when I refactor, they are more of a hindrance than a help. This is why mock frameworks are largely a broken paradigm: they assume too much about the implementation being tested. Good tests should only verify the external behavior of a 'unit'.
TDD
by Olivier Gourment
Eh.. no mention of TDD? Hello?
Re: I question Unit test utility
by Alan Felice
> When did a unit test last show me an error? Maybe once in 3 years.
> Just about all the bugs are functional or multi-threading issues, ...
A unit test that never fails is not a good unit test. And if you really saw your last failing unit test 3 years ago... consider the possibility that your unit tests are too weak.
I've just developed a 10k multi-threaded Java application in 5 months that seems (so far) to be stable; it has been up for 20 days.
The only bug found (so far) was caused by an OpenJDK bug on the ARM board, which killed my Java application with no possibility to recover, so the serial modem I use was left in a state my implementation did not expect the next time the application was started (by a watchdog). I have over 250 unit tests that run in less than 10 seconds (3 times slower on the ARM board):
- I can't count the times a unit test revealed that I had written the opposite of what I intended in a conditional;
- I can't count the times I extended something in one place, and the unit tests for some other component (that I had forgotten was indirectly using the same thing) suddenly failed, and so prevented me from putting that broken, unsafe extension into production code;
- I can't count the times I heavily modified a class hierarchy, and the unit tests showed me where the new implementation was lacking.
I'm really sure I wouldn't have been able to develop (and deliver) that application without the unit tests. And certainly not in that time.
Re: Why not DRY?
by Alan Felice
> it’s hard to make a good argument against DRY.
> Especially for unit tests where you are going to see the same setup over and over again.
In my experience, readability should come first. So, isolate what differs from what is always the same. If something is always the same, put it in a dedicated method [JUnit has a setUp() method, automatically invoked before every test case in a test class]. But when two test cases have different 'arrangements', it has to be evident how they differ. You should not have to navigate 5 levels of methods to discover that.
Mocking frameworks help a lot with that: you quite 'declaratively' say what each 'arrangement' is.
Re: Why not DRY?
by Assaf Stone
Favoring readability and locality over DRY in your unit tests is a good practice, generally speaking. The reason for this is that applying the DRY principle requires that you use complex control-logic: methods, conditions, and loops. This will increase your tests' cyclomatic complexity (the number of paths your process may follow). Doing this increases the need to test your tests - which you do not want to do.
So in short, keeping your test code self-evident is important both for learning (understanding how to use your code) and keeping it error free.
Another point to consider - if you feel the need to apply DRY to your test, could it be that your test is too big?
Assaf
Where to start and naming
by Darrell Grainger
This is a great article. Thanks for posting it.
I would add that, for Where to start, in the case of bug fixes I'd write a test which would have caught the bug. I'd run it against the existing code and see it fail. I'd then fix the bug and see my test go green.
For Naming I like to have the test method name tell me what it is testing. The expected results should be in the assert statement. If the method passes I assume I got the expected result. No need to clutter up the results with expected results for every test. You really only need the expected results for when it failed. I'd make sure we are looking for the correct expected results during the Reviewing stage.
Re: I question Unit test utility
by Serge Bureau
Maybe my tests are too simple.
But I submit to you that Unit tests themselves are too low level.
Anything a little bit complex ends up testing the mocks much more than the code.
I think only functional tests have real value (in the form of story testing).
Most of today's issues are related to distribution, transactions, deadlocks, and multi-threading timing problems: none of which are covered by unit tests.
That adding two numbers gives a wrong value will be apparent in any functional test; unit tests are redundant and lose precious development time.