Tackling real-world unit testing problems
One thing I’ve learned in the last few years is that unit testing is considered a "solved problem." All the information, books, and tools are out there: just pick up NUnit, and you’re good to go, right?
Even before deciding to start unit testing, we need to sift through the real experience of others, good and bad, horror stories and miracles ("This one test saved me a week of work!"). Then we take the plunge and realize: there’s so much to learn!
I’d like to take you on a journey through unit-testing land. Our team at Typemock has been exploring this area for years, and our exploration has definitely shaped our product development. Our main product, Isolator, began as a mocking framework. Yet as we learned more about the real-world problems people experience with unit tests, we developed features that alleviate many of them. And I can tell you, we still have much ground to cover.
But let’s start at the beginning. At Typemock, we have a simple vision: Easy unit testing for everyone.
Simple, yes. Easy to achieve? Well...
Unit testing is not easy. The benefits of unit tests are enormous, and people recognize them. But you’ll need to work hard to get these benefits.
Most of us already have a code base we’re working with. Some of us are lucky enough to work on greenfield projects, but most of us have the natural element called "legacy code" in abundance. When we decide to write tests, it’s for that code. It turns out that’s not so easy.
When Typemock started, it was not possible to write tests for legacy code without changing the code to fit the tests. That became Isolator’s main goal: provide the ability to write unit tests without changing the code. With Isolator’s ability to mock every .NET type, it was now possible to write unit tests for legacy code.
One thing we’ve learned over time is to pay attention to how our APIs are used. Take, for example, the first set of APIs, which were based on strings. To fake DateTime.Now, this is what you had to do:
Mock mockDateTime = MockManager.MockAll<DateTime>();
mockDateTime.ExpectGetAlways("Now", new DateTime(2000, 1, 1));
Not great, but it works. However, string-based expectations like these are easy to break when refactoring. So we moved to the record-replay model, which was refactor-friendly, although a bit awkward:
using (RecordExpectations recorder = RecorderManager.StartRecording())
{
    DateTime.Now = new DateTime(2000, 1, 1);
}
This works, and it was received as a revolutionary way to write tests. But the record-replay model was falling out of favor, and this version had some technical issues we wanted to get rid of. So when lambda expressions appeared, the APIs took another turn, improving both readability and refactoring support:
Isolate.WhenCalled(() => DateTime.Now).WillReturn(new DateTime(2000, 1, 1));
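For context, here is how that single line might sit inside a complete test. This is a sketch, not code from the article: the test class, method names, and MSTest attributes are assumptions, and the [Isolated] attribute is Isolator’s conventional way to reset fakes between tests.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TypeMock.ArrangeActAssert;

[TestClass, Isolated] // [Isolated] cleans up fakes after each test
public class DateTests
{
    [TestMethod]
    public void AnyCodeReadingNow_SeesTheFakeDate()
    {
        // Arrange: freeze DateTime.Now at January 1, 2000
        Isolate.WhenCalled(() => DateTime.Now).WillReturn(new DateTime(2000, 1, 1));

        // Any code that reads DateTime.Now during this test gets the fake value
        Assert.AreEqual(2000, DateTime.Now.Year);
    }
}
```

Because the expectation is an ordinary lambda, renaming DateTime-reading helpers or moving this call with a refactoring tool updates the test along with the production code.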
With this current API set, we decided on another simplification. We dropped the "mock" vocabulary and replaced it with "fake" for instances. "Mock" and "stub" were already overloaded terms, misused and confused all the time. Instead of trying to teach beginners all the nuances, we decided to side-step the issue altogether.
Isolator as a Visual Studio add-on was not the only player - we had to get along with other tools and providers. Code coverage tools, performance profilers, build engines, you name it. Isolator needed to play nice with others, so people could easily run their tests, using different tools in different configurations.
Speaking of running tests, what about running outside of Visual Studio? When you start building automation in your team, you learn a lot about the different tools in the MS-verse, including the almighty TFS. Isolator’s profiler technology required lots of integration work to make tests run as part of continuous integration processes. Because different teams use different tool sets and CI servers, we needed to provide something that would easily fit the way they work.
Ask anyone who’s even thinking about starting unit testing, and she’ll tell you: I know my code will change, and I don’t want to fix my tests all the time. Can you do something about that?
Mocking frameworks (much like Spider-Man) have a superpower, and it comes with great responsibility. The ability to change behavior comes from knowing what goes on inside the object. This x-ray vision is also their Achilles’ heel: changes to internal code affect the tests as well.
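To make that fragility concrete, here is a small sketch. The OrderProcessor type and its ValidateStock method are hypothetical, invented for illustration; the test uses Isolator’s WhenCalled API on a live instance, as Isolator allows:

```csharp
// Hypothetical production type, for illustration only
public class OrderProcessor
{
    public bool Process(int itemId)
    {
        // Internal step that the test below pins down
        if (!ValidateStock(itemId))
            return false;
        return true;
    }

    internal bool ValidateStock(int itemId)
    {
        /* hits a database in real life */
        return false;
    }
}

// This test knows about ValidateStock, an implementation detail.
// Rename or inline ValidateStock during a refactoring and the test
// breaks, even though Process(42) still behaves exactly the same.
[TestMethod]
public void Process_StockAvailable_ReturnsTrue()
{
    var processor = new OrderProcessor();
    Isolate.WhenCalled(() => processor.ValidateStock(0)).WillReturn(true);

    Assert.IsTrue(processor.Process(42));
}
```

The deeper the test reaches into the object’s internals, the more every internal change ripples back into the test suite.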
Unit testing is also about maintenance. When designing our APIs, we always looked at that angle. Here’s an example. Consider this constructor for an object (from an open-source project called ERPStore):
public AnonymousCheckoutController(
    ISalesService salesService
    , ICartService cartService
    , IAccountService accountService
    , IEmailerService emailerService
    , IDocumentService documentService
    , ICacheService cacheService
    , IAddressService addressService
    , CryptoService cryptoService
    , IIncentiveService incentiveService)
It takes many interfaces as input. In my tests, I can fake those dependencies like this:
var fakeSalesService = Isolate.Fake.Instance<ISalesService>();
var fakeCartService = Isolate.Fake.Instance<ICartService>();
var fakeAccountService = Isolate.Fake.Instance<IAccountService>();
var fakeEmailerService = Isolate.Fake.Instance<IEmailerService>();
var fakeDocumentService = Isolate.Fake.Instance<IDocumentService>();
var fakeCacheService = Isolate.Fake.Instance<ICacheService>();
var fakeAddressService = Isolate.Fake.Instance<IAddressService>();
var fakeCryptoService = Isolate.Fake.Instance<CryptoService>();
var fakeIncentiveService = Isolate.Fake.Instance<IIncentiveService>();

var controller = new AnonymousCheckoutController(
    fakeSalesService,
    fakeCartService,
    fakeAccountService,
    fakeEmailerService,
    fakeDocumentService,
    fakeCacheService,
    fakeAddressService,
    fakeCryptoService,
    fakeIncentiveService);
What happens if we need the constructor to take another type? Or remove an argument? I would need to change my tests.
So we came up with this API that decouples the signature of the constructor from how it’s used in the test:
var controller = Isolate.Fake.Dependencies<AnonymousCheckoutController>();
That’s all. The Fake.Dependencies API creates a real object of type AnonymousCheckoutController and passes in fake implementations of the dependencies, without ever mentioning their types. If the constructor changes, the test still works. We’ve decreased the coupling between test and code, and made the test much more readable.
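When a test still needs to talk to one of the injected fakes, to arrange behavior or verify a call, Isolator can hand it back by type. A sketch, assuming Isolator’s Isolate.GetFake<T>() and Isolate.Verify APIs; the CompleteCheckout and ClearCart method names are hypothetical, used here only for illustration:

```csharp
// Create the controller with all constructor dependencies faked automatically
var controller = Isolate.Fake.Dependencies<AnonymousCheckoutController>();

// Retrieve the fake ICartService that Fake.Dependencies injected
var fakeCart = Isolate.GetFake<ICartService>(controller);

// Exercise the controller, then verify the interaction with the fake
controller.CompleteCheckout();
Isolate.Verify.WasCalledWithAnyArguments(() => fakeCart.ClearCart());
```

Note that the test names only the one dependency it actually cares about; the other eight stay invisible, so adding or removing a constructor parameter leaves this test untouched.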
People experienced in unit testing know it’s an acquired skill. We can learn how to write better tests, but we usually learn the hard way. We thought about making the experience easier. How can we help people not make the mistakes we did?
It was time for another component to join Isolator: one that examines tests and flags common mistakes (for example, a test with no asserts) right inside Visual Studio. It suggests a better way, and gives you a chance to fix the problem.
Improving the feedback cycle
For a long time, Isolator didn’t have a test runner. It was our way of saying the user chooses the best tool and we’ll accommodate that choice. As we started tackling new problems, we started to think about the continuous process of development.
Experienced people, having written large test suites, started asking for better speed. Over time we made Isolator run faster, yet we felt this was not the complete answer. Large test suites take a while to run, but you don’t need to run all of them all the time. In fact, in a development session, only the tests that relate to the code you’ve changed need to run. The rest can run at another time, such as before check-in, or on the server.
But that wasn’t the whole issue. Experienced testers look at tests they wrote three years ago and can’t believe they wrote such bad tests. Bad tests don’t just break easily; sometimes they are not unit tests at all, but integration tests in disguise, and as such they do not run quickly. A large test suite does not run slowly just because of its size; it also includes tests that are slow by nature.
The final push toward writing a specialized runner came from a completely different area: fixing bugs. When tests fail, you start looking for the recent change that caused the failure. You try to solve the puzzle in your mind: Where was I last? What did I change? Why is this test failing while the other scenarios still pass? Usually, after much debugging, you fix the problem.
Much like the rest of the world, we at Typemock don’t like debugging. This is where we had the epiphany: everything ties together. We were looking for a speedier solution across the whole develop-and-test experience. It’s not just about writing tests faster, or running them faster. It’s about the complete iterative process: write tests, run tests, fix failing tests, repeat.
Isolator’s test runner set out to answer this entire set of problems. It automatically runs just the relevant tests: those that execute the code you touched. To give the quickest feedback possible, it filters out long-running tests automatically. It shows, on the code, what is covered and by which tests. It takes into account where recent changes occurred, and can point you toward where the bug was introduced. And it encourages you to cover more code with tests, while keeping the cycle tight with relevant feedback, making it easy to continue.
That’s the story of Isolator so far. At the beginning, we wanted to solve one problem. As more people unit test their code, we keep learning how we can help them more by looking at the challenges they face.
Unit testing is still not easy. We’re not done yet.
About the Author
Gil Zilberfeld is the Product Manager at Typemock. With over 15 years of experience in software development, Gil has worked on many aspects of software development, from coding to team management to process implementation. Gil presents, blogs, and talks about unit testing, and encourages developers, from beginners to the experienced, to adopt unit testing as a core practice in their projects. He can be reached at firstname.lastname@example.org, and on his blog.
See: erpstore.codeplex.com/SourceControl/changeset/v...
And: erpstore.codeplex.com/SourceControl/changeset/v...