
Design for Testability – The True Story


Every application should be tested. Kent Beck said, "Code that isn't tested doesn't work - this seems to be the safe assumption." However, not all applications are easy to test. More often than not, the effort invested in testing a specific area is in inverse proportion to how hard it is to test. Put simply, the parts of the system that are easy to test get tested a lot more than those that are hard to test. Testing is a major activity in any development lifecycle, and a large part of a project's budget is spent on it. If we want that effort to be used effectively, the ease of testing should be addressed from the early stages of building the system.

One of the most common strategies today for improving our ability to test a system is test automation. The adoption of test-first practices (TDD, ATDD) by the majority of agile teams demonstrates how test automation needs are addressed from the earliest stages of a system's conception. In addition, a testable system can evolve easily - you can add features knowing that existing ones have not broken.

The main idea of the test-first approach is that tests are written before the production code. When adding a new feature to the system, automated test cases are written first to make sure the functionality is correct and answers the needs expressed in the requirements. This guarantees that any part added to the system is accompanied by a set of automated tests, and therefore the system is, by definition, a testable one. The problem arises when trying to approach an existing system with a code base that doesn't have any tests - a testable system does not evolve on its own.
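
As a minimal sketch of that workflow (the ShoppingCart class and its AddItem and ItemCount methods are hypothetical names used only for this illustration, not part of the examples later in this article), the test below would be written before the production code exists, fail at first, and only pass once the feature is implemented:

TEST(ShoppingCartTests, AddItem_IncreasesItemCount)
{
  // ShoppingCart does not exist yet - this test is written first
  // and drives the implementation of the AddItem feature.
  ShoppingCart cart;
  cart.AddItem("bread");

  ASSERT_EQ(1, cart.ItemCount());
}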

In cases where testing takes a very low priority during the system design phases, the resulting system is hard to test. In turn, since writing automated tests is now hard, the actual effort invested in it is reduced; most testing effort is done manually, which reduces the need for a testable system and lowers the priority of automated testing even further.

One way to break this flawed loop of reasoning is to understand that a good suite of automated tests contributes immensely to the flexibility of the system. In some cases, the benefits it brings outweigh those achieved by a good system design. When designing a system, the main goal is a working system that is easy to maintain and extend. We want the current solution to work correctly; our design is aimed at making the system flexible enough to minimize the cost of change. However, when analyzing the cost-of-change factors, we usually see that the need to verify at the end that nothing has been broken (regression) is actually what costs us the most. A suite of automated regression tests, like those written when using the test-first approach, is probably the most effective way to reduce regression costs.

In this article, we will focus on testing at the unit level. We will demonstrate how a testable system can grow, what considerations should be taken into account during system design, and how tests can be written for an existing system even if it was not designed to be tested. Of course, unit testing is not the only level of testing, and the challenges at other testing levels are different. However, many of the concepts and principles for writing effective unit tests presented here are common and can be applied to higher levels of testing.

Design for Testability

When we talk about Design for Testability, we are talking about the architectural and design decisions that enable us to easily and effectively test our system. We first must understand the context in which we are writing tests.

When we approach writing automated unit tests (AUT), the main difficulty we face is the need to isolate the tested parts of the system from the rest of it. In order to test the functionality of a class, we first need to detach it from the rest of the system in which it is designed to work. We then need to create an instance of that class, activate the tested functionality, and finish by making sure the resulting behavior matches our expectations. However, unless the system is designed specifically to enable this, in most cases it will not be simple.

When writing AUTs we face a few issues. For example:

  1. Instantiating a class - In most cases a class is not meant to be created in a standalone manner, as we do when writing tests. Normally, classes are created as part of an entire system; since they depend on other parts of the system, they expect those parts to be there and working correctly. Setting up these parts in the testing environment is expensive and complex. To avoid that, we need a mechanism for creating the tested instance without creating the rest of its dependencies as well.
  2. Isolation from dependencies - In almost all cases, a single class does not work alone. In a typical system, each class interacts with other classes and depends on them to function properly. When writing tests, our ability to isolate the given class from all other dependencies is crucial, and we must put mechanisms in place that enable us to do so easily.
  3. Verifying interactions - In order to write meaningful tests, the expected behavior must be checked. In some cases this behavior can be observed by looking at the resulting state of the class at the end of the test. However, in many cases, the tested class has no meaningful state of its own, and its purpose is to correctly interact with other parts. In order to verify this, we need a way to observe the interaction during testing and make sure all expected interactions were carried out as they should be.

In order to write effective unit tests for a system, we need to effectively isolate each class, and surround it with fakes (created as part of the test), which will enable verification of all interactions carried out by the class under test. The ease of writing unit tests is in direct correlation to this ability.

Examples

Below we will cover some basic examples of the issues encountered when trying to write unit tests.

Internal Object Creation

In the House class (Listing 1), we have a couple of rooms and a front door. All of the house's internal instances are created inside the constructor and are not exposed.

class EXPORT House       
{ 
public:
  House() 
  { 
   bedroom = new Bedroom(); 
   kitchen = new Kitchen();
   door = new FrontDoor();
  }
  void LeaveHouse()
  {
   kitchen->ShutDownAllAppliances();
   bedroom->TurnLightOff();
   LockFrontDoor();
  }
  bool IsFrontDoorLocked()
  {
   return door->IsLocked();
  }
private:
  void LockFrontDoor()
  {
   door->LockDoor();
  }

   Kitchen* kitchen;
   Bedroom* bedroom;
   FrontDoor* door;
};

Listing 1 - the House class

We would like to write a test case verifying the front door is locked after we leave the house. Such a test can be seen in listing 2.

TEST(HouseTests, AfterLeavingHouse_FrontDoorIsLocked)
{
  House target;
  target.LeaveHouse();

  ASSERT_TRUE(target.IsFrontDoorLocked());
}

Listing 2 - Simple test-case: After leaving the house, front door is locked

When executing the test, the House class invokes some functionality on its rooms, and this can present a problem. In many cases the House logic can be complex and can depend on other parts of the system. For example, if this code is part of a smart-house system, we would expect it to issue direct commands to the hardware controller of the house. But in our test environment, the House class is not hooked into the entire system and these commands will fail.
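
To illustrate why, a hypothetical implementation of the Kitchen class might look like the sketch below (the HardwareController type and its methods are assumptions made for illustration only, not part of the article's code); in a unit-test environment no such controller is available, so the call would fail:

class Kitchen
{
public:
  void ShutDownAllAppliances()
  {
    // In a smart-house system this might talk to real hardware -
    // something that is not available when a unit test runs.
    HardwareController* controller = HardwareController::Connect("kitchen");
    controller->PowerOffAllAppliances();
  }
};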

It's obvious that this class was not built for testability. Testability instructs us to decouple classes from each other, using interfaces. Creating instances of dependencies inside the constructor goes against this idea.

In our unit test, we want to verify that the House logic invokes the right commands without actually carrying them out. In this specific test, we would like to ignore the commands issued to the actual rooms and only verify that the door is locked. A general approach would be to introduce fakes instead of the real room classes. We first need to create those fakes and then hook them into the tested house instance.

The interesting point is that there is no native mechanism for hooking the fakes into the house instance. The House class does not expose its internal members and has no mechanism for replacing them.

One approach to solving this is to apply the Dependency Injection (DI) principle and move the creation of the rooms outside the House class, as shown in Listing 3. We do this by first extracting interfaces for the Kitchen and Bedroom classes and then adding a constructor to the House class:

House(IBedroom* room1, IKitchen* room2)
  : bedroom(room1), 
    kitchen(room2)
{
  door = new FrontDoor();
}

Listing 3 - A new House DI constructor

Listing 4 shows how the test can use this mechanism to inject the needed fakes for the testing context.

TEST(HouseTests, AfterLeavingHouse_FrontDoorIsLocked)
{
  FakeBedroom fakeBedroom;
  FakeKitchen fakeKitchen;

  House target(&fakeBedroom, &fakeKitchen);
  target.LeaveHouse();

  ASSERT_TRUE(target.IsFrontDoorLocked());
}

Listing 4 - usage of fakes in the test

In order for this to work, the FakeBedroom and Bedroom classes (and respectively, the FakeKitchen and Kitchen classes) must derive from the same interface/base class.
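
As an illustration, a minimal version of such an interface and its fake might look like the sketch below (the exact method set is an assumption based on the calls the House class makes):

class IBedroom
{
public:
  virtual ~IBedroom() { }
  virtual void TurnLightOff() = 0;
};

// The real Bedroom class derives from IBedroom as well;
// the fake simply ignores the call.
class FakeBedroom : public IBedroom
{
public:
  virtual void TurnLightOff() { }
};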

If our class had been written this way originally, it would be testable out of the box. But if not, as in Listing 1, we would need to change it - and changing existing code without tests, just to add tests, is risky.

Interaction testing on a concrete class

A common scenario is one in which we want to verify that an interaction between two objects has been performed correctly. In the example in Listing 1, we would like to verify that when the LeaveHouse logic is executed, the appliances in the kitchen are turned off. Design-wise, the actual mechanism by which they are turned off is the responsibility of the Kitchen class; however, the house is responsible for telling the kitchen to do that (i.e. invoking ShutDownAllAppliances).

TEST(HouseTests, WhenLeavingHouse_KitchenAppliancesAreShutDown)
{
  FakeBedroom fakeBedroom;
  FakeKitchen fakeKitchen;

  House target(&fakeBedroom, &fakeKitchen);
  target.LeaveHouse();

  ASSERT_TRUE(fakeKitchen.WasCalledShutDownAllAppliances());
}

Listing 5 - usage of fakes in interaction testing

In addition to introducing a common interface for the Kitchen class and adding the DI constructor, we will need to add some verification logic to our FakeKitchen class. Listing 6 shows the resulting FakeKitchen implementation, which is quite a lot of code.

class FakeKitchen : public IKitchen
{
public:
  FakeKitchen()
       : _ShutDownAllAppliancesWasCalledFlag(false)
  { }
  bool WasCalledShutDownAllAppliances()
  {
     return _ShutDownAllAppliancesWasCalledFlag;
  }
  //IKitchen Implementation
  virtual void ShutDownAllAppliances()
  {
    _ShutDownAllAppliancesWasCalledFlag = true;
  }
private:
  bool _ShutDownAllAppliancesWasCalledFlag;
};

Listing 6 - FakeKitchen verification logic

Dependencies through static methods

Another common case is the usage of static methods, for example through the singleton pattern. In this example, we want to test the BusinessLogic class. It uses a log to keep track of important business information. We would like to test the flow of finishing a transaction, specifically the error handling mechanism implemented in that flow. We want to make sure that if someone tries to close an unknown transaction, an exception is thrown to let the client know about the problem.

class EXPORT BusinessLogic          
{          
public:          

  void FinishTransaction(int transactionID, Status status)
  {
    ITransaction* transaction = FindTransaction(transactionID);
    transaction->UpdateStatus(status);

    char buf[1000];
    sprintf(buf,"Transaction %d Finished with status: %d",
      transactionID,
      status);
    LoggerManager::Instance()->GetLogger("BusinessLogic")
      ->LogMessage(buf);
  }
private:
  ITransaction* FindTransaction(int transactionID)
  {
    ITransaction* trans = _transactions[transactionID];
    if (trans==NULL)
    {
      LoggerManager::Instance()->GetLogger("BusinessLogic")
        ->LogMessage("Transaction Id was not found");
      throw MyException();
    }
    return trans;
  }

  map<int, ITransaction*> _transactions;
};

TEST_F(BusinessLogicTests,UnknownTransaction_ThrowsException)
{
  BusinessLogic target;

  ASSERT_THROW(target.
    FinishTransaction(5,Status::SUCCESS), MyException);

}

Listing 7 - the BusinessLogic class and test

The logging mechanism can be complex. In a simple scenario, logging information can be written to the local disk, but in a realistic system, logging information is frequently sent to a dedicated server or stored in a database. Invoking logging calls during test execution will fail unless the entire logging mechanism is set up properly. This can be very expensive and time consuming.

When testing the business logic, the actual logging is less important. We would like to use fakes to disable the actual logging while still allowing the rest of the logic to be tested. The problem is that the logger instance is retrieved from a central repository, which prevents us from simply replacing it. Also, since the logger repository is implemented as a singleton, we can't directly fake it and override the returned logger.
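
To see why, consider what such a singleton repository typically looks like. The sketch below is a hypothetical LoggerManager, not the article's actual implementation (the ILogger interface is also assumed): the static Instance method and the private constructor leave no seam through which a test could substitute a fake.

class ILogger;  // the logger interface returned to callers (assumed)

class LoggerManager
{
public:
  static LoggerManager* Instance()
  {
    // One hard-wired instance for the whole process - tests cannot
    // replace it without changing this code.
    static LoggerManager instance;
    return &instance;
  }
  ILogger* GetLogger(const char* category);  // returns the real logger
private:
  LoggerManager() { }  // not constructible from test code
};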

Many techniques can overcome these issues. A simple solution is to first wrap the calls to the logger inside a virtual method. This enables us, during the test, to inherit from the business logic class, override this wrapping method, and disable the actual invocation of the log. Listing 8 shows the resulting code, followed by the new test class and the test case.

class EXPORT BusinessLogic         
{       
public:          
  void FinishTransaction(int transactionID, Status status)       
  {      
    ITransaction* transaction = FindTransaction(transactionID);     
    transaction->UpdateStatus(status);

    char buf[1000];
    sprintf(buf,"Transaction %d Finished with status: %d",
      transactionID,
      status);
    WriteToLog(buf);
  }

protected:
  virtual void WriteToLog(char* msg)
  {
    LoggerManager::Instance()->GetLogger("BusinessLogic")
      ->LogMessage(msg);
  }

private:
  ITransaction* FindTransaction(int transactionID)
  {
    ITransaction* trans = _transactions[transactionID];
    if (trans==NULL)
    {
      WriteToLog("Transaction Id was not found");
      throw MyException();
    }
    return trans;
  }

  map<int, ITransaction*> _transactions;
};

// Disable calls to log by overriding
class TestBusinessLogic : public BusinessLogic
{
  virtual void WriteToLog(char* msg)
  { }
};

TEST_F(BusinessLogicTests, UnknownTransaction_ThrowsException)
{
  TestBusinessLogic target;

  ASSERT_THROW(target.
    FinishTransaction(5,Status::SUCCESS), MyException);
}

Listing 8 - the changed BusinessLogic class and test code

These examples show only a few of the issues encountered when trying to write automated unit tests for an existing system. When trying to make a system testable, there are a few simple guidelines that make writing tests easier.

  1. Avoid statics - Usage of statics introduces hard dependencies; static methods can't be overridden and replaced, and are therefore harder to fake.
  2. Use dependency injection - This allows fakes to be inserted instead of the real logic, making isolation easy (see the sketch after this list).
  3. Work against an interface - Working against a pure abstract interface ensures all methods can be inherited and faked, and is in general a good idea.
  4. Allow easy initialization - During testing, instances will be created and destroyed all the time. Making this easy will go a long way in reducing the effort needed to write tests.
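
As a sketch of how these guidelines could be applied to the BusinessLogic example (the ILogger interface name is an assumption; the article itself solves the problem differently, by wrapping the log call or by mocking), the logger would be passed in through the constructor as a pure interface, so a test can supply a fake without touching any static state:

class ILogger
{
public:
  virtual ~ILogger() { }
  virtual void LogMessage(const char* msg) = 0;
};

class BusinessLogic
{
public:
  // Guidelines 2 and 3: the logger dependency is injected as a pure interface.
  // Guideline 1: no static LoggerManager lookup inside the class.
  // Guideline 4: trivially constructible in a test with a fake logger.
  explicit BusinessLogic(ILogger* logger) : _logger(logger) { }
private:
  ILogger* _logger;
};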

Working this way is in line with good design practices. Following principles such as S.O.L.I.D leads to a more testable system. Looking at the above examples, we can see that the usage of Dependency Injection is clearly a change in a good direction, and the wrapping of the log invocations in the second example also made the code a little better: all usages of the singleton were extracted into a single location, which will make changes to that mechanism easier to implement.

Drawbacks

However, the design constraints for creating a testable system are not applied naturally in most contexts. When working against a given code base, chances are there will be numerous places in which these guidelines are violated. In order to write automated tests for such a code base, it first must be changed. These changes require non-trivial effort and pose risks. Here are the main challenges in writing tests for a given system:

  1. Current system design - Many programmers, unless specifically asked to do so from early on, will not adhere to the guidelines that allow for testability. This means the system will have to change to allow these tests. While the effort involved depends upon the exact context, it is significant. In some situations it will be perceived as too big and will be delayed until the "right time", which is never "now".
  2. Sometimes making a system testable includes changing parts we don't have control over - for example, a source-less third-party tool or framework that does not adhere to the Design for Testability paradigm. When that happens, other means must be implemented to work around these issues, and these may complicate the resulting system design.
  3. Changing a working system can be a risky business; if not done carefully it might break the system. The best tool for guarding against such breakage is a proper suite of automated tests. However, in order to write those tests effectively, the system needs to change - so we are stuck in a closed loop.

Even when we put in the effort to make our system testable, keeping it that way means the team must continue working in a certain way. All new additions must be made while keeping the system testable. All developers working on the system must learn how to write testable code and conform to the design principles. For many programmers, this means learning new ways to develop, which requires time and effort. The programmers need the discipline not to stray from the testing path, and team leaders need to train new people on the team in these methods.

And last, while a good system design is crucial, and there is a strong correlation between good design principles and a testable system, most development contexts are more complex. In some cases, making the system testable goes against what seems to be the better design at the moment. Sometimes the levels of abstraction needed for testing seem like too much and may over-complicate the code, making it less readable than other alternatives.

Writing tests for existing code is a real challenge, and that's why many companies are not doing it. In order to write unit tests, you need not only to learn the tools and the new design skills and get used to the process changes of Test Driven Development (TDD); when dealing with existing code bases, you also need to learn how to make the current system testable. This makes many people trying TDD, or even plain unit testing, think it's too hard to do. Indeed, when trying to do everything at once, it can be hard.

Alternatives

However, what if most of the technical issues went away? What if we could isolate anything in the system, no matter how the system is designed? What would that enable us to do?

Naturally this ability exists in several available tools.

Before continuing, it is important to stress again: good system design can't be replaced and will always be crucial in reducing development costs. To truly make the best of TDD, one needs to learn how to write meaningful tests and to improve one's design skills. That's the only way to make a system easier to test and maintain in the long run. The true value of tools is in allowing us to decouple the learning of testing skills from design skills, letting one learn each skill separately and making the overall process easier and safer over time.

Let's see how the previous code examples can be tested using a powerful mocking framework such as Isolator++[1]. Notice how the same code can now be tested without changing it.

Internal Object Creation

Let's rewrite the house test (Listing 2) using a mocking framework:

TEST(HouseTests, AfterLeavingHouse_FrontDoorIsLocked) 
{
  Kitchen* fakeKitchen = FAKE_ALL<Kitchen>();
  Bedroom* fakeRoom = FAKE_ALL<Bedroom>();

  House target;

  target.LeaveHouse();
  ASSERT_TRUE(target.IsFrontDoorLocked());
}

Listing 9 - Simple test-case using a mocking framework: after leaving the house, the front door is locked

In the first two lines we create fakes for the kitchen and bedroom using the mocking framework. The FAKE_ALL directive not only creates the fakes, it also injects them in place of the real implementations the next time someone creates new instances of those classes. Inside the constructor of the House class, the internal bedroom and kitchen instances created will be hooked into our fakes, and all calls made on those instances will be ignored (which is the default behavior of the fakes created). The end result is that this test will execute without the need to introduce the new constructor.

We've arrived at the same isolation level we needed in Listing 3, but without changing the code. In other words - we've achieved testability without the risk of code modifications.

Interaction testing on a concrete class

Let's rewrite the house interaction test (Listing 5) using a mocking framework:

TEST(HouseTests, WhenLeavingHouse_KitchenAppliancesAreShutDown)
{
  Kitchen* fakeKitchen = FAKE_ALL<Kitchen>();
  Bedroom* fakeRoom = FAKE_ALL<Bedroom>();

  House target;
  target.LeaveHouse();

  ASSERT_WAS_CALLED(fakeKitchen->ShutDownAllAppliances());
}

Listing 10 - Simple interaction test using mocking: when leaving the house, the Kitchen is told to shut down its appliances

We create fakes for the kitchen and bedroom using the FAKE_ALL directive of the mocking framework. As before, the mocking framework does all the needed replacements, and the only thing left for us is to use the ASSERT_WAS_CALLED directive to make sure the proper call was made - without any modification to the original code.

Dependencies through static methods

Mocking frameworks also give us the option to isolate static and global functions. In the next example we use this ability to write the test for the business logic class (given in Listing 7):

TEST_F(BusinessLogicTests,UnknownTransaction_ThrowsException)
{
  WHEN_CALLED(
      LoggerManager::Instance()->GetLogger("BusinessLogic")).
    ReturnFake();

  BusinessLogic target;

  ASSERT_THROW(target.
    FinishTransaction(5,Status::SUCCESS), MyException);
}

Listing 11 - Testing error handling of our business logic using mocking:

The WHEN_CALLED directive instructs the mocking framework to return a fake logger any time someone invokes the GetLogger method, instead of going through the LoggerManager production logic. Since, by default, the fake logger will ignore all calls to LogMessage, the given test will pass correctly on the original BusinessLogic code as-is, without the need to extract the virtual WriteToLog wrapper and override it in our test. We have testability without changing the code.

Summary

Modern development calls for test automation. However, starting out can be challenging: adopting new ways of doing things is never an easy task. Sometimes adopting test-first approaches and learning how and what to test, while also trying to learn new design techniques, is too much to take on all at once.

Being able to automate tests and having a good system design is the best way to achieve high quality in any code, and only high-quality code enables us to effectively enhance and extend it. Power tools are there to help ease the journey. They enable writing tests for most common design patterns in existing code as-is. They don't require preliminary work and let you start writing tests almost immediately - in essence, decoupling testability from design.

Leveraging these tools' power, while keeping the ultimate goal in mind, is a good strategy for achieving truly tested software while retaining the necessary freedom of design.

About the Author

Gil Zilberfeld is Product Manager of Typemock. With over 15 years of experience in software development, Gil has worked on a range of aspects of software development, from coding to team management and the implementation of processes. Gil presents, blogs and talks about unit testing, and encourages developers, from beginners to the experienced, to implement unit testing as a core practice in their projects.

 

 


[1] Isolator++ is a mocking framework for C/C++ developed by Typemock Ltd.


Community comments

    • Design for Testability is a Fallacy

      by Daniel Bullington,


      I have been preaching this point for years in consulting and commercial practice, especially when TDD became the soup of the day: designing purely for "testability" is a dangerous mindset when approaching the design of software systems. Yes, code should be testable and tested. But a high measure of testability is inherently achieved as a welcome side effect of a well designed software architecture and well engineered code; this strategy stands up to the test of time and space.

    • Re: Design for Testability is a Fallacy

      by monser corp,


      I think you are taking it too far. The author is actually saying something along those lines. No one designs a system with just one thing in mind, but you have to have something in mind; testability is definitely one of them, and it will result in good design, or at least sustainable design.

    • Are these concepts taught in western colleges?

      by Leung Michael,


      I come from Asia and have had a hard time trying to apply DDD, TDD, DI, unit testing, agile, and automated testing to my projects. One of the main reasons is education. I learnt from recent graduates that these concepts were not taught in college here. Does anybody know if these Software Engineering concepts (especially those that emerged after 2000) are taught in most western colleges?

    • Re: Are these concepts taught in western colleges?

      by Hesam Chiniforooshan,


      As a Software Engineering instructor, I think teaching TDD, and more importantly "pushing students to use it", is a MUST. In the Computer Science Department of the University of Toronto, the Software Design and Software Engineering courses cover TDD in detail. This goal is achieved by reducing our expectations of students' projects in terms of the number of features to be implemented, while emphasising the quality of the design and the completeness of the test cases. We observed that after third year, students get used to "designing with testability in mind".

    • Re: Are these concepts taught in western colleges?

      by Bart Dubois,


      I could say that in most cases the teachers focus on what the students do, not how they achieve it. I know it is a matter of the time that needs to be given to the student or group. On the other hand, when they go to work they know the language and the libraries, but they do not know how to work and get things done.

      This is the common problem I observe, at least in Poland.
