Maintainable Automated Acceptance Tests

Automated tests that are brittle and expensive to maintain have led companies to abandon test automation initiatives, according to Dale Emery. In a newly published paper, Dale shares some practical ways to avoid common problems with test automation. He starts with some typical automation code and evolves it in ways that make it more robust and less expensive to maintain.

The fundamental idea behind Dale's paper is that test automation is software development. It's a simple yet powerful realization, one he credits Elisabeth Hendrickson with instilling in him.

For most software, maintenance is likely to cost more over the lifetime of the code than the initial development. In the test automation space, this is amplified by the use of record-and-playback scripts. These scripts are easy to create but tend to be difficult to maintain.

Dale asserts that two key factors must be addressed in order to make test automation scripts more maintainable: incidental detail and code duplication.

Incidental detail is all of the 'stuff' that is needed to make the test run but isn't really part of the essence of the test. Examples include variable assignments, loops, calls to low-level routines, automated button clicks, and even the syntax of the scripting language itself. All of these are necessary; they are 'how' the test code works, but they also obscure 'what' the code is really trying to achieve.
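To make that concrete, here is a rough Python/Selenium sketch, not taken from Dale's paper, of a test whose real intent, rejecting a password that contains no letters, is buried under browser setup, navigation, element lookups, and clicks. The signup URL, element IDs, and error message are invented for the illustration.

# A sketch of a test dominated by incidental detail. The signup URL,
# element IDs, and error text are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_rejects_password_with_no_letters():
    driver = webdriver.Firefox()                    # incidental: browser setup
    try:
        driver.get("http://localhost:8080/signup")  # incidental: navigation
        driver.find_element(By.ID, "username").send_keys("fred")
        driver.find_element(By.ID, "password").send_keys("123456")  # the essence
        driver.find_element(By.ID, "confirm").send_keys("123456")
        driver.find_element(By.ID, "submit").click() # incidental: button click
        error = driver.find_element(By.ID, "error").text
        assert "must contain a letter" in error      # the essence
    finally:
        driver.quit()                                # incidental: teardown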

Duplication is simply code that appears multiple times; it is a well-known enemy of maintainability. A change in the system under test may require every instance of the duplicated code to be fixed. This is one of the key problems with record-and-playback automated tests; they are full of code duplication.
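Continuing the hypothetical sketch above, duplication shows up as soon as a second test copies the same steps; a change to the signup form now has to be fixed in every copy.

# Sketch of duplication: this test repeats the same navigation and
# form-filling as the previous one. URL and element IDs remain hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_rejects_password_with_no_digits():
    driver = webdriver.Firefox()                               # duplicated setup
    driver.get("http://localhost:8080/signup")                 # duplicated navigation
    driver.find_element(By.ID, "username").send_keys("fred")   # duplicated form-filling
    driver.find_element(By.ID, "password").send_keys("abcdef")
    driver.find_element(By.ID, "submit").click()               # duplicated click
    assert "must contain a digit" in driver.find_element(By.ID, "error").text
    driver.quit()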

To make the concept concrete, Dale provides sample code, which tests an account creation routine. The code is realistic and correct, but has much incidental detail and duplication. Through successive refactoring, Dale evolves the code to hide the incidental detail and remove duplication. The resulting code is clearly more maintainable. An additional benefit is that the resulting code more clearly reveals the essence of what each test is seeking to verify. Even without knowing the testing tool or other context, one can likely understand what system requirement is not being met when this code fails:

Rejects Password ${aPasswordWithNoLetters}
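The line above appears to be a high-level, keyword-style test step rather than program code. As a rough Python analogue of the same refactored style (the signup_page fixture and its methods are invented for illustration, not Dale's actual code), the test might read:

# Sketch of the refactored test: an intention-revealing helper hides the
# incidental detail. signup_page is a hypothetical pytest fixture wrapping
# the mechanics shown earlier.
def test_rejects_password_with_no_letters(signup_page):
    a_password_with_no_letters = "123456"
    result = signup_page.create_account_with_password(a_password_with_no_letters)
    assert result.rejected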

Back in 1997, the Los Altos Workshop on Software Testing (LAWST), a gathering of software testing professionals, found similar issues with the state of test automation. Cem Kaner documented the results of this gathering and presented them at Quality Week '97. In the paper, Improving the Maintainability of Automated Test Suites, Cem observed:

The most common way to create test cases is to use the capture feature of your automated test tool. This is absurd... The slightest change to the program’s user interface and the script is invalid. The maintenance costs associated with captured test cases are unacceptable.

Here are three of the LAWST group's suggestions for making test automation more maintainable.

Recognize that test automation development is software development

Of all people, testers must realize just how important it is to follow a disciplined approach to software development instead of using quick-and-dirty design and implementation. Without it, we should be prepared to fail as miserably as so many of the applications we have tested.

Use a data-driven architecture

The implementation logic of many test cases will be the same, but that logic needs to be exercised with a variety of inputs and corresponding expected outputs. By separating the data from the test logic, duplication is removed. Should the user interface change, for example, a single fix to the underlying test code can repair a large number of test cases.
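As a minimal illustration of the idea (not from the LAWST paper), a pytest parameterized test writes the logic once and drives it with a table of inputs and expected outcomes; validate_password here is a hypothetical stand-in for the system under test.

# Data-driven sketch: one piece of test logic, many rows of data.
import pytest

def validate_password(password: str) -> bool:
    """Stand-in rule: require at least one letter and at least one digit."""
    return any(c.isalpha() for c in password) and any(c.isdigit() for c in password)

@pytest.mark.parametrize(
    "password, accepted",
    [
        ("123456", False),   # no letters
        ("abcdef", False),   # no digits
        ("abc123", True),    # letters and digits
    ],
)
def test_password_rules(password, accepted):
    assert validate_password(password) == accepted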

Use a framework-based architecture

The framework isolates the application under test from the test scripts by providing a set of functions in a shared function library. The test script writers treat these functions as if they were basic commands of the test tool’s programming language. They can thus program the scripts independently of the user interface of the software.
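In Python terms, such a framework layer might look like the sketch below: test scripts call intention-level methods, and only this module knows about URLs, locators, and clicks. The Selenium calls are standard; the page URL and element IDs are hypothetical.

# Sketch of a shared function library / framework layer. If a locator or
# page flow changes, only this class needs updating; the test scripts that
# call it stay the same.
from selenium.webdriver.common.by import By

class SignupPage:
    def __init__(self, driver, base_url="http://localhost:8080"):
        self.driver = driver
        self.base_url = base_url

    def open(self):
        self.driver.get(self.base_url + "/signup")

    def create_account(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def error_message(self):
        return self.driver.find_element(By.ID, "error").text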

This is nothing more than good programming practice: abstracting away messy implementation detail. Interface-based and object-oriented schools of programming thought have been preaching the benefits of this for years, though the concept goes back as far as the idea of the subroutine.

Using a data-driven approach implemented on top of a well-designed framework can greatly reduce maintenance cost. The real question is how to get there. Dale's article provides an answer: evolve existing code through a series of refactorings until it exhibits these desirable attributes. This makes sense when you consider that test automation is software development.

What have your experiences been with test automation? Have you experienced problems with maintainability? What approaches did you try to overcome these issues and what were the results? Leave a comment and share your experience with the rest of the community.
