
Maintainable Automated Acceptance Tests


Automated tests that are brittle and expensive to maintain have led companies to abandon test automation initiatives, according to Dale Emery. In a newly published paper, Dale shares some practical ways to avoid common problems with test automation. He starts with some typical automation code and evolves it in ways that make it more robust and less expensive to maintain.

The fundamental idea behind Dale's paper is that test automation is software development. It's a simple yet powerful realization, one he credits Elisabeth Hendrickson with instilling in him.

For most software, maintenance is likely to cost more than the initial development over the lifetime of the code. In the test automation space, this is amplified by the use of record-and-playback scripts. These scripts are easy to create but tend to be difficult to maintain.

Dale asserts that two key factors must be addressed in order to make test automation scripts more maintainable: incidental detail and code duplication.

Incidental detail is all of the 'stuff' that's needed to make the test run but isn't really part of the essence of the test. Examples include variable assignments, loops, calls to low-level routines, automated button clicks, and even the syntax of the scripting language itself. All of these are necessary; they are 'how' the test code works, but they also obscure 'what' the code is really trying to achieve.

Duplication is simply code that appears multiple times; it is a well-known enemy of maintainability. A change in the system under test may require every instance of the duplicated code to be fixed. This is one of the key problems with record-and-playback automated tests: they are full of code duplication.
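
To see both problems at once, consider a hypothetical pair of record-and-playback-style tests, sketched here in Python with Selenium. The URL and element IDs are invented for illustration; Dale's paper uses its own tooling and examples.

    # Hypothetical record-and-playback-style tests (not from Dale's paper).
    # The intent -- invalid passwords are rejected -- is buried in browser
    # mechanics, and those mechanics are copied into every test, so a
    # renamed "submit" button breaks every copy.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_rejects_password_with_no_letters():
        driver = webdriver.Firefox()
        driver.get("http://localhost:8080/signup")   # invented URL
        driver.find_element(By.ID, "username").send_keys("fred")
        driver.find_element(By.ID, "password").send_keys("12345678")
        driver.find_element(By.ID, "submit").click()
        assert driver.find_element(By.ID, "error-message").is_displayed()
        driver.quit()

    def test_rejects_password_with_no_digits():
        driver = webdriver.Firefox()                 # the same steps again
        driver.get("http://localhost:8080/signup")
        driver.find_element(By.ID, "username").send_keys("fred")
        driver.find_element(By.ID, "password").send_keys("abcdefgh")
        driver.find_element(By.ID, "submit").click()
        assert driver.find_element(By.ID, "error-message").is_displayed()
        driver.quit()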

To make the concept concrete, Dale provides sample code that tests an account creation routine. The code is realistic and correct, but it contains a great deal of incidental detail and duplication. Through successive refactorings, Dale evolves the code to hide the incidental detail and remove the duplication. The resulting code is clearly more maintainable. An additional benefit is that it more clearly reveals the essence of what each test is seeking to verify. Even without knowing the testing tool or other context, one can likely understand what system requirement is not being met when this code fails:

Rejects Password ${aPasswordWithNoLetters}
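
An analogous refactoring in Python (a hypothetical sketch, not Dale's actual code) pushes the mechanics into a single named helper, so that each test reduces to one statement of intent:

    # Hypothetical sketch: the incidental detail lives in one helper, and
    # each test states only what it verifies. create_account() is an
    # invented stand-in for the system under test.
    def create_account(username, password):
        # Stand-in rule: a password must contain at least one letter.
        return any(c.isalpha() for c in password)

    def rejects_password(password):
        assert not create_account("fred", password), \
            "expected password %r to be rejected" % password

    a_password_with_no_letters = "12345678"
    rejects_password(a_password_with_no_letters)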

Back in 1997, the Los Altos Workshop on Software Testing (LAWST), a gathering of software testing professionals, found similar issues with the state of test automation. Cem Kaner documented the results of this gathering and presented them at Quality Week '97. In the paper, Improving the Maintainability of Automated Test Suites, Cem observed:

The most common way to create test cases is to use the capture feature of your automated test tool. This is absurd... The slightest change to the program’s user interface and the script is invalid. The maintenance costs associated with captured test cases are unacceptable.

Here are three of the LAWST group's suggestions for making test automation more maintainable.

Recognize that test automation development is software development

Of all people, testers must realize just how important it is to follow a disciplined approach to software development instead of using quick-and-dirty design and implementation. Without it, we should be prepared to fail as miserably as so many of the applications we have tested.

Use a data-driven architecture

The implementation logic of many test cases will be the same, but that logic needs to be exercised with a variety of inputs and corresponding expected outputs. By separating the data from the test logic, duplication is removed. Should the user interface change, for example, a single fix to the underlying test code can repair a large number of test cases.
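
A minimal sketch of the idea, using Python's unittest module and the same invented create_account() stand-in as above: the test logic is written once, and each row of data adds a case.

    # Hypothetical data-driven sketch: one test routine driven by a table
    # of (input, reason) rows. If the account creation interface changes,
    # only create_account() needs fixing, not every test case.
    import unittest

    def create_account(username, password):
        # Invented stand-in for the system under test: a valid password
        # has letters and digits and is at least 8 characters long.
        has_letter = any(c.isalpha() for c in password)
        has_digit = any(c.isdigit() for c in password)
        return has_letter and has_digit and len(password) >= 8

    INVALID_PASSWORDS = [
        ("12345678", "no letters"),
        ("abcdefgh", "no digits"),
        ("a1b2", "too short"),
    ]

    class PasswordRules(unittest.TestCase):
        def test_rejects_invalid_passwords(self):
            for password, reason in INVALID_PASSWORDS:
                with self.subTest(reason=reason):
                    self.assertFalse(create_account("fred", password))

    if __name__ == "__main__":
        unittest.main()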

Use a framework-based architecture

The framework isolates the application under test from the test scripts by providing a set of functions in a shared function library. The test script writers treat these functions as if they were basic commands of the test tool’s programming language. They can thus program the scripts independently of the user interface of the software.

This is nothing more than good programming practice: abstracting away messy implementation detail. The interface-based and object-oriented schools of programming thought have been preaching the benefits of this for years, though the concept goes back as far as the idea of the subroutine.
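
A sketch of what such a shared function library might look like, again in Python with Selenium and with invented element IDs: test scripts call open(), create_account(), and error_message() as if they were built-in commands, and a user-interface change is absorbed in one place.

    # Hypothetical shared function library (not from the LAWST paper).
    # Locators and navigation live here, in exactly one place; test
    # scripts never touch them directly.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class SignupPage:
        URL = "http://localhost:8080/signup"         # invented URL

        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get(self.URL)

        def create_account(self, username, password):
            self.driver.find_element(By.ID, "username").send_keys(username)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()

        def error_message(self):
            return self.driver.find_element(By.ID, "error-message").text

    # A test script then reads like tool commands, with no locators in sight:
    #     page = SignupPage(webdriver.Firefox())
    #     page.open()
    #     page.create_account("fred", "12345678")
    #     assert "letter" in page.error_message().lower()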

Using a data-driven approach implemented on top of a well-designed framework can greatly reduce maintenance cost. The real question is how to get there. Dale's article provides an answer: evolve existing code through a series of refactorings until it exhibits these desirable attributes. This makes sense when you consider that test automation is software development.

What have your experiences been with test automation? Have you experienced problems with maintainability? What approaches did you try to overcome these issues and what were the results? Leave a comment and share your experience with the rest of the community.


Community comments

  • 1999 called, they want their epiphany back

    by Tobias Mayer,

    Gosh, I am just surprised people still use record/playback without editing the scripts into acceptable code. That seems insane. I had the sense I was reading an article written ten years ago. Is this really still so rife? Shouldn't all developers and testers know all of this by now? What a terrible state of affairs. Gladly, I haven't bumped into this so much... but then again, maybe I should be digging deeper.

    Test automation and record/playback are not synonymous. Of course we must automate our testing in every situation where that is practical, and of course our testing code has to be of as high a quality as the code we plan to test. As long as Testing (or QA) is seen as a separate function, lower on the life-form ladder than "development", this problem will persist. We'll continue to pay low salaries and attract second-rate people to these roles. A tester needs to be skilled in software craftsmanship just as much as someone who spends his day writing code.

    I feel sad you had to write this article.

  • Re: 1999 called, they want their epiphany back

    by Jim Leonardo,

    It's the rotten truth that too many still think they can just whip through and record a path through their app with whatever click recorder they are using and call that an "automated test". It's the rare organization that really understands testing, and rarer still the organization that wants to fund it at the level it needs.

    There are about a dozen things I can think of that lead us into "reinventing the wheel" scenarios such as this: the wrong focus in education (on theory rather than engineering; who the heck needs 4-5 semesters of math?), thinking someone with 10 years' experience is "senior", "promoting" the best and brightest out of hands-on work, developers-as-a-commodity thinking, etc., etc. The general theme is that we make so many decisions that keep us from retaining our previous epiphanies. Someone else probably had the 1999 epiphany in 1989, and in 1979, and in 1969, and probably even in 1959.

  • Need real testing tools to support maintainable test scripts

    by Jason Sandfield,

    Unfortunately, test automation is commonly carried out in two extreme ways: record/playback with an expensive commercial tool, or hard-core programming. The testing community should set the right mindset for test automation: it is not as simple as using a recorder, but it should not require years of programming training either.

    We hope there will be more tools like iTest (www.infoq.com/articles/refactoring-test-scripts) that are accessible to testers new to test automation.

  • Tough topic

    by Paco Castro,

    This last year I've been sunk in this very same problem, and I can assure you that automated acceptance testing (functional and non-functional) was a pain, release after release.

    The reasons?
    - Code duplication
    - Communication issues between the testing and the development team
    - Difficulty following the three LAWST group suggestions mentioned above

    My previous background was development and IT on other projects, and I'd say that these reasons were there due to a late adoption of testing techniques (take it as an indicator of awareness of the need for testing), high pressure on deadlines, and the constant inclusion of new functional requirements.

    It's hard for a customer to accept an increase in the budget, or a delay in the delivery of new features, in order to improve the quality of the product, because it's assumed that this quality was inherently paid for during the inception phase.

    Then it seems that automated testing tools come to the rescue. But these tools' marketing campaigns try to justify expensive initial investments with a biased message. I'm sorry, but automation is not practical if you base your testing on record/playback.

    I guess vendors can argue that their tools are not being properly used, but having observed the generated code and the interface-oriented approaches, I find it hard to be sympathetic with them.

    The client will take for granted that part of your testing consists of pressing a red button and collecting results, but you'll be coming across releases, release patches, multiple environments, guaranteeing correct test environment replication, IT restrictions and changes, licensing issues, software delivery delays not followed by a delay of testing deadlines, etc.

    Meanwhile, you'll be struggling to keep a decent, generic, and reusable code base for your tests and to improve the reporting capabilities so that your results are properly displayed.

    I agree with the article's three suggestions. I might comment...

    Recognize that test automation development is software development
    I have yet to see this anywhere, mostly because of enterprise culture and testers' specialization and background.

    Use a data-driven architecture
    This is very hard to follow, but it's key.
    To me it boils down to being prepared to throw away your environment and data at any moment, and being able to rebuild them again automatically.

    I've seen cases where a great issue between testing, development, and the DBAs was the need for the testers to be comfortable with the SQL schema and be able to prepare insertions on demand. Hands-on or not hands-on testing engineering?

    I've also seen several approaches to this issue, but in the end it was a matter of deciding whether you'd be blocked most of your time waiting for others to help you out, or performing a great deal of reverse engineering.

  • Re: Tough topic

    by Daniel Doubrovkine,

    I really appreciate the simple statement that test development is software development. Microsoft realized that eight years ago when it made the job of a software tester obsolete. All testers there are now developers specializing in test.

    My current company spent half a million dollars on a test automation project that produced nothing but unmaintainable bloat. The people writing the test infrastructure were simply not qualified to do so. In the end, more experienced developers stepped in and built the infrastructure (remoteinstall.codeplex.com is an example). The model where your best people work on the test frameworks and less seasoned developers use them to acquire more depth worked. It's the same for testing as for shipping code: the best developers work on the core components of the product, and the others fill in the blanks between well-defined boundaries.

  • Even not using recorded tests

    by Cezar Guimaraes,

    My team used to have recorded tests before I started working with them. All of the pains described in the article were observed, and the decision was obvious: stop recording the tests.
    But that doesn't necessarily mean the team will start to use good practices and develop the tests like software engineers. Unfortunately, a lot of people still believe that the test code base doesn't need to have the same quality as the product code base, and don't understand that maintainability costs also apply to test code.
    I agree with some of the comments that the only way to change this mindset is to hire engineers to work on test who have the same skills and quality expected of the software engineers who work on the product code.
