Why Traditional Test-Automation Tools Stifle Agility
Hendrickson nicely summarizes her message in the following, index-card-able way:
Why Traditional, Record-and-Playback, Heavyweight, Commercial Test Automation Solutions Are Not Agile

Three key reasons:

- The test-last workflow encouraged by such tools is all wrong for Agile teams.
- The unmaintainable scripts created with such tools become an impediment to change.
- Such specialized tools create a need for Test Automation Specialists and thus foster silos.

After first describing how the test-last aspect of record-and-playback tools has little chance of success on any project, agile or not, Hendrickson explains why this is particularly detrimental to an agile project. On an agile project, a test-last workflow has at least the following problems:

- Waste: the same information is duplicated in both the manual and automated regression tests. Actually, it’s duplicated elsewhere too. But for now, let’s just focus on the duplication in the manual and automated tests.
- Feedback Latency: the bulk of the testing in this workflow is manual, and that means it takes days or weeks to discover the effect of a given change. If we’re working in 4-week sprints, waiting 3-4 weeks for regression test results simply does not work.

...Further, test-last tools cannot support Acceptance Test Driven Development (ATDD). Agile teams need tools that support starting the test automation effort immediately, using a test-first approach.

Secondly, Hendrickson explains how the test scripts fundamental to these record-and-playback tools, which inevitably contain a spaghetti mix of business-level expectations and implementation-specific details about the UI code, turn an agile project's responsiveness to change into a maintenance nightmare. Succinctly stated:

Agile teams need tools that separate the essence of the test from the implementation details. Such a separation is a hallmark of good design and increases maintainability.

Thirdly, due largely to their high costs and proprietary-coding requirements, typical record-and-playback tools lead most organizations to create a dedicated group of "Test Automation Specialists", charged as the keepers of the automated tests. Hendrickson asserts that this works against the collaboration required for effective agility:
Agile teams increase their effectiveness and efficiency by breaking down silos, not by creating test automation superheroes. That means the test automation effort becomes a collaboration: business stakeholders, analysts, and black-box testers contribute tests expressed in an automatable form (e.g., a Fit table), while the programmers write the code to hook the tests up to the implementation.

Hendrickson finishes off with a great discussion about what Agile teams do need from their test automation tools:
Agile teams need test automation tools/frameworks that:

- Support starting the test automation effort immediately, using a test-first approach.
- Separate the essence of the test from the implementation details.
- Support and encourage good programming practices for the code portion of the test automation.
- Support writing test automation code using real languages, with real IDEs.
- Foster collaboration.

Fit, FitNesse, and related tools do just that.

Do take a moment to read Elisabeth Hendrickson's full post for more of her insight and experience. Also, see Brian Marick's blog for more expert advice on agile testing.
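The "separate the essence of the test from the implementation details" idea can be sketched in plain Java. This is not Fit's actual API; the `DiscountCalculator` class and its discount rule are invented here purely for illustration. The table of expectations is the kind of thing a business stakeholder could write (in Fit, it would live in an HTML or wiki table), while the small loop below it is the glue a programmer writes, the only code that knows about the implementation:

```java
// Hypothetical system under test; name and rule are illustrative only.
class DiscountCalculator {
    // 5% discount on orders of $100 or more, otherwise none.
    static double discountFor(double orderTotal) {
        return orderTotal >= 100.0 ? orderTotal * 0.05 : 0.0;
    }
}

public class DiscountRules {
    // The "essence" of the test: a Fit-style table of inputs and
    // expected outputs. Each row: { order total, expected discount }.
    static final double[][] TABLE = {
        { 50.0, 0.0 },
        { 100.0, 5.0 },
        { 200.0, 10.0 },
    };

    public static void main(String[] args) {
        // The thin glue: walk the table and check each row against the
        // implementation. No UI details and no recorded click sequences,
        // so a UI change cannot break these expectations.
        for (double[] row : TABLE) {
            double actual = DiscountCalculator.discountFor(row[0]);
            if (Math.abs(actual - row[1]) > 1e-9) {
                throw new AssertionError("total " + row[0]
                        + ": expected " + row[1] + ", got " + actual);
            }
            System.out.println("total " + row[0] + " -> discount " + actual + " ok");
        }
    }
}
```

Because the expectations are pure data, they could move to a FitNesse wiki page, where non-programmers can edit them, without touching the glue code, which is the collaboration Hendrickson describes.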
But what about Google?
Re: But what about Google?
One of Elisabeth's primary points is that traditional commercial tools (which Selenium certainly is not!) use a proprietary language/syntax for their scripts - Selenium does not.
Another point is that the traditional tools are commercial - i.e., they cost a lot of cash - which encourages organizations to create "test automation specialists" in order to save on license fees. Again, not Selenium.
The question, then, is whether Selenium is part of the "Next Generation" or not ;-)
Thanks for the reply!
I don't see the test automation specialization that Elisabeth writes about at PushToTest customers. Instead I see enterprise development organizations wanting to repurpose the unit test assets their developers write into functional tests, load and performance tests, and service monitors. This is happening at AMD, Ford, The Jackson Laboratory, and other enterprises.