Automated Acceptance Tests - Theoretical or Practical
There have been sporadic reports of success in writing requirements and automating them as acceptance tests (sometimes called test-driven requirements, story-driven development, and, depending on whom you ask, behavior-driven development). Yet this practice is used by only a small minority of the community. Some thought leaders have gone on the record saying that it is a bad idea and wasted effort. Are automated acceptance tests written at the beginning of each iteration just a theoretical assertion that has been proven ineffective by the lack of adoption?
First of all, let's define what is meant by automated acceptance tests: they are tests, typically written at the beginning of an iteration, that are an executable form of the requirements. They are detailed examples of how the system is supposed to function when the requirement they describe is complete - an answer to "what does it look like when I'm done?" Historically FIT and FitNesse have been the tools of choice, although today there is a new set of tools such as Cucumber and RSpec.
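To make the idea concrete, here is a minimal sketch of what an executable requirement might look like, independent of any particular tool. The `shipping_cost` function and its pricing rule are hypothetical stand-ins for the system under test, invented purely for illustration:

```python
# Hypothetical executable requirement: "Orders of $100 or more ship free;
# smaller orders pay a flat $5 shipping fee."
def shipping_cost(order_total):
    """Toy system under test; the pricing rule is an invented example."""
    return 0 if order_total >= 100 else 5

# Each test is a concrete example of "what it looks like when I'm done."
def test_large_order_ships_free():
    assert shipping_cost(150) == 0

def test_boundary_order_ships_free():
    assert shipping_cost(100) == 0

def test_small_order_pays_flat_fee():
    assert shipping_cost(99.99) == 5
```

Tools like FIT/FitNesse express the same examples as tables that non-programmers can read and edit; the point is the same either way - the requirement is stated as concrete, checkable examples before the feature is built.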
InfoQ: So that sounds interesting. It makes sense to me to have tests out at the level of business, things that the customer understands and is expecting to see. And you talk about examples. I don’t know if you are using that to simplify, to get away from the word test. But you are really talking about tests for customers, aren’t you? Can you tell us more about that?
Brian: The interesting thing that I have noticed is that it doesn't seem to work anywhere near as well as unit-level Test-Driven Development, in the following sense: when you design a test, you often get all kinds of insight that is clearly a win, and of course the testing still does everything that testing used to do. But the actual writing of the supporting code that is needed to make that example, that test, run automatically without human intervention very often doesn't produce any of that same "Aha! Wow, I'm really glad I wrote that code, I really learned something." It's more "I just wrote a bunch of code that was boring, and I didn't learn anything," so there is not the reward you get from writing that code; you don't get that additional benefit on top of the benefit of having the test. And to date it hasn't seemed like acceptance tests, these outside examples, lead to really profound structural consequences the way unit tests do through refactoring. So the question I have is: if the value comes from the creation of the test, and there is a substantial cost to writing all this code, are we really getting our money's worth by actually automating those tests? Because if we are not getting our money's worth, why don't we just put the tests on a whiteboard, have the programmers implement them one at a time, checking them manually, even showing them to the product owner manually, and when we are done with one, why not just erase it and forget it? Why do we have this urge to save these tests and run them over and over and over again?
There are, however, still many other thought leaders in the community who recommend using automated acceptance tests; these include Robert C. Martin, Joshua Kerievsky, and James Shore, to name a few.
Chris Matts gives an interesting way to look at the problem, as a problem of information arrival. Suppose you have a software development process (not necessarily Agile-specific) that does not have acceptance tests written upfront. The QA team typically runs their own test scenarios and when a defect is found it is fed back to the software developers. These defects are found randomly and therefore affect the velocity of the software team randomly because a percentage of their capacity is used to address these defects. New information arrives randomly to the development team.
Now, consider if the tests written by the QA department are written before the development begins. The information provided by these scenarios now occurs at the beginning of the iteration in a predictable fashion. Therefore the uncertainties are reduced, velocity becomes more stable (fewer random interruptions), and that means more predictability.
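Chris Matts's information-arrival argument can be illustrated with a toy simulation. The capacity and defect numbers below are made up for illustration; the point is that the expected amount of defect-fixing work is the same in both cases, but when the information arrives upfront the variance in velocity collapses:

```python
import random

def iteration_velocities(upfront, n_iterations=1000, capacity=10, defects=4):
    """Toy model: each defect costs 1 unit of capacity per iteration.

    With upfront acceptance tests, the defect-fixing cost is known at
    planning time and is constant. Without them, defects are discovered
    randomly during the iteration, so the cost varies (here modeled as
    2*defects coin flips with probability 0.5, same expected total).
    """
    random.seed(0)  # fixed seed so the sketch is reproducible
    velocities = []
    for _ in range(n_iterations):
        if upfront:
            cost = defects  # known at iteration start: predictable
        else:
            cost = sum(random.random() < 0.5 for _ in range(2 * defects))
        velocities.append(capacity - cost)
    return velocities

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

Running this, the mean velocity is roughly the same in both scenarios, but the upfront case has zero variance while the random-arrival case bounces around - which is exactly the predictability argument the paragraph above makes.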
So, are automated acceptance tests just something the elite (or lucky) have been able to make work? Is there an unseen internal flaw that has caused their less-than-stellar adoption? Or are they simply difficult but with proven benefits - a practice that every software development team should aspire to adopt?
stating the obvious
Re: stating the obvious
One of the things that comes out when watching Brian Marick's interview is that there are many teams doing just fine with unit testing for regressions and manual, ad-hoc QA testing for acceptance. He strongly suggests that the fact that automated acceptance tests have NOT caught on in the industry is indicative of their value (or lack thereof) compared to the effort needed to create and maintain them.
use them for data exchange...
Kevin E. Schlabach
Automated acceptance tests can have a high value for testing data exchange (as opposed to screen manipulation). For example, test a signup or registration form for all of the required fields, field-level logic, etc. Used as regression tests, this ensures data integrity. They are also a good fit for testing REST interfaces or other publicly accessible APIs.
Don't use them for drag-and-drop UI testing or color/stylesheet testing.
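The kind of field-level check described above can be sketched as a FIT-style decision table: rows of example submissions paired with the errors we expect. The validator and its rules here are invented for illustration, not a real API:

```python
# Hypothetical signup-form validator; the rules are invented examples.
def validate_signup(fields):
    """Return a list of error codes for a signup submission."""
    errors = []
    if not fields.get("email"):
        errors.append("email_required")
    elif "@" not in fields["email"]:
        errors.append("email_invalid")
    if len(fields.get("password", "")) < 8:
        errors.append("password_too_short")
    return errors

# Each row pairs an example submission with the expected errors,
# mirroring how such rules might appear in a FIT/FitNesse table.
CASES = [
    ({"email": "a@b.com", "password": "longenough"}, []),
    ({"email": "", "password": "longenough"}, ["email_required"]),
    ({"email": "no-at-sign", "password": "short"},
     ["email_invalid", "password_too_short"]),
]

def run_table():
    for fields, expected in CASES:
        assert validate_signup(fields) == expected
```

Because the table exercises data rules rather than pixels, it stays stable across UI redesigns - which is why data exchange is a better automation target than drag-and-drop or styling.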
difficult to sell in the organization
Also, FitNesse's architecture is not suited for integration with build servers and source control systems (to track and keep the requirements in sync with the program source code); quite the opposite. We like the wiki markup syntax, but the idea of keeping the requirements in a wiki is just completely wrong. They are far too precious for that. Modules that modify some of their own source files on each test run are also not easy to make SCMs and Maven accept. Some projects end up in a situation where only the developers are able to run the requirements tests, since the test rig is so fragile and incompatible with the other parts of the build system. This makes it very hard to demonstrate the benefits of continuous requirements testing.
The customers/product owners who have invested enough in the requirements to take ownership at the test-case level are, however, amazed at the value this little tool can give you. Imagine doing what used to be several man-months' worth of regression testing in 3 minutes at the click of a button! (It's only too bad we don't get the test results as Excel spreadsheets...)
Functional test automation is valuable and viable
Functional tests (of which acceptance tests are a subset), while driving a different kind of value than unit tests and TDD do for developers, are uniquely beneficial in their own right. An automated regression suite is essential if you want your project to be nimble in terms of the rapid change agile processes talk about. Our product Mingle has thousands of automated functional tests in addition to unit tests, and the changes we make wouldn't be possible without that kind of coverage. David Rice wrote a more elegant description on our blog last year. Automated regressions also give you fast feedback - if your manual regression takes days to run, you're prey to a lot of uncertainty at the end of your release cycle. Lastly, automated regressions free your QA people up to do more exploratory testing, which is the biggest win. Too often developers think that their unit-level testing is a sufficient substitute for a trained, high-quality tester. An awesome tester doing real exploratory testing will blow your mind (see some of Michael Bolton's stuff).
Frankly, automated testing to date hasn't had the right kind of tool support to make it viable. Most commercial tools are built assuming that you'll write a test once, which makes no sense in general, but particularly so in an agile context. And, most open source tools require you to write substantial code around the tool to make it all hang together in your organization (per what @Eirik Maus talks about in the comments).
ThoughtWorks is building Twist to give QA and test automators the right kind of tool support to make functional test automation work. I won't give a product pitch in this forum, but I encourage everyone to move from a "should we automate?" to a "how do we automate?" discussion. If the above benefits are true in any meaningful amount, then saying it's too hard is a cop-out. Let's fix the tool support issue instead of walking away.
So, is Tooling the problem?
In my personal experience - going back to 2000, when we were using a precursor of FIT (co-written by Dave Rice and Ward Cunningham for ThoughtWorks) - the main pain points involved:
a) Learning - as a team - to collaborate over getting tests written. And learning to cross-pollinate expertise among testers, analysts, and developers.
b) Discipline in writing the tests.
c) Retrofitting the code with fixtures for tests (a tool issue that took a team of about 5 people a few iterations).
Of all of these problems, tooling was the easiest by far.
Re: So, is Tooling the problem?
Tooling is definitely a problem. But people can solve the tooling problem if they want.
Turning the question around, what do people do if they don't use automated acceptance tests? Especially, what do they do to keep up with regression testing as the application grows?
More at blog.gdinwiddie.com/2009/06/17/if-you-dont-auto...
is that really it?
For something written by an editor of such a respectable news source in the community, I find it hard to believe that this is the whole article and that you haven't made an effort to do some real research into the subject. There's a wealth of material online and in print that helps people get started and implement this correctly, and you could at least have asked for an informed opinion on the agile-testing list (see Brian Marick's response to your article there).
In saying "consider if the tests written by the QA department," you probably uncovered the reason why it has not worked for you - the benefits of BDD/ER/ATDD etc. are not test automation but clarity in specifications and development, and instead of the QA team dealing with this alone, the real benefits come only when the entire team pitches in, including customer representatives, business analysts, and developers.
Lisa Crispin and Janet Gregory have written about this in their Agile Testing book and call the pattern "the whole team". I've written lots on that subject, including two books that describe in detail how to make this work and numerous blog posts. In Bridging the Communication Gap I offer several patterns, such as specification workshops and collaborative specifications, that are key to making this work properly (see a summary of these patterns under key ideas on acceptancetesting.info; also see my post on specification workshops from November last year), and I've presented on this subject very often, including at the software craftsmanship conference and the progressive .NET mini-conference this year (you can even find a video of the presentation online).
Ian Cooper wrote a really good article about the subject and how they finally made FitNesse work when they involved everyone. There's even an InfoQ article on this from about six months ago.
To conclude, making automated acceptance testing work is not that hard at all, and people have been doing it for years. The most common problem with teams that fail is that they focus too much on the word "testing" and delegate all responsibility for it to the testers, which defeats the point; they should instead be focusing on collaboration.
Instead of "this type of testing hasn't really caught on," I'd say that it is gaining more and more momentum.
Tom Gilb & Kai Gilb Jan 26, 2015