Automated Acceptance Tests - Theoretical or Practical


There have been sporadic reports of success in writing requirements and automating them as acceptance tests (a practice sometimes called test-driven requirements, story-driven development, and, depending on whom you ask, behavior-driven development).  Yet this practice is used by only a small minority of the community.  Some thought leaders have gone on the record saying that it is a bad idea and wasted effort.  Are automated acceptance tests written at the beginning of each iteration just a theoretical assertion that has been proven ineffective by its lack of adoption?

First of all, let's define what is meant by automated acceptance tests: they are tests, typically written at the beginning of an iteration, that are an executable form of the requirements.  They are detailed examples of how the system is supposed to function when the requirement they describe is complete - an answer to "what does it look like when I'm done?"  Historically, FIT and FitNesse have been the tools of choice, although today there is a newer set of tools such as Cucumber and RSpec.
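To make the idea concrete, here is a minimal sketch of an executable requirement in Python. The `ShoppingCart` domain and the discount rule are invented for illustration only; tools like Cucumber and RSpec express the same idea with a richer Given/When/Then vocabulary, but the essence is the same: a concrete example of "what it looks like when I'm done," runnable without human intervention.

```python
class ShoppingCart:
    """Toy domain object standing in for the system under test (hypothetical)."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        subtotal = sum(price for _, price in self.items)
        # Requirement under test: orders of 100.00 or more get a 10% discount.
        if subtotal >= 100.00:
            subtotal *= 0.90
        return round(subtotal, 2)


def test_bulk_discount_applies():
    # Given a cart with items worth 120.00 in total
    cart = ShoppingCart()
    cart.add("keyboard", 70.00)
    cart.add("mouse", 50.00)
    # When the total is computed, then a 10% discount is applied
    assert cart.total() == 108.00


def test_small_order_gets_no_discount():
    # Given a cart worth less than 100.00
    cart = ShoppingCart()
    cart.add("cable", 15.00)
    # Then the total is unchanged
    assert cart.total() == 15.00
```

Written before the iteration starts, tests like these serve as both the specification and the definition of done for the story.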

This type of testing hasn't really caught on.  In fact, there was a recent conversation entitled "Is FIT Dead?".  Also, in an interview with InfoQ at Agile 2008, Brian Marick went on the record:

InfoQ: So that sounds interesting. It makes sense to me to have tests out at the level of business, things that the customer understands and is expecting to see. And you talk about examples. I don’t know if you are using that to simplify, to get away from the word test. But you are really talking about tests for customers, aren’t you? Can you tell us more about that? 
Brian: The interesting thing that I have noticed is that it doesn't seem to work anywhere near as well as unit Test Driven Design, in the following sense: when you design a test, you often get all kinds of insight that is clearly a win. And of course the unit testing still does all the same things unit testing used to do. But the actual writing of the supporting code that is needed to make that example, that test, run automatically without human intervention, very often doesn't have any of the same "Aha! Wow, I am really glad I wrote that code, I really learned something." It's more "I just wrote a bunch of code that was boring, and I didn't learn anything." So there is not the reward you get from writing that code; you don't get that benefit from it in addition to the benefit of having the test. And to date it hasn't seemed like acceptance tests, these outside examples, lead to really profound structural consequences the way refactoring does with unit tests. So the question I have is: if the value comes from the creation of the test, and there is a substantial cost to writing all this code, are we really getting our money's worth by actually automating those tests? Because if we are not getting our money's worth with that, why don't we just put the tests on a whiteboard, have the programmers implement them one at a time, checking them manually, even showing them to the product owner manually, and when we are done with it, why not just erase it and forget it? Why do we have this urge to save these tests and run them over and over and over again?

Then again, there are still many thought leaders in the community who recommend using automated acceptance tests, including Robert C. Martin, Joshua Kerievsky, and James Shore, to name a few.

Chris Matts gives an interesting way to look at the problem: as a problem of information arrival.  Suppose you have a software development process (not necessarily an Agile one) in which acceptance tests are not written upfront.  The QA team typically runs its own test scenarios, and when a defect is found it is fed back to the software developers.  These defects are found at random points in time and therefore affect the velocity of the software team unpredictably, because a percentage of its capacity is used to address them.  New information arrives at the development team randomly.

Now, consider what happens if the tests written by the QA department are written before development begins.  The information provided by these scenarios now arrives at the beginning of the iteration, in a predictable fashion.  Therefore the uncertainties are reduced, velocity becomes more stable (fewer random interruptions), and that means more predictability.
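Matts' argument can be illustrated with a toy simulation. All of the numbers below are invented; the model only exists to show the shape of the claim, namely that moving defect discovery to the start of the iteration turns a random drain on capacity into a fixed, plannable one, reducing the variance of delivered velocity.

```python
import random


def simulate_velocity(iterations, capacity, defects_arrive_upfront, seed=0):
    """Return per-iteration velocities under a crude rework model (illustrative only)."""
    rng = random.Random(seed)
    velocities = []
    for _ in range(iterations):
        if defects_arrive_upfront:
            # Acceptance tests surface the rework at planning time,
            # so a fixed, known share of capacity goes to meeting them.
            rework = 10
        else:
            # Defects trickle in from QA at random points, so the
            # rework cost per iteration fluctuates.
            rework = rng.randint(0, 20)
        velocities.append(capacity - rework)
    return velocities


def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)


random_arrival = simulate_velocity(100, 40, defects_arrive_upfront=False)
upfront = simulate_velocity(100, 40, defects_arrive_upfront=True)
# In this toy model, upfront arrival yields zero variance in velocity,
# while random arrival leaves velocity fluctuating from iteration to iteration.
```

The absolute numbers are meaningless; the contrast in variance between the two runs is the point.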

So, are automated acceptance tests just something the elite (or the lucky) have been able to make work?  Is there an unseen internal flaw that has caused their less-than-stellar adoption?  Or are they simply difficult but with proven benefits - a practice that every software development team should aspire to adopt?

