Virtual Panel: Specification by Example, Executable Specifications, Scenarios and Feature Injection

In the last couple of years, terms like Specification by Example, Executable Specifications and Feature Injection have shown up quite frequently in the community, often in relation to Behaviour Driven Development (BDD) or tools like Cucumber or SpecFlow. InfoQ has talked to some of the leading experts in this domain about what these practices are and how they relate to BDD.

The participants:

  • Dan North – Lean Technology Specialist at DRW Trading Group. Coined the term Behaviour Driven Development (BDD).
  • Gojko Adzic – Consultant, author of 'Specification by Example' and 'Bridging the Communication Gap'
  • Elizabeth Keogh (Liz) - Independent consultant, Lean / Agile coach and trainer.
  • Matt Wynne - Freelance Programmer & Coach at Matt Wynne Limited, author of The Cucumber Book together with Aslak Hellesøy.

The questions:

  1. Three common terms seem to have evolved around the automated outside-in testing space: Specification by Example, Executable Specification and Scenarios. Could you give us a brief description of each? What's the relationship between them?
  2. BDD started as a reaction to TDD, but now we see BDD in relation to the behaviour of entire systems, rather than small code elements. Why is BDD important in this space, or is it?
  3. Reading about the use of examples for specifying behavior, three topics seem common: collaboration, documentation and tooling. How important would you say these examples are as a communication tool, and what are the typical success factors of organizations with successful implementations?
  4. What can you tell us about using these techniques as documentation?
  5. How important do you find tooling to be for automating validation of system behavior and is it a requirement for a successful implementation?
  6. Could you give a short description of feature injection and how it relates to BDD?
  7. At some point automated scenarios and examples moved from being a vision to something concrete. FIT was one of the first tools and other tools have followed. How has the process itself, as well as the tooling support, evolved over the last few years?
  8. Where do you see Scenarios, Specification by Example and Executable Specifications evolving from here?
  9. What we have covered so far revolves around the processes used, but there are also some technical aspects around automation. Sometimes automation is done through the UI and other times through the Controller when an MVC pattern is used. In your experience, what are some of the pros and cons of the different options? Any other options that are commonly used?
  10. Another challenge comes into play with Mocks and Stubs. With TDD there are lots of opinions about when to use them and when not to. Is this different when automating a whole system? Do you have some good and bad examples of different usages?
  11. Any parting words about this topic?

InfoQ: Three common terms seem to have evolved around the automated outside-in testing space: Specification by Example, Executable Specification and Scenarios.

Gojko: It's not just these three. There are many more, and the language confusion is one of the key obstacles for teams new to these techniques. Many of the terms the community uses, especially those containing the word "test", seem to lead people down a completely wrong path. This is one of the key things I hope to start fixing with the Specification by Example book.

Matt: Yes! One of the things that makes BDD a great step forward is that it acknowledges the importance of the words we use for stuff in our projects. Ironically, we all seem to still be experimenting with different names for the things in and around the practice itself.

InfoQ: Could you give us a brief description of each? What's the relationship between them?

Dan: Some time ago (around 2005?) I started describing TDD as Coding-by-Example in my Introduction to BDD tutorial. It was partly tongue-in-cheek ("Is anyone doing coding-by-example?" silence "You might know it as TDD" ohh!), but the point was that these little code fragments that you call "unit tests" are actually usage examples of code that doesn't exist yet.
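
For illustration, here is a tiny Ruby sketch of that view, written with the standard test-unit library; the Account class and its interface are hypothetical:

    require "test/unit"

    # This "unit test" is really a usage example: it shows how calling
    # code would use an Account class that may not exist yet.
    class TestAccountUsage < Test::Unit::TestCase
      def test_withdrawing_reduces_the_balance
        account = Account.new(balance: 100)   # hypothetical class
        account.withdraw(30)
        assert_equal 70, account.balance
      end
    end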

I'm not sure where or when the term Specification by Example originated, but Gojko took it and made it his own. I like it because it puts the focus on an example's role as specification, rather than just being an example.

The term Executable Specification has been around at least as long as TDD. It might even appear in the original XP Explained. It positions the code examples as a supplement, if not a complete replacement, for a regular functional specification. The difference is that the specification can be executed at any time to determine whether or not it still holds true. By running the executable specification frequently, you can ensure it doesn't diverge from what the application actually does (or at least you'll know about it when it happens, rather than some time later when the spec no longer describes the system you have).

Scenario is a term I introduced with BDD, to describe the behaviour of a specific feature in a given context. (No doubt there are other definitions and usages of Scenario out there, but that is what the BDD one means.) I usually suggest the Friends episodes naming convention ("the one where..."), as in: The one where the account is overdrawn, or The one where the card is invalid.

You would implement Specification-by-Example using Executable Specifications, one Scenario at a time, until you've delivered the capabilities you wanted to.
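
As a minimal illustration of how the three terms fit together, here is a hypothetical executable specification in Ruby with RSpec; the Account class and its overdrawn? method are invented:

    require "rspec"

    # The example name follows the Friends convention, so the
    # executable specification reads as a Scenario.
    RSpec.describe "Account" do
      it "the one where the account is overdrawn" do
        account = Account.new(balance: 10)    # hypothetical class
        account.withdraw(50)

        expect(account).to be_overdrawn       # passes if account.overdrawn? is true
        expect(account.balance).to eq(-40)
      end
    end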

Gojko: For me, Specification by Example is a set of process patterns that teams use to collaborate on specifications/tests/scenarios etc. I like that as the name of the overall process because it carries the least amount of negative baggage. I know that some people (in particular Liz) will disagree with this, so as you can see we're still far from adopting a unified language as a community.

Executable Specifications, for me, are an artifact of this process that is produced by automating the validation of key examples in a form that is understandable to all delivery team members and business stakeholders.

I tend to use the word "scenario" in many different scenarios, so I do not have a clear definition for it. I think a scenario might be a collection of related key examples. Some people call their tests scenarios. Some tools, such as Cucumber, use scenario as a keyword to capture one or more key examples.

Liz: Specification by example is the act of using examples of an application's behavior to specify that behavior. Executable specifications are the same thing, but automated. You don't *have* to automate them; some teams have got quite a lot of benefit just from having the conversations. "Scenarios" is the traditional BDD term for the examples, to differentiate them from examples at a unit level, which describe a class's behavior rather than a whole system's.

They all describe much the same thing, except that "executable" is quite specific in its reference to automation. I also prefer to avoid the word "specification" if I can. I find it causes people to treat the examples as set in stone, rather than questioning them. "Scenario" or "example" are words that everyone understands, and more people are willing to provide examples off-hand, without thinking them through. That means you get to find out areas of misunderstanding more easily. They're pretty much the same thing, though. Everything in this space - Acceptance Test Driven Development, Executable Specifications, Specification by Example, Scenarios - they've got far more in common than different. At the end of the day, we want to write software that matters.

Matt: I'd go along with Gojko's definitions of the first two. As a Cucumber guy, a Scenario (with a capital S) is for me the name we give to each example in our executable specifications. Each of those examples follows the same pattern: put the system into a particular state, poke it somehow, examine the new state afterwards. In Cucumber we call these scenarios, but you could just as well call them examples (Cucumber understands both keywords), and I sometimes think I like that term better.
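
As a hypothetical Cucumber illustration of that pattern (the Card and Atm classes are invented):

    require "date"

    # The scenario in the .feature file makes the pattern explicit:
    #
    #   Scenario: The one where the card is invalid
    #     Given a card that expired last month          # put the system into a state
    #     When the holder tries to withdraw 20 dollars  # poke it
    #     Then the withdrawal should be refused         # examine the new state
    #
    # Matching step definitions:
    Given(/^a card that expired last month$/) do
      @card = Card.new(expires_on: Date.today << 1)   # one month ago
    end

    When(/^the holder tries to withdraw (\d+) dollars$/) do |amount|
      @result = Atm.new.withdraw(@card, amount.to_i)
    end

    Then(/^the withdrawal should be refused$/) do
      expect(@result).to be_refused
    end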

Another related naming question I've heard people ask is: what's the difference between ATDD and BDD? As far as I can tell, they both emerged around the same time on either side of the Atlantic, and the practical application of each is pretty much the same: driving development outside-in from failing customer-facing acceptance tests. For me BDD has done a better job of describing the conditions and good practices around this that make it successful: terms like 'outside-in' and 'ubiquitous language' are very much part of the holistic spirit of BDD, and a crucial part of doing it well.

InfoQ: BDD started as a reaction to TDD, but now we see BDD in relation to the behaviour of entire systems, rather than small code elements. Why is BDD important in this space, or is it?

Dan: Saying BDD was a reaction to TDD implies I thought there was something wrong with TDD, which isn't the case. Quite the opposite in fact: I feel very strongly that TDD was an amazing and (positively) disruptive insight.

Unfortunately the vocabulary it came packaged with was misleading. My experiences as a coach told me people were missing the point, with all this talk of unit tests, acceptance tests, functional tests, integration tests... Kent Beck's style of TDD is a very smart way to develop software, so I tried removing the word "test" when I was coaching it, replacing it with things like behaviour, examples, scenarios etc. The result was very encouraging: people seemed to "get" TDD much quicker when I avoided referring to testing.

TDD - as originally described - is also about the behaviour of entire systems. Kent specifically describes it as operating on multiple levels of abstraction, not just "down in the code". BDD is equally important in this space, because describing the behaviour of systems is fractal: you can describe different granularities of behaviour from the entire application right down to individual small components, classes or functions.

Gojko: I have no idea who said this first, but I think that it summarises it nicely: TDD tells you that you're building your code right, BDD tells you that you're building the right code. When I wrote the FitNesse.NET book I really didn't consider the BDD part especially different from core TDD practices, just expanded to the business aspects of the system. I think that in recent years BDD evolved to contain fractal outside-in design, value chains, pull systems and so on, going far beyond just example-driven automated targets and collaboration practices.

Liz: When I started coding using TDD, I was thinking about the design of my code up-front, then writing that code. Often I had to rework bits as I started to understand more about the system. By thinking about the behavior of the whole system, then deriving the code I needed - pulling it from larger-scale requirements - I found I was learning faster and wasting less time on rework. It also forced me to think about why I was writing particular pieces of code - what aspects of behavior at a system level that piece of code was responsible for.

After that, I realised that a lot of the conversations I had with the business were phrased in terms of scenarios and examples. To have a language and pattern that's easy to understand, that lets you talk about tricky bits of the system which don't necessarily behave in an obvious way - the conversational aspects of BDD are more important to me now than actually driving code with it!

Matt: BDD doesn't stop at the acceptance tests. Designing things from the outside-in, and being rigorous about names, are habits that you carry right down into the last private variable in the lowest level class in your stack. Sometimes you discover something deep in the code that has implications right back out at the top layer, meaning you need to go and re-word an acceptance test, or even go and have a discussion with a domain expert because you've realised you hadn't understood something as well as you'd thought. You need to be thinking about these things wherever you're working in the codebase.

Although I don't think they mention BDD in the book, Steve Freeman and Nat Pryce's Growing Object-Oriented Software, Guided by Tests is a great description of working outside-in using a combination of high-level acceptance tests and lower-level isolated unit tests.

InfoQ: Reading about the use of examples for specifying behavior, three topics seem common: collaboration, documentation and tooling. How important would you say these examples are as a communication tool, and what are the typical success factors of organizations with successful implementations?

Dan: With a specification, you're not just communicating what the software does (or will do) right now. You're also communicating across time to people who will come along later and want to understand the software. In that regard the communication aspect is critical to the application's longevity. Also, by writing executable specifications, you are exploring the domain and coming up with terms that build towards a consistent domain vocabulary.

Gojko: Examples are a very effective technique to elaborate and clarify, and we use them in everyday conversations without thinking twice. Software teams use them even if they practice broken, siloed development processes - for example, analysts often get examples of existing invoices or reports to analyse. Developers use examples of edge cases to clarify abstract requirements. Testers write scripts that are examples of system usage. The idea of using examples to drive collaboration recognises the role and power of examples and just makes people use the same set of examples consistently throughout the entire process. This avoids the telephone game that often happens when the initial examples are translated into abstract requirements, further examples for clarification are produced separately and thrown away after discussions, and test examples are written completely in isolation. Illustrating requirements using examples allows teams to get just enough information just in time, when they need it, to turn scope into specifications and to drive their development effectively in short iterative cycles, without the need to manage and maintain several sources of truth.

Liz: Few organisations actually use examples in conversation. More typically, I see testers or analysts specifying scenarios then passing those scenarios to the devs. It makes me sad when I see scenarios being used as mini-Waterfall requirements. In the places where devs, analysts and testers get together and talk through the scenarios, everyone gets a better understanding of their domain. Frequently even analysts have to think through the behavior they take for granted when developers start asking questions from the perspective of someone who has to code, or the testers from the perspective of things which might go wrong. Some teams are put off by the idea that they have to automate the scenarios. I've seen teams turn around - devs talking the business language, focusing on business outcomes, considering edge-cases, knowing what "done" looks like, uncovering hidden scope - just from having conversations.

Matt: Examples themselves are a surprisingly powerful tool for communication. People find it much easier to visualise a yet-to-be-built system when they're talking about a concrete example. This in turn means they're able to explore the problem better, and perhaps see edge cases or assumptions that they otherwise wouldn't have been able to spot until they had a working prototype to play with. This is why I often say that teams can get 50% of the benefits of BDD without automating anything. Just going through the process of exploring the work to be done using examples leads to a much better up-front understanding of what they're setting out to do.

Another wonderful thing I've noticed happen where the whole team is involved in writing the examples is that people stop blaming each other for bugs. Once you start to realise that each bug is just an example that you'd all missed when you originally explored the story, there's no need for finger pointing anymore, and you can just prioritise it along with all the other new scenarios you want to write. People start to accept that there will always be things you'll miss sometimes, and I've seen a genuine change in the atmosphere in the team as this has happened.

InfoQ: What can you tell us about using these techniques as documentation?

Dan: Gojko uses the phrase Living Documentation which I just love. The point of SBE/BDD is that you end up building a system with living documentation based on a shared, growing understanding between all the stakeholders. That understanding is what can steer you to only focus on the valuable stuff, and avoid getting sidetracked into meaningless features.

Gojko: Automated validation of examples, kept in human-readable form, allows teams to create good business documentation for their systems in a way that is easy to maintain. Of the teams I interviewed for Specification by Example, those that got the most out of BDD in the long term ended up using their BDD specifications/tests to support the system, to investigate the impact of business model changes and to explain and simplify their business processes. One team even called the whole set of tests their "business framework".

Aiming to create a living documentation system resolves some classic test maintenance issues that teams new to BDD often have, so I think that promoting that model will help teams implement BDD/specification by example faster. On the other hand, I'm sure it will create its own set of problems but we'll have to wait a few years to judge that.

I think that the idea of living documentation, which combines business documentation and automated tests, is going to be very important for BDD in the future. There are emerging tools such as Relish and SpecLog which are nice stabs at implementing this idea.

Liz: I see a lot of people who are new to BDD writing down every single scenario: happy paths, edge cases, and every single combination. It's a lot of unnecessary documentation that nobody will ever read. Matt Wynne showed an example of a very specific, lengthy scenario once, and said, "This makes people's eyes glaze over." At that point, the documentation has become useless. I think it's better to have a few examples of interesting behavior, leave the obvious behavior out, and have people actually read it and talk about the system than to have reams of unread documentation lying around.

Matt: On a long-living project it's impossible to remember the details of what the system will do in any particular situation, and having a *reliable* source of documentation about that is immensely valuable. I think there are two occasions when you'll want to look at the existing documentation for reference:

1. You think you've found a bug (or perhaps you've had a support request) and you want to know what the precise behaviour is around that area of the system.

2. You're considering extending the existing behaviour of a part of the system and you want to know what it currently does.

Technical people would go and read the code, or maybe the unit tests, or they'd go and ask 'the guy' on the team who has worked on it since day one. Non-technical people can also go and ask 'the guy' or they'll just go and click buttons on the user interface until they find out what they need to know. So both of these groups will go and ask 'the guy' until he goes off sick or gets a new job, but other than that they're going to different places for their own private source of truth.

Examples give a team a single source of truth to congregate around, and that can help to build much better trust between the two sides of the team who are often quite suspicious of one another. I built Relish (http://relishapp.com) because I wanted it to be much easier to put this documentation into everyone's hands on an equal footing. When acceptance tests are stored in source control, they disappear for half of your team.

InfoQ: How important do you find tooling to be for automating validation of system behavior and is it a requirement for a successful implementation?

Dan: I see automation as a means to an end, and I find that both tooling and automation are often overrated or overplayed. In the context of the development process, automation is a way of minimizing variance on the parts of the process that shouldn't vary, i.e. the parts that you've done often enough that they are boring - and well-enough understood - that you can get a script to run them for you.

People often automate prematurely, or make automation the goal rather than delivering new capabilities through software. I've seen teams who spent the first six weeks of a project setting up the Perfect Build Pipeline, with exactly zero delivered business functionality. Needless to say they lost all credibility with their business stakeholders and had to start over rebuilding that trust.

Gojko: Tools aren't perfect but there are some good enough tools out there. A lot of the focus has been on tools because people wish that their problems could simply go away if they install an application. Most of the problems in this space are communication and people problems, and tools won't solve that no matter how good they are.

Many teams struggle not because of bad tools but because of broken processes. Applying automation to a broken process only makes it hurt more, and more frequently. With the right process, existing tools are more than enough to make it run smoothly.

Liz: People have been building successful systems for years without automation. There's a large manual testing burden without it, it's true, but you can test manually. You can also use recording tools to capture the behavior after it's been implemented. You have to test the software *anyway*, because the examples describe only the behavior you know about, not the behavior you introduced without realising it! I do like automation, as long as it keeps things easy to change. That's the goal of it. If the automation is actually making things harder to change, automate the weird edge cases and a few smoke tests, then test the rest manually. Anything which isn't changing can be tested manually anyway. I think some teams forget that that's the goal: to make it easy and safe to change, not to pin the system down so that it doesn't break.

Matt: As I said earlier, I think you can get 50% of the benefit of BDD by forcing yourself to write examples of all the behaviour you want before you start building it. The conversations you have to have in order to create those examples uncover so much of the uncertainty that we're used to discovering much later on in a project. That said, the regression safety net provided by a thorough set of automated tests is just amazing for a development team. I've ripped out and re-written fundamental parts of a system safe in the knowledge that the acceptance tests would lead me back to a working system.

One problem we're starting to see more and more is that acceptance tests can be slow, and a large suite of them can take a long time to run. Some people like Jim Shore and Arlo Belshee are advocating simply not doing it any more because of this, and I'm interested to see where that leads them. For me, I'd like to see the tools help us to be more intelligent and selective about which examples we need to run in order to validate a particular change. I've been experimenting with using code coverage for this in Ruby projects, but there are also statistical tools which follow the models used in manufacturing testing, like Sir Kent Beck's JUnit Max.
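
For a flavour of the coverage idea Matt mentions, here is a hypothetical Ruby sketch built on the standard library's Coverage module; scenarios, run_scenario and the application entry point are stand-ins, not a real Cucumber API:

    require "coverage"

    Coverage.start                 # must run before the app files are loaded
    require_relative "../app"      # hypothetical application entry point

    # Record which files each scenario actually executes.
    scenario_files = {}
    scenarios.each do |scenario|
      before = Coverage.peek_result            # snapshot without stopping coverage
      run_scenario(scenario)
      after = Coverage.peek_result
      scenario_files[scenario.name] = after.select do |path, lines|
        lines.zip(before[path] || []).any? { |now, was| now.to_i > was.to_i }
      end.keys
    end

    # Later, select only the scenarios that touch the files just changed.
    # (Path normalisation between Coverage and git is glossed over here.)
    changed = `git diff --name-only HEAD~1`.split("\n")
    to_run = scenario_files.select do |_name, files|
      files.any? { |f| changed.any? { |c| f.end_with?(c) } }
    end.keys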

InfoQ: Could you give a short description of feature injection and how it relates to BDD?

Dan: Feature Injection was a term coined by Chris Matts around the time Martin Fowler popularised the term Dependency Injection, intended as a deliberate pun. Dependency injection is an object-oriented programming idiom whereby you provide all the dependencies an object will need at the point you instantiate it, by passing them as parameters to the object's constructor. This means you can't create an instance of the object without having already constructed all its dependencies, and in order to provide those, you have to have instantiated all their dependencies, and so on all the way down. To borrow from the Lean folks, instantiating an object pulls its dependencies - i.e. causes you to need them, which in turn pulls their dependencies, and so on until you reach objects that have no dependencies.
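
A minimal Ruby illustration of the idiom (all the classes here are invented):

    # Constructor injection: the object receives its collaborators as
    # constructor arguments, so instantiating it pulls its dependencies,
    # and theirs, all the way down to objects with no dependencies.
    class PaymentService
      def initialize(gateway, ledger)
        @gateway = gateway
        @ledger  = ledger
      end
    end

    audit   = AuditLog.new                    # no dependencies: the bottom of the graph
    ledger  = Ledger.new(audit)               # creating Ledger pulled AuditLog first
    gateway = CardGateway.new(HttpClient.new)
    service = PaymentService.new(gateway, ledger)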

Chris realised dependency injection is a great metaphor for focusing on the requirements that really matter. You start with the outcome you want, and identify the features that give you that outcome. These are then the features you would inject - as dependencies - in order to get that outcome. (Of course these features don't exist yet.) Then you look at each feature in turn and realise those features need other features, which also don't exist yet. As you burrow down into the features you need to achieve this outcome, you are creating the graph of only the features you need to achieve the outcome.

Ironically, the reach of the term "feature injection" seems to have grown beyond that of the programmers who would recognise "dependency injection", which means most people who come across it are a bit perplexed. Perhaps feature pull would be a better term, and Liz Keogh's InfoQ article "Pulling Power" is certainly moving in that direction.

Gojko: The key authority on this is Chris Matts, but from my perspective, feature injection is one of the emerging techniques to derive the right scope for a system from business goals. It is important because if you get the scope wrong everything else is just painting the corpse, and many teams struggle to do that correctly. Getting the scope right requires collaboration between different roles and input from technical as well as business minds. Good scope is important as an input into the collaboration on key examples, so it is a starting point for specification by example/BDD.

Liz: You mentioned that we'd moved from describing class behavior to describing system behavior. Feature Injection takes it up further; looking at business and financial outcomes, and the context in which we're achieving those. We take an initial project vision - something which makes money, saves money, or protects money by giving us options for the future or preventing our customers moving away, for example. Then we look at the stakeholders who have an interest in the implementation of that vision and whose goals we have to meet. Perhaps they can stop us from going live if we don't. Then we look at the system's capabilities and the capabilities it provides to users - the ability to achieve business outcomes. From those, we derive features and scenarios. A story is just splitting up features to get feedback faster. I tend to phrase my scenarios in terms of capabilities these days, and it makes them much more effective both for maintenance and understanding the "why" of the behavior of the system.

Matt: As Gojko said, Chris Matts is the real expert on Feature Injection, and by the way he also helped create a lot of the terminology around BDD too, as I understand it.

Feature Injection is a bad name for a good idea. I think the word injection makes it sound like you're trying to shove features into something, when in fact it's the opposite: what you're trying to do is defer commitment to any particular way of delivering on a goal. By planning around the goals instead of a particular way you've thought of to meet each of those goals, it's easier to stay focussed on the value of each goal, and you buy yourself more time to come up with creative solutions. This is outside-in thinking at the business level.

InfoQ: At some point automated scenarios and examples moved from being a vision to something concrete. FIT was one of the first tools and other tools have followed. How has the process itself, as well as the tooling support, evolved over the last few years?

Dan: I can say without reservation that Ward Cunningham is a genius, and a big part of his genius stems from a desire for simplicity. FIT came from the same mindset that created the wiki wiki web, and before that the crazy notion of modelling complex organisations using index cards, of all things. FIT - the Framework for Integrated Test - was simply an HTML page that would fire off a bunch of interactions against an application and render their results in the page, coloured green for success and red for failure, with no knowledge of the application's internals. You would be able to take a FIT specification for an application, rewrite the application completely, and as long as your FIT suite still passed, you would know you had functionally the same application.

It turned out that the detail of getting the specification into HTML - without any control structures or abstractions other than tables - made FIT harder to use than most people could manage, so FitNesse was born, along with myriad libraries, to try to bend HTML to the will of acceptance testing. Around this time, independently, I was playing with the idea of code-as-acceptance-tests with JBehave (which in adoption terms was an almost complete failure until Liz Keogh rewrote it and turned it into a viable acceptance testing framework). So now we had two styles of specification-by-example, namely the tabular FIT-like style and the narrative JBehave style, neither of which was universally useful.

As a long-time fan of Ruby I decided to try my hand at using it to write a JBehave-style scenario framework, which I nominally titled rbehave, using the popular RSpec framework to build it. This got me in contact with David Chelimsky, the RSpec project lead, who was the epitome of the helpful, insightful BDD enthusiast, and I managed to write yet another poorly-adopted scenario framework, this time in Ruby! At this point Dave Astels and David Chelimsky inserted the missing piece - that the scenarios should be written in natural language - and Aslak Hellesøy wrote the first cut of Cucumber, which is now the most popular acceptance testing framework in the world (certainly in terms of downloads).

One of Aslak's great successes with Cucumber was mixing tabular and narrative scenario definitions, so now you can define a template scenario and plug in various parameters and expected outcomes, so we've nearly come full circle back to Ward's original tables with their red and green cells.
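
For illustration, this is roughly what that mixing looks like in Cucumber; the banking domain and the @account variable are invented:

    # In the .feature file, a Scenario Outline is a template scenario,
    # and each Examples row plugs in one set of values - much like
    # Ward's original tables:
    #
    #   Scenario Outline: Withdrawing cash
    #     Given an account with a balance of <balance> dollars
    #     When the holder withdraws <amount> dollars
    #     Then the balance should be <remaining> dollars
    #
    #     Examples:
    #       | balance | amount | remaining |
    #       | 100     | 30     | 70        |
    #       | 100     | 100    | 0         |
    #
    # One parameterised step definition then serves every row:
    Then(/^the balance should be (-?\d+) dollars$/) do |remaining|
      expect(@account.balance).to eq(remaining.to_i)
    end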

Gojko: I think as a community we have got a lot better at recognising the limits and typical problems of the approach. There are enough teams using this now in different contexts for patterns to truly start emerging. Again, I don't think that this is particularly tool-driven, although Cucumber is definitely responsible for introducing a whole new set of people to this space.

Liz: There's a lot of focus on natural language nowadays. I've tried a few frameworks - and written part of one - and my experience is that it's still tricky to get these frameworks up and running on continuous builds. Add in things like Microsoft's security model, which doesn't like services to pretend they're users, and you've got real complexity and hard problems to solve. The frameworks add another layer of indirection and make it even harder. If your business isn't actually involved in reading and writing scenarios, do it with a little DSL instead of using a framework. Much easier to maintain, and business stakeholders can read them too anyway. Too many developers go straight for the tools - isn't that always the way? Then they say they're "doing BDD", but I read the scenarios and it's obvious that they're not having conversations, or even thinking through the business outcomes, capabilities and business language. We created the frameworks to make conversations easier, not to replace them.

Matt: I think it's great to see so many people trying to do this, but I worry we are already starting to see a backlash from people who haven't really grasped it and have struggled. I hear lots of people say things like "we tried to get our customers to read our cukes, but it didn't happen so we gave up". I think there's a lot of work to be done helping people find the right level of abstraction to express their examples. My talk at the Cucumber conference addressed this topic: http://skillsmatter.com/podcast/agile-testing/refuctoring-your-cukes

I think the tooling is still relatively weak. I'd like to see something that combines the ease of automation of something like Cucumber with the hands-on feel of FitNesse. There are lots of good ideas around, and I think Robot Framework is one to watch. Cucumber is pretty developer-centric at the moment, and I'd like that to change. One thing Aslak and I have done to help with this problem is to write The Cucumber Book, which makes a deliberate effort to speak to non-technical as well as technical readers.

InfoQ: Where do you see Scenarios, Specification by Example and Executable Specifications evolving from here?

Dan: I see the whole thing as a constantly evolving space. My journey with BDD led me to deliberate discovery, which in turn led to the current work I'm doing on patterns of effective delivery, and the idea that everything comes back to our reluctance to embrace uncertainty. Others, notably Gojko and Liz, have dug further into the nature of BDD, bearing fruits like Gojko's wonderful Specification by Example book - the BDD book I wish I had written - and Liz's BDD For Life workshop. Chris Matts and Olav Maassen among others are blazing the trail of Real Options, which just makes my head implode, and ironically I get a real sense that we are getting back to what Kent Beck originally described as test-driven development over ten years ago.

Gojko: The work of the AA-FTT group on defining patterns is very important and I'd expect a proper collection of process patterns to come out of that if the momentum stays strong. I think that the idea of living documentation is very powerful and I expect we'll see a lot of innovation in this space.

Liz: As you can see from my responses so far, we're developing a better understanding of what works in this space and what doesn't. Automation tools are getting better. Teams using the cloud are now able to run hundreds of scenarios very quickly. I'd like to see the evolution go back into the analysis space; teams actually having conversations and picking up new patterns and techniques there. It's not about the tools.

Matt: Eventually, they will be recognised as The Silver Bullet and we will have finally solved software development. If only we can get Dan to stand still long enough so that everyone can just copy what he does, perfectly.

Seriously, when I see ATDD / BDD spoken about at agile conferences at the moment, it feels like a minority sport, but one that's growing in popularity all the time. I think we're going to see more and more people realise what a great way this is to build software.

InfoQ: What we have covered so far revolves around the different processes used, but there are also some technical aspects around automation. Automation is sometimes run through the UI and other times through the Controller when an MVC pattern is used. In your experience, what are some of the pros and cons of the different options? Any other options that are commonly used?

Dan: When you say "automation" in this context I take it to mean automated execution of scenarios or acceptance tests. As I mentioned earlier, I think it's easy to put too much emphasis on automation, and especially automated test execution, often at the expense of more effective discovery techniques like exploratory manual testing.

The purpose of automated acceptance testing is twofold: firstly it ensures the application does what you want and secondly it reduces the risk of introducing a regression. If you test from the UI, be it a browser or desktop application, right through the application stack, you are reducing the likelihood of a bug occurring due to incorrectly wiring up the UI, but this is often at the expense of increasing complexity (in the form of rigidity and coupling) of the scenarios themselves. Alternatively you can model the UI as a source and sink of events, where user interactions create events that are sent to the server, and server-side activity creates events that make their way to the UI. In that case you can often test just as effectively "behind" the UI, by generating equivalent events in your examples and seeing what happens to them.

By doing this you introduce the risk that the real UI generates events that are different from your examples, but you are now decoupled from the process of UI widgets generating system events. In reality it is usually pretty obvious if the user interface isn't producing the kinds of events that your scenarios say it should, and you gain the benefit of decoupling the scenario from the detail of the UI layout. I find that these days I write very few tests that drive the actual UI, and when I do they tend to be end-to-end integration tests rather than descriptions of detailed behaviour.
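
A hypothetical Ruby sketch of a step that tests behind the UI in this style; @app, the event shapes and the handle/published_events API are all invented for illustration:

    # Instead of clicking a widget, the step emits the same event the
    # UI would generate, then asserts on the events published back.
    When(/^the holder requests a withdrawal of (\d+) dollars$/) do |amount|
      @app.handle(type: :withdrawal_requested,
                  account_id: @account_id,
                  amount: amount.to_i)
    end

    Then(/^an overdraft warning should be published$/) do
      warning = @app.published_events.find { |e| e[:type] == :overdraft_warning }
      expect(warning).not_to be_nil
    end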

Liz: As Dan and I both found out, it can be tricky to run automation through the UI. On some projects, the binding between UI and Controller can be flaky - my poor implementation of WPF is a good example - so it's worth going through the UI. However, automation is also a commitment, and if you're into Real Options you know you never commit early unless you know why. If your UI keeps changing, either hold off on the automation or do it through the controller. I advised one team to run their customer-facing scenarios through the UI, but their admin scenarios through the controller, because it ensured data integrity while letting them knock up new admin consoles and change existing ones very easily. The customer UI didn't change as often.

Matt: In Ruby on Rails apps, you have the option to go in at the HTTP layer. Automation libraries like Rack::Test, Webrat and now Capybara have built abstraction layers on top of that, so you can actually talk to them in terms of pages, fields, buttons and so on without ever having to make an actual HTTP request over the wire or fire up a browser. The ability to do this has two advantages. First, it's much faster to run those tests, both in start-up latency and execution speed. More subtly, it also encourages you to build the first iteration of each feature to work without JavaScript. Often, even if you started out with a grand vision of a rich AJAX UI, you find that the simple HTTP POST form is fine, and you can move on to the next feature. Even if you have to add the JavaScript in the end, you can now add it as a progressive enhancement and your code is cleaner as a result.
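
A minimal sketch of what that looks like with Cucumber and Capybara; MyRailsApp and the page content are stand-ins:

    # features/support/env.rb
    require "capybara/cucumber"
    Capybara.app = MyRailsApp    # your Rack application

    # The steps talk in terms of pages, fields and buttons, but
    # Capybara's default Rack::Test driver exercises the app
    # in-process: no browser, no HTTP over the wire.
    When(/^the holder opens an account with (\d+) dollars$/) do |amount|
      visit "/accounts/new"
      fill_in "Opening balance", with: amount
      click_button "Open account"
    end

    Then(/^they should see a confirmation$/) do
      expect(page).to have_content("Account opened")
    end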

InfoQ: Another challenge comes into play with Mocks and Stubs. With TDD there are lots of opinions about when to use them and when not to. Is this different when automating a whole system? Do you have some good and bad examples of different usages?

Dan: Yes. But that's a whole other article!

Gojko: This is very contextual and any generic advice would have to come with warnings. I've seen some good implementations where automation goes along system boundaries and external systems are stubbed out, especially when they are data sources. I've seen some other good implementations that, for legacy reasons, had to include other systems in the testing loop. I prefer to use ports-and-adapters and anti-corruption layers to isolate subsystems and make this problem go away by design, then cover technical integration with separate technical tests where possible. Again, there are exceptions to all these situations.
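
A minimal Ruby sketch of that isolation style; every name here is invented:

    # Ports and adapters: the domain depends on a "rates" port.
    # Production wires in an HTTP adapter; the executable
    # specifications wire in a stub, so the external system never
    # enters the testing loop.
    class StubExchangeRates
      def initialize(rates)
        @rates = rates
      end

      def rate(from, to)             # same interface as the real adapter
        @rates.fetch([from, to])
      end
    end

    # In test setup:
    rates     = StubExchangeRates.new({ ["GBP", "USD"] => 1.25 })
    converter = CurrencyConverter.new(rates)   # hypothetical system under test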

Liz: If you're mocking out part of your system, you're not really describing the behavior of the whole system. Having said that, unless you're actually running scenarios against production - which nobody does - you're stubbing out part of it anyway! I've seen some aspects of a system stubbed out before, but I've never seen anyone do it with a mocking framework - usually just simple, coded classes. It's hard to configure a mocking framework. I've stubbed out things like timers for a Tetris game, where it would slow down the automation. Often people stub out clocks; regions; users; anything which would be specific to a particular time and place, but you want to describe the behavior in other times and places. That works well, and I'd advise all teams to make sure that you're not getting things like this directly from a clock or service you can't control. The number of automations which have failed because we ran them overnight and the date changed mid-scenario, or someone quit the company...
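
A small Ruby sketch of the injectable-clock idea Liz describes; the Session class and its expiry rule are invented:

    # Depend on a clock you control instead of calling Time.now
    # directly, so a scenario can pin the date and survive an
    # overnight run.
    class FixedClock
      def initialize(now)
        @now = now
      end

      attr_reader :now
    end

    class Session
      def initialize(expires_at, clock:)
        @expires_at = expires_at
        @clock      = clock
      end

      def expired?
        @clock.now > @expires_at
      end
    end

    clock   = FixedClock.new(Time.new(2011, 12, 31, 23, 59))
    session = Session.new(Time.new(2012, 1, 1), clock: clock)
    session.expired?   # => false, even if the suite runs past midnight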

Matt: I like the 'mock roles, not objects' school of OO design, and so I like using mock objects in my unit tests. I find that acceptance tests complement these unit tests nicely, as they'll tell me if everything wires up together. As for using doubles to fake out dependencies for a whole system, I'd refer you to Nat Pryce's recent article on 'Simplicators': http://www.natpryce.com/articles/000785.html

InfoQ: Any parting words about this topic?

Dan: I would just say that it's a great time to be involved in software development. To paraphrase one of my favourite people, Eric Evans (the originator of Domain-Driven Design), you know your idea has landed when you are no longer where the action is, and others are the centre of innovation. Quite apart from Gojko, Chris and Liz, I can look elsewhere to the likes of Antony Marcano and Andy Palmer at Riverglide, or I hear about people I've never met giving "BDD tutorials" (no, really!) in cities I've never been to, and I know that the idea of specification by example - the thing I called BDD because at the time "the words were wrong", the thing Gojko calls "living documentation" because, well, it is! - makes sense to other people too, and that maybe Kent Beck and Ward Cunningham were right all along.

Liz: Just to point people to the other ideas in the family - Real Options, which plays into Deliberate Discovery, which plays into Feature Injection, which plays into BDD's scenarios and examples. Chris has a great comic on Feature Injection and Real Options, and Dan North has a post on Deliberate Discovery. Both Real Options and Deliberate Discovery have uses and implications which go beyond software development. Well worth a look.

Matt: You need to talk to other people to make good software. All this stuff is really just about getting people to have the right conversations with the right people at the right time. The tools just mean you don't need to keep having those conversations over and over again.

About the Panelists

Dan North writes software and coaches teams in Agile and Lean methods. He believes in putting people first and writing simple, pragmatic software. He believes that most problems teams face are about communication, which is why he puts so much emphasis on "getting the words right", and why he is so passionate about BDD, communication and how people learn. He has been working in the IT industry since he graduated in 1991, and he occasionally blogs at dannorth.net.


Gojko Adzic is a strategic software delivery consultant who works with ambitious teams to improve the quality of their software products and processes. He specialises in agile and lean quality improvement, in particular agile testing, specification by example and behaviour driven development. Gojko is a frequent speaker at leading software development and testing conferences and runs the UK agile testing user group. Over the last eleven years, he has worked as a developer, architect, technical director and consultant on projects delivering financial and energy trading platforms, mobile positioning and e-commerce applications, online gaming and complex configuration management systems. He is the author of Specification by Example, Bridging the Communication Gap, Test Driven .NET Development with FitNesse and The Secret Ninja Cucumber Scrolls.


Liz Keogh is an experienced Lean and Agile coach, trainer, blogger and well-known international speaker. Coming from a strong technical background, her work covers a wide variety of topics, from software development and architecture to psychology and systems thinking. She is best known for her involvement in the BDD community, and was awarded the Gordon Pask award in 2010 for deepening existing ideas in the space and "coming up with some pretty crazy ones of her own".


Matt Wynne works as an independent consultant, helping teams like yours learn to enjoy delivering software to the best of their abilities. In his spare time he is a core developer on the Cucumber project, and he blogs at mattwynne.net and tweets as @mattwynne.

