Experiment using Behavior Driven Development

At the Agile Eastern Europe 2015 conference, Liz Keogh gave the talk "Behavior-Driven Development: A bit of an experiment". In her talk she provided a short overview of Behavior-Driven Development and explored how it can be used to design experiments that deal with complex problems and lead to discoveries.

Behavior-Driven Development (BDD) uses examples, preferably in conversations, to illustrate behavior. Many people focus on the tools when doing BDD, but having the conversations is the most important part, more important than writing the conversations down and automating them, said Keogh.

You can use a "given-when-then" format in BDD to discuss examples of what a system should do. Sometimes people say that they find it difficult to get examples. Asking "can you give me an example?" is the simplest way to get one, Keogh suggested. This simple question gives people an opportunity to explore what the system should do.
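
To illustrate, an example written down in the given-when-then style might look like this (the refund domain here is invented for illustration):

    Scenario: Refund within the return period
      Given a customer bought a sweater for $40 five days ago
      When the customer returns the sweater
      Then the customer should be refunded $40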

Whenever you do something new you will make discoveries, said Keogh. Making those discoveries deliberately is preferable to making them accidentally. But since you have to assume that you don't know what you don't know, how can you do this?

According to Dave Snowden's Cynefin framework, complex problems must be addressed using experiments, or probes. We can use BDD's scenarios to ensure that an experiment is safe-to-fail and coherent. In her blog post on using scenarios for experiment design, Keogh explains the need for probing:

People who try to do analysis in the complex domain commonly experience analysis paralysis, thrashing, two-hour meetings led by "experts" who’ve never done it before either, and arguments about who’s to blame for the resulting discoveries. Instead, the right thing to do is to probe; to design and perform experiments from which we can learn, and which will help to uncover information and develop expertise.

Keogh wrote the blog post Cynefin for developers, in which she also motivates the use of experiments:

If you find yourself in a space with high uncertainty, rather than trying to eliminate the uncertainty with analysis, it’s better to try something out – safely, so that it’s all right to fail – and respond to what you find. The more collaborative the business and IT become, the more business stakeholders are prepared to try out risky things, and the more the company innovates!

Any experiment has to be safe-to-fail, otherwise it's a commitment, said Keogh. She listed the criteria for ensuring that a probe is safe-to-fail (a sketch of how scenarios can capture them follows the list):

  • A way of knowing it’s succeeding
  • A way of knowing it’s failing
  • A way of dampening it
  • A way of amplifying it
  • Coherence
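
A minimal sketch of how scenarios might capture these criteria for a hypothetical probe (a "recommended articles" panel; all names and numbers are invented):

    # Hypothetical probe: show a recommended-articles panel to 5% of visitors
    Scenario: Knowing the probe is succeeding
      Given the panel is shown to 5% of visitors
      When a week of usage data has been collected
      Then the click-through rate on recommendations should be above 2%

    Scenario: Knowing the probe is failing
      Given the panel is shown to 5% of visitors
      When a week of usage data has been collected
      Then the bounce rate should not have risen by more than 1%

    # Dampening: reduce the rollout below 5%, or switch the panel off
    # Amplifying: raise the rollout towards 100% of visitors
    # Coherence: similar panels have increased engagement elsewhere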

In her blog post on using scenarios for experiment design she described why coherence is so important for experiments and how you can check that scenarios are realistic:

If you can come up with some realistic scenarios in which the experiment has a positive impact, you have coherence. The more likely the scenario is – and the more similar it is to scenarios you’ve seen in the past – then the more coherent it becomes, until the coherence is predictable and you have merely complicated problems, solvable with expertise, rather than complex ones.

To check that your scenarios are realistic, imagine yourself in the future, in that scenario. Where are you when you realise that the experiment has worked (or, if checking for safe failure, failed)? Who’s around you? What can you see? What can you hear? What colour are the walls, or if you’re outside, what else is around? Do you have a kinesthetic sense; something internal that tells you that you’ve succeeded, like a feeling of pride or joy? This well-formed outcome will help you to verify that your scenario is realistic enough to be worth pursuing.

You can use metrics to measure the success of your experiments; use ranges if you are not certain, said Keogh. Examples of success metrics are a happiness index or the sign-up rate. You can also define measures for failure, but since you often get what you measure, you might want to measure only the positive things.
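
Such ranges can be written directly into a scenario's expected outcome. A small sketch, with an invented sign-up metric:

    Scenario: Sign-up rate after the landing page experiment
      Given the new landing page has been live for two weeks
      When the sign-up rate is measured
      Then it should be between 3% and 6% of visitors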

Keogh suggested looking at the cost of failure and finding ways to make it cheap to fail, for example by making the software available to only a few people, or by making it easy to roll back in case of customer complaints.
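
The rollback itself can also be expressed as a scenario, so the team agrees up front on what cheap failure looks like; the numbers here are hypothetical:

    Scenario: Rolling back after customer complaints
      Given the feature is enabled for only 1% of customers
      When customers complain about the feature
      Then the feature can be switched off within minutes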

At the AgileEE conference Keogh presented a scale for measuring complexity, which she has also described in her blog post on estimating complexity:

5. Nobody has ever done it before.
4. Someone outside the organization has done it before (probably a competitor).
3. Someone in the company has done it before.
2. Someone in the team has done it before.
1. We all know how to do it.

Level 5 and 4 problems fit into the Cynefin complex domain, while level 3 problems are complicated (there is at least one known solution; expertise to solve the problem is available). Problems at complexity levels 2 and 1 can be dealt with easily.

Levels 5 and 4 are where the value is, but they are also where the risks are higher. Keogh suggests doing them first. You can do a spike, build a prototype, or experiment in any other way so that you can learn and thus move the problem from the complex to the complicated domain.

Keogh also drew parallels with BDD's "Given Scenario", in which one scenario sets up the context for another. She suggested that if a problem can't be solved immediately, experimenting to change the context into one that provides more options might lead to an eventual solution.
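
A small sketch of such chaining, where the outcome of one scenario becomes the context (the "Given") of the next; the domain is invented:

    Scenario: Registering an account
      Given a visitor on the sign-up page
      When the visitor registers with a valid email address
      Then an account is created

    Scenario: Placing a first order
      Given an account has been created
      When the customer places an order
      Then the order is confirmed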
