
Agile Approaches in Test Planning

In the talk “Placebo Test Management” at Agile Testing Days 2015, Eddy Bruin and Ray Oei explained how to satisfy the needs of stakeholders who ask for test cases, test plans, and other comprehensive test artifacts without writing large test plans.  

InfoQ asked Bruin and Oei about test plans in agile, how to make stakeholders aware that they can influence quality, and which agile practices they recommend for testing.

InfoQ: What problems have you seen with test plans in waterfall or agile projects? Are the problems similar or different?

Eddy Bruin: I remember being very pleased with the first master test plan I wrote. All was clear for everyone, I thought. It took me a month to write, and then I discovered that nobody had read it in full and that I still needed to argue over many of the points it contained. If I had spent that month speaking to these people and testing while explaining, I would have been more successful. That was the moment I realized writing such a plan was quite a waste of time.

Since then, I’ve received several test plans and read them. My conclusion is that most of them contain irrelevant and outdated information that contributes nothing to the quality of the product or to the process of testing it. It’s all too inflexible and time-consuming.

A difference I’ve spotted with test plans in agile is that they often explain the rules of Scrum all over again and then try to squeeze testing in, waterfall-style. In my opinion, they are classical plans used in a false agile process.

Ray Oei: In my definition, a test plan is a lengthy document that follows some standard or template. A common problem with those test plans is that they take a lot of time to write and are hardly ever read. The plan is much more a process artifact than a testing one. Many testers I have encountered run into problems when they try to follow the plan and get stuck on “what do I actually need to test?” And at some point you learn that the content of all plans is more or less the same and doesn’t really help you test the thing you need to test. So you try to ignore them.

When test plans are required in so-called agile projects (it has happened to me), it is a clear sign that it is not an agile environment to begin with. One of the issues I had in one of those cases was that someone was enforcing a way of working on the testers that, first, separated them from the team because of the test phase it described and, second, had no connection with what the team was trying to do. This was bad for team spirit.

InfoQ: At Agile Testing Days, you called your talk “Placebo Test Management”. Can you explain what that means?

Ray Oei: It describes the effect of providing the solution that is expected while we deliver something that is more or less fake. As in medicine, it often works. It creates the illusion of control, which satisfies a lot of people. It is, of course, also meant to provoke a bit. We do not want to be fake or only keep managers happy. Often, we need to create a common base first, seemingly doing what is expected, before we can really deliver value the way we want. Showing useful results, which is key, helps convince people that things can be done in a different way than they expected. This will, however, always be a big challenge in a process-hungry environment.

Eddy Bruin: When Ray and I discussed test plans a year ago, we came to the conclusion that we preferred alternatives for explaining why, how, and what we test. On some occasions, however, we were told that we needed to deliver a test plan or test report based on the company template “because the process forced us to”.

Since we think this is a false argument, we and other testers started to play with the test plan. We have sent empty test plans, delivered nothing but an empty template, or put in Easter eggs, in order to show that test plans are not actually used to improve anything and that the testing is not properly explained in them. This did, however, please the managers guarding the processes: project managers and quality-process managers who [were happy to] see that a process was being followed.

The process prescribes that a test plan has to be written, so if managers see an e-mail with a “test plan” attachment, they check the box. In some cases, this is sadly what happened: process people were happy even though there was no content in the document. We realized these fake test plans triggered a placebo effect, and that’s how we coined the phrase “placebo test management”.

InfoQ: Do you think there’s still a need for test plans in agile? What purpose can they serve?

Eddy Bruin: A plan to test is in itself a useful artifact. It can shape our context and explain to ourselves and others how we will conduct testing. The problem I have is the inefficiency of writing a plan that consists of information that is already available elsewhere and constantly changing. Practicalities such as which test environment to test in and which risks to cover are useful to capture and communicate. Agreements on the scope of testing (e.g., which browsers we test on) are also easy to write down.

The Scrum framework, however, already provides an artifact that can be useful for this: the definition of done (DoD). This document will change and, more importantly, it is a token of conversation. By “token of conversation” I mean that the DoD is just a result, a statement that goes with a story; only through conversation can we tell the story, not just by sending people a copy of the DoD. That said, I do like to have a test-vision document in place, either as a PowerPoint or as a mind map. Test docs (like a DoD) also change, so keep the team sharp by allowing changes to them in the retrospective, for instance.
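For illustration, the testing-related slice of a DoD might record agreements like these (invented examples, not a prescription):

  • All acceptance criteria of the story have been demonstrated in the agreed test environment.
  • New code is covered by unit tests and the automated regression suite is green.
  • The feature has been checked on the browsers agreed with the product owner (e.g., the latest Chrome and Firefox).
  • Remaining open bugs have been triaged with the product owner.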

Ray Oei: It depends on your definition of a test plan. In the classical form, I would say no.

The plan, in my view, is partly in the definition of done. Partly it’s the user story itself described by the business: the acceptance criteria that are defined for the user story, overall constraints that may exist, the environment the product needs to work in, the end users, etc. We can find many ways to communicate that in the team and with the PO and stakeholders. I like mind maps myself but, for instance, BDD can also be of great use. In the end, it depends on the specific environment. There is no clear solution for everything. In that respect, the idea of our different “medicines” is always true: you have to look for what works in the specific context. Trial and error. Inspect and adapt. Isn’t that what agile is about?
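To make the BDD idea concrete, here is a minimal sketch of one acceptance criterion expressed as an executable Given/When/Then check, written in Python with pytest; the shopping-cart domain and the Cart class are invented stand-ins for a real system under test:

    import pytest


    class Cart:
        """Invented stand-in for the system under test."""

        def __init__(self):
            self.items = []
            self.discount = 0.0

        def add(self, name, price):
            self.items.append((name, price))

        def apply_discount(self, fraction):
            self.discount = fraction

        def total(self):
            # Sum the item prices, then apply the discount fraction.
            return sum(price for _, price in self.items) * (1 - self.discount)


    def test_discount_code_reduces_the_total():
        # Given a cart containing one item priced at 100.00
        cart = Cart()
        cart.add("headphones", 100.00)

        # When a 10% discount code is applied
        cart.apply_discount(0.10)

        # Then the customer pays the reduced price
        assert cart.total() == pytest.approx(90.00)

Expressed this way, the acceptance criterion, the plan, and the check become a single living artifact, which is exactly the kind of “plan” that stays current.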

InfoQ: Can you elaborate on how to use a TMap-style test plan with agile?

Ray Oei: As I explained in my talk, this is more a Trojan horse than a placebo. There still is a document, as the external project organization insisted on in my case. But the content was tailored to support the agile initiative that the development teams were exploring. So I described the Heuristic Test Strategy Model (from James Bach) to explain the way we would generate test cases, and Session-Based Test Management (from Jon Bach) to describe the way we’d organize the testing (managers really *love* the word “management”, by the way). The overall process was of course the famous Scrum cycle picture. It was accepted. And when I got questions about when the test cases would be ready, I could point to the agreed plan and explain that everything was proceeding in accordance with it.
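For readers unfamiliar with Session-Based Test Management: testing is organized into time-boxed, chartered sessions that are debriefed afterwards. A session sheet is, roughly, skeletonized like this (the charter wording here is invented for illustration):

    CHARTER
      Explore the checkout flow with invalid discount codes
      to find error-handling and messaging problems.

    START       <date and time>
    TESTER      <name>
    DURATION    90 minutes

    TASK BREAKDOWN
      Test design and execution        70%
      Bug investigation and reporting  20%
      Session setup                    10%

    TEST NOTES
      ...

    BUGS
      ...

    ISSUES
      ...

The debrief after each session is where the “management” happens: charters get adjusted, new sessions get chartered, and progress becomes visible without a classical plan.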

InfoQ: What are some examples of how you’ve made stakeholders aware that they can influence quality?

Eddy Bruin: Involvement is key here. A couple of years ago, an operations manager told me, “We need more test cases! Please write this in your test plan.” I replied by asking him why we would need them. “Well, how else will we know what the quality of the system will be? Besides, I need to maintain the software. I want to know its behavior.”

It was clear to me that this man and his team needed two things: 1) confidence in the quality of the product and 2) to know how the product would work. From then on, I invited his team to the sprint reviews and took an hour after each review to test together with them. They turned out to be excellent testers, and since I was guiding them in learning the system, I never got the request for test cases again. Besides that, they gave more feedback during the sprint reviews, which indeed improved the quality. In my current assignment, we take the sprint review very seriously and prepare it so that people can drive the product themselves and give feedback pinned to the areas we have worked on in the iteration.

A couple of ideas for the sprint review are:

  • Create a flip chart where participants of the review can leave their feedback. Also positive feedback! (You can see an example of such a flip chart in the presentation.)
  • Prepare test data and print it out.
  • Have multiple devices and workstations available where people can review your product.
  • Invite people to drive the app themselves (don’t only show).

Ray Oei: Talk to stakeholders, be patient, and don’t be afraid of repeating yourself. Not every stakeholder wants or feels the need to be involved; they just want a working product. What has often helped is the demo and, mostly, the discussions following it. Letting stakeholders experience that their input is really used and appreciated is very important. Give them opportunities to share their assumptions, expectations, and needs. Give them 24/7 access to the product if possible. Let them test. There is a risk there, too: if the product is too unstable, you might not gain confidence but lose it. Unfortunately, in cases where stakeholder engagement was a problem, the end result was often a disappointment for them.

InfoQ: Do you have examples of how to plan tests in agile?

Eddy Bruin: If you work in Scrum, ideally all testing is done in the iteration in which the software is developed. I’ve found that reality is often different: companies transitioning to agile have many test phases that do not simply disappear, and end-to-end tests and performance tests are often conducted after the sprints. What I strive for is to continuously involve those responsible for these tests and show them how we can perform these kinds of testing earlier. It all comes down to shortening feedback loops. The sooner we have working software live, the sooner we can test whether it actually helps achieve the business goals in the market it is supposed to serve.

The story of the operations manager requesting test cases in a test plan (late feedback) versus involving managers in the sprint review every two weeks is an example of shortening the feedback loop. Another example is not waiting for the demo to show what the product does; an alternative is to debrief results with the team or product owner as soon as we have done some testing.

Ray Oei: I try to find as many questions as possible that surround a story, an intended product, or whatever seems important. I create a model of what I think we are building; mind maps help me organize. Ask questions. Research if possible. I find that I often think of something I need to test when I hear coders discuss things. I browse through code. When someone tells me that I do not need to check this or that for some unclear reason, then I will surely have a peek. Is that a “plan”? Not if you mean a plan conceived some time before the testing itself starts. But I have test ideas, and those are the ones I try to execute. And to be honest, I am not always as organized as I want to be; sometimes my ideas are already obsolete before I’ve had the time to properly organize them. I think my main guiding plan is the questions “Is this important? Now or later?”

InfoQ: Are there specific agile test practices that you want to recommend?

Eddy Bruin: There are tons of agile test practices: pairing, specification by example (ATDD/BDD), TDD, many test heuristics, and so on. The one practice that has always stood out for me is communication in all testing activities. I always try to be as transparent as possible when planning tests, reporting on testing, and resolving bugs, and even more so when showing what we actually mean to build and what it is supposed to bring the company. The tactic I use most is to have as many information radiators as possible: the more flip charts, whiteboards, and Post-its you have on the walls, the easier it is to start a conversation.

Ray Oei: Not a specific agile one. I think communication is key: talk and listen. Or better: ask questions and take the time to listen and understand. Understand the domain, the business involved, but also the technologies that are used. Understand the people you are working with. And see your own role and behavior in there, too. We tend to look at others who should/could/must have done this or that better — but how about yourself?

Show what you do and what you have done, but when needed, show where you missed the ball, too. This helps build trust. It is not rocket science in that respect. Although, as a rather technical person, I think that rocket science is often easier than people science.

About the Interviewees

Eddy Bruin has been working as a test consultant since the beginning of 2008. His passion lies in the areas of agile, testing, usability, and mobile. He helps organizations enable feedback loops in order to deliver better products. An agile-test coach, Eddy loves to give training on agile testing and to discuss the topic over a special beer. He is currently QA lead for an international courier.

Ray Oei is an experienced agile coach and tester with roots in programming. He gains energy from continuously learning new things within and outside the software-testing profession. Ray is a founding member of DEWT (Dutch Exploratory Workshop on Testing), a member of the Association for Software Testing (AST), a lead instructor for the AST BBST courses, and a member of the TestNet workgroup Training & Coaching. He works as QA team lead for AVG Innovation Labs.
