Q&A With Mike Talks on Why Agile Testing Needs Deprogramming
Mike Talks, Test Manager at Datacom, gave a talk at the Agile New Zealand 2015 conference titled "Deprogramming the Cargo Cult of Testing". Afterwards he spoke to InfoQ about his experiences and why agile testing needs deprogramming.
InfoQ: What is the Cargo Cult of Testing in Agile?
Mike Talks: The Cargo Cult was a phenomenon observed just after the Second World War. The American military would build runways on certain remote islands and drop cargo onto them from airplanes. The local islanders observed this, and long after the Americans had gone they tried to build their own runways in the hope that airplanes would appear, following exactly, as a script, what they had seen. And yet no more cargo appeared as a "gift from the sky".
Within testing that Cargo Cult tendency is almost the de facto pattern: when we move from Waterfall into Agile we try to take a lot of the baggage, a lot of the ritual, along with us, and that can often mean that what we take with us will fundamentally break the transition. And we'll go, "Well, the problem is Agile doesn't work." Whereas in actual fact the problem is that the processes we are trying to use were probably never that good to start with, and under Agile they just fail that bit faster.
So my talk is about looking at how we evaluate what we do under waterfall, looking at the fundamental value delivered by the processes we originally followed, and then from that building a new set of processes: processes that address those fundamental needs in a different way, without being afraid that we are going to be delivering in a different way.
InfoQ: What is different? One of the things that we sometimes hear is "We don't need no testers in Agile, the developers do it". Is that the case?
Mike: I’ve heard that so many times. I think for myself that a lot of that comes from the idea that within Scrum you don’t have defined roles, you just have a team member. And it’s true in a way that you probably don’t need “testers” but you probably don’t need “developers” either.
What you need is a group of people and that group of people can call themselves what they like, but they need as a group to represent certain skill bases. The most obvious is, you need a couple of people who can write code because we need to build things. But at the same point you need a testing skill set, you need people that know how the system will act once built, and interesting ways to check it.
I think what we have seen from migrating successfully from waterfall is that the people who build software often aren't the best people to test software; there does tend to be a skill separation. I've come from a development background, and I was a fairly good developer, but I am a much better tester because I think more about ways that things can break.
In actual fact, when I try to sit down and program something, I get a flood of ideas of all the ways it could break before I can even think about how to build it, and to me that becomes a paralysis. Developers, I think, see it another way: they focus primarily on how to build it, and the complexities become apparent a little later. But at least they're moving forward rather than being paralyzed. It's a different emphasis.
So, do you need testers? No you don’t, you need people, you need a cross-functional team, which includes some testing skills, much like you need a cross functional team which includes some development skills. You don’t necessarily need a “developer” per se or a “tester”, but that team has to support itself, from being able to do analysis, to being able to develop, to being able to test. As long as all the skills needed are represented, that’s good.
Of course that’s quite frightening for anyone trying to build a team in this way as opposed to silo-based thinking where we could say “we need two business analysts, we need three developers and we need one tester”. That’s much easier to resource.
InfoQ: We can pick up the individual units and bolt them together.
Mike: That is so easy. In a cross-functional team, however, say Matt leaves, and Matt did some of your business analysis, some of your development and some of your testing. How do you resource that?
What do you put on the jobs board? Do you say "We are looking for a Matt substitute"? Or do you say "Well actually folks, Matt is leaving and we need people to step into some of these roles" and try to get people to step up? To narrow it down, you just interview a lot of people hoping you can find someone who roughly fits, and work around the other bits.
It's scary from a resourcing perspective, and that comes across for me quite a bit: people are no longer like Lego brick components, and that brings challenges.
InfoQ: There would be an argument that they never have been.
Mike: Oh absolutely, but I think we tend to think in numbers; under Waterfall we can say “we’ve got three months of development, so we need X many developers. We’ve got two months of testing, we need Y many testers.”
When you put it all together, as you do under Agile, it's not about resources: we have testing activities, we have development activities, and we need a team that can address all of those activities.
InfoQ: So we start to look at capacity, rather than activity. So what is different in testing in Agile?
Mike: One of the problems for myself: I was previously a hard-core waterfall disciplinarian. Having worked on avionics, I originally thought, "No way will this Agile thing ever work." Because when I was testing avionics, someone would pretty much come along with a forklift truck, drop off these massive phone-book-sized requirements documents and say, "That's what you are going to test, guys." So you'd go through and almost become a legal eagle on these requirements, going, "Oh, requirement x23 countermands that requirement," and you would build your testing on mapping all this preplanned complexity.
So when people said to me that you put together a story, which is a small thing (it's not even a requirement, it's a slice of a requirement), it shocked me. How do you go about testing something that isn't defined, my brain would say? That, to me, was the huge challenge.
Working out that in actual fact it's OK, because there's a common-sense element: you put something in front of me, and common sense should say "maybe I should be able to do this". And then you talk about it as a team; not every decision you make has to be written down and documented. But coming from an environment where you felt you had to be told everything formally on paper, then signed and approved, this took a bit of doing.
Some of it has to do with empowerment under Agile: you're actually allowed to make those calls, to make those choices. Because in the absolute worst case, the most time you can waste on a bad decision is one sprint length (if it's wrong, your customer should pick it up at the end of the sprint), rather than building a whole one-year project on a bad assumption.
In waterfall testing there is an implicit assumption that when those requirements come to us, they have been combed through perfectly, reviewed and cross-reviewed. Whilst that worked really well on avionics projects, when I stepped outside of that domain I realized that sometimes we would start developing while still combing through those requirements, which led to lots of rework for developers and testers and often meant we threw away pieces of work and did them again.
In actual fact it seems more efficient to work with stories, knowing that we'll do this story this sprint and maybe the next sprint will change it. That's more efficient than sitting there for two weeks, four weeks, waiting for someone to confirm that the story they've given you is actually the story they want. It's so much easier to go away and build it and then hear "close, but no cigar".
InfoQ: And then adapting and learning. You mentioned that your role at Datacom is Test Manager; what is the role of a Test Manager in Agile?
Mike: This is actually a really interesting subject. I worked on Lisa Crispin and Janet Gregory's book More Agile Testing as one of the lead reviewers, and behind the scenes amongst us reviewers there was a whole forum discussion on this topic going backwards and forwards.
And really what it came down to is this: originally, under Waterfall, I (and many others) had a large number of testers working for me. In that context, I am "the man". I'd be collecting the big-picture reports and taking them around.
Under Agile we tend to have a lot of very small teams working about the place. I noticed that a lot of what tended to be happening was empowering my testers and encouraging them to take on jobs that would traditionally be considered mine. But they needed to become more self-organizing, so that they could make their own decisions, do their own reporting and do their own planning without needing permission from me.
They could just go and do it. They could bring work to me if they wanted a peer review, but very much their work is under their own power; they choose what they want to use my time for.
What we found is that the role evolves: it goes from what you would consider a traditional manager to something closer to a Scrum Master, but for the testing discipline. A test leader, a test scrum master, a test-master? I like that word. I think I just invented it (though I believe the official term is "chapter lead"). But essentially the role becomes more about learning, about empowering, about removing impediments; these are all Scrum Master type activities and responsibilities.
I suppose, thinking about it, there is a question of where that clashes with the Scrum Master role, because there should be only one Scrum Master; yet there is still this role as a mentor or enabler of people's testing learning. So I think a lot of companies are asking themselves the same question. Do you need a senior tester who is in charge of individual testers' personal development? Absolutely. Is it as command-and-control as it used to be? If Test Manager is only a command-and-control job, that's gone; that's not there anymore. So maybe you don't need one, or maybe you just need to get out of that Cargo Cult way of thinking.
InfoQ: Mike, thank you very much, it was really interesting; enjoy the rest of the conference.
About the Author
A veteran of 15 years on large military-based projects, Mike Talks has been recovering in New Zealand from waterfall Stockholm Syndrome for the last six years, working with a variety of clients and agencies. He writes regularly about testing and Agile issues and was a contributor and lead reviewer of Janet Gregory and Lisa Crispin’s More Agile Testing: Learning Journeys For The Whole Team.