
Sharon Robson on Agile Testing


1. Good day and welcome, this is Shane Hastie for InfoQ. I'm here with Sharon Robson from Software Education and we're talking about testing in the Agile space. Sharon, you and I know each other well, but would you mind giving the viewers a very brief introduction to yourself?

First of all, thanks Shane for inviting me to talk to you about testing today. For a bit of background, I work at Software Education and I've been doing so for about four years now; prior to that I'd been working for big companies and small companies, and I've been involved in IT for about 20 years now; obviously I started when I was three but that's an old joke (except in my case where it's real).

My background has been pretty much in the world of databases and GIS [Geographic Information Systems]; I did a fair amount of work in satellite imagery; then I made the leap into more structured databases; then I started work with HP, where I got into the world of testing.

From there, I was actually able to be involved with setting up the ANZSTB, the Australian New Zealand Software Testing Board, which is part of ISTQB. I got involved with ISTQB as the Chair of the Marketing working group for a few years. In terms of testing, I think secretly I've been testing for about 23 years. At the moment I'm working in both the testing space and the Agile space, and that's where Shane and I got to know each other quite well.

   

2. Talking about the segue into the Agile space, you've been working with quite a few organizations, bringing testing practices into their Agile way of working. First of all, we don't need those testers in Agile, do we?

Isn’t that true? Agile has quality built right in; we don’t need to worry anymore except for the fact that we do. So oftentimes when I get involved in an Agile team and we start to talk about Agile practices, we hear this, "What do we need testers this for? We build the quality in" and this is where I always fall back on to my old hobby horse, "What’s quality?" and then I start to get people describing quality to me. And we hear a lot about TDD - test driven development - and we hear a lot about automated unit testing; we hear a lot about automated testing; and I’m saying, "Yeah, but no one is saying anything about quality".

So my first engagement with a team is often around the discussion of quality. What do you mean by good? What do you mean by done? And we have a good discussion; and then they say, "Yeah, but why test?" Well, first of all, how will you know you've done it? How will you know you've found it? And that's where we start to really get an understanding of what testing delivers, because testing isn't about finding defects; testing is about giving people information, and that's the role of the tester - to give people information about the quality attributes of the solution, not necessarily "is it right" - two very different things.

   

3. What is the biggest challenge that organizations face when they adopt Agile and they want to bring this quality focus in?

The biggest challenge overall is this real challenge of defining quality - what makes it "good". And what we find when we're working in an Agile environment is that the definition of good is many and varied. It depends on who you are. At the moment, I'm spending a lot of time working with various teams at the development level talking about what makes it good for the delivery team: I'm talking about maintainability and reuse and control and structure and source control and coding standards and repeatability. But how's that good for the business? Because the business is looking for market growth and delivery and engaging with their customers. And how's that good for project management, who are looking for budget and time and progress management?

So what we're talking about is how do we define "good" for the various stakeholders? And then, once we've defined it, we can start measuring and assessing it, tracking it, maintaining it - it's the definition, the first kick-off point, that we struggle with. "What are we looking for?" is the real challenge.

   

4. So how do we then build that into testing on an Agile project?

This is where we have a disconnect in the industry as well, and I don't want to talk down to anyone or teach anyone to suck eggs, but so few people truly understand what testing is and how you do it. I often run across a group of people (and I'm not meaning to offend anyone) who are keyboard thumpers - "I'll hit the keyboard as often as I possibly can and hope that I find something" - and that's not testing.

Testing is a structured, considered, focused approach to looking for a particular attribute of your solution. So what we're looking for when we start to involve testing in our Agile projects is making sure that we've thought about what we are trying to prove and how we can prove it, because we can prove it throughout the life cycle, we can prove it at all different levels and we can prove it at all different times to all different people. But the definition of good and the finding of that good will depend on how we go about proving it and testing it. So integrating testing into an Agile project is incredibly challenging unless you really know what testing is, and unless you're prepared to think about it not so much from the point of view of "I need to find defects," but from the point of view of "I want to deliver information; I want to find key information and deliver it to the right people at the right time."

   

5. So what is some of the information that we want to find through testing?

We can talk about progress: "are we going to make it; will it be on time?" Then we can talk about quality: "will it do what it's meant to do?" We can talk about coverage: "have we covered all of the stories, all of the risks, have we covered all of the key functionality?" And then we can talk about the wonderful '-ities' that come with any piece of work - you know, the '-ities', the quality attributes: maintainability, accessibility, usability, the functionality that provides value to the business. We start to talk about all of the other things that we're trying to find, and we need to define a way to find them.

   

6. So how does testing change?

This is the best bit. When I first got into testing, we were the group at the end of the life cycle that stopped everything before we went live. You know them? What do we call them? Oh yeah, the testers - even better, QA. When we think about what QA stands for, we know that QA stands for quality assurance; testing is not quality assurance - it's a different thing. But this is where testing in Agile is different, because in Agile, testing is quality assurance, but it's not just done by the testers. Everyone does testing all the way through the life cycle. Every stage of the life cycle has something being developed and then assessed; developed and assessed.

From the beginning, when we start to look at our envision stage or the idea stage, we say, "Is this actually the right thing to do?" Then we move on to our planning phase - you may call it your speculate or your initiate phase, it really doesn't matter - and you're saying, "Okay, given this idea, how are we going to do it?" And when we're looking at that, we're saying, "Will it fit; will it deliver what we need; will it meet our team's criteria; will it meet the business's criteria; will we do what needs to be done in time, in budget, in resource, using this architecture?" So you're testing again.

Then you move into the deliver phase, where we move into our iterations. Now this is where it starts to get a bit tricky - a true Agile team that has its Agile head on is going to focus on defining done and defining good and then focus on delivering that, not just delivering "my bit".

If we move into a state where we have everyone working together and focusing on getting it done correctly, to the standard of quality that we're expecting, then we will find that testing happens throughout each of the levels: you're going to have unit testing; you're going to have integration testing; you're going to have system testing; you're going to have UAT integrated through the iteration, because you're going to have the customer involved in reviewing the stories as they're written; you're going to have integration happening as the stories are integrated into the code base. You're going to have the testers being able to build their system tests up to scenario-level tests.

So at the end of the iteration, we have enough of a core product to actually consider releasing, and it's going to be done. When you have a team that isn't thinking about done, when you have a team that isn't thinking about the holistic view of Agile, you're going to end up with things being left over. Now, I often say to people I'm working with in Agile, "There is no later in Agile; it's now or never." So whenever you see anyone in an Agile team thinking, "We'll test that later," or, "We'll do performance testing later," what they're actually saying is it's phase 2, or it's never going to happen.

Or we forgot. So what we're looking for is to make sure that it's built in and everyone in the team recognizes that. So how do we do that? How do we integrate this testing? Well, first and foremost, the big thing that the team has to recognize is that it's not just the testers who execute testing.

Everyone executes testing all of the time, and testing isn't just dynamic; it doesn't mean "I have to spend my time executing test cases." Testing is static reviews, understanding, analysis, planning - that's all static testing. And then we've got the dynamic execution. The role of the tester in an Agile team is to design the appropriate level of testing and the test approach for the team to execute, so that everyone knows what they can and can't do, and you've got the right skills allocated to the right level of work at the right time.

So the role of the tester is to make sure that what needs to be derived for the information transfer is being derived - be it a unit test or an acceptance test, it doesn't really matter - but we need to have tests designed that actually tell us something, not an activity that exercises our fingers. We never have enough time, so we can't waste it. We can't leave stuff to later.

However, Agile is iterative, working in sprints, and what we find is that we will not have critical mass to do all of our types of testing until we've done a couple of sprints or completed a couple of stories; so there needs to be this view of testing from the big picture of the solution as well.

So you've got to be thinking, "What type of testing can I do and when can I do it; what is this going to prove to me and is this the sort of information that other people need to know?" If the answer to that question, "Do other people need to know?", is yes, then we're on the right track. If we're designing a test suite or a test approach that no one in particular cares about, don't do it.

Radical - look at the eyebrows - radical. I'll quote you James Bach: "no risk, no test". Seriously. As a team, we can assess risk. As a team, we can look at the product and say, "Where are the potential risks here?" and as a team, we can actually manage that risk effectively and efficiently, as soon as it's been identified, rather than waiting until the end to find it.

   

7. So if we're doing this as a team, as you say, does that not drive us down the path - and there has been some discussion about this - that we don't need professional testers on Agile teams because "the team" does it?

This is it: remember what I said, "You need a tester to design the tests." So you need someone who thinks, acts and knows testing - someone who's got their testing head on at the time to make sure that they are bringing that testing attribute to the solution.

Now, don’t get me wrong. Software development is, in my opinion, black magic. We type little bits of magnetism and turn them into the solutions that run our world while waving our fingers in the air. If that isn’t magic, I don’t know what is. However, the role of a developer is to be an optimist; the role of a BA is to be an optimist, to honestly believe this thing will work. A tester is trained to look for where it may fail.

Just look at what we call ourselves: "professional pessimists" - professional pessimists. I'm a happy person, I make people laugh all the time, but I know where to look for problems. So you need someone who's got the skill and the ability to look for those problems and identify ways to see those problems. Unless you're looking for it, you're not going to find it.

And if you've got an optimistic, "it will work; we've got to deliver" mindset, you're not looking for problems, you're looking for solutions. So therefore, you're going to see solutions.

   

8. So you need that contrasting mindset point of view?

Sharon: Absolutely.

Shane: Does that have to be a person with a job title ‘tester’?

Sharon: No, absolutely not. It needs to be a mindset. You need to have the focus and say, "Right, here and now, I'm looking for these things." It can be in the form of a checklist; it can be in the form of going and sitting on the other side of your desk; it can be you putting a different hat on - whatever, it doesn't matter; it doesn't need to be someone in a different body. It needs to be a different mindset though.

You've got to have the discipline and the understanding of testing, however. You have to have the ability to focus on this bit now, not all of it, because if you try and test everything all at once, you'll miss stuff. But if you look at this particular attribute right here, right now - what could go wrong? Have I found that? If yes, good, defect found; if I haven't found it, have I done enough, or am I just looking to not find stuff? You see the difference?

So "Have I looked in the right way?" is what we need to ask. Now, you can learn that. It's a skill, it's not instinctive; you can learn it, but it takes discipline to break out of an optimistic mindset and move into that pessimistic mindset.

   

9. You can learn the skill. One of the oracles for learning the skill is the ISTQB, the International Software Testing Qualifications Board, which you have been actively engaged in. They have a testing syllabus - a body of knowledge, I suppose we can call it - that is perceived as being extremely heavyweight and process-driven. Having been on that august body, how do you reconcile that with the Agile way of working and thinking?

Well, I'd say this is one of the great conundrums that I really don't understand, because at the foundation level and at the advanced level, Chapter 1 and Chapter 2 of both syllabi talk about various life cycles.

So ISTQB, and the syllabi and the training and instruction that go with it, is process agnostic, and testing is life-cycle agnostic. Testing is testing: the way you apply it, the methods you use and the timing of the application is where the process comes in.

So if you read the Foundation Syllabus and the Advanced Syllabus, they do talk about the variety of life cycles and the risks of a particular software development methodology from a testing perspective - things you'll look out for. V-model, waterfall, spiral - all of these are discussed, as are iterative, incremental and evolutionary delivery methods.

So when I start to get engaged with an Agile team, I start to ask, "What's the difference between incremental and iterative?" and that's when they usually start to go "oooh - this is a bit scary", because these have different ramifications.

So from the syllabi point of view, ISTQB actually does talk about it; it's not widely publicized and a lot of people don't really notice it; they think that ISTQB is waterfall - it's not. It's mentioned, it's discussed. From my personal perspective, training in testing means that a tester needs to know when to apply their skill. The skill is what the ISTQB Foundation gives us an outline of: here are some of the techniques you can use to prove things. 'Can', not 'must', not 'should'; 'can' - and that's the way I like to look at it.

Here's a toolbox of techniques you can use depending on what you're trying to prove: pick one to prove the thing you're trying to prove, in the life cycle that you're working in, focusing on the item that you have to test against.

So you have to be thinking about all of these things when you're doing your testing, and ISTQB likes you to think about all of those things. It's not just about test design; it's about test management; it's about life cycles; it's about the mindset; it's about traceability; it's about reporting; it's about tools. It's just a matter of choosing the bits you need in a particular life cycle. It can be perceived as process heavy, but when you really look at it, it's not.

   

10. So how do we get that message out into the wild?

I think fundamentally testers need to recognize that their skill set is designing the appropriate tests to prove the thing they need to prove, which comes back to their analysis and design capabilities. First and foremost, the tester needs to be able to analyze the solution, understand what people want to know about that solution and design the tests that will find the answers to those questions.

One of the techniques that we use a lot, actually, and it's from one of the ISTQB syllabi, is Victor Basili's Goal Question Metric approach, GQM. What am I trying to prove? What questions do I need to ask to prove it? What metrics will allow me to track it? Now, I don't care what life cycle you're working in; you still need to have proof.
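
To make that concrete, here is a minimal sketch in Python of the goal-question-metric structure. The goal, questions and metric values are invented for illustration - they are not from the interview - and a real team would derive its own from whatever it needs to prove.

    # Minimal, illustrative sketch of Basili's Goal-Question-Metric idea:
    # every metric exists only because it answers a question, and every
    # question exists only because it serves a goal. All values are made up.
    gqm = {
        "goal": "Show the team whether this iteration's stories are releasable",
        "questions": [
            {
                "question": "Have we covered the stories we committed to?",
                "metrics": {"stories with passing acceptance tests": 11,
                            "stories committed this iteration": 12},
            },
            {
                "question": "Is quality trending the right way?",
                "metrics": {"open defects against this iteration": 3,
                            "defects found after the iteration closed": 1},
            },
        ],
    }

    for q in gqm["questions"]:
        print(gqm["goal"], "->", q["question"])
        for name, value in q["metrics"].items():
            print("   ", name, "=", value)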

So the role of the tester in an Agile life cycle is to take those techniques that they know from ISTQB or anywhere else - I mean, there are a number of learnings that you can have around how you apply testing, what test design techniques you use, how you put them all together - and put them into the framework that is Agile; put them into the iteration; put them into this piece of work to prove this thing at this time using these techniques. That's where the skill is.

So having the testers’ mindset means thinking about all of these things as opposed to just, "Where’s the defect?"

   

11. One of the things the ISTQB does, though, is run a certification program, and there has been some controversy in the Agile space about testing certification and certification in general. I know that you're also involved in the ICAgile testing stream learning objectives and the certification going down that path. Do you want to tell us a bit more about the process going on there and how testing certification might or might not fit in the Agile space?

The can of worms - how do you certify something? First of all, I have to say that professionally, I believe there's a big difference between certification and qualification. A certification, in my mind, means that you have experience of and exposure to the knowledge base of the particular certification that you're looking at.

So a certification to me says exposure. Qualification to me says application of that knowledge. The challenge that you have in the testing certification space is that there is no global certification that assesses your qualification; there's no way that I can see, from a written exam, whether or not you're applying the techniques appropriately.

That said - I’m actually in favor of certification, I really am; and the reason that I am is because I believe that we need to have some structure and some frameworks to work with. We need to have some way of gathering the information that we need to gather to be a professional in any field. I need a baseline, I need something to refer to.

From the point of view of ISTQB, they provide a baseline of test techniques. They provide a baseline of quality attributes that you can look for. They never say this is the 'be-all and end-all'; they say this is a good start point, and any learning should be nothing more nor less than a good start point - and so should any certification; whereas a qualification is what you attain after you've proven your aptitude at applying that knowledge.

So when I got involved with the ICAgile testing learning objectives, one of the things that I was very keen to see was whether or not there was that bigger picture view - does it go beyond just applying test techniques?

And I was really lucky to work with a great bunch of guys - and if you go and look it up, there's a really good bunch of guys who worked on it - and the learning objectives are holistic; they're not just testing, testing, testing. They talk about the life cycle; they talk about team engagement; they talk about interpersonal skills; they talk about team dynamics. They also talk about the different levels and types of testing done by different team members.

So instead of it being a deep dive into testing skills, it's the broad approach of how do we apply testing in this life cycle - moving away from the "name badge" approach. Do you see the difference?

So for me, I think the certification there means that, once again, you have exposure to these thought processes, to this body of knowledge, to these techniques that you apply or not with judgment; but if you haven't got the exposure, you won't know what's out there. So when someone presents to me and says, "Yes, I am certified in X, Y, Z," okay, that's the level of knowledge I can expect from you. As a team member or manager or anything else, I'm going to expect to build on that; your qualification to apply those things will become evident as we do stuff. So it's a baseline, a start point.

Now, when you talk about things like Lean, Lean Manufacturing has this concept of baselines; you're always going to have a baseline; it stops you backsliding. So when we thought about whether or not there should be Agile certification, I thought, "Oh gee, I'm not a hundred percent sure there should be - how do you certify an ever-changing paradigm? How do you certify when each team is fundamentally different? Every context is different, every project is different; every way the team is put together is different; every way of working is different; and the whole point of Agile is that you apply the appropriate things for your context. How are you going to certify that?"

But then when I looked at the learning objectives, it was about understanding the context, being aware of the variety of implications of that context - of the team, of the environment, of the product, of the data, of the approach. All of those things are about thinking about the bigger picture rather than, "This is my skill."

   

12. So a robust set of learning objectives?

I think it is; I think it's quite robust and I like to think that it's holistic and covers not only techniques but also how do you make this work? How can you make this work? What could you do? What do you need to think about? It's not prescriptive. It is about understanding what's out there and recognizing there might be more, rather than saying, "This is it."

   

13. Thinking of those techniques, what are some of the testing techniques that are most applicable in the Agile space, and which are some that perhaps are less so?

Right, are we ready? This may take me a while. Have we got a whiteboard handy? (Joking.) So if you're talking about technique - when you think about the types of technique you have, and if you wanted to go strictly by the syllabus, from the Foundation Level of ISTQB - you have three main types of technique. You have specification-based techniques; and the first thing that happens when any team goes Agile is we decide not to document anything (I'm joking).

What you have to recognize is that specification-based techniques use the specifications as oracles: this is what we're going to base our testing on; here's a bit of paper. If we haven't got that bit of paper, we have to look for different oracles; we've got to look elsewhere for where we're going to get this information from. We use the team; we use our stories; we use our design; we use our walls; we use anything we can get our hands on. The technique you pick depends on what you're trying to prove.

So if I'm looking for coverage, I'm going to use equivalence partitioning, because I want to know that I've clearly identified each group or type of thing that I'm looking for. If I'm looking for accuracy, I'm going to be using things like boundary value analysis to make sure this bit is either inside or outside of acceptable bounds, whatever they are; and I'm going to use those techniques to make sure, "Hey, have we even defined those bounds?" and then we agree what those bounds are before we even cut a line of code, before we even think about it. So I'm going to apply things like equivalence partitioning and boundary value analysis as we are writing stories.
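
As a concrete sketch of those two techniques, assume a made-up story rule that applicants aged 18 to 65 inclusive are accepted; the rule, the function name and the values are illustrative assumptions, not from the interview. Equivalence partitioning picks one representative per group; boundary value analysis probes each side of the agreed boundaries.

    # Hypothetical rule for the sketch: ages 18 to 65 inclusive are eligible.
    def is_eligible(age: int) -> bool:
        return 18 <= age <= 65

    # Equivalence partitioning: one representative value per partition
    # (below the range, inside it, above it).
    equivalence_cases = {10: False, 40: True, 80: False}

    # Boundary value analysis: values on and either side of each boundary.
    boundary_cases = {17: False, 18: True, 65: True, 66: False}

    def test_equivalence_partitions():
        for age, expected in equivalence_cases.items():
            assert is_eligible(age) == expected, f"partition check failed at {age}"

    def test_boundary_values():
        for age, expected in boundary_cases.items():
            assert is_eligible(age) == expected, f"boundary check failed at {age}"

    if __name__ == "__main__":
        test_equivalence_partitions()
        test_boundary_values()
        print("all partition and boundary checks passed")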

I'm going to ask really hard probing questions, so I'm going to do that. Once we move beyond that, we move to things like decision tables and state transition diagrams, which are, once again, specification-based test design techniques. I'm looking there at how we're going to assess emergent behavior; so once I've got a story or a piece of functionality or a feature set that gives me complete functionality, I'm looking for what other things are going to happen now that we can do this.

So I’m looking for emergent behavior; I’m looking to make sure that what is meant to happen does happen; so this is where I’m moving into more of my business logic approaches. So I really have to think, "What am I trying to prove about my solution?"
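
Here is a small sketch of how a state transition table can act as the oracle for that kind of emergent-behavior check; the order workflow, its states and its events are assumptions made up for the example, not anything from the interview.

    # Hypothetical workflow for the sketch: the table itself is the oracle,
    # and any (state, event) pair not listed is expected to be rejected.
    VALID_TRANSITIONS = {
        ("new", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("new", "cancel"): "cancelled",
        ("paid", "cancel"): "cancelled",
    }

    def next_state(state: str, event: str) -> str:
        try:
            return VALID_TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"event '{event}' is not valid in state '{state}'")

    def test_valid_transitions():
        for (state, event), expected in VALID_TRANSITIONS.items():
            assert next_state(state, event) == expected

    def test_invalid_transitions_are_rejected():
        # Emergent-behavior probes: combinations no story ever mentioned.
        for state, event in [("shipped", "cancel"), ("cancelled", "pay"), ("new", "ship")]:
            try:
                next_state(state, event)
            except ValueError:
                continue
            raise AssertionError(f"'{event}' in state '{state}' should have been rejected")

    if __name__ == "__main__":
        test_valid_transitions()
        test_invalid_transitions_are_rejected()
        print("state transition checks passed")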

If I'm looking at unit testing, working with the devs, I'm going to start to talk to them about loop testing: have they tested this loop, and have they made sure, from a structure-based or white-box perspective, that they've covered not going into the loop at all, going around the loop once, and going around it the maximum number of times? So I'm going to be checking to make sure they have thought about all the 'what if' scenarios. What can I tell you - optimists: "it will work, trust me".
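
A minimal sketch of that loop coverage idea - zero iterations, one iteration, and the maximum - using an invented batch-total function and limit (assumptions for illustration only):

    MAX_BATCH = 100  # invented limit for the sketch

    def total_batch(prices):
        """Sum a batch of prices; the for-loop is the loop under test."""
        if len(prices) > MAX_BATCH:
            raise ValueError("batch too large")
        total = 0.0
        for price in prices:
            total += price
        return total

    def test_loop_not_entered():   # zero times around the loop
        assert total_batch([]) == 0.0

    def test_loop_once():          # exactly one time around the loop
        assert total_batch([9.5]) == 9.5

    def test_loop_at_maximum():    # the maximum number of times around the loop
        assert total_batch([1.0] * MAX_BATCH) == float(MAX_BATCH)

    if __name__ == "__main__":
        test_loop_not_entered()
        test_loop_once()
        test_loop_at_maximum()
        print("loop coverage checks passed")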

So I'm going to look at the level of coverage that I need and pick the right level of coverage. The true genius of Agile, however, is the application by the entire team of experience-based techniques and defect-based techniques. The experience-based technique we all know and love is exploratory testing.

The entire team can do exploratory testing all the way through the life cycle, all the way. You just need to be looking for the problems. Everyone in the team knows problems are going to be there, and if we look for them, we can knock them off quickly and easily. We all know that the earlier we find them, the cheaper they are to fix.

So we can apply exploratory testing statically on our design. We can also apply it dynamically once we get a build to work with. Defect-based testing I absolutely adore, because you just look for what went wrong last time and you're able to go back to the team and say, "Hey, last time this went wrong, how about we focus on this?" That sort of approach I like a lot.

I also use checklists, but I cheat, and I'm really sorry about this. What I do is put the checklist in as part of the quality criteria we're aiming to meet before we write a story. So we start to use our checklist as the baseline of what is good before we write a story; we agree on that and everyone builds to pass the checklist. So by the time it comes to me to check that the checklist has been met, it is quick and easy; and there are some fabulous checklists out there, absolutely fabulous - I could start naming names and they would blush and be embarrassed (wouldn't they, Elisabeth) [Elisabeth Hendrickson - www.testobsessed.com].
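
As a sketch of a checklist used as up-front quality criteria, here is what it might look like if the agreed items were expressed as executable checks against a story; the story fields and checklist items are invented for the example, not a prescribed list.

    # Hypothetical story and checklist; a real team agrees its own items
    # before the story is written, then builds to pass them.
    story = {
        "title": "Customer can reset their password",
        "acceptance_criteria": ["reset link expires after 24 hours"],
        "error_messages_reviewed": True,
        "automated_tests_identified": True,
    }

    CHECKLIST = [
        ("has acceptance criteria", lambda s: bool(s["acceptance_criteria"])),
        ("error messages reviewed", lambda s: s["error_messages_reviewed"]),
        ("automated tests identified", lambda s: s["automated_tests_identified"]),
    ]

    def unmet_items(candidate):
        """Return the checklist items the story does not yet satisfy."""
        return [name for name, passes in CHECKLIST if not passes(candidate)]

    if __name__ == "__main__":
        failures = unmet_items(story)
        print("ready to build" if not failures else f"not ready: {failures}")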

   

14. Given these techniques and what you've been talking about, when should we engage testing on an Agile project?

You don't need to get testers involved until at least the second-to-last iteration. No, not really.

As soon as possible. Sooner!

Now, the reason that I say that is when we start to think about solutions and projects and products that we're delivering, we don't talk about the hard questions; we don't ask people, "Really? How fast? What do you mean by 'user friendly'? Have you thought about the maintainability aspects of this? How many platforms does it need to run on?"

A lot of teams say, "But we don't really need to know that," but all of these things are design-level decisions; they need to be thought of early and they need to be thought of as part of the design approach.

So what we're looking for is this holistic team knowledge, collective wisdom. Don't exclude me because I am a tester; include me because I'm going to ask the hard questions as soon as I possibly can. There are a lot of questions that testers know to ask that other people don't know to ask. Ask anyone - ISO 9126 is my personal favorite standard, I'm sure you've got one too. It's a good checklist of things that we need to think about, and those things are what get forgotten unless you engage the testers.

Never forget that it's usually the testers' responsibility to find the thing you forgot to specify, which makes it very hard. So what we want to do is have people up front saying, "Well, how am I going to find that; where is it going to be; where can we capture that; where is it defined?" - not so much defined in a requirement, but where is it in the design and how are we building that in? The whole premise of Agile is to build the quality in, not to find it at the end.

So, what you want to do is you want to have the people whose job it is to find it tell you how to put it in at the beginning of the project.

May 21, 2012
