
Why Testing Matters in Agile Projects

Posted by Sharon Robson on Oct 02, 2012 |

Last week InfoQ published an article titled The Day the QA Department Died.

This article is a response by a testing professional to that article.


Just like the passing of a monarch ("The King is dead… long live the Queen!"), we are now hearing a similar cry in software development: "Testing is dead, we don't need testers anymore!" Then… whoa! The customer is unhappy. Then… "Long live Testing!" But an even better, rounder, more effective testing. And like many resurgent monarchs through history (my favourite is Queen Elizabeth I), Testing will powerfully help redefine the way things are done and how they work.

I bet you are thinking that's a big boast, right? Well, here's how it's going to happen…

Let's discuss the concept of testing – what is it? Testing is the process of considering what is "right", defining methods to determine whether the item under test is "right", identifying the metrics that allow us to know how "right" it is, understanding what the level of "rightness" means to the rest of the team in terms of tasks and activities, and assisting the team to make good decisions, based on good information, to hit the level of "rightness" required.

Testing is way beyond random thumping of the keyboard hoping to find defects; testing is about true understanding of the required solution, participating in the planning of the approach taken to deliver it, understanding the risks in the delivery methods and how to identify them as soon as possible to allow the appropriate corrective action to be taken. Testing is about setting projects up for success and helping everyone to understand the appropriate level of success required.

So why do we still care about testing – isn't everyone in the agile team doing it? Well, actually, NO!

It all begins with the concept of quality. "That's easy," you say to yourself – and if you do, I dare you to take it to the next step: define it! Ask your development team, ask the customer, ask the Product Owner, ask the Project Manager, ask the CIO and CEO of the organisation to define quality, define good, define good enough. Do they agree? If not, there is your first problem. The role of testing is to help teams define and understand the impact of quality.

"Impact of quality? What is that?" is your next question. Here is a fact: quality costs! Even worse, true quality costs more! To build it in we first have to define it and then find it. There is no way to have a quality solution without building quality into the process and the techniques, and building thorough testing, at all levels, into the work that we do.

"Gotcha!" say the devs. "We define done to tell us about quality in agile." "Rubbish!" is my reply. In all my time in IT the most exciting concept I ever heard of was that of defining "done" – all the components, all the knowledge gathered, all the information passed on… the complexity of the solution defined up front, all the team (development and customer teams) being aware of the work to be completed to generate "done". Defining done reminds me of what testing is all about. But the bad news is that we don't do it! No! We don't! Just like we don't define quality… we just pretend we do. Ouch! Did that hurt?

Why did I say that? Firstly, the definition of done, like quality, is very difficult. Quality is like beauty – it is in the eye of the beholder. Testing is all about being trained to focus on the definition, and then the detection, of quality (or the lack of it), and also to communicate what the quality levels mean across the project in terms of progress, risk and the work remaining. Defining done is the same really… done is in the eye of the "doer" (not the beholder), and this allows us to understand the many levels of done: done (my bit), done (our bits), done (the story), done (the iteration), done (the feature), done (the release), done (the product), done (the project).

"Well that's OK, we can define when it's finished" is your witty response to this problem. Now here is the challenge! Defining "done" is very different to defining "done well". The "well" bit of "done well" is not only about finishing the work required for the thing under production, but also about defining how we will know that it is finished to the standard required. Each level of done has a different standard of completion, and a very different standard of quality for the "well". There is one group of people inside a team who are ideally suited not only to assisting in defining "done well", but also to defining the process and techniques that can be used to find the degree of "well doneness".

Step one: define finished. Well, that seems easy – make sure that all the components needed to deliver the level of done have been completed by the doer. OK, sounds good so far. But here's the rub… nothing is "done" until the customer is happy with the product. That is one of the underpinning values of the Agile Manifesto. I quote: "working software over comprehensive documentation". For some unknown reason the definition of "working" got confused with the definition of "done", and the concept of "comprehensive documentation" got confused with the definition of "well tested". And then this is trumped by the principle "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software". So what makes software valuable? Is it that the product is there? No! It is that it does its job well! So is this done (my bit), done (our bits), done…?

So how can we do it? For a start we need to recognise that testing considers more than just the functionality, which is where the developers and the users focus. What it "does" is the easy bit (tongue in cheek – I promise). It is easy to define, easy to build, easy to assess. Functionality tends to be binary… like done! There are two levels of done – "Done" and "Not Done" – there is no such thing as "almost done". Functionality is like doneness… it functions or it does not: binary! But then we get into the realm of "done", then "done well", and then even further, "done well for whom?"

Testing focuses on understanding what makes a solution or an approach valuable to the people using it. Value is context dependent and has to be defined in the context of the project and the customer. Using standards such as ISO 9126, with its six quality characteristics (functionality, reliability, usability, efficiency, maintainability and portability) and their sub-characteristics, allows testers to provoke great discussions around what is good and well and valuable. Better still, true testing is needed to find these attributes. This type of testing also takes time and planning to do well, and even longer to do very well.
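To make the checklist idea concrete, here is a minimal sketch in Python. The prompt wording and the `discussion_prompts` helper are invented for this illustration; only the six characteristic names come from ISO 9126.

```python
# Using the six ISO 9126 quality characteristics as a conversation-starter
# checklist for "what does 'done well' mean for this story?" discussions.
# The question phrasing is illustrative, not part of the standard.

ISO9126_CHARACTERISTICS = {
    "functionality": "Does it do what the customer needs, accurately and securely?",
    "reliability": "How does it behave under failure, and how quickly does it recover?",
    "usability": "Can the intended users learn, operate and understand it?",
    "efficiency": "Is its use of time and resources acceptable in context?",
    "maintainability": "How easily can it be analysed, changed and re-tested?",
    "portability": "Can it be adapted to and installed in other environments?",
}

def discussion_prompts(story_title):
    """Return one quality question per characteristic for a given story."""
    return [
        f"{story_title} [{name}]: {question}"
        for name, question in ISO9126_CHARACTERISTICS.items()
    ]

for prompt in discussion_prompts("Customer login"):
    print(prompt)
```

Walking a story through all six questions in planning is one cheap way to surface the non-functional attributes the article argues must be captured early.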

All the non-functional attributes of a solution are design-level attributes and usually cannot be evolved iteratively. They need to be discussed up front, as soon as possible in the definition of the solution, and yes… as soon as possible in the definition of the design of the solution. If these attributes are not built in right from the beginning, they will never be found through testing at the end. Can unit testing do that? No!

"Ahhhh – that's why we do Acceptance Test Driven Development!" you say. I agree, but we don't do ATDD properly; we only focus on what the customers know about and ask about, not on the things that need to be thought about and captured early.

"Let's just focus on the functionality" is a phrase I often hear that causes me to cringe… it means that it is too hard to think about anything else, so let's just get going and hope that it is right. Have you EVER heard of anything LESS agile? Agile is about building it right, the first time!

Testing contributes to this "building it right, the first time" via static testing. Static testing is testing the solution without executing the code. The beauty of static testing is that it can be done anywhere and at any time. Static testing should happen when someone comes up with the first idea for a solution. Ideally a tester is there saying things like "that's interesting functionality… how will you know it is valuable?". Testing the concept – through questions, diagrams and the planning of the solution – to see if it will actually deliver the required solution is a vital part of the life of a product.
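Static testing as described here is mostly reviews and questioning, but even a tiny automated check illustrates the principle of finding problems without executing the code under test. A hypothetical sketch (the source snippet and the no-docstring rule are invented for this example) using Python's standard `ast` module:

```python
# A static check: parse source text and flag functions with no docstring,
# without ever running the code being examined. A real review would look
# at far more than this, but the mechanism is the same - inspect, don't execute.

import ast

SOURCE = '''
def priced(order):
    return order.total > 0

def ship(order):
    """Dispatch the order."""
    ...
'''

def functions_missing_docstrings(source):
    """Return names of top-level or nested functions lacking a docstring."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
    ]

print(functions_missing_docstrings(SOURCE))  # -> ['priced']
```

The same "inspect before executing" mindset applies equally to stories, designs and plans, where the review is human rather than automated.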

We can also test the planning of the delivery, focusing on risks and the timing and dependencies of the components, and then how the various levels of "done" can be used to prove that we are all heading towards the right solution. Defining done to the extent of defining "done well" requires the engagement of the right people at the beginning of the work, not after the code cutting has started. Testing the plan is vital – have the right environments, teams, resources and approaches been defined to deliver value? This question is often NOT answered before code cutting begins. The wonderful new regime of Testing being alive and well sees it being asked… AND answered before anyone moves to the next step.

This is best achieved by the up-front definition of acceptance criteria, using test design techniques. "WHAT? Testing already???" you yell. Yes, of course! What is the value of all the training and certifications that testers achieve if they don't apply them up front? Most of the test execution activities that you see testers do are the result of their test design activities, based on risk-based and specification-based test design techniques. Concepts, features, epics and stories are just specifications masquerading under another name. Even better, in the ideal agile world, testers are involved in their definition, so that they can be statically tested and then have dynamic techniques applied to them before anyone attempts to cut a line of code.
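As a minimal sketch of specification-based test design applied to an acceptance criterion: the criterion ("a bulk discount applies to orders of 10 to 99 units") and both helper functions below are invented for this illustration, but the techniques – equivalence partitioning and boundary value analysis – are the classic ones.

```python
# Deriving test inputs from a story's acceptance criterion before any
# production code exists, using two specification-based design techniques.

def boundary_values(low, high):
    """Boundary value analysis: values just outside, on, and just inside
    each edge of a valid range [low, high]."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

def equivalence_classes(low, high):
    """Equivalence partitioning: one representative value per partition -
    below the range, inside it, and above it."""
    return {
        "invalid_low": low - 5,
        "valid": (low + high) // 2,
        "invalid_high": high + 5,
    }

# Criterion: "a bulk discount applies to orders of 10 to 99 units"
print(boundary_values(10, 99))       # -> [9, 10, 11, 98, 99, 100]
print(equivalence_classes(10, 99))   # -> {'invalid_low': 5, 'valid': 54, 'invalid_high': 104}
```

Designing these inputs while the story is being written is a form of static testing in itself: ambiguities ("is 10 included or excluded?") surface before a line of code is cut.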

So then we start the real work (chuckle… anyone who thinks that what happens before code cutting isn't work does not understand the concept of work). We start to cut the code, do unit testing, promote the code, do integration testing, and promote the code to the test environment, all in the hunt for (drum roll)… Emergent Behaviour!

Emergent behaviour is where the true value of the testers in an agile team lies: focusing on how the modules and code and stories hang together to deliver the required functionality. But we all know this is where the good bugs live! Bugs that only show their heads when we start moving through the solution in various ways. The skill of the tester is to design those paths through the system, following both the customer's needs and the risk of the path, using test techniques to identify key areas of focus. This is where skill in using Decision Tables and Finite State Models (such as N-1 switch coverage) really comes to the fore. These are the bugs that won't be found at unit or integration test time, but will cripple the acceptance testing immediately.
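The switch-coverage idea can be made concrete. In the hypothetical order-lifecycle state model below (invented for this sketch), 1-switch coverage (N-1 switch with N=2) requires exercising every valid pair of consecutive transitions – exactly the order-dependent paths where emergent bugs hide and where single-transition unit tests see nothing.

```python
# A finite state model as a dict of state -> {event: next_state}, plus a
# generator of all 1-switch test sequences (every valid pair of consecutive
# transitions). The order-lifecycle model is illustrative only.

TRANSITIONS = {
    "new":       {"pay": "paid", "cancel": "cancelled"},
    "paid":      {"ship": "shipped", "refund": "cancelled"},
    "shipped":   {"deliver": "delivered"},
    "cancelled": {},
    "delivered": {},
}

def one_switch_sequences(transitions):
    """All (start_state, first_event, second_event) triples the model allows -
    the test sequences needed for 1-switch coverage."""
    pairs = []
    for state, events in transitions.items():
        for event, mid_state in events.items():
            for next_event in transitions[mid_state]:
                pairs.append((state, event, next_event))
    return pairs

for start, first, second in one_switch_sequences(TRANSITIONS):
    print(f"from {start}: {first} then {second}")
```

Even this tiny model shows the payoff: a bug such as "refund after ship corrupts stock levels" only appears in a transition *pair*, which is why these tests cripple acceptance testing if they are skipped earlier.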

The process of system testing – designing tests to provoke risky emergent behaviour, and also designing tests to provide the empirical evidence required to assess such things as coverage, residual risk, defect density, progress rates and other quality attributes – is also the skill of the tester. I would not suggest that developers or BAs cannot do this. I would suggest, however, that they are way too busy doing their jobs! I would also suggest that the testing mindset and skills make testers the most effective and efficient at planning, executing and reporting these things.

This brings us to the discussion of Validation and Verification. What's the difference? Verification is making sure it is built correctly – adhering to standards, following patterns, doing the right thing at the right time. Validation, on the other hand, is defining what the right thing is! Both need to be done, and testing gives us the skills and techniques to do both, as well as to apply them to the various attributes of the system that need to be covered (such as quality).

The next question is: "do we still need testers?" IMHO, yes! Why? Testing practitioners think differently from everyone else on the team. Testers are "professional pessimists" (ISTQB Foundation Syllabus). Good testers spend their time focusing on the potential problems, not the potential solutions. Right from the beginning we consider the bad news: what could go terribly wrong, how quickly can we find it, or, even better, what can we do to stop it? This fits perfectly with the agile concepts of "failing fast" and understanding the risks as soon as possible. We need this mindset engaged as early as possible in the project and solution design, to identify as many of the potential hurdles as soon as we can.

Not many people know enough about testing to be able to accurately plan the testing effort, and in an agile team the focus on where and when to test things is huge! There needs to be a clear line between story-level testing, iteration-level testing and feature-level testing; remember the levels of "done" from before? Who performs each test, and where and when, needs to be clearly defined to ensure that all the environments, tools, techniques, data and people are available to execute it. Testing (like most things) does not happen by accident in good teams… good testing takes great planning. Great testing takes excellent planning. Testers and testing need to be intimately involved in this planning to make sure all the appropriate set-ups are established and put in place.

"How do testers do that?" you ask. Most people only think of testing as test execution, but in the real world the bit of testing that you see is the easiest bit. Executing test cases takes about 25% of the total test effort. Most testing is done in the mind or in documentation. "OMG," you gasp, "Agile says 'working software over comprehensive documentation'!" Yes, it does! But testing can happen on any and all documentation (stories, whiteboard designs, acceptance criteria, etc.). The first and biggest hurdle of all is people or teams who do not want to define "well", "value" and "done", or don't want to get into specifics because it is too hard.

Blended teams give us the best of every world; deliberately excluding a skill set or knowledge set is naïve and truly immature behaviour, and does not promote longevity of the solution or of the approach. An integrated team that covers all the skills required to deliver the best possible solution in the best possible time for the best possible price is just plain smart and good business. Recognising the skills of the other people in the team, and leveraging them to the maximum, is also just plain smart.

Do testers need to be a special group of people? No… anyone can be a tester on an agile project; in fact, everyone is a test executor on an agile project. The main thing is that all the team members have the discipline to put their "testing heads" on during the day-to-day work, to complete all the testing activities (not just execution) that are required. If team members don't take the time or make the effort to plan, design and then apply testing to their work products, their approach and their solution, then the team will have no idea of the progress they are making or the issues they are facing.

So what advice can I leave you with?

  • Make sure your whole team has a clear and shared understanding of the definition of done at every level - my task, the story, the iteration, the release, the project and the product
  • Make sure your whole team has a clear and shared understanding of what quality means on this product - what constitutes "working software" 
  • Testing is not bashing a keyboard hoping to find defects, nor is it just running unit tests
  • Testing is a whole-team responsibility that should start with the very first concept discussions and pervade every aspect of an agile project 
  • Test early, and test often - waiting until the end of any piece of work is the wrong time to start thinking about testing
  • Static testing (examining every piece of work to ensure it contributes to the quality needed) is more valuable than executing test cases
  • Designing good tests is a specialist activity; all members of an agile team can do it, but it needs the right mindset

Is testing dead in agile? Yes… traditional, old-fashioned, end-of-the-lifecycle testing is dead. Long live the new testing: integrated, up front, actively involved, challenging mindsets, challenging the status quo and enabling the team to deliver… deliver Value, deliver "working software" and deliver solutions that customers actually want!

About the Author

Sharon Robson is a Knowledge Engineer/Consultant specialising in Software Testing and Agile practices for Software Education. With over 20 years’ experience in information technology, software testing, software development and business analysis, Sharon is a talented trainer who develops and delivers training courses at all levels - from Introductory to Advanced – particularly in the realm of software testing. Sharon’s passion for software testing also comes to the fore when she is consulting with Software Education’s customers - implementing software testing approaches, techniques and methodologies. Sharon also consults on Software Testing, Software Testing Process Improvement, Testing in Agile Methodologies, and Agile Implementations for all aspects of the Software Development Lifecycle.

Sharon is a founding board member of the Australia New Zealand Testing Board (ANZTB). This board sets the examination and training provider criteria for software testing certification in Australia and New Zealand as part of the International Software Testing Qualifications Board (ISTQB). Sharon was also the chairperson of the Marketing Working Group for the ISTQB, managing and organising the international marketing approach for ISTQB for a number of years. In addition, Sharon has been an active member of Women In Technology (WIT) and Females in Information Technology & Telecommunications (FITT) in Australia.

Comments
I have some things I disagree on by Nathan Gloyn

You seem to be elevating Testers above other team members by saying
testing is about true understanding of the required solution, participating in the planning of the approach taken to deliver it, understanding the risks in the delivery methods and how to identify them
and
Ideally a tester is there saying things like “that’s interesting functionality…how will you know it is valuable?”
which seems to me to encompass the solution design/architecture, project management and product management roles; whilst testers can indeed contribute, so can all the other members of an agile team.

You mention the definition of done (DoD) as being confused with working software and with defining quality. Working software is the primary measure of progress, the DoD is there simply to help the team to work towards this and usually contains all the activities the team has identified that they need to have covered to meet this goal, including testing by testers, but it does not attempt to define what is or guarantee quality.

In relation to quality, testers cannot discern quality any better than anybody else in a team, as you state
Quality is like beauty – it is in the eye of the beholder
and this can be seen where what a tester believes is 'quality' is not necessarily what a stakeholder or user would define as quality, but you cannot know that until the stakeholder/user uses the software.

You mention certification and training, and whilst they can be advantageous within IT generally, certifications in particular aren't necessarily seen as a big discriminator; as with most things in software development, your worth is gauged on what you can actually do rather than the courses you have attended.

I won't disagree that testers think differently as all the good testers I've come across do have a different mindset to developers allowing them to test things the developer may not have considered, however, when it comes to Verification & Validation user stories should have enough information on them so that the product owner can validate the functionality. This is not to say that there is no testing to be done around validation but a developer shouldn't normally say code is ready to test until they believe it has been validated by the product owner.

There is only one thing I really disagree with you on, which is where you said
Agile is about building it right, the first time!
This isn't correct; this is waterfall-type thinking, believing we can get it right first time. In agile we know that capturing requirements is an imprecise activity and users won't know "what they want" until they can see something; therefore we know it won't be right first time, but that's OK, as it allows us to iterate towards what the user actually wants.

I agree with you around the difference of the testing role in agile being more involved but that itself can bring challenges as a lot of testers are still of the mindset "I get given it after the dev's are finished and try to break it".

Testing does matter in agile but I think more needs to be done to help testers transition from the old style testing to the new style and this in itself can be a challenge as organizations frequently only think of testing in the old way, so it is a case of trying to educate organizations not only how to be agile but how testing is integrated in the process.

Re: I have some things I disagree on by Sharon Robson

Hi Nathan, thanks for taking the time to read and really think about my article...I really appreciate that!
I agree with what you say about all the team being equal and involved... it was never my intent to elevate the role of tester above anyone; my intent was to emphasise that each team member, no matter their role, is able to (nay, obliged to) ask these questions. People with a focus on testing may ask the same questions as others, or ask questions that other people do not ask. What I was hoping to demonstrate is that testers should be considered as valuable as everyone else on the team - a true agile team recognises the activities and thought processes required rather than the name-tag or job description.
I am quite adamant that quality is very subjective, and that's why I think that the only measure of progress is working software AND anything else that is required to deliver the solution. I think as a whole we need to consider very carefully what "working" means and for whom. I think that someone who has a background in finding quality, or the lack of it, can indicate where it is present or missing in an effective manner.
Your final comment is exactly what I feel - we need to break the mindset of the tester breaking something after it has been developed - that's why I mentioned "getting it right the first time". I would prefer to see someone (not necessarily a "Tester") looking at each stage of the SDLC to help make sure that each artefact is the best it can be. If Developers and the Product Owner can think about all of these things - that's great! If not, maybe the early involvement of someone who can look at things from a different perspective can help us build things better...saving time and money by building each bit correct, rather than building it and then finding the issues.
Once again - thanks for such great feedback! I hope I have clarified some things. Regards, Sharon

