
Why Testing Matters in Agile Projects

Last week InfoQ published an article titled The Day the QA Department Died.

This article is a response by a testing professional to that article.


Just as with the passing of a monarch (“The King is dead… long live the Queen”), we are now hearing a similar refrain in software development: “Testing is dead, we don’t need testers anymore!” Then… whoa! The customer is unhappy. Then… “Long live Testing!” But an even better, rounder, more effective testing. And like many resurgent monarchs through history (my favourite is Queen Elizabeth I), Testing will powerfully help redefine the way things are done and how they work.

I bet you are thinking that’s a big boast, right? Well, here’s how it’s going to happen…

Let’s discuss the concept of testing – what is it? Testing is the process of considering what is “right”, defining methods to determine whether the item under test is “right”, identifying the metrics that allow us to know how “right” it is, understanding what the level of “rightness” means to the rest of the team in terms of tasks and activities, and assisting the team to make good decisions, based on good information, to hit the level of “rightness” required.

Testing goes way beyond randomly thumping the keyboard hoping to find defects; testing is about truly understanding the required solution, participating in planning the approach taken to deliver it, and understanding the risks in the delivery methods and how to identify them as soon as possible, so that appropriate corrective action can be taken. Testing is about setting projects up for success and helping everyone to understand the appropriate level of success required.

So why do we still care about testing? Isn’t everyone in the agile team doing it? Well, actually, no!

It all begins with the concept of quality. “That’s easy” you say to yourself, and if you do, I dare you to take it to the next step: define it! Ask your development team, ask the customer, ask the Product Owner, ask the Project Manager, ask the CIO and CEO of the organisation to define quality, define good, define good enough. Do they agree? If not, there is your first problem. The role of testing is to help teams define and understand the impact of quality.

“Impact of quality? What is that?” is your next question. Here is a fact: quality costs! Even worse, true quality costs more! To build it in we first have to define it and then find it. There is no way to have a quality solution without building quality into the process and the techniques, and building thorough testing, at all levels, into the work that we do.

“Gotcha!” say the devs. “We define done to tell us about quality in agile.” “Rubbish!” is my reply. In all my time in IT the most exciting concept I ever heard of was that of defining “done” – all the components, all the knowledge gathered, all the information passed on, the complexity of the solution defined up front, and all the team (development and customer teams) being aware of the work to be completed to generate “done”. Defining done reminds me of what testing is all about. But the bad news is that we don’t do it! No! We don’t! Just like we don’t define quality… we just pretend we do. Ouch! Did that hurt?

Why did I say that? Firstly, the definition of done, like quality, is very difficult. Quality is like beauty – it is in the eye of the beholder. Testing is all about being trained to focus on the definition, and then the detection, of quality (or the lack of it), and also communicating what the quality levels mean across the project in terms of progress, risk and the work remaining. Defining done is the same really… done is in the eye of the “doer” (not the beholder), and this allows us to understand the many levels of done: done (my bit), done (our bits), done (the story), done (the iteration), done (the feature), done (the release), done (the product), done (the project).

“Well that’s ok, we can define when it’s finished” is your witty response to this problem. Now here is the challenge! Defining “done” is very different to defining “done well”. The “well” bit of “done well” is not only about finishing the work required for the thing under production, but also about defining how we will know that it is finished to the standard required. Each level of done has a different standard of completion and a very different standard of quality for the “well”. There is one group of people inside a team who are ideally suited not only to assisting in defining “done well”, but also to defining the process and techniques that can be used to find the degree of “well-doneness”.

Step one, define finished… well, that seems easy – make sure that all the components needed to deliver the level of done have been completed by the doer. Ok, sounds good so far. But here’s the rub: nothing is “done” until the customer is happy with the product. That is one of the underpinning attributes of the Agile Manifesto. I quote: “working software over comprehensive documentation”. For some unknown reason the definition of “working” got confused with the definition of “done”, and the concept of “comprehensive documentation” got confused with the definition of well tested. And then this is trumped by the principle “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”. So what makes software valuable? Is it that the product is there? No! It is that it does its job well! So is this done (my bit), done (our bits), done…?

So how can we do it? For a start we need to recognise that testing considers more than just the functionality, which is where the developers and the users focus. What it “does” is the easy bit (tongue in cheek – I promise). It is easy to define, easy to build, easy to assess. Functionality tends to be binary… like done! There are two levels of done, “Done” and “Not Done”; there is no such thing as “almost done”. Functionality is like doneness: it functions or it does not. Binary! But then we get into the realm of “done”, then “done well”, and then even further, “done well for whom?”

Testing focuses on understanding what makes a solution or an approach valuable to the people using it. Value is context dependent and has to be defined in the context of the project and the customer. Using standards such as ISO 9126, with its six quality characteristics (functionality, reliability, usability, efficiency, maintainability and portability) and their sub-characteristics, allows the testers to provoke great discussions around what is good and well and valuable. Better still, true testing is needed to find these attributes. This type of testing also takes time and planning to do well, and even longer to do very well.
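To make that concrete, here is a minimal sketch of using the ISO 9126 characteristics as a conversation tool: each sub-characteristic becomes a question the team must answer before “done well” can mean anything. The sub-characteristic lists follow ISO/IEC 9126-1; the prompt wording is purely illustrative.

```python
# ISO 9126 quality characteristics and their sub-characteristics,
# used here to generate "what does good look like?" discussion prompts.
ISO9126 = {
    "functionality":   ["suitability", "accuracy", "interoperability", "security"],
    "reliability":     ["maturity", "fault tolerance", "recoverability"],
    "usability":       ["understandability", "learnability", "operability"],
    "efficiency":      ["time behaviour", "resource utilisation"],
    "maintainability": ["analysability", "changeability", "stability", "testability"],
    "portability":     ["adaptability", "installability", "co-existence", "replaceability"],
}

def discussion_prompts(characteristic: str) -> list[str]:
    """One 'how good is good enough, and how will we measure it?' question
    per sub-characteristic of the given quality characteristic."""
    return [
        f"For {characteristic}/{sub}: what level is 'good enough', "
        f"and how will we measure it?"
        for sub in ISO9126[characteristic]
    ]

if __name__ == "__main__":
    for question in discussion_prompts("reliability"):
        print(question)
```

The point is not the code but the conversation it forces: every row that the team cannot answer is a quality attribute nobody has defined yet.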

All the non-functional attributes of a solution are design-level attributes and usually cannot be evolved iteratively. They need to be discussed up front, as soon as possible in the definition of the solution and, yes, as soon as possible in the definition of the design of the solution. If these attributes are not built in right from the beginning, they will never be found through testing at the end. Can unit testing do that? No!

“Ahhhh – that’s why we do Acceptance Test Driven Development (ATDD)!” you say. I agree, but we don’t do ATDD properly; we only focus on what the customers know about and ask about, not on the things that need to be thought about and captured early.

“Let’s just focus on the functionality” is a phrase I often hear that causes me to cringe… it means that it is too hard to think about anything else, so let’s just get going and hope that it is right. Have you EVER heard of anything LESS agile? Agile is about building it right, the first time!

Testing contributes to building it right, the first time, via static testing. Static testing is “testing the solution without executing the code”. The beauty of static testing is that it can be done anywhere and at any time. Static testing should happen when someone comes up with the first idea for a solution. Ideally a tester is there saying things like “that’s interesting functionality… how will you know it is valuable?”. Testing the concept, to see if it will actually deliver the required solution, through questions, diagrams and the planning of the solution, is a vital part of the life of a product.

We can also test the planning of the delivery, focusing on risks and the timing and dependencies of the components, and then on how the various levels of “done” can be used to prove that we are all heading towards the right solution. Defining done to the extent of defining “done well” requires the engagement of the right people at the beginning of the work, not after the code cutting has started. Testing the plan is vital – have the right environments, teams, resources and approaches been defined to deliver value? This is a question that is often NOT answered before code cutting begins. The wonderful new regime, with Testing alive and well, sees it being asked AND answered before anyone moves to the next step.

This is best achieved by the up-front definition of acceptance criteria – using test design techniques. “WHAT? Testing already?” you yell. Yes, of course! What is the value of all the training and certifications that testers achieve if they don’t apply them up front? Most of the test execution activities that you see testers do are the result of their test design activities, based on risk-based and specification-based test design techniques. Concepts, features, epics and stories are just specifications masquerading under another name. Even better, in the ideal agile world testers are involved in their definitions, so that they can be statically tested and then have dynamic techniques applied to them before anyone attempts to cut a line of code.
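As one small example of a specification-based test design technique applied before any code exists, here is a sketch of boundary value analysis. The acceptance criterion it works from (a discount applying to order totals between 100 and 500) is hypothetical, invented purely for illustration.

```python
# Boundary value analysis: for a numeric range in an acceptance criterion,
# the highest-risk inputs sit just outside and just on each boundary.
def boundary_values(low: int, high: int) -> list[int]:
    """Return the classic two-values-per-boundary test inputs for [low, high]:
    one just below the range, the two edges, and one just above."""
    return [low - 1, low, high, high + 1]

# Hypothetical acceptance criterion: "discount applies for totals 100-500"
tests = boundary_values(100, 500)
# tests is [99, 100, 500, 501] - each value probes one side of a boundary
```

Deriving these values while the story is still being written is exactly the kind of static, up-front testing the article argues for: the tester has tested the specification before a line of product code exists.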

So then we start the real work (chuckle… anyone who thinks that what happens before code cutting isn’t work does not understand the concept of work). We start to cut the code, do unit testing, promote the code, do integration testing, and promote the code to the test environment, all in the hunt for (drum roll)… Emergent Behaviour!

Emergent Behaviour – this is where the true value of the testers in an agile team lies: focusing on how the modules and code and stories hang together to deliver the required functionality. We all know this is where the good bugs live! Bugs that only show their heads when we start moving through the solution in various ways. The skill of the tester is to design those paths through the system, following both the customer’s needs and the risk of the path, using test techniques to identify key areas of focus. This is where skill in using Decision Tables and Finite State Models (such as N-1 switch coverage) really comes to the fore. These are the bugs that won’t be found at unit or integration test time, but will cripple the acceptance testing immediately.
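To show what N-1 switch coverage means in practice, here is a sketch that derives 1-switch coverage (N = 2) from a finite state model: every valid pair of consecutive transitions becomes a test path, which is exactly where path-dependent, emergent bugs hide. The order-lifecycle model below is hypothetical.

```python
from itertools import product

# A tiny hypothetical state model: (from_state, event, to_state)
transitions = [
    ("new", "pay", "paid"),
    ("paid", "ship", "shipped"),
    ("paid", "refund", "new"),
    ("shipped", "deliver", "done"),
]

def one_switch_pairs(trans):
    """1-switch coverage items: every chain of two transitions where the
    first transition ends in the state the second one starts from."""
    return [(a, b) for a, b in product(trans, trans) if a[2] == b[0]]

for a, b in one_switch_pairs(transitions):
    print(f"{a[0]} -{a[1]}-> {a[2]} -{b[1]}-> {b[2]}")
```

Single-transition (0-switch) testing would pass each of these transitions individually; it is sequences like pay-then-refund-then-pay that surface the bugs the article calls emergent behaviour.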

The process of system testing – designing tests to provoke risky emergent behaviour, and also designing tests to provide the empirical evidence required to assess such things as coverage, residual risk, defect density, progress rates and other quality attributes – is also the skill of the tester. I would not suggest that developers or BAs cannot do this. I would suggest, however, that they are way too busy doing their jobs! I would also suggest that the testing mindset and skills are what make someone effective and efficient in planning, executing and reporting these things.

This brings us to the discussion of Validation and Verification. What’s the difference? Verification is making sure it is built correctly – adhering to standards, following patterns, doing the right thing at the right time. Validation, on the other hand, is defining what the right thing is! Both need to be done, and testing gives us the skills and techniques to do both, and to apply them to the various attributes of the system that need to be covered (such as the quality attributes).

The next question is “do we still need testers?” IMHO, yes! Why? Testing practitioners think differently from everyone else on the team. Testers are “professional pessimists” (ISTQB Foundation Syllabus). Good testers spend their time focusing on the potential problems, not the potential solutions. Right from the beginning we consider the bad news: what could go terribly wrong, how quickly can we find it, or, even better, what can we do to stop it? This fits perfectly with the agile concepts of “failing fast” and understanding the risks as soon as possible. We need this mindset engaged as early as possible in the project and solution design, to identify as many of the potential hurdles as soon as we can.

Not many people know enough about testing to be able to accurately plan the testing effort, and in an agile team the focus on where and when to test things is huge! There needs to be a clear line between story-level testing, iteration-level testing and feature-level testing; remember the levels of “done” from before? Who performs each test, and where and when, needs to be clearly defined to ensure that all the environments, tools, techniques, data and people are available to execute it. Testing (like most things) does not happen by accident in good teams: good testing takes great planning, and great testing takes excellent planning. Testers and testing need to be intimately considered in this planning to make sure all the appropriate set-ups are established and put in place.

“How do testers do that?” you ask. Most people only think of testing as test execution, but in the real world the bit of testing that you see is the easiest bit. Executing test cases takes about 25% of the total test effort. Most testing is done in the mind or in documentation. “OMG”, you are shocked, “Agile says ‘working software over comprehensive documentation’!” Yes, it does! But testing can happen on any and all documentation (stories, whiteboard designs, acceptance criteria etc.). The first and biggest hurdle of all is people or teams who do not want to define “well”, “value” and “done”, or who don’t want to get into specifics because it is too hard.

Blended teams allow us to have the best of every world; deliberately excluding a skill set or knowledge set is naïve and truly immature behaviour, and does not promote longevity of solutions or of the approach. An integrated team that covers all the skills required to deliver the best possible solution, in the best possible time, for the best possible price, is just plain smart and good business. Recognising the skills of the other people in the team and leveraging them to the maximum is also just plain smart.

Do testers need to be a special group of people? No… anyone can be a tester on an agile project; in fact, everyone is a test executer on an agile project. The main thing is that all the team members have the discipline to put their “testing heads” on during the day-to-day work, to complete all the testing activities (not just execution) that are required. If team members don’t take the time or make the effort to plan, design and then apply testing to their work products, their approach and their solution, then the team will have no idea of the progress they are making or the issues they are facing.

So what advice can I leave you with?

  • Make sure your whole team has a clear and shared understanding of the definition of done at every level - my task, the story, the iteration, the release, the project and the product
  • Make sure your whole team has a clear and shared understanding of what quality means on this product - what constitutes "working software"
  • Testing is not bashing a keyboard hoping to find defects, nor is it just running unit tests
  • Testing is a whole-team responsibility that should start with the very first concept discussions and pervade every aspect of an agile project
  • Test early, and test often - waiting until the end of any piece of work is the wrong time to start thinking about testing
  • Static testing (examining every piece of work to ensure it contributes to the quality needed) is more valuable than executing test cases
  • Designing good tests is a specialist activity; all members of an agile team can do it, but it needs the right mindset

Is testing dead in agile? Yes… traditional, old-fashioned, end-of-the-lifecycle testing is dead. Long live the new testing: integrated, up front, actively involved, challenging mindsets, challenging the status quo and enabling the team to deliver… deliver value, deliver “working software” and deliver solutions that customers actually want!

About the Author

Sharon Robson is a Knowledge Engineer/Consultant specialising in Software Testing and Agile practices for Software Education. With over 20 years’ experience in information technology, software testing, software development and business analysis, Sharon is a talented trainer who develops and delivers training courses at all levels - from Introductory to Advanced – particularly in the realm of software testing. Sharon’s passion for software testing also comes to the fore when she is consulting with Software Education’s customers - implementing software testing approaches, techniques and methodologies. Sharon also consults on Software Testing, Software Testing Process Improvement, Testing in Agile Methodologies, and Agile Implementations for all aspects of the Software Development Lifecycle.

Sharon is a founding board member of the Australia New Zealand Testing Board (ANZTB). This board sets the examination and training provider criteria for software testing certification in Australia and New Zealand as part of the International Software Testing Qualifications Board (ISTQB). Sharon was also the chairperson of the Marketing Working Group for the ISTQB, managing and organising the international marketing approach for the ISTQB for a number of years. In addition, Sharon has been an active member of Women In Technology (WIT) and Females in Information Technology & Telecommunication (FITT) in Australia.
