Missing Test Competencies in Agile
Fran O'Hara, Director and Principal Consultant at Inspire Quality Services, recently shared his lessons learned integrating testing into the agile lifecycle. O'Hara's core message is that test competencies are still needed even when traditional testing roles are gone. When agile teams focus on automated functional testing alone, gaps emerge both in exploratory testing and in other risk areas such as system integration testing and non-functional concerns (performance and usability, for instance).
Although the goal is to have cross-functional teams, these should be composed neither of individual specialists (e.g. a tester, a programmer, a designer) nor entirely of generalists; in the latter case the team might lack the skill level needed for effective software development. In particular, test competencies (such as test case design, clarification of business requirements, or clean test automation) tend to get lost when organizations interpret Scrum to the letter and form teams composed only of developers (besides the Scrum Master and the Product Owner).
According to O'Hara, teams with professional testing capabilities regularly outperform teams without them, demonstrating the need for a mix of skills and even personality traits in the team (testers tend to have a meticulous attention to detail that can reveal misunderstandings and gaps in requirements before they get built into the product).
Other competencies typically assigned to test or QA managers in non-agile environments, such as test process and strategy definition and test planning, might be required in agile as well, O'Hara added. On one hand, many organizations still bundle new features into larger releases, which require some level of system integration to deliver. On the other hand, even agile teams that test continuously (using TDD and BDD, for example) still typically focus on the functional aspects of the user stories, leaving out performance, usability and other non-functional requirements. The latter still need to be verified at some point before delivery, hence the need for coordination and planning.
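To make the distinction concrete, here is a minimal Python sketch in which the user story, the data model and the performance budget are all invented for illustration: a TDD-style test covers a story's functional behaviour, while a non-functional requirement needs its own, separately planned check and is easy to forget if nobody owns it.

```python
import time

# Hypothetical user story: "As a customer I can look up an order by id."
def find_order(orders, order_id):
    """Linear lookup -- functionally correct, but says nothing about speed."""
    for order in orders:
        if order["id"] == order_id:
            return order
    return None

def test_functional():
    # The kind of test a TDD/BDD workflow naturally produces for the story.
    orders = [{"id": 1, "total": 10}, {"id": 2, "total": 20}]
    assert find_order(orders, 2)["total"] == 20

def test_performance():
    # A non-functional requirement (invented budget: worst-case lookup in
    # 100,000 orders stays under half a second) needs a separate check that
    # someone has to plan and schedule explicitly.
    orders = [{"id": i} for i in range(100_000)]
    start = time.perf_counter()
    find_order(orders, 99_999)
    assert time.perf_counter() - start < 0.5

test_functional()
test_performance()
```

The functional test passes regardless of how slow the lookup is, which is exactly the gap O'Hara describes: without planned non-functional testing, nothing fails when performance degrades.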
O'Hara recommends appointing test champions or consultants who can work with, and advise, multiple teams on how to fill the gaps in test competencies. The ultimate goal is to reach full integration of testing in the cross-functional team, as in scenario C below.
O'Hara warns that multiple working practices are required to reach this level of test integration: an acceptance-test-driven approach; including test tasks (such as test environment/tool setup or exploratory testing) in sprint planning; breaking large stories into smaller ones that can be tested early in the sprint; limiting work in progress to a few stories at a time; avoiding testing or verification as a last-step activity (a "verification" column on the board, for example, is a bad smell); effective and frequent backlog refinement; a focus on code quality (coding standards, reviews, TDD, BDD); and adhering to a strict definition of done (even in times of stress).
Discussing requirements from a business perspective during backlog refinement (and, to a lesser extent, in sprint planning) is also a crucial activity. According to O'Hara, for this to be effective the product owner should bring to the table not only high-level features but also an initial set of acceptance criteria, which the team can then discuss further and refine into small, clearly scoped stories.
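An acceptance-test-driven sketch of that workflow might look like the following Python example, where the story, the criteria and the discount table are all invented for illustration: the acceptance criteria agreed with the product owner are written down as executable checks before the implementation exists.

```python
# Hypothetical story: "A discount code reduces the basket total."
# Acceptance criteria agreed during backlog refinement:
#   1. a valid code applies its percentage discount
#   2. an unknown code leaves the total unchanged

DISCOUNTS = {"SPRING10": 0.10}  # invented discount table

def apply_discount(total, code):
    """Implementation written after the criteria below were agreed."""
    return round(total * (1 - DISCOUNTS.get(code, 0.0)), 2)

# The acceptance criteria become the executable definition of the story:
assert apply_discount(100.0, "SPRING10") == 90.0   # criterion 1
assert apply_discount(100.0, "BOGUS") == 100.0     # criterion 2
```

Because each criterion maps to one small, checkable behaviour, oversized stories surface early: a story whose criteria cannot be expressed this way is a candidate for splitting.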
Finally, retrospectives should focus on improving the definition of done to help manage technical debt and ensure sufficient internal (code) quality as well as external (functional and non-functional) quality, says O'Hara.