
The Value and Purpose of a Test Coach


Key Takeaways

  • Like climbing a mountain, an organisation’s automated testing culture is somewhere between base camp (“we just write code”) and the summit (“we’re domain oriented and quality focused”). The test coach’s job is to help them reach the summit.
  • Even teams that are writing user stories and acceptance tests often don’t realise that they’re not yet at the summit; they’re forgetting to explore error conditions in depth.
  • A good test coach spends a lot of time listening and adapting the process to the project, and can learn as much from the team as the team learns from them.
  • The test coach role is demanding and requires a broad set of skills, ranging from programmer and tester to “people person” and agile coach, even salesperson. But…
  • The reward is to see an organisation transition into one that’s domain oriented and quality focused, and working in a particular way to achieve this.

Automated testing is a culture that requires careful nurturing.

Actually, let me qualify that. Domain oriented, quality-focused automated testing is a culture that requires careful nurturing. It’s a state of mind, and either an organisation has it or it doesn’t.

So wouldn’t it make sense for teams to have someone who can coach them on the testing mindset, while also being actively involved in the team’s day-to-day work?

It’s almost like we need some kind of (air quotes…) “test coach”…

The test coach role is a fundamental part of Domain Oriented Testing (DOT). It’s a way of instilling in the team a sense of product quality and pride in their code, combined with a particular way of working that results in a system more in tune with the business domain and requirements.

I’ve been developing and “living” the test coach role in a number of organisations, most recently (and significantly) at London-based fintech pioneers 11:FS. For their cloud-hosted banking services, software quality requirements are stringent to say the least, and their deadlines are suitably ambitious. So far, DOT has been providing great results for them.

How much help an organisation needs can vary, depending on how far along they already are.

Climb every mountain

Depending on where the organisation’s testing culture already is, the journey might start right at the base of the mountain, or some way up:

Where on this journey is your organisation right now?

  1. Nothing here needs fixing (base camp): I’ve written some code, what’s not to like?
  2. TDD FTW: I’m about to write some code, but first I’ll write a failing test
  3. Happy path: Our customers are shaping the user stories. Let’s use BDD to turn the stories into acceptance tests
  4. Domain oriented (the summit): Let’s get the domain expert thinking about exceptions, alternate flows, unhappy paths, and what he/she wants to happen in each case. Then let’s turn these into test scenarios
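
To make the contrast between stages 2 and 4 concrete, here’s a minimal sketch in Java/JUnit 5 (the Account class is invented and deliberately trivial). The first test is code oriented; the second captures a domain scenario, including the unhappy path a domain expert would specify:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;

    class WithdrawalTest {

        // Minimal domain stub so the sketch is self-contained
        static class Account {
            private long balancePence;
            Account(long openingBalancePence) { this.balancePence = openingBalancePence; }
            void withdraw(long amountPence) {
                if (amountPence > balancePence) {
                    throw new IllegalStateException("insufficient funds");
                }
                balancePence -= amountPence;
            }
            long balancePence() { return balancePence; }
        }

        @Test // Stage 2: code oriented - "leave no function untested"
        void withdrawReducesBalance() {
            Account account = new Account(10_000);
            account.withdraw(2_500);
            assertEquals(7_500, account.balancePence());
        }

        @Test // Stage 4: domain oriented - the unhappy path the domain expert specified
        @DisplayName("A withdrawal exceeding the balance is rejected and the balance is unchanged")
        void withdrawalExceedingBalanceIsRejected() {
            Account account = new Account(10_000);
            assertThrows(IllegalStateException.class, () -> account.withdraw(15_000));
            assertEquals(10_000, account.balancePence());
        }
    }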

I’ve worked at many places that were languishing at base camp (stage 1), particularly 15-20 years ago. Back then, much of the industry was stuck there too. At one particular start-up in the early 2000s, I heard the question more than once: “Have you really known a unit test to catch a bug?”

The question by itself reveals why team culture has to change for automated testing to really work. The developers viewed unit tests cynically, as a supposed panacea for catching bugs after the software was written. There are so many things wrong with this… to name a few:

  1. Unit tests are really not meant to be regression tests; they’re too fine-grained
  2. You write unit tests to catch bugs as you’re writing the code under test. If you’re adding unit tests afterwards, it’s already too late
  3. If you want tests that are effective at catching bugs afterwards, you need a more “zoomed-out” form of tests - integration tests, in other words
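
To illustrate the difference in granularity, here’s a minimal sketch in Java/JUnit 5 (the class names are invented). The tests drive a service through its public boundary only, so they survive internal refactoring - which is exactly what makes them useful as a regression net. In a real project the service would be wired with its real collaborators (database, message bus and so on) via a test harness, rather than stubbed inline:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class PaymentServiceRegressionTest {

        // Stand-in for a service that internally composes a validator,
        // a ledger and a notifier; the tests never reach into those parts.
        static class PaymentService {
            String submit(String fromAccount, String toAccount, long amountPence) {
                if (amountPence <= 0) {
                    return "REJECTED";
                }
                return "ACCEPTED";
            }
        }

        @Test
        void validPaymentIsAcceptedThroughThePublicBoundary() {
            PaymentService service = new PaymentService();
            assertEquals("ACCEPTED", service.submit("acc-1", "acc-2", 5_000));
        }

        @Test
        void nonPositiveAmountIsRejectedAtTheBoundary() {
            PaymentService service = new PaymentService();
            assertEquals("REJECTED", service.submit("acc-1", "acc-2", 0));
        }
    }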

In a well-functioning automated testing culture, many kinds of bugs never appear in the first place. It’s a bit like the compiler eradicating type-level bugs by preventing them at compile-time. Domain oriented tests (which we’ll come to) also help to prevent errors of understanding from ever appearing. So “did you ever see a test catch a bug?” is clearly meaningless here: you would need to trace an alternate timeline, in which the same project didn’t have tests, to see whether bugs would have been introduced.

The start-up team I mentioned did eventually progress to stage 2 on the mountain climb… to the extent that they began writing JUnit tests. However, it was a timeboxed activity, showing that they still didn’t really “get it”. Not by coincidence, they went out of business less than a year later. They could never have progressed to stage 3, though, as they didn’t have clear business requirements to base their tests on.

Thankfully, the industry has matured considerably since that time, and on the rare occasions you might encounter a team who are still happy to remain at base camp, it’s increasingly a surprise… like seeing a lesser-spotted woodpecker in the wild.

Many teams remain at stage 2, even today. The developers are running the development culture (which kinda makes sense). They’re using TDD - or at least writing unit tests as they go along. They’re thinking in terms of code coverage - “Leave no function untested.” As a result, their tests are oriented towards the code.

If you’re introducing a more business-focused level of testing into such a team, you’ll likely face resistance, as the developers won’t really see the point. I’ve found they tend to see acceptance tests as duplicated effort. Sometimes the only recourse is to begin writing business-focused tests and show them the difference. Gradually win them over.

The climb from stage 3 to the summit is steep, because the “cultural revolution” must leap out of the developer pit and into the larger organisation. Developers are generally pretty receptive towards improving their art, and are often the ones banging the drum for change. But outside this group, the changes must be framed in terms of P&L and business prioritisation. This is often a tough sell, even if the results speak for themselves.

Another reason for the steep climb is that teams often stop improving. They believe stage 3 is the summit. After all, they’re writing automated acceptance tests and involving product owners in working through each user story.

But they may not realise that they’re leaving big gaps in the acceptance tests, as their focus is on a single desired outcome for each story. Much of the steepness beyond stage 3, then, comes from a form of passive resistance: teams are unaware of the “bigger picture” waiting for them at the top.

Luckily, when they do reach the mountain’s summit (stage 4), they’ll understand why being domain oriented makes such a difference to the tests and the overall project.

I’m using “domain oriented” to denote both driving tests, code and documentation from an executable business domain model, and exploring business scenarios in great depth before any code for a particular story is written.

The test coach’s job is to help everyone in the project reach the summit. But the test coach can’t carry everyone up the mountain. He or she will need some level of buy-in, cooperation and all-round goodwill from the team, even when they’re all still at base camp and insistent that nothing about their project is broken.

So, how does the test coach get everyone to the summit?

How to be a test coach

The role is similar to software development engineer in test (SDET), but with an expanded responsibility: to share knowledge of, and enthusiasm for, the discipline they’re introducing.

While an SDET tends to reside in one team at a time, the test coach’s role can span multiple teams.

Similar in principle to a Scrum master or agile coach, the test coach isn’t necessarily a specialist. They’re simply someone who champions the cause: working in the trenches to help instil cultural change among the developers where needed; working with the product owners, domain experts and management to expand the stories into “lightly structured” scenarios; and collaborating on development work and the writing of tests.

A test coach also listens to each team, and adapts the testing process if it isn’t working.

So as a test coach, with the “mountain” metaphor in mind, a typical day-in-the-life for you could involve any of these activities:

  • Following BDD, or at least a BDD-like process, and writing automated acceptance tests - Cucumber, FitNesse, Gauge, etc.
  • Writing and refactoring code, along with unit tests and component tests
  • Encouraging Three Amigos-style sessions where the BA, developer and tester expand a user story into its details - scenarios/examples, and so forth; each stakeholder provides their unique perspective on the problem
  • Listening and learning — the other people on the project have skills and experience too; use that to improve yourself, but also to adapt the process to the people and the project itself.

    For example, I found this at 11:FS, where they’ve embraced a reactive, CQRS-based microservices architecture. Initially, my goal was to introduce e2e tests for all the business scenarios, including unhappy paths; but this would have made the acceptance tests too unwieldy and slow to run, or even fragile, as an individual microservice deployment could break unrelated tests. Instead, we kept the happy path scenarios end-to-end, and wrote the majority of unhappy paths as component-level tests within each microservice. The result was comprehensive requirements coverage, with a test suite that still completes quickly.

    In other words, we avoided the notorious “inverted test pyramid” antipattern… we actually ended up with more of a test diamond, with the majority of testing going on at the service level. That may not suit every project, but in this particular case it’s been ideal.
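
    To illustrate that component-level style - the names below are invented, not 11:FS code - an unhappy-path test runs in-process against a single microservice’s command handler, with no network and no other services involved, so dozens of such scenarios stay fast and isolated. A minimal sketch in Java/JUnit 5:

      import static org.junit.jupiter.api.Assertions.*;
      import java.util.List;
      import org.junit.jupiter.api.Test;

      // Component-level test (Java 16+): exercises one microservice's command
      // handler in-process, with no network and no other services involved.
      class TransferCommandHandlerComponentTest {

          record TransferCommand(String from, String to, long amountPence) {}

          static class TransferCommandHandler {
              // Accepts a command and returns the domain events it would emit
              List<String> handle(TransferCommand cmd) {
                  if (cmd.from().equals(cmd.to())) {
                      return List.of("TransferRejected:sameAccount");
                  }
                  if (cmd.amountPence() <= 0) {
                      return List.of("TransferRejected:nonPositiveAmount");
                  }
                  return List.of("TransferAccepted");
              }
          }

          @Test
          void transferToTheSameAccountIsRejected() {
              var handler = new TransferCommandHandler();
              var events = handler.handle(new TransferCommand("acc-1", "acc-1", 1_000));
              assertEquals(List.of("TransferRejected:sameAccount"), events);
          }
      }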
     
  • Nudging the developers towards writing their tests in a domain oriented way: that is, testing business behaviour rather than the innards of their code

    A good example of this: recently I’ve been working closely with Parallel Agile (PA), a company with close ties to the University of Southern California (USC). PA’s main product is CodeBot, a cloud-based enterprise code generator. In fact, CodeBot is really more of an executable architecture generator, as it creates and deploys a complete server platform from a UML domain model. Such a “rules-intensive” project requires a comprehensive suite of tests, especially as the product continues to be extended with new capabilities.

    These tests include more low-level unit tests than I would normally write, as the underlying generator “engine” is so complex. However, we kept the unit tests focused on the business domain by feeding the engine example class models from a number of business and technical domains. This approach also extended to the higher-level service tests, of course.
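
    To show the style - and to be clear, the generator interface below is invented for illustration, not CodeBot’s actual API - the input to each test is a small example class model from a business domain, and the assertion is about the business-facing output rather than the engine’s internals:

      import static org.junit.jupiter.api.Assertions.*;
      import java.util.List;
      import java.util.Map;
      import org.junit.jupiter.api.Test;

      class GeneratorDomainTest {

          // Invented stand-in for a generator engine: not CodeBot's actual API
          static class ModelGenerator {
              // Maps each domain class in the model to a set of REST endpoints
              Map<String, List<String>> generateEndpoints(Map<String, List<String>> classModel) {
                  var out = new java.util.HashMap<String, List<String>>();
                  classModel.forEach((name, attributes) ->
                          out.put(name, List.of(
                                  "GET /" + name.toLowerCase(),
                                  "POST /" + name.toLowerCase())));
                  return out;
              }
          }

          @Test
          void bookingDomainModelYieldsEndpointsForEachDomainClass() {
              // The test input is an example class model from a business domain...
              var model = Map.of(
                      "Hotel", List.of("name", "city"),
                      "Booking", List.of("checkIn", "checkOut"));
              // ...and the assertion is about the business-facing output
              var endpoints = new ModelGenerator().generateEndpoints(model);
              assertTrue(endpoints.get("Booking").contains("POST /booking"));
          }
      }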
     
  • Nudging the BAs and product owners towards mapping out the business domain, and ensuring each story has been expanded into sufficient detail for a developer to pick it up, and for the acceptance test scenarios to be written. Doing this well involves exploring unhappy path scenarios far more than the happy paths. People like to focus on the goals (the fun stuff, really) and need frequent reminding to also think about edge cases and what they want to happen when things go wrong - or just happen differently.

    Among other benefits, this really helps improve sprint estimating. At one organisation a couple of years ago, we’d noticed a trend where the sprint burndown chart - showing the estimated amount of work remaining in the current 2-week sprint - would “burn down” nicely for the first week, then reverse direction and burn back up during the second week. The amount of work remaining often finished higher than when it started!

    It was only when we started to expand each story into test scenarios, and to analyse those scenarios in detail, that the problem became apparent. The team was estimating each story based on incomplete knowledge… we’d been doing “happy path estimating”, meaning that probably 90% of the functionality in each story - error handling, business-centric alternate courses and so on - was missing from the estimate. This problem is surprisingly common, leading to frequent under-estimation of tasks and cost overruns; yet it’s quite easily addressed.

    This goes to show that a domain oriented testing approach can have a positive effect on the overall health of the project, not just on the state of the code and the tests.
     
  • Nudging the testers (or developers-in-test) in a particular direction, so that their acceptance tests are driven directly from the story scenarios, and similarly explore unhappy paths far more than the happy paths
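
    One way to drive the tests directly from the expanded story is a parameterised test in which each row is one scenario. The rules and names below are invented for illustration; note the ratio of unhappy to happy paths. A minimal sketch in Java/JUnit 5:

      import static org.junit.jupiter.api.Assertions.*;
      import org.junit.jupiter.params.ParameterizedTest;
      import org.junit.jupiter.params.provider.CsvSource;

      class AccountOpeningScenarioTest {

          // Invented stand-in for the behaviour under test
          static String openAccount(int applicantAge, boolean idVerified, boolean sanctioned) {
              if (applicantAge < 18) return "REJECTED_UNDERAGE";
              if (!idVerified) return "PENDING_ID_CHECK";
              if (sanctioned) return "REJECTED_SANCTIONS";
              return "OPENED";
          }

          @ParameterizedTest(name = "age={0}, idVerified={1}, sanctioned={2} -> {3}")
          @CsvSource({
              "25, true, false, OPENED",              // the one happy path
              "17, true, false, REJECTED_UNDERAGE",   // unhappy: underage applicant
              "25, false, false, PENDING_ID_CHECK",   // alternate: ID check incomplete
              "25, true, true, REJECTED_SANCTIONS"    // unhappy: sanctions-list hit
          })
          void eachStoryScenarioHasADefinedOutcome(
                  int age, boolean idVerified, boolean sanctioned, String expected) {
              assertEquals(expected, openAccount(age, idVerified, sanctioned));
          }
      }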

Sure, that’s a lot of nudging. It might even seem like coercion, but the actual practice is far from it. Like moving a huge mirror inch by inch, you’re gradually coaxing the team’s frame of reference until your message suddenly clicks into place. This also helps stakeholders reach the right conclusions themselves, which further reinforces the result.

Of course, you can’t necessarily move the mirror on your own; this is why people’s buy-in is so important. Stakeholders will buy into the principles and goals of a process if they can see that it’ll make their lives easier, and result in a higher quality product.

Overall, the test coach is a demanding, highly skilled role. You must have a good grasp of all the disciplines you’re “nudging” the stakeholders towards. You must have great people skills, or at least a knack for presenting things so that people realise you’re on their side, working with them.

For example, at one company I was presenting the new automated testing strategy that we’d collaborated on, and murmurs went around (particularly from upper management) that this sounded “too much like QA”. The company wanted code quality and tests, but they resolutely didn’t want a “QA department” as such. It took some digging to find out why; the solution, it turned out, was more a case of strategic naming than of making any huge changes.

“We want testers, just don’t call them QA”

In the current agile climate, QA has become a dirty word for many organisations. However undeserved the reputation, for many people QA is now synonymous with waterfall, big-bang integration, process overload with long forms to fill out, and a department separated from the developers, promoting a “sling it over the fence to the testers” approach to software delivery.

But let’s be honest, a test coach’s purpose is very similar to that of QA: to introduce and maintain a process that gets the team focused on software quality.

How they go about it is different though. Like an SDET, the test coach works within each development team, involved in their day-to-day activities. There’s still a clear separation of responsibilities, but there’s also a shared understanding of any underlying problems, the business domain, and of what’s actually being delivered.

Test coaches encourage a highly integrated setup, with product owners, testers and developers all working together.

So even though a test coach fulfils the same set of goals as QA, they provide a way to introduce quality-oriented cultural changes into an “agile infected” organisation, while operating within the practices and principles that have made agile so popular. Is this post-agilism, perhaps?

Interested to know more? Join the LinkedIn discussion group to help shape DOT as it continues to develop.

About the Author

Matt Stephens has worked in software development for nearly 30 years, in a mix of startups and large financial organisations mainly in London. He's written a number of books on software design and testing, most recently Parallel Agile with Barry Boehm, Doug Rosenberg and Charles Suscheck. He's currently putting together a book-length version of Domain Oriented Testing (DOT), the software process he talks about in this article. You can find him at the DOT group on LinkedIn.
