
A Journey in Test Engineering Leadership: Applying Session-Based Test Management

Key Takeaways

  • Session-Based Test Management is an effective way to manage testing by focusing on activities testers perform in test sessions.
  • Testing activity is broken up into three task categories: test execution, bug investigation & reporting, and setup & administration.
  • Debriefing and analysis of test reports and task breakdown metrics give us visibility into the testing process.
  • The framework keeps the team organized so that they can continually learn from each other.
  • Test reports and debrief meetings reveal issues that slow down testing, giving the test manager a chance to improve the process.

In late September 2018, I started managing two testers, both straight out of code bootcamp and in their first tech job. We were embedded in a close-knit team of developers and product managers, tasked with developing and testing a consumer product.

As a new manager, I wanted to make sure the testers did good testing and learned from their mistakes. It was important to me to create a psychologically safe space for testers to learn and talk about testing - a space safe from the pressure of deadlines or unrealistic expectations. I also wanted data. I wanted to know what, where, why and how we were testing so that our work could become visible to us, the product and development team, and to management. If the risks we were finding during testing were reported and discussed in a systematic way and visible to product stakeholders, testers and test leads could help inform more timely product quality decisions.

The central motivation for testing is risk - the potential for problems that threaten the value of the product. It is the role of testing to uncover risks in the product and bring those risks to the attention of product stakeholders who make the product decisions.

Testing is a complex activity, just like software engineering or any craft that takes study, critical thinking and commitment. It is not possible to encode everything that happens during testing into a document or artifact such as a test case. The best we can do is report our testing in a way that tells a compelling and informative story about risk to people who matter, i.e. those making the decisions about the product under test.

Earlier that September, I attended a Rapid Software Testing (RST) course where I learned about the context-driven methodology developed and taught by James Bach and Michael Bolton. Afterwards, as I studied the RST course materials, I read an article by Jonathan Bach about Session-Based Test Management, a framework he and James Bach conceptualized and applied in the early 2000s at Hewlett-Packard. The framework had all the elements I was looking for: a method for organizing, reporting, talking about, and measuring testing activity. I read through the article a few times and visualized the implementation. It was easy for me to imagine how it could work.

Going into it, I didn’t really have a plan other than to start and learn as we go. While that scares many people, the idea of not knowing where all of this would take us made it more fun.

To understand how we approached this challenge, let’s talk about the method first.

Session-Based Test Management

Session-Based Test Management (SBTM) is an activity-based test management method organized around test sessions. The method focuses on the activities testers perform during testing.

There are many activities that testers perform outside of testing, such as attending meetings, helping developers troubleshoot problems, attending training, and so on. Those activities don’t belong in a test session.

To have an accurate picture of only the testing performed and the duration of the testing effort, we package test activity into sessions.

Here is James Bach’s definition of a test session in Rapid Software Testing/Session-Based Test Management:

A test session is a period of time during which a person performs testing, and that ...

  1. Is virtually uninterrupted
  2. Is focused on a specific mission
  3. May involve multiple testers; may or may not also involve automation
  4. Results in a session report document of some kind  
  5. Is debriefed by the leader, unless the leader performs the session

Let’s break down each part of that definition.

A test session is a period of time during which a person performs testing

A test session is focused only on testing activity. Sessions allow us to differentiate between testing time and everything else, so that we can develop an accurate picture of the testing effort.

Testing activity in a session can be broadly divided into three task categories:

  • Test Execution (T time) is the time when the tester is actively hunting for bugs or otherwise doing any activity that has a reasonable chance of uncovering a bug.
  • Bug Investigation & Reporting (B time) is activity in a session focused on investigating and reporting a specific bug, which interrupts the course of test execution.
  • Setup & Administration (S time) is any activity in a session that is required to fulfill the charter, but which preempts or interrupts bug finding and bug reporting. Setup and admin activity includes test design, equipment configuration, reading documentation, writing session reports, etc.

A test session is the basis of measurement in SBTM. We use these task breakdowns to derive Task Breakdown Metrics for every session. To derive accurate measurements, it is important to focus session time only on activities that fall into those three task categories. In the session report, the tester records the length of the session and estimates the percentage of session time they spent on each type of task. I discuss more about metrics and analysis later in this article.
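To make the arithmetic concrete, here is a minimal sketch in Python of how a session’s duration and percentage estimates translate into per-category time; the function name and the sample numbers are illustrative, not part of SBTM itself.

```python
# Illustrative sketch: derive per-category time from a tester's estimates.
def task_breakdown_minutes(duration_min: float, t_pct: float,
                           b_pct: float, s_pct: float) -> dict[str, float]:
    """Convert a session duration and T/B/S percentage estimates
    into minutes spent in each task category."""
    if abs(t_pct + b_pct + s_pct - 100) > 0.01:
        raise ValueError("T, B, and S percentages should sum to 100")
    return {
        "test_execution": duration_min * t_pct / 100,
        "bug_investigation": duration_min * b_pct / 100,
        "setup_admin": duration_min * s_pct / 100,
    }

# A two-hour session estimated at 70% T, 20% B, 10% S:
print(task_breakdown_minutes(120, 70, 20, 10))
# {'test_execution': 84.0, 'bug_investigation': 24.0, 'setup_admin': 12.0}
```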

Is virtually uninterrupted

We want the tester to spend most of their session time performing testing that fulfills the testing mission. Ideally, the tester will have minimal interruptions during the session. This means turning off Slack, emails, and other distractions. Interruptions and changes in priorities do happen, and the tester has the power to suspend or abort a session and come back to that testing mission at a later time.

Is focused on a specific mission

A test session always has a mission, or a "charter", where we specify what we are testing and what problems we are looking for - for example, a charter might read "Explore the checkout flow and look for problems handling invalid payment data". Everything that happens in a session is a result of a responsible tester making a series of judgments and decisions to fulfill a specific charter. If, at the end of a session, the tester has not met their mission, more sessions may be needed to complete the testing. Testers learn important chartering and planning skills as they become experts at this method. In the session debrief, we talk about the mission and evaluate whether it was completed or not.

May involve multiple testers; may or may not also involve automation

Test sessions can involve many people or just one person. Often, we had developers on our team help us out with a testing problem during a session. In this scenario, developers are supporting testers. They’re there to assist, but they’re not the ones accountable for the testing.  

Results in a session report document of some kind  

At the end of a test session, the tester produces a session report, which contains their findings and what happened during the session. The report is where the tester tells the story of their testing: what they feel is important for the team to know.

There is a learning curve with good test reporting. When we first started reporting, we recorded as best we could what we thought was important. Eventually, we started to use a structured, yet flexible reporting format. The report included the following areas: tester name, test data, product area tags, test notes, a list of bugs, a list of testability issues, the duration of the session, and task breakdown metrics.
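As a rough illustration, that report format could be captured in a structure like the following Python sketch; the field names are my paraphrase of the areas listed above, not the team’s actual template.

```python
# Illustrative sketch of a structured, yet flexible session report.
from dataclasses import dataclass, field

@dataclass
class SessionReport:
    tester: str                    # tester name
    charter: str                   # the mission of the session
    test_data: str                 # test data used in the session
    product_areas: list[str]       # product area tags
    test_notes: str                # strategy, plan, risks, tests and results
    bugs: list[str] = field(default_factory=list)
    testability_issues: list[str] = field(default_factory=list)
    duration_min: int = 0          # session duration in minutes
    t_pct: int = 0                 # % of session in test execution
    b_pct: int = 0                 # % in bug investigation & reporting
    s_pct: int = 0                 # % in setup & administration
```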

The test notes section contains a wealth of information - test strategy, test plan, risks, tests performed, and the results of those tests. I want to know not only the test findings, but also the "why" behind the tester’s thinking. What strategy is driving the design of their tests? What else do they think needs to be tested that they didn’t have time to test? How do the risks they perceive impact product quality? What did they do about the risks they found in the product?

The key benefit of reporting is increased visibility into the risks found during testing. The testers on my team used their session reports to support conversations with developers and product managers. In meetings, they would often pull up their session reports when discussing a story they tested and they would show the team the bugs they found and what they thought were potential risks to product quality. This started useful discussions about the product and how the team could improve it. The product manager used the reports to decide what risks to turn into improvement stories. The whole team made more timely and informed decisions about product quality.

Is debriefed by the leader, unless the leader performs the session

When the session or sessions are complete, the lead meets with the testers and debriefs on the testing performed. New or inexperienced testers benefit from debriefing right after or soon after they are done with a session. More experienced testers need less frequent debriefing, but it is still important that it happens regularly.

When debriefs do not happen, the quality and substance of the test reports tend to suffer because the testers have not had a chance to voice their reasoning to others. Hearing yourself speak about your testing often reveals other information that could be useful to report. The team has an opportunity to ask you questions and learn from your testing. The point of the debrief is to communicate with each other about the information that matters and to learn from each other.

Factors such as tight deadlines, competing priorities or a lack of domain knowledge contribute to how often and how long a team will debrief. For example, if a team is new to a product and asked to test it, we may spend more time debriefing initially to maximize knowledge sharing as we all learn the space together. As we get more familiar with the product, we may debrief less often or cover more work in one debrief.

On my team, the testers became really good at telling their testing stories, so it was easy for me to read their completed session reports and then ask them questions when we met to debrief. It saved me a lot of time, and we could focus on the riskiest stuff. There are other options too. Slack is a great forum for chatting about the testing. Having a demo with the team to go over the findings in a test report could also count as a debrief. It’s important to make it your own. There is no one way to debrief, because every team has different needs and priorities.

Task Breakdown Metrics

When we first started, we didn’t focus much on the concept of a session or on metrics. Instead, we focused first on building our test reporting and debriefing skills. Many testers are not proficient in test reporting at first. Once the testers had been practicing for a while, we started introducing more organization into the process. Building the practice incrementally helped us not feel overwhelmed and made the transition to a more structured session-based process much easier. The testers were active participants in changing how we worked, rather than passive recipients.

Once we were at a point where we felt ready to do so, we started estimating task breakdown metrics at the end of each test session.

Let’s say I’ve just completed four hours of testing in a session. I spent a lot of that time bug hunting (T time), then came across a bug, investigated it, and reported it to the developer (B time). In the meantime, I thought of another test idea, informed by new knowledge from the bug investigation. I modified the test data to test out my new idea, set up a new test (S time), and found more problems which I then reported (T & B time).

At the end of the session, I evaluated how I spent my session time. I estimated that I spent most of that four-hour session actively bug hunting, stopped to investigate a couple of bugs, and had a relatively easy time with setup and reporting. In my report, I estimate that I spent about 80% of my session in T time, 10% in B time, and 10% in S time.

When we’re testing, we move between these tasks fluidly. It is not necessary for a tester to painstakingly measure each minute of session time. An estimate is good enough. Conversations about these measurements helped our test team gain a collective understanding of how we were spending session time, and what issues stood in the way of productive testing. Managers and leads can also use data collected from the test reports, the debriefs with the testers, and the session metrics to report on risk to management stakeholders.

Over time, our team became more aware of how we spent our session time. Our measurements became more accurate as we gained a deeper understanding of the task breakdowns and we decided how to categorize what we were doing in our test sessions. These measures became useful to us.

Analyzing and using session data

One of the main benefits of testing in sessions is that testers produce many reports, containing data that is helpful to the project and to management. For each session, I gathered and analyzed the task breakdown metrics, session duration, bugs, issues, and risks found and entered the data into an Excel document. I normalized the data across sessions over time, by product area, type of testing, quality, testability, and risk dimensions. I counted up all the hours we had available per person per week, and compared that to the total session duration for the same time period. I imported the data into PowerBI to play with data visualization tools. I created dashboards and learned more about how to visualize data sets and how to tell a compelling data story. I also made risk lists updated with the latest information, which I could then share with various stakeholders.
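As a sketch of that kind of analysis, assuming the session reports are exported as one row per session, the aggregation could look something like this in Python with pandas; the file name, column names, and sorting are illustrative:

```python
# Illustrative sketch: aggregate session report data by product area.
import pandas as pd

# Hypothetical export: one row per session with duration and T/B/S estimates.
sessions = pd.read_csv("session_reports.csv")

# Convert each session's percentage estimates into minutes.
for pct_col, min_col in [("t_pct", "t_min"), ("b_pct", "b_min"), ("s_pct", "s_min")]:
    sessions[min_col] = sessions["duration_min"] * sessions[pct_col] / 100

# Total session time and per-category time for each product area.
by_area = sessions.groupby("product_area")[
    ["duration_min", "t_min", "b_min", "s_min"]
].sum()

# Share of each area's time spent in setup & administration; a high share
# may point to testability problems worth raising with the team.
by_area["s_share"] = by_area["s_min"] / by_area["duration_min"]
print(by_area.sort_values("s_share", ascending=False))
```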

Looking at the data I had analyzed, I had a pretty good idea of how much testing was happening out of all the hours we had available, the type of testing that was happening, the product areas we were covering, and the issues and bugs in each area. I broke down each product area by T time, B time, and S time. A high S time for a product area is an indicator that something about the testing process or the product made that area harder to test. High T times and low B times give us more confidence that those areas need less testing, so that we can focus on areas that need more.

High B times indicate those parts of the product may be buggier and need deeper testing. Many surface bugs may indicate an area where deeper bugs may be discovered once the surface bugs are fixed. This is also a cue to have a larger discussion with the product and development team members about those problem areas, so that the team has a chance to resolve them as early as possible or so plans can be made to improve that area.

If the tester spent most of a session investigating bugs, they may not have actually met their testing mission, because B time interrupts T time. If we must stop often to investigate and report bugs, we spend more of the session in B time, which lessens the time we have to cover that area deeply. In this case, the team could decide to perform further test sessions in that area to cover all that still needs to be tested.

By evaluating S time, we can get a good idea of what areas have testability problems and where we may need additional coverage. A testing environment may be unavailable for a part of a test, or some test data may be hard to set up, thus preventing T time. Metrics for that session would show a higher S time and would suggest that more test sessions are needed to meet the mission. The manager can evaluate the test report data for sessions with high S times and talk to testers about setup issues that need to be resolved, to speed up testing the next time around.

Make it your own

SBTM can be implemented in many ways. What I described is my particular implementation. One of the best things about SBTM is that it is expected and encouraged that you will change the details of how you implement it. Your implementation will likely not work for people in other contexts. Use what makes sense for your context and change what does not.

Having tried it and modified it to different contexts I have worked in, I have found Session-Based Test Management to be an effective method for managing testing. By making it our own, we were able to gain more visibility into our testing. Having a structured yet flexible approach to test management allowed us to make better, more timely decisions about the testing, and gave us more opportunities to influence quality decisions.

About the Author

Djuka Selendic is a context-driven software tester living in Milwaukee, WI. She is currently a mobile app test lead for Equinox Media.