Growing an Experiment-Driven Quality Culture in Software Development

Key Takeaways

  • In complex environments there’s no clear path towards achieving your vision and mission
  • We can get inspiration from other people’s experiences while not letting ourselves be constrained by them, as these experiences might not apply in our own context
  • Designing and running our own measurable experiments enables us to learn what works in our context and adapt our next steps according to the results
  • Experiments should be safe to fail; even if our hypothesis cannot be proven, we will still have learned valuable insights
  • Staying aware of our own biases, empowering people to own their journey and minding the system helps with growing a quality culture based on experimentation 


Have you ever faced a challenge at work that you weren’t sure how to tackle? You had a clear vision and mission, yet no one had done it before, and you asked yourself how to get there? This is part of my everyday experience in tech, on a team level as well as on an organizational level. At FlixMobility Tech, every team and every product presented a different context, so how could we solve this? Experiments to the rescue! In a complex environment like software development, no one can tell in advance what might work, so we have to try things out. Read on to learn about key challenges, insights, and lessons, and get inspired for your own path into experimentation.

Experiment-driven quality culture

On my journey of growing an experiment-driven quality culture, I learned that continuous improvement is key. It starts with creating transparency about the status quo as well as the pain points and needs of individuals, teams, leadership, and the system. 

Once we’re clear about where we are and what we want to improve, the next step is raising awareness of the options we have at hand. We can then tackle the identified challenges through experimentation, trying things out in our specific context and seeing where the results lead us. 

Taking these insights and having them inform our next experiment allows us to figure out iteratively what gets us closer to our mission and vision. Intentional continuous learning and applying what we learned enables a quality outcome.

Culture change

A few years back, we faced quite a few challenges around our product teams’ testing and quality cultures. We lacked transparency into how the teams were doing. Were they succeeding, and might they be able to share what worked for them to inspire others? Were they struggling and in need of support? We knew that knowledge was not commonly shared among teams, despite several sharing initiatives. We also knew we wanted to scale, so we’d rather tackle this challenge before it tackled us. Our mission was to improve the testing and quality culture of our product teams by leveling up knowledge, skills, and practices. But how to get there, how to trigger this culture change? In a complex environment, we didn’t have a clear path towards our mission. We needed to experiment to find out.

This challenge initiated a whole series of experiments with various product teams. We started out with a big “experiment of experiments” that was very insightful, yet in the end too heavyweight to scale. We then came up with a smaller experiment with the teams, focusing on the aspects where we had seen the biggest impact before. It looked like a promising approach, but then the pandemic hit and our priorities changed. Nonetheless, I continued experimenting, this time focusing on the underlying needs of the people we had worked with: the product teams, my peers in tech leadership, as well as our experimentation initiative group.

Each time we learned from the last experiments to inform the next, continuously trying things out to get closer towards our mission. It’s a journey, after all.

Team experiments

Let’s take a concrete example to see what this looked like in practice. When working with the teams, we first helped them create transparency on their status quo, including their pain points, and then raised awareness of the options at hand to improve their practices. One of the teams identified that they lacked an early testing strategy as well as in-depth exploratory testing. After brainstorming potential solutions, they came up with the following hypothesis:

We believe that bringing in a team-external party to a planned ensemble exploratory testing session will result in finding and fixing more issues in the pre-production stage. We'll know we have succeeded when the ratio of bugs found before releasing to production to those found after releasing to production has improved, as well as the team feeling that we have improved.

To test this hypothesis, the team decided on the following experiment details:

  • Have a task for ensemble testing in every sprint
  • Only bugs discovered in sessions count
  • 1.5 hours per session
  • Use the mood bot every Friday to collect the team’s feelings
  • Mark bugs with pre- and post-production labels (see the measurement sketch after this list)
  • Experiment runtime: from 2019-05-27 to 2019-07-10
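
The article doesn’t describe the tooling behind these measurements, but the pre-/post-production labels make the main criterion straightforward to compute. As a minimal sketch in Python, assuming bugs can be exported from the tracker with exactly those labels (the Bug structure and the example data here are hypothetical, not the team’s actual setup):

    from dataclasses import dataclass

    @dataclass
    class Bug:
        key: str
        labels: set[str]  # assumed labels: "pre-production" / "post-production"

    def detection_ratio(bugs: list[Bug]) -> float:
        """Share of bugs caught before release: pre / (pre + post)."""
        pre = sum(1 for b in bugs if "pre-production" in b.labels)
        post = sum(1 for b in bugs if "post-production" in b.labels)
        found = pre + post
        return pre / found if found else 0.0

    # Compare a baseline period against the experiment timebox (made-up data):
    baseline = [Bug("BUG-1", {"post-production"}), Bug("BUG-2", {"pre-production"})]
    timebox = [Bug("BUG-3", {"pre-production"}),
               Bug("BUG-4", {"pre-production"}),
               Bug("BUG-5", {"post-production"})]
    print(f"baseline: {detection_ratio(baseline):.0%}")    # 50%
    print(f"experiment: {detection_ratio(timebox):.0%}")   # 67%

An improved ratio alone wouldn’t prove the hypothesis, which is why the team paired it with the mood check - a useful reminder that a single metric rarely tells the whole story.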

The team ran the experiment and after the defined timebox expired, we helped the team evaluate the collected data. They found themselves running the experiment slightly differently compared to the original plan, while still testing the hypothesis.

  • Have a task for ensemble testing in every sprint → instead, the team had sessions impromptu, on demand
  • Only bugs discovered in sessions count → they forgot to keep track yet remembered they found a lot in the first session
  • 1.5 hours per session → the first session was 1.5 hours, the second 1 hour
  • Use the mood bot every Friday to collect the team’s feelings → they checked after each ensemble session instead
  • Mark bugs with pre- and post-production labels → done
  • Experiment runtime: from 2019-05-27 to 2019-07-10 → kept the timebox

Overall, their evaluation of the main measurement criteria showed that the hypothesis held: through the ensemble exploratory testing sessions, both the bug detection ratio and the team’s feelings improved - a positive outcome. They decided to keep the new practice and added it to their testing strategy.

Now it was time to help them design a second experiment to keep them going. This time they decided to tackle a different challenge they had identified: they wanted to improve their product quality and the developers’ confidence, and came up with the following hypothesis:

We believe that creating a monitored number of automated tests per sprint will result in more confidence. We'll know we have succeeded when the number of tests has increased (measured in reviews) and the team mood has improved.
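
As before, the article doesn’t say how the team monitored the test count. A hedged sketch of what “monitored” could look like, assuming a Python codebase with pytest (the team’s actual stack isn’t stated): count the collected tests and append a snapshot each sprint, so the trend becomes visible next to the mood data.

    import csv
    import datetime
    import subprocess

    def count_tests(repo_path: str = ".") -> int:
        """Rough test count: in quiet collect-only mode, pytest prints one
        node id per line, e.g. tests/test_booking.py::test_search."""
        out = subprocess.run(
            ["pytest", "--collect-only", "-q"],
            cwd=repo_path, capture_output=True, text=True,
        ).stdout
        return sum(1 for line in out.splitlines() if "::" in line)

    def record_snapshot(path: str = "test_counts.csv") -> None:
        """Append today's date and test count; one row per measurement."""
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([datetime.date.today().isoformat(), count_tests()])

    if __name__ == "__main__":
        record_snapshot()

Run once per sprint (or from CI), the resulting CSV gives the trend the hypothesis asks for; the “measured in reviews” part stays a human activity.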

Please note that we only supported the teams with identifying their challenges and designing their own experiments. We intentionally wanted them to try things out themselves and see what worked in their context: getting inspiration from other people’s experiences without being constrained by them, as those experiences might not apply in their own context. 

Experiment of experiments and their impact

One positive impact of our first big “experiment of experiments” was that it triggered a lot of team conversations about testing and quality. Through this experiment, we could also raise awareness and increase knowledge and skills in the teams, for example on topics like exploratory testing, testing for specific quality aspects like accessibility, and collaborative approaches. We also observed the teams becoming inspired to improve their practices.

However, there was also an impact that we had hoped for, but didn’t get. Not everyone in the teams was on board with our initiative, so silos remained. Also, not all people in these teams opened up to new concepts; misconceptions around testing and quality prevailed. Yet the biggest problem we observed was that teams really struggled to run the experiments. Most of them fell back into everyday business, feeling they couldn’t focus on improving things as there was “work” to do and “roadmap items” to deliver. Despite all encouragement, offers of support, and explicit backing by senior leadership, actions speak louder than words - and the teams had clearly chosen delivery over experimentation and learning.

As mentioned above, this was an insightful starting point, yet certainly not the end of experimentation. We took what we learned and came up with new hypotheses to get closer towards our mission. 

Learnings from doing experiments

Experimentation itself is worth it. There’s so much I’ve learned through the experiments that I simply wouldn’t have learned if I had never given them a try - especially when things seemed unlikely to work. When I gave them an honest try, more often than not the results surprised me. We simply cannot know what will work and what won’t unless we try it in our own context.

Experimentation also changed how I look at failure, how I frame failure. If I’m trying things with an experimentation mindset, my hypothesis might not turn out correct in the end, yet I will still have learned valuable insights. On the spectrum of reasons for failure by Amy Edmondson this is far on the praiseworthy side. An experiment should be safe to fail - if I feel the pressure that this absolutely has to work out, it’s not an experiment anymore.

Another aspect I realized is that our biases play an important role in experimentation. Sometimes I’ve found myself falling prey to confirmation bias, seeking data that proves my hypothesis even when there’s more than enough evidence speaking against it. I also realized it’s hard to accept that an experiment did not produce the desired outcome, no matter how much you wished for it. Especially if you’ve already invested quite a bit into it, it’s hard to let go; the sunk cost fallacy kicks in and we’re likely to produce waste. Over and over, I need to remind myself to take “many more much smaller steps”, as GeePaw Hill calls it, or run “many small, simple, fast, frugal trials”, as Linda Rising advocates. We need to stay aware of our own biases for honest learning.

There’s one more thing I’d like to point out. Sometimes I feel I have the best idea ever for the team, so I go ahead and plant the seed, then build an experiment around it - only to see it fail to gain acceptance from the team, leaving me feeling that I’m pushing it on them. Over and over, I’ve learned that experiments have to be phrased and owned by the ones most immediately affected by the outcome. For example, if a team gets a pre-formulated experiment, even from one of the teammates, I’ve usually seen it either fail to prove the underlying hypothesis or not get incorporated into the team’s practices. It’s not “lived”. On the other hand, if the team was involved and contributed to the experimentation, there’s a chance the learning outcome will actually be considered and built upon. What has worked best so far is to focus on inspiration first, creating a pull, and then to build on people’s curiosity and intrinsic motivation to learn more and solve a challenge they face. It’s all about enabling them to run their own experimentation.

Growing an experiment-driven quality culture

When people share their thoughts and wishes, there might be different needs underlying them. For example, tech leadership can express a desire for global quantitative metrics, while what they might actually need is to figure out what impact they want to have in the first place, and what information they need in order to have that impact.

Remember the teams falling back into everyday business? The system people work in plays a huge role and deserves consideration in your experiments. For example, if you’re setting out to improve the quality culture of a team, think about what kind of quality-contributing behavior gets rewarded, and how. If a person does a great job, yet these contributions and the resulting impact are not valued, they probably won’t get promoted for them.

The main challenges usually come back to people’s interactions as well as the systems in which we are interacting. This includes building trustful relationships and shaping a safe, welcoming space where people can bring their whole authentic selves and have a chance to thrive. Most challenges I’ve encountered had their root in these foundations, no matter whether they were labeled “tech”, “tooling”, or “process”. We need to build the base for good things to happen for everyone.

In the end, quality is on all of us. We need to own it together, improve it together, and share what we learned so that others get inspired. What helped in my experience is to create transparency, then raise awareness regarding available options, and finally experiment in our context to grow a quality culture - together. Let’s learn and have the gathered insights inspire our next move.

About the Author

Lisi Hocke graduated in sinology, fell into agile and testing in 2009, and has been infected with the agile bug ever since. She’s especially passionate about the whole-team approach to testing and quality as well as the continuous learning mindset behind it. Building great products which deliver value together with great people is what motivates her and keeps her going. She received a lot from the community; now she’s giving back by sharing her stories and experience. She tweets as @lisihocke and blogs. In her free time, you can find her in the gym running after a volleyball, having a good time with her friends, or delving into games and stories of any kind.
