How to Assess Software Quality

The quality practices assessment model (QPAM) can be used to classify a team’s exhibited behavior into four dimensions: Beginning, Unifying, Practicing, and Innovating. It explores social and technical quality aspects like feedback loops, culture, code quality and technical debt, and deployment pipeline.

Janet Gregory spoke about assessing quality using this model at Agile Testing Days 2022.

The quality practices assessment model has ten quality aspects, described in Helping Teams Deliver With a Quality Practices Assessment Model.

The behaviour exhibited by teams for each quality aspect falls into one of four dimensions: Beginning, Unifying, Practicing, and Innovating. This does not mean that every quality aspect for a team falls into the same dimension, Gregory mentioned.
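The shape of the model can be sketched as a simple data structure: each quality aspect is classified independently, so a single team can span several dimensions at once. The aspect names and classifications below are hypothetical, chosen only to illustrate the idea.

```python
from enum import Enum

# The four QPAM dimensions, in rough order of maturity.
class Dimension(Enum):
    BEGINNING = 1
    UNIFYING = 2
    PRACTICING = 3
    INNOVATING = 4

# Hypothetical assessment of one team: each quality aspect is
# classified on its own, so the dimensions need not agree.
assessment = {
    "feedback loops": Dimension.PRACTICING,
    "culture": Dimension.UNIFYING,
    "code quality and technical debt": Dimension.BEGINNING,
    "deployment pipeline": Dimension.UNIFYING,
}

# The set of dimensions this team touches across its aspects.
spread = sorted({d.name for d in assessment.values()})
print(spread)
```

A real assessment would cover all ten aspects; the point is only that the result is a profile per aspect, not a single score for the team.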

Teams in the Beginning dimension have few quality practices in place and lack structure, Gregory explained:

Low-quality code is deployed to production, defects are logged, and the invisible backlog of defects grows. Not all teams are in the same place, some will be more chaotic than others, but pretty much every team knows they want to improve.

In the Unifying dimension, the organization has adopted one or more agile methods, forming cross-functional delivery teams:

The teams follow rituals like having daily standups, keeping a product backlog that they regularly refine, or time-boxing their work into iterations. They try to take smaller chunks of work that they can finish by the end of each iteration and are learning to work together as a cross-functional team.

In the Practicing dimension, team members feel good because the practices they have learned have become natural, and they consistently deliver value to their customers:

Teams have developed fast and effective feedback loops to pivot quickly when needed. The emphasis is on preventing code defects, so few are found. Those found early are fixed immediately or as high priority in the next iteration. They build quality into the product from the beginning by bringing testing activities forward early in the cycle, and use feedback from their customers to improve their product.

Innovating teams are high-performing. Their cycle time is short, with customer and business value delivered frequently:

The team knows their market and has high quality defined in identified aspects. They experiment where appropriate and adapt their practices. Self-learning and self-discipline are the norms, with the team consistently striving to learn and improve. Because psychological safety is high, failure is seen as a learning opportunity. The feature development is focused on flow but is thoughtful and based on value to the customer. The team understands and monitors the impact of changes using continual feedback from production usage. Quality is built-in from the start.

The hard part is consolidating all the information gathered from the different sources and figuring out discrepancies, Gregory explained:

I like to use a spreadsheet with the different quality aspects and the practices that go with them. That makes it easier for me to compare the findings with each of the dimensions.
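Gregory's spreadsheet approach, and the discrepancy-spotting it enables, can be sketched as follows. The sources, aspect names, and dimension labels in this example are hypothetical; the idea is simply to group findings by aspect and flag aspects where different sources disagree.

```python
from collections import defaultdict

# Hypothetical findings gathered from different sources
# (retrospectives, interviews, observation, artifacts),
# each classifying one quality aspect into a dimension.
findings = [
    ("feedback loops", "interviews", "Practicing"),
    ("feedback loops", "artifacts", "Unifying"),
    ("deployment pipeline", "observation", "Unifying"),
    ("deployment pipeline", "artifacts", "Unifying"),
]

# Group findings by aspect, like rows in the spreadsheet.
by_aspect = defaultdict(dict)
for aspect, source, dimension in findings:
    by_aspect[aspect][source] = dimension

# Flag aspects where sources disagree -- the discrepancies
# a facilitator would need to dig into.
discrepancies = [aspect for aspect, obs in by_aspect.items()
                 if len(set(obs.values())) > 1]
print(discrepancies)  # ['feedback loops']
```

Here the interviews and the artifacts point to different dimensions for "feedback loops", so that aspect surfaces for follow-up, while "deployment pipeline" is consistent across sources.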

The model is only as good as the person using it, and facilitation is a skill, Gregory mentioned. Often when teams try to self-assess, they rank themselves higher than others may see them. That doesn’t mean it’s not a good exercise for teams to try, Gregory said.

The quality practices assessment model is described in the book Assessing Agile Quality Practices with QPAM, which Gregory co-authored with Selena Delesie; it is listed on Gregory's publications page.

InfoQ interviewed Janet Gregory about assessing quality.

InfoQ: What tips do you have for assessment facilitators?

Janet Gregory: There are many ways to get information. I use all I can – a combination of process retrospectives, interviewing, observing meetings or workshops, and examining artifacts like user stories and tests.

In our book, we list open-ended questions for facilitators to use. A facilitator needs to listen and observe carefully to be able to extract the information; often, what is not said is as important as what is said.

We are creating a follow-up book as a guide for facilitators, which will help anyone conducting the assessment. No promises when, but hopefully in the first half of 2023.

InfoQ: How can we present the results of an assessment?

Gregory: What a facilitator shares will depend on the context, but it is important that the information is anonymous.

If you are an internal facilitator, you will likely gather all your observations and share what you found so the team can choose what to improve.

If you are an external facilitator (like I am), you will likely share observations and provide suggestions and recommendations.
