Peer Reviews Either Sandbag or Propel Agile Development

Key Takeaways

  • A team’s peer review process is critical to its agile success
  • There are three attributes at the heart of a structured peer review process
  • Review transparency is crucial in bringing agile to the organization level
  • How to establish clear and living communication expectations
  • How to select review metrics that enable meaningful process improvement

Can you look at this? 

You have to look at this… 

HEY, YOU. NOW.

This is the progression that happens all too often when code or documents need to be reviewed. You need your work reviewed for a few reasons. First, peers provide valuable feedback and bring fresh eyes that catch mistakes you might miss after hours of work. Second, on a fast-moving Agile team, you need to continually build consensus so that a communication backlog never forms. Lastly, for teams working in highly regulated industries, peer reviews may be a required piece of a larger software assurance program.

As more software development teams trend toward an Agile approach, software releases are becoming more frequent. If you are not able to speed up your peer review cycles in tandem, you may start to sacrifice quality to hit deadlines. That then translates to a buildup of technical debt. How can you avoid this scenario? It takes structure, but flexible structure.

Structuring Iterative Communication

Most teams don’t have an explicit plan for their internal communications; the tools they employ typically dictate the communication norms. If your team adopts Slack or another messaging app, it quickly becomes common for folks to have short, timely chats, with the expectation that the other person replies within a relatively short timeframe. If you send that same person an email, they will likely take longer to respond, and that is acceptable because of the inherent expectations of the tool.

Just as teams deliberate about what tools they should be using for repository management or performance testing, they should also consider what tools make the most sense for peer reviews and how those tools might affect their behavior.

For example, if my team’s code review process is just a flurry of pull requests in GitHub, are we going to be able to drive centralized process improvement? If my team conducts document reviews by sending an attachment in an email, are we setting ourselves up for long-term development traceability? These are the kinds of questions that you may want to ask, but before you ask them, it is important to have a tangible vision in mind as an end goal.

3 Attributes of a Structured Peer Review Process

Every review process has the same basic workflow. An author writes new code or modifies existing code. Then, reviewers are brought in to comment and identify defects. The author fixes the defects, and the reviewers verify the fixes. There are common workflow variants (pre-commit, post-commit, etc.) and common approaches (ad hoc, meeting-based, tool-based), but those details are a topic for another post.
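To make that loop concrete, here is a minimal sketch of the basic workflow as a small state machine in Python. The state names and transitions are illustrative assumptions, not any particular tool's model:

    from enum import Enum, auto

    class ReviewState(Enum):
        """States in the basic peer review workflow described above."""
        AUTHORING = auto()   # author writes new or changed code
        IN_REVIEW = auto()   # reviewers comment and log defects
        REWORK = auto()      # author fixes the reported defects
        VERIFYING = auto()   # reviewers confirm the fixes
        COMPLETE = auto()    # review closed

    # Legal transitions; a review loops through IN_REVIEW -> REWORK ->
    # VERIFYING until no open defects remain.
    TRANSITIONS = {
        ReviewState.AUTHORING: {ReviewState.IN_REVIEW},
        ReviewState.IN_REVIEW: {ReviewState.REWORK, ReviewState.COMPLETE},
        ReviewState.REWORK: {ReviewState.VERIFYING},
        ReviewState.VERIFYING: {ReviewState.REWORK, ReviewState.COMPLETE},
    }

    def advance(current: ReviewState, nxt: ReviewState) -> ReviewState:
        """Move a review to its next state, rejecting illegal jumps."""
        if nxt not in TRANSITIONS.get(current, set()):
            raise ValueError(f"Cannot move from {current.name} to {nxt.name}")
        return nxt

Whichever variant or approach you choose, the same loop of review, rework, and verification repeats until no open defects remain.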

Regardless of your unique preferences, an ideal structured peer review process for Agile development comprises three critical attributes:

  1. It should clarify expectations and make rules explicit.

When someone asks for feedback on their code, what exactly are they asking? Is it an invitation to explain your preferences and how you might have done it differently? We get better feedback faster when we are able to ask for more than just “feedback”. A structured peer review process asks the reviewer to look at and assess the quality of x, y, and z. Your team or the associated author can decide what those checklist items are.

Checklist items could include anything from running unit tests and penetration tests to checking compliance with coding standards and proper annotation. Start with a basic list of items that your team is already doing for the most part, but might miss occasionally. As different review types pop up, you can add and adjust accordingly.
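As a concrete illustration, a checklist can live in version control as a simple data structure that a script or your review tooling walks through. The items below are assumptions standing in for whatever your team actually checks:

    # Illustrative checklist; substitute your team's own items.
    REVIEW_CHECKLIST = [
        "Unit tests added or updated, and all tests pass",
        "No new warnings from static analysis",
        "Code complies with the team's coding standard",
        "Public functions are annotated and documented",
        "Security-sensitive changes flagged for penetration testing",
    ]

    def unchecked_items(checked: set) -> list:
        """Return checklist items the reviewer has not yet signed off on."""
        return [item for item in REVIEW_CHECKLIST if item not in checked]

    # Example: a reviewer who has only confirmed the first two items
    print(unchecked_items({REVIEW_CHECKLIST[0], REVIEW_CHECKLIST[1]}))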

By establishing checklists and clear instructions for your peer review process, everyone understands who did the review and what the reviewer checked. It also reduces review scope creep. If the reviewer can comment on anything and everything, they might decide to open a proverbial can of worms and derail your sprint.

Since your peer review process should ultimately function as a quality measure, your process should have rules, and those rules should be enforceable. Set guidelines for who should be included in different types of reviews. If you are launching a new feature and need it to work flawlessly, require that a certain number of veteran developers take part in the review. If you want to use code reviews as a training tactic, require that new members join a certain number of reviews as part of their onboarding. Without rules in place, there is nothing to stop team members from going rogue with their reviews, or skipping them entirely.
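Rules like these only help if they are machine-checkable. A minimal sketch, assuming two hypothetical review types and thresholds (a "new_feature" review needs two veterans with 3+ years on the team; an "onboarding" review needs a recent hire):

    from dataclasses import dataclass

    @dataclass
    class Reviewer:
        name: str
        years_on_team: float

    def validate_reviewers(review_type: str, reviewers: list) -> list:
        """Return rule violations; an empty list means the review may start."""
        violations = []
        veterans = [r for r in reviewers if r.years_on_team >= 3]
        new_hires = [r for r in reviewers if r.years_on_team < 0.5]
        if review_type == "new_feature" and len(veterans) < 2:
            violations.append("New-feature reviews require two veteran reviewers.")
        if review_type == "onboarding" and not new_hires:
            violations.append("Onboarding reviews must include a new team member.")
        return violations

A pre-merge hook or CI step can run a check like this and block the review from starting until the roster satisfies your guidelines.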

  2. It should enable many-to-many review transparency.

The Agile movement has gained traction, in part, because siloed development leads to unnecessary communication stress. Just as teams now work more cross-functionally, departments should open their processes so that there is a clear audit trail across an organization’s software development. In manufacturing, this concept is referred to as the digital thread.

In order to achieve process improvement at both the team-level and organization-level, you need the continuous aggregation and analysis of key performance indicators (KPIs). You can only aggregate this data if teams are using tools that have inter-team and inter-department data interoperability. This interoperability unlocks significantly better defect tracking. 

Aggregating the data is only half of the challenge. In order to conduct meaningful analysis, select custom KPIs that make the most sense for your team and track them over time. The tool(s) used for your review process should empower this kind of analysis and custom reporting. 
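For illustration, once review data is interoperable, aggregating KPIs is a small exercise. The record fields and team names below are hypothetical stand-ins for whatever your review tool exports:

    from collections import defaultdict

    # Each record is one completed review (hypothetical export format).
    reviews = [
        {"team": "payments", "defects": 4, "loc": 320, "minutes": 45},
        {"team": "payments", "defects": 1, "loc": 150, "minutes": 20},
        {"team": "platform", "defects": 7, "loc": 900, "minutes": 60},
    ]

    def kpis_by_team(records):
        """Aggregate defect density (defects/KLOC) and review rate (LOC/hour) per team."""
        totals = defaultdict(lambda: {"defects": 0, "loc": 0, "minutes": 0})
        for r in records:
            t = totals[r["team"]]
            t["defects"] += r["defects"]
            t["loc"] += r["loc"]
            t["minutes"] += r["minutes"]
        return {
            team: {
                "defect_density": 1000 * t["defects"] / t["loc"],
                "review_rate_loc_per_hour": 60 * t["loc"] / t["minutes"],
            }
            for team, t in totals.items()
        }

    print(kpis_by_team(reviews))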

Besides intentional process improvement, transparency also fosters collaboration. If a software architect working on a design challenge makes a change that could impact testing, the testers on the team should be able to access the initial peer review and see the intention behind the change. When intent breaks down, especially through a text medium, risk compounds across your software development; it is a high-stakes game of telephone. When teams and functions share information, it is much easier for them to get answers directly and collaborate without friction.

  3. It should replace regular manual setup with automated systems and templates.

SmartBear’s 2017 State of Code Review report revealed that 57 percent of development teams conduct meeting-based peer code reviews. Nearly 75 percent of teams conduct “over-the-shoulder” ad hoc reviews. While these approaches can be effective, they either require manual documentation or go undocumented entirely. A structured peer review process should aim to reduce this manual accounting as much as possible.

Adopting a tool-based approach for peer reviews automates much of the setup and documentation. Tools like Collaborator by SmartBear enable teams to create review templates, automate notifications, and build custom reports. These features can dramatically reduce the drag on your peer review process. 

This article is focused on structuring a process, so let’s address a key tenet of the Agile Manifesto that may seem counter to that argument: “individuals and interactions over processes and tools.” By templatizing your peer review process, you push much of the process work into the background of your development. Developers get the clarity of checklists without needing to create one from scratch for every review. Managers get to customize and refine the review process without needing to badger their team members about changing their behavior. Customizing and automating the peer review process leaves more time for individuals to interact and build.

Tracking Progress

Too much data or information can be overwhelming. As mentioned earlier, if your team can identify a handful of meaningful metrics that are easily trackable, then you can successfully execute process improvement. Small teams may want to create a custom satisfaction metric like a thumbs-up/thumbs-down or 1-5 grading. If managers notice a trend, they can then explore the problem more qualitatively.

For teams that are utilizing a pair programming approach, a code review metric like defect density could serve as an objective proxy for development savvy. While most teams know who their top coders are, using one metric as a source of truth reduces concerns of favoritism or recognition neglect. By tracking the defect density of each author’s code, you can pair high-performing developers with more defect-prone counterparts to improve your team’s overall skill set. Also, consider how it could be used in a retrospective context: you could easily recognize a most-improved author or top performer for a sprint.
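A minimal sketch of that pairing logic, using made-up author data, might look like this:

    # Hypothetical per-author totals: defects found in review vs. LOC written.
    author_stats = {
        "alice": {"defects": 2, "loc": 4000},
        "bob": {"defects": 12, "loc": 3000},
        "carol": {"defects": 5, "loc": 2500},
        "dave": {"defects": 9, "loc": 1800},
    }

    def pair_for_mentoring(stats):
        """Pair the lowest defect-density authors with the highest."""
        ranked = sorted(stats, key=lambda a: 1000 * stats[a]["defects"] / stats[a]["loc"])
        half = len(ranked) // 2
        # The strongest author mentors the most defect-prone, and so on inward.
        return list(zip(ranked[:half], reversed(ranked[half:])))

    print(pair_for_mentoring(author_stats))  # [('alice', 'dave'), ('carol', 'bob')]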

For larger development teams, having access to metrics like lines of code (LOC) reviewed and time spent on reviews can create a bigger picture of your review process. For example, if the number of LOC reviewed is very high and the defect density is very low, you either have all-star authors or reviewers who are skimming too quickly. By having access to all these metrics, you can check the time spent by that person or team on reviews and make changes.

Incorporate industry benchmarks to get a better sense of what these metrics really mean. For example, according to a study that SmartBear conducted with Cisco, reviewers should cover no more than around 500 LOC per hour. These industry benchmarks might not apply perfectly to your team, so over time, develop your own internal benchmarks. Comparing your team’s current performance to both industry benchmarks and your own historical numbers equips team leaders to assess outlier scenarios.
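As an example, flagging reviews that outpace a benchmark takes only a few lines. The 500 LOC/hour ceiling reflects the study cited above; the sample data and field names are hypothetical:

    MAX_LOC_PER_HOUR = 500  # benchmark ceiling; tune to your own history

    def flag_rushed_reviews(reviews):
        """Yield reviews whose pace exceeds the benchmark review rate."""
        for r in reviews:
            rate = 60 * r["loc"] / r["minutes"]
            if rate > MAX_LOC_PER_HOUR:
                yield {**r, "rate_loc_per_hour": round(rate)}

    sample = [
        {"id": "R-101", "loc": 1200, "minutes": 50},  # 1440 LOC/hour: too fast
        {"id": "R-102", "loc": 300, "minutes": 45},   # 400 LOC/hour: fine
    ]
    print(list(flag_rushed_reviews(sample)))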

What Your Peer Review Process Says About Your Team

I mentioned earlier how the nature of different tools like email and messaging applications shapes user behavior and expectations. The same premise applies to your team’s peer review process. If you add rigor and structure to your process, your team will, to some degree, adopt the behaviors that you are emphasizing.

That doesn’t mean that every developer will immediately become a code review saint or that all reviews will be properly recorded all the time. What it does mean is that your team has the opportunity to establish communication expectations, put forward explicit quality values, and empower collaboration across teams to propel development forward. Agile teams that incorporate code and document reviews into their Definition of Done framework are able to ensure that everyone is knowledgeable about each artifact that contributes to the larger user story and project.

What is unique about using peer reviews to communicate these messages is that reviews are a living thing. From sprint to sprint, these checklists and rules are in front of every author and reviewer. As an Agile team, you naturally iterate on these until they fit your team’s unique needs and preferences. Then you have created your own living manifesto, reiterating priorities and values until they are embedded in your team’s culture and code.

About the Author

Patrick Londa is the Digital Marketing Manager for Collaborator at SmartBear Software. With a background growing agile startups in the cleantech and digital health space, Patrick is now focused on software quality, process traceability, and peer review systems for companies in highly regulated, high-impact sectors.
