Huge Retrospectives with Online Games
Agile retrospectives are mostly done at the team or project level. What if you need to conduct a retrospective with 50 teams or more? Luke Hohmann wrote the blog post "How to Run Huge Retrospectives Across Dozens of Teams in Multiple Time Zones", in which he describes how a large-scale agile transformation project ran a huge retrospective to gain insight into what was going well and what needed to be improved.
InfoQ interviewed Luke about organizing huge retrospectives, analyzing the data, and following up on the actions from such retrospectives.
InfoQ: Your article talks about doing huge retrospectives with many attendants. What do you mean by huge?
Luke: By huge, I mean hundreds of attendees, quite likely organized in multiple locations and time zones. Put another way, "huge" means "too much money to get them into the same room"! Scalable enterprise retrospectives provide a solution to improve enterprise performance when many teams are involved.
InfoQ: In your opinion, what makes huge retrospectives different from team- or project-level retrospectives?
Luke: The number of people involved means that you cannot cost-effectively conduct traditional retrospectives. The frequency is also different - single-team retrospectives are typically run at the end of a sprint or on delivery of a work product. A project retrospective is completed at the end of a project. An enterprise retrospective is conducted on a cadence based on identifying and removing organizational impediments. And because removing organizational impediments in large organizations can take significant time and effort, we tend to run enterprise retrospectives less frequently, albeit with higher impact.
InfoQ: Why did you choose the speed boat exercise to do huge retrospectives? What makes it suitable?
Luke: Speed Boat is an ideal game for conducting retrospectives because it is designed to give a group an open-ended, divergent process for identifying impediments. The game does not presuppose the kinds of impediments teams will find. It is also a very flexible metaphor: teams have used boats, hot air balloons, race cars and airplanes - anything that wants to go fast but might have something slowing it down is a suitable metaphor.
InfoQ: In the article you state that the reason that large distributed teams stop doing retrospectives is that they "bump into the limit of what they can improve". Can you elaborate on that?
Luke: Let's consider a small startup consisting of one Scrum team. Let's suppose that in their retrospective they identify a desire to move from one source code management system to another. All of the necessary decision makers are in the room, along with all of the people affected by the decision, along with the financial decision maker. So, they can make the decision.
Let's contrast this with a large and mature development organization consisting of 47 Scrum teams (about 300 developers). Let's say 28 of these Scrum teams identify a problem with their source code control system and seek to change it. This is not a team-level decision: it is an enterprise decision. Let's further suppose that after reflection the organization chooses to move source code systems. Changing a source code control system for several hundred developers is not something you can do trivially: it is a significant project that must be thoroughly planned and carefully managed. In addition to the obvious technical issues, such as ensuring version history is maintained for any needed bug fixing of systems in production, developers must be trained on the new system, integration and test automation systems may need to be adjusted, and so forth.
InfoQ: You are using online games to do huge retrospectives. Why?
Luke: Online games are lower cost, generate faster results, allow retrospectives to be conducted at times convenient for the teams involved and provide better data analysis. The online format allows people who are introverted or who speak a native language that is not the dominant language of the company to better capture and represent their thoughts.
Going online keeps each team intact as a team – because at scale, teams are the unit of all organizational engineering.
Going online also allows us to use multiple facilitators to reduce facilitator bias and further improve results.
InfoQ: How can you analyze the data from many teams and determine the actions that need to be done?
Luke: This is how we usually analyze the data:
Step 1: Each team plays a game.
Step 2: Results of each team are downloaded into a centralized spreadsheet. This is easy – each facilitator downloads the results of their games and uploads them into a common spreadsheet.
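For teams automating this consolidation step, it can be sketched in a few lines of Python. The column layout (`team,item,type`) and file paths below are illustrative assumptions, not the actual export format of any particular online game platform:

```python
import csv
import glob

def merge_game_results(pattern="results/*.csv", out_path="all_results.csv"):
    """Concatenate per-facilitator CSV exports into one common sheet.

    Assumes every export shares the same header row, e.g.
    team,item,type  (where type is "anchor" or "propeller").
    """
    header, rows = None, []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)
            if header is None:
                header = file_header  # keep the first header, skip the rest
            rows.extend(reader)
    if header is None:
        raise FileNotFoundError(f"no exports matched {pattern}")
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return len(rows)  # number of merged items
```

In practice a shared spreadsheet works just as well; the point is simply that each facilitator's export lands in one place with a consistent layout for the coding step that follows.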
Step 3: Results are coded by People / Process / Technology AND by scope of control. Although a large team should be used to facilitate the games, we recommend a small team of 2-3 people be used to code the results for speed and consistency.
Each item placed into the game is coded – if you’re using Speed Boat, this means every anchor and propeller!
We recommend coding items with a primary People / Process / Technology category and an optional secondary one. For example, “My PO doesn’t attend review meetings” could be coded primarily as People and secondarily as Process, while “We should switch to GitHub” would likely be coded primarily as Technology.
We then recommend using Diana Larsen’s Circles and Soup taxonomy to assess the perceived degree of control a given team has in addressing any impediments.
- Team: This is an issue that the team should address. For example, a PO not attending review meetings should be handled by the team.
- Product / Group: This is an issue that the team can’t address, but that likely falls within the scope of the product or group.
- Enterprise: This is an issue that requires coordinated effort at the enterprise level. For example, moving to GitHub is likely to affect all of the teams within the enterprise. As such, it should be carefully assessed as a potential enterprise project and compared with other high-impact projects.
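As a rough sketch of this coding scheme, the record below captures a primary category, an optional secondary category, and a scope for each item. The field names and validation are my own illustration, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

CATEGORIES = {"People", "Process", "Technology"}
SCOPES = {"Team", "Product/Group", "Enterprise"}  # Circles and Soup scopes

@dataclass
class CodedItem:
    """One anchor or propeller, coded by the small 2-3 person coding team."""
    team: str
    text: str
    primary: str                      # People / Process / Technology
    scope: str                        # perceived degree of control
    secondary: Optional[str] = None   # optional second category

    def __post_init__(self):
        if self.primary not in CATEGORIES:
            raise ValueError(f"unknown category: {self.primary}")
        if self.secondary is not None and self.secondary not in CATEGORIES:
            raise ValueError(f"unknown category: {self.secondary}")
        if self.scope not in SCOPES:
            raise ValueError(f"unknown scope: {self.scope}")

# The two examples from the text, coded (team name is hypothetical):
po_item = CodedItem("team-7", "My PO doesn't attend review meetings",
                    primary="People", secondary="Process", scope="Team")
git_item = CodedItem("team-7", "We should switch to GitHub",
                     primary="Technology", scope="Enterprise")
```

Constraining the coders to a fixed vocabulary like this is what makes the later pattern analysis possible; free-text tags from multiple coders rarely line up.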
The online chat logs are invaluable in identifying underlying issues.
We often do extended analysis to identify various kinds of biases that can creep into the game play. Here are some biases that can affect your results:
Positivity Bias is a pervasive tendency for people [teams], especially those with high self-esteem, to rate positive traits as being more true of themselves than negative traits. This can happen when a team is asked to identify Propellers. To catch this, we look for propellers or chat logs with aspirational language, such as “We could do this…”, or prescriptive language, such as “We should do this…”.
Sampling Bias occurs when a small portion of the organization plays (e.g., 20 out of 60 teams) or only one kind of team is engaged. Your goal should be at least 90% of the teams participating.
Method or Question Bias can inappropriately guide participants into answering questions. By keeping things open-ended, Speed Boat and other games minimize method and question bias.
We expect producers of LDTRs to take any potential biases into account and to provide an assessment of them in their research reports.
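The aspirational/prescriptive-language check described above for Positivity Bias lends itself to a simple heuristic pass over propellers and chat logs. The phrase list here is an assumption based on the two examples in the text, not an exhaustive detector:

```python
import re

# Phrases suggesting an item describes a wish rather than a current strength,
# e.g. "We could do this..." or "We should do this...".
ASPIRATIONAL = re.compile(r"\bwe\s+(could|should|ought\s+to|need\s+to)\b",
                          re.IGNORECASE)

def flag_positivity_bias(propellers):
    """Return propeller texts that read as aspirations, not things the
    team actually does today; flagged items merit a human second look."""
    return [text for text in propellers if ASPIRATIONAL.search(text)]
```

A flagged item is not automatically biased; the heuristic only narrows down which propellers the coding team should re-read against the chat logs.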
Step 4: Results are analyzed to identify patterns. One of the great advantages of digital results is the ability to analyze the data using sophisticated tools like R and QlikView. For readers trying to convince senior leaders that an organization-wide impediment exists, this kind of visualization of results is invaluable!
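Before reaching for R or QlikView, a first pass at pattern-finding can be as simple as counting how many distinct teams raised each (category, scope) combination, so widely shared impediments rise to the top. This tally is a minimal sketch under that assumption, not the analysis the article prescribes:

```python
from collections import Counter

def top_patterns(coded_items, n=5):
    """Rank (primary category, scope) pairs by how many distinct teams
    raised them. coded_items: iterable of (team, primary, scope) tuples."""
    teams_per_pattern = {}
    for team, primary, scope in coded_items:
        teams_per_pattern.setdefault((primary, scope), set()).add(team)
    counts = Counter({pattern: len(teams)
                      for pattern, teams in teams_per_pattern.items()})
    return counts.most_common(n)
```

Counting distinct teams rather than raw items keeps one vocal team from dominating the ranking - an enterprise impediment should show up across many teams, as with the 28-of-47 source control example earlier.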
Step 5: Patterns are shaped into potential projects. This step can take a week or so – which is a good thing! You’re looking for incredibly high-impact opportunities. Investing time in identifying them will pay incredible dividends.
Step 6: Projects are selected. If there are only a few projects, we simply select them; with a large number of projects, we use Buy a Feature.
InfoQ: How can you follow up on the actions that come out of a huge retrospective?
Luke: I'd like to think that an organization using Agile or Lean Kanban is honoring the values of transparency by stating the projects they're engaging to remove the identified impediments and then maintaining visible progress on these projects across the company as they are implemented. Most of the time this is very obvious, as removing an enterprise impediment almost always changes how individuals and teams work.
InfoQ: Are there ways other than retrospectives to find out what is going well and what needs to be improved in organizations? When would you use them?
Luke: Some companies find using anonymous surveys to identify opportunities for improvement helpful. I'd recommend a survey over a set of Speed Boat games if the employees simply don't want to collaborate with each other or do not want to adopt Agile.