Safety, Software, and Accelerated Learning
Agile methods have the potential to create great results. But those results are not guaranteed; in fact, anecdotal evidence suggests that only a small percentage of the teams and organizations adopting and adapting agile methods achieve them. There are invisible requirements for this success, and one of them seems to be safety.
Let's start from the beginning. When do agile methods produce great results? When teams can accelerate their learning. This reporter co-wrote in 2007:
Agile practices, from Test-First Development and Continuous Integration to Iterations and Retrospectives, all consist of cycles that help the team learn fast. By cycling in every possible practice, Agile teams accelerate learning, addressing the bottleneck of software engineering. Call it "scientific method," "continuous improvement" or "test everything."
This point of view slowly made its way around the agile community, and three years later Dan North, known as the originator of Behavior-Driven Development, wrote in an article titled Deliberate Discovery:
“Learning is the constraint”

Liz Keogh told me about a thought experiment she came across recently. Think of a recent significant project or piece of work your team completed (ideally over a period of months). How long did it take, end to end, inception to delivery? Now imagine you were to do the same project over again, with the same team, the same organisational constraints, the same everything, except your team would already know everything they learned during the project. How long would it take you the second time, all the way through?
Stop now and try it. It turned out answers in the order of 1/2 to 1/4 the time to repeat the project were not uncommon. This led to the conclusion that “Learning is the constraint”.
That was in 2010. At about that time, many leaders in the field had started looking far beyond the practices, into the human dynamics and cultures of teams and organizations, for the keys to success. One idea that seems to be taking root is safety as a fundamental enabler of accelerated learning.
Scott Bellware wrote about the negative effects of hazards in software development organizations in Workplace Safety for Software Developers in 2010.
Safety is an essential requirement for social learning, or what I call Tribal Learning. As such, it needs to be an area of focus for managers, and more carefully studied. In the book, I explain how and why low safety levels are associated with very low levels of social learning. Low learning levels in turn limit how much adaptation can actually happen in the face of change. Research by Professor Amy Edmondson of Harvard Business School shows that psychological safety, levels of social learning, levels of engagement, and levels of productivity are all correlated. This is why we must pay careful attention to the dynamics of human behavior.
This year, Joshua Kerievsky and this reporter have been blogging about how viewing software development through the perspective of safety sheds new light on when things work and when they go wrong. In Tech Safety, Joshua writes:
Tech safety is a driving value, ever-present, like breathing. It is not a process or technique and it is not a priority, since it isn't something that can be displaced by other priorities. Valuing tech safety means continuously improving the safety of processes, codebases, workplaces, relationships, products and services. It is a pathway to excellence, not an end unto itself. It influences what we notice, what we work on and how our organization runs.
And in Is it Safe to Fail?, Guy Nachimson relates Christopher Avery's Responsibility Process Model, Non-Violent Communication, and Jim McCarthy's Core Protocols to creating a culture of safety to produce great software.
Finally, John Krewson cautions that, while Tech Safety is a noble concept with merit, it may be undermined by risk homeostasis:
It’s a noble concept and I wholeheartedly believe that it has merit. However, there’s an issue with the idea of Tech Safety lurking below the surface, and it’s called “risk homeostasis”. Risk homeostasis, a theory developed by Gerald Wilde in 1994, is the idea that we don’t save risk; we consume it. In other words, when we implement something to make our lives safer, we use it to justify riskier behavior in other areas of our lives, and so on the whole we’re no safer than we were before.
Whether seeing and creating safe cultures and environments will catch on as a way to build more effective software development teams and better software remains to be seen. It is, however, a worthy topic of exploration and experimentation. What are your experiences?