Why We Fail to Change: Understanding Practices, Principles, and Values Is a Solution
In talking about change initiatives and improvement programs, one piece of research gets mentioned almost every time: John Kotter's "Leading Change" article. His Kotter International website notes that "more than 70% of all major change efforts in organizations fail."
This year, 2015, marks the 20th anniversary of "Leading Change". Interestingly, when Kotter published his results, it was not news. Earlier research had repeatedly shown similar results.
Surely we've done better over the last 20 years. Over the last decade, we've witnessed agile becoming mainstream and a huge rise of lean as well. Both movements aimed to make management paradigms more adaptive, more evolutionary, and ultimately more successful at change. The numbers just had to improve.
They did not. In fact, McKinsey & Co. repeated Kotter’s research in 2008 and ended up with exactly the same conclusion. In the Harvard Business Review in 2013, Ron Ashkenas summarized a number of studies on the success rate of organizational change projects and showed that it has remained stable since the ’70s.
We see virtually no impact of agile or lean on the bottom line of success rates of improvement initiatives.
This shouldn't come as a surprise, not when we pay attention to how agile or lean adoption typically goes. Organizations often look for recipes. This is a common theme at conferences, in training sessions, and even on coaching gigs.
“People copy the most visible, obvious, and frequently least important practices.”
Jeffrey Pfeffer & Robert Sutton, Hard Facts, Dangerous Half-Truths, and Total Nonsense (2006)
When a new approach pops up, this is the only thing we have: we see what others do and how they do it. We don’t have deep enough understanding to see anything beyond that. However, given time, we should be able to learn more about the context.
I won't say it is easy. In fact, there's a famous story about American managers from the automotive industry who visited Japanese factories in the early '80s. They wanted to know why their Japanese competitors were doing so well. Surprisingly, the Japanese openly showed the visitors their factories, explained their processes, and so on.
An even bigger surprise came later, when the Americans applied the same practices in their own factories. Not only did they not see the improvements they expected, but in some areas they actually started doing worse.
What they didn't understand was that there is more to it than practices. There are principles that people follow and values they share. These are all parts of a bigger whole: organizational culture.
The story is exactly the same for what we are so frequently doing with our change programs, improvement initiatives, and agile and lean implementations. We are copycats. No wonder that we don’t see sustainable change.
The metaphor I like to use is an iceberg. We see only a small portion of an iceberg above the waterline. The rest stays hidden from our eyes. However, if you remove only the visible part from the rest of the iceberg and put it in water by itself, it won’t look nearly as impressive as it originally did.
The visible part of the iceberg is the practices. The important part of the iceberg, though, the one that is underwater, is made of the principles and values.
For each practice, we can derive an underlying principle. Take visualization as an example. The principles behind it would be a better understanding of what is happening and better availability of information.
Now, we can take another step and figure out the value behind the practice and the principles. In this case, it’s transparency.
We can traverse this stack in either direction. Visualization provides better availability of information and a better understanding of what we are doing, and thus drives transparency. Equally, if we want transparency, we need people to have more information available so they better understand what we are doing, and thus visualization is a practice that will work for us.
To understand why a practice or a method works and when it is reasonably applicable, we should understand what’s below the waterline. We should think of the principles and values that are the cornerstones of practices and methods.
There are, in fact, two icebergs. The first would refer to a set of practices or a method we want to adopt. As proposed earlier, we may derive principles and values by simply starting with a thorough understanding of practices. Sometimes, we don’t even need to do that. For example, for the kanban method, Mike Burrows has done the homework in his classic article on kanban values.
The second iceberg is no less important. It describes the principles and values that a specific organization embraces.
It may be tricky to figure that one out. While organizations often have values and principles explicitly described in their vision and mission statements, these are frequently just claims. In reality, the way people behave suggests that a very different set of values is in place. On that level, an authenticity gap is a common disease.
We do have, however, a proven method to derive the real principles and values of an organization. We can start with practices and behaviors, since we can actually see these when we look at an organization.
It doesn’t matter whether or not an organization has customer satisfaction and high quality explicitly written in their mission statement. What matters is whether people are actively working toward such goals.
If you hear a project manager forbid a team to work via pair programming or TDD because “that’s not efficient,” you get a good sense of how much an organization really cares about quality. If you see a line manager tell their team not to share the real status of features with a client, you quickly figure out how much they value customer satisfaction.
What we do is we start with specific behaviors. In the first example, the project manager doesn’t allow developers to use specific practices in the name of efficiency. What can we figure out from that? One thing is that people – developers in this case – aren’t allowed to adopt practices that are commonly associated with high quality.
(I don’t want to argue whether or not TDD or pair programming ultimately improves quality. I simply point out that people who believe that it does aren’t allowed to work in such a way.)
We can also derive that distributed autonomy is not part of the organizational culture. What we see is that a simplistic view of efficiency – more features, faster – is what matters. That, by the way, doesn't end well.
“Processing the waste more effectively is cheaper, neater, faster waste.”
The second example shows that transparency in communication with the client is not a priority. That means that building trust relationships with clients is not valued and clients’ needs aren’t important. Ultimately, customer satisfaction is not one of the values that the organization cares about.
What values do these people care about, then? A line manager who asks a team to sell half-truths and lies to a customer suggests both a limited sense of safety in the organization and the existence of office politics. Going further, we may speculate that the company values internal competition, because people need to look good even at the cost of customer satisfaction.
Even if you have no other information on which to base your analysis, observing existing behaviors would allow you to build an iceberg for an organization. Don’t be surprised if it’s not a rosy picture to look at.
I need to warn you about one thing, though. Don’t go too far in drawing conclusions from a single observation. A single jerk doesn’t define the whole organizational culture. Repeated observations will tell you a lot about the values and the principles of the organization.
Commonly, a specific practice or method is eventually rejected because it goes against the organization's principles and values. In other words, a new approach simply goes too far beyond the existing organizational culture.
Let me give you an example. One practice that Scrum has widely popularized is time boxing. We plan all the work around fixed iterations. It helps a team focus on the details relevant to the small amount of work that fits inside a time box. It also provides a fair level of isolation for a team, which ideally can work without interruption for the length of the iteration.
The principles behind time boxing are reducing the number of work items the team is working on and providing a fairly constant rhythm of delivery. It improves efficiency and predictability.
Let's take Amazon. They deploy software into production every dozen seconds or so. What would happen if they adopted Scrum by the book with, say, two-week iterations and synchronized deployments to production? It would obliterate their ability to respond to changes the way they do right now. Their ability to rapidly experiment would disappear.
Does this mean that time boxing is bad? Not at all. It simply means that the principles and values that Amazon cares about differ from or even conflict with the ones that stand behind time boxing.
In this case, what Amazon cares about is shortening feedback loops for the features they build and the ability to deploy frequently and independently of other deployments. This provides them capabilities for rapid experimentation. And they wouldn’t trade it for better predictability or even efficiency.
That’s why understanding a match, or a mismatch, between the principles and values that fuel a method and those by which an organization lives is so important. Without that, we will be building yet another cargo cult. And we know that cargo cults aren’t a successful strategy when aiming for a sustainable change.
The key here is a thorough understanding of the practices and methods we want to adopt. A common pattern I see is following the shuhari model: let’s start with the basics and only then move to deeper understanding.
This approach doesn't work in our context, as it assumes that we've chosen the right method in the first place. Our case is different: we need to understand the methods before we can choose the one most suitable for a specific organization.
One way to reach such a level of understanding is to use sources like books, articles, blogs, videos, etc. While they definitely don't replace experience, they can help you choose more wisely. Instead of blindly applying whatever is the new black these days, ask questions. Why is this practice a part of the method? What do we try to accomplish using this technique? Are there any other tools that are more relevant in our context yet yield a similar outcome? Treat these as thought experiments. They are cheap, after all.
Another option is to run a small-scale, safe-to-fail experiment. Pick one team that is willing to tweak how they work. See how a new method works. It will help you to understand the method as well as its fit to the organizational context.
A discussion about organizational values is not a simple one. A value is not simply adopted on the spot. We can’t simply announce that from now on we value diversity and have it become true. There isn’t even a simple answer to whether we already value diversity or not. It’s not a binary question. It’s a scale – in fact, a multidimensional scale.
Take transparency as an example. In a given organization, there may be no problems whatsoever with transparency within teams. Everyone would happily share the status of their work with the rest of their team on a task board. Any issues would be openly discussed within the team.
Would it work the same way between teams though? Would people be so open about how they are doing in front of other teams? For example, one team may be slacking but not willing to admit that to everyone around. They wouldn’t want their slack time sacrificed to rescue that doomed project that people at the office keep talking about.
Then we have the hierarchy. Is the team equally transparent when a senior manager is around? Are they willing to tell the manager that they are not doing nearly as well as expected in the master plan for a project? Are they willing to share all the issues they face? Maybe they’re afraid that it would backfire if they speak up.
Speaking of senior management… Are managers open and transparent with the team? Would they honestly share the reorganization plan that's been brewing for some time? Would they share that the reason for the reorganization is the mediocre financial results of recent quarters?
Finally, we have the client. Would the team be as transparent in front of the client as they are in front of each other?
We have established that for visualization to work well, we need to value transparency. What if transparency is embraced, but not universally? It could work well in a local context, to a certain extent in a broader one, but not at all in the global organizational context or in relationships with customers.
That's a fairly simple issue to maneuver around: you can limit the application of any specific practice that's based on transparency. Visualization may work well and be useful at a team level, but encounter resistance at the portfolio level. And, of course, many teams do not want clients to see these visualizations at all.
The bottom line is that the impact of introducing a specific practice will be limited to the same extent the underlying value is. The depth of the change depends on how universally the relevant principles and values are spread across an organization.
That, by the way, is why we see a lot of shallow yet working implementations of agile methods: they work at a team level but not much further than that.
Should we introduce a practice or a method when we are aware that the best we can get is limited depth? I have a few answers to that question.
The first answer is to look for a method that better suits the context. Fortunately, we don’t live in a world where we can have any method as long as it is Scrum. Given that we have done our homework and we have deep understanding of a variety of practices and methods, we are likely to find something that’s suitable.
Another answer is to accept that the results of implementing a method would be limited. One interesting effect of kanban implementations is that even when the method is adopted on a shallow level, teams still report improvements. It isn’t much of a surprise. Introducing visualization lets you harvest low-hanging fruit: you can address most of the painful queues, understand root causes of blockers, and pinpoint bottlenecks.
Finally, there is a hard way. While we can’t directly change organizational culture, we can influence it. Of course, the bigger the organization, the more difficult the full-scale change would be. This topic is worth an article on its own.
Once we see the whole picture, we may have other thoughts about the methods. Most often, we know methods as a collection of best practices. The trick is that “best practice” is quite a statement.
“There are no best practices, only good practices in context.”
Larry Maccherone, Impact of Agile Quantified (presentation), 2014
There’s no silver bullet. To explain that using domain definitions from Cynefin: most of the time, our work is in complex or, at best, complicated domains. This means that, at best, we can be talking about good practices. They would work in some contexts and in others they wouldn’t.
Once we understand the cornerstones and base assumptions of specific practices and what our organization cares about, we may figure out how relevant these practices would be to us. Not only would we avoid building another cargo cult but we’d also increase our chance to successfully adopt a method of our choice.
We would help drive the failure rate of change initiatives down, at least by a tiny bit.
The game changer is mindfulness. Ultimately, mindful use of a practice leads to learning while mindless use of that practice leads to a cargo cult.
There is no simple answer to the question of why we fail to change – at least not in the form of a recipe. In fact, we have plenty of recipes, and they are one of the key reasons why we've kept repeating the same mistakes for more than 40 years.
It’s not only that we have plenty of recipes but also how we’ve codified them – and then, of course, started certifying people. The end result is that it is easy for organizations to simply choose a method from a menu and expect everyone to comply with the method – and expect to repeat a success story.
It’s not much of a surprise that it doesn’t work.
My challenge is to move the success rate of change programs up at least a little bit. For that, we need to change our mindset. There’s no reward for being a Scrum or kanban shop if we are not delivering value to our customers.
Don’t just jump on the next bandwagon, whatever that might be. Learn thoroughly about a new approach before you roll it out deeply and widely in your organization. Run small-scale experiments to get first-hand experience. Understand the organizational culture. Evolve the approach so it better fits your context.
While it requires some work and an open mind, it isn’t rocket science. It is about mindfulness that results in learning. Only then will a change be sustainable and successful.
About the Author
Pawel Brodzinski is a leader, a team builder, and a change agent, but most of all he is a constantly experimenting practitioner who helps his teams to work better (and learn in the process). He leads Lunar Logic, a professional-services web-development company, where he practices what he preaches. This makes working with Lunar Logic an exceptional experience. Pawel led the first kanban implementation in the software industry in Poland. He is a Brickell Key Award nominee and an active member of the core lean kanban community. He shares his thoughts on broadly understood software project management on his blog. Pawel is passionate about building great teams, creating superb organizational culture, and helping people to grow.