Key Takeaways
- The planning fallacy leads us to underestimate our own tasks, but not the tasks of other people. It causes us to ignore our past experience, especially if that experience suggests we cannot complete our tasks on time.
- However, there are some surprising benefits from this fallacy, mostly around positive illusions and our psychological well-being.
- Try playing a planning game. It will raise your awareness of the planning fallacy and also reduce your vulnerability to it.
- To reduce estimation errors, always include an independent, outside observer in your process. Listen to what they say, then modify your plan.
- If time permits, create a red team, who will not only challenge your estimate but may also uncover hidden assumptions and identify potential flaws in your plan.
In his seminal book, “Facts and Fallacies of Software Engineering”, Robert Glass identified poor estimation as one of the two leading causes of software project failure (the other cause was unstable requirements).
Since then, many books have been written on software estimation, and almost all of us will have attended at least one course on the subject. Yet, underestimation is still the rule, rather than the exception.
One reason is that we suffer from cognitive biases and our estimation processes do not take these biases into account. One bias especially relevant to the estimation process is the planning fallacy.
Addressing our biases is always difficult. We are blind to our own biases and do a great job of explaining them away by blaming external events. However, we can circumvent this blindness through gamification. If we immerse ourselves in a game designed to raise our awareness of that bias, we can explore it in a non-threatening way, gain understanding, and perhaps even work out ways of reducing our vulnerability.
In this article we will explore this fallacy and how we are vulnerable to it. We will also explain how you can reduce your vulnerability to this fallacy through playing a planning game that has been specifically devised to help mitigate it.
Planning fallacy
The planning fallacy causes us to underestimate how long it will take us to complete a task. This underestimation in turn causes us to deliver late on those tasks.
There are actually three aspects to the planning fallacy hypothesis:
- We underestimate our own plans, but not the plans of other people.
- When estimating how long a task will take, we tend to focus on plan-based scenarios, rather than any relevant experience we have.
- Even if we have relevant past experience, we tend to diminish its relevance by using attributions.
The first of these – underestimating our own plans but not those of other people – was brought home to me when I was doing my PhD in semiconductor research. We were a fairly large research group, so there were several students who had started before me. I watched each of these students write up their PhD theses. Each student predicted that their thesis would take between three and five months to write up. No student took less than nine months to write up. Some never finished.
After watching a few of the students, I had no problem recognising that they would take between nine and twelve months. However, if I had told any of the students, none of them would have either believed me or thanked me.
However, when it came to writing up my own PhD thesis, I fell into exactly the same trap as all the students before me, despite watching each of them make the same mistake. I figured it would take me between three and five months to write a 90,000-word thesis, and, rather foolishly, this was the amount of time I scheduled for it.
Nine months later, I sent my final draft off for publication, just like all the other students before me.
The second aspect – focusing on plan-based scenarios, rather than relevant experience – comes about as follows.
When we look at a large, complex task and estimate how long it will take us to complete, we mentally break down the large task into smaller tasks. We then construct a mental story of how we will complete each smaller task. We identify the sequential relationship between tasks, their interconnectedness, and their prerequisites. We then integrate them into a connected narrative of how we will complete the large task.
All of these activities are good, and indeed essential, for completing any large task. However, by constructing this mental story, we slip out of estimation mode and into planning mode. This means that we focus upon the how-tos, rather than thinking back to past experiences of potential impediments and how they extended the task duration.
Planning is a bit like software development, whilst estimation is a bit like software testing. In development, we are trying to get something to work. So, if our initial approach is unsuccessful, we modify it or try something else. Once we have got it to work, we are generally satisfied and move on to solving the next problem.
In testing, we are trying to find out if there are circumstances when software does not work. So, if the software works on our first approach, we are not satisfied and modify our approach to see if we can break it. In testing, we would be unsatisfied if we could think of a way that the software could break but have not tried it out.
The third aspect – diminishing relevant experience – occurs as follows:
When we look back at a past experience, where we failed to achieve our objective, and need to identify a reason for that failure, we are likely to diminish the relevance of that experience under two circumstances:
- That experience would imply we cannot achieve our current goal.
- That past event would imply something undesirable about ourselves: laziness, ineptitude, etc.
Neither of these implications is desirable. Therefore, we diminish their relevance to the current situation using attributions, sometimes known as explanations or excuses. These explanations have three characteristics:
- External. The failure was due to an outside cause. For example, the database team did not deliver their change in time.
- Transitory. The cause was temporary. For example, the database team had higher-priority work at that time.
- Specific. The cause applied only to that task. For example, the current task does not involve a database change.
Given the explanation for our poor past performance was external, transitory, and specific, we decide that this experience is not relevant to estimating the current task.
Hence, the planning fallacy causes us to underestimate how long it will take us to do a given task.
The benefits of using games as teaching aids
I tried a few different methods of teaching about the planning fallacy, but nothing really seemed to stick in people’s minds. They’d come on a training course, where we went through some of the cognitive biases and other factors that drive project underestimation, we’d do a few estimation exercises, then everybody would go back to their day jobs and repeat most of the errors they had been making before the course.
I started working with some people within agile teams, where there was a big emphasis on games. Around this time, I was reading articles about researchers using serious games to address cognitive biases. So, I thought – could we learn about the planning fallacy through a game of some kind?
I put together a small pilot version of the game, then tried it out with a few graduate consultants who happened to be on the bench and hence available for trying things out. Not only did it keep the graduates engaged, but by mid-session they were prioritising better and including contingency plans. By the session's end, most teams had significantly reduced their underestimation errors, although every team still overestimated how much they could achieve.
Based on feedback from the graduates, I refined the game, and I have continued to refine it since.
Playing the game
A map of the town, as used in the planning game
Remember, the intention of this game is to help players understand their vulnerability to the planning fallacy, not to learn how to plan a list of tasks. This aim is achieved by getting players to produce a plan, showing them how unrealistic it is, and asking them to replan.
The key learning point comes after the teams have replanned and committed the same error of trying to do too much, despite being explicitly warned not to.
The game starts with participants being placed into a scenario at 11:00 AM outside the swimming pool on the map of the fictional town. They are told they have a free day and can plan the remainder of the day as they wish. They are given a list of tasks to do, together with locations of the tasks and other instructions.
Although participants are not explicitly told to prioritise their list, they are told that the list contains more tasks than they can possibly do in the allocated time. Hence they are expected to drop items, and to have reasons for dropping specific items that are consistent with an overall goal, e.g. having a leisure day or carrying out their chores.
Interestingly, there is slight evidence that teams with a clearer goal were better at cutting down their list and hence reducing their vulnerability to the planning fallacy.
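The kind of pruning the game asks for can be sketched as a simple prioritised selection under a time budget. The task names, durations, priorities, and the 80% commitment figure below are all hypothetical, purely to illustrate dropping low-priority items and deliberately under-committing:

```python
# Hypothetical task list for the game day: (task, minutes, priority; 1 = highest).
tasks = [
    ("meet friend for lunch", 45, 1),
    ("food shopping", 40, 2),
    ("visit estate agent", 30, 1),
    ("swim", 60, 3),
    ("collect dry cleaning", 15, 2),
    ("browse bookshop", 30, 3),
]

BUDGET = 150   # minutes remaining in the day (hypothetical)
SLACK = 0.8    # only commit 80% of the time, keeping contingency

# Take the highest-priority (then shortest) tasks first; drop what no longer fits.
chosen, used = [], 0
for name, minutes, priority in sorted(tasks, key=lambda t: (t[2], t[1])):
    if used + minutes <= BUDGET * SLACK:
        chosen.append(name)
        used += minutes

print(chosen, used)  # the swim is dropped; only 120 of the 150 minutes committed
```

Note the deliberate slack: committing only part of the available time is one way of building in the contingency that the planning fallacy tempts us to omit.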
The map is sealed within wipe-clean plastic, so participants can use washable felt pens to draw on the map whilst they plan out their tasks and route through the town.
Usually, we play the game with 3 to 5 people in each team. If there are too few people, then participants don’t get the benefit of listening to other people’s point of view. Conversely, if there are too many in a team, the game is dominated by one or two noisy individuals and everyone else gets less enjoyment out of it.
At the end of the time allocated for the planning task, which is usually around 45 minutes, I get each team in turn to present their selected solution to the other teams, who then ask questions and make comments.
We then have a debrief session, where we go through the main learning points, followed by a repeat of the planning game. Participants then have an opportunity to apply the lessons from the earlier session and its debrief to the repeated game.
If there are only a few participants, say enough for one largish team, then we sometimes play the game in a different, more interactive fashion.
After the initial planning session, instead of getting each team to present their solution, we walk through the plan, as if it were happening in real-time. This allows the instructor to present the team with a series of unanticipated events.
For example, if the team had planned to meet a friend for lunch at a restaurant and had budgeted 45 minutes for the activity, the instructor might inform the team that when they arrive at the restaurant their friend has already ordered a three-course lunch that will take 90 minutes to consume.
Participants find this interactive approach much more enjoyable. In addition, it has two further benefits. Firstly, the unexpected nature of the unfolding scenario repeatedly reinforces the lessons of the planning fallacy – that we try to do too much – making it far more memorable. Secondly, it gives participants the experience of having to rapidly replan, and also highlights the advantages of including flexibility and contingencies within a plan.
An alternative refinement is to include on the team an external observer, who does not participate in any of the planning or estimation activities, but instead merely observes. Later, this observer can present to the team an independent estimate of how much the team will accomplish.
However, if you do include an independent observer, be sure to choose someone who wants to do the role, then brief them separately from the main team.
What you learn from the game
The most important lesson to take away from playing this game is that we try to take on too much in our plans, but we do not learn that we take on too much, despite repeated negative feedback.
This important lesson is best learned by playing two iterations of the game. In the first debriefing session, the participants are given feedback on several things. For example:
- Prioritisation. Some items on the list are more important than others.
- Hidden requirements. For example, you probably don’t want to do the food shopping as your first task, then spend the rest of the day carrying heavy, perishable food around town.
- Some activities have external dependencies. If you arrange to meet a friend or an estate agent, they may not turn up at the agreed time. If at all.
- Some activities are highly variable. For example, lunch with a friend.
- The value of backup and contingency plans.
Participants easily learn these lessons. In the second round, the teams prioritise better, and avoid getting caught out by hidden requirements, external dependencies or highly variable activities. Plus, they generally think about a backup plan in the second round.
However, what participants generally fail to learn is that they attempt to do too much. The teams generally go into the second round with a list of tasks that is smaller than the first round but is still unrealistically large.
It is only when you give feedback in the second debriefing session, pointing out that the teams learned every single lesson from the first round apart from the most important one, the very lesson the planning game was developed to teach, that participants begin to understand how deeply ingrained the planning fallacy is.
Reducing your vulnerability to the planning fallacy
Interestingly, the first aspect of the planning fallacy – we underestimate our own tasks but not those of other people – points to a mitigation approach.
Include in the estimation team an external observer, who will not be carrying out any of the project tasks. The importance of an informed, external observer was brought home to me, somewhat painfully, during my PhD thesis write up.
A few months into writing my thesis I was struggling, so I approached my second supervisor for help. The first thing he did was pull out my original schedule and compare my actual progress against what I had planned.
Every task had taken twice as long. However, I could explain each delay. Moreover, every delay was a one-off event, none of which were ever likely to be repeated.
“Did you foresee any of these delays?” my supervisor asked.
I shook my head. “No, none of them were foreseeable.”
“I see,” he said, frowning in the disconcerting way that only university professors have perfected. “Every single task took you twice as long as you expected. And each task was hit by a complication or delay that you could not foresee.”
I nodded.
He pointed towards the remaining tasks on my plan. “Given that every single task has been hit by an unforeseeable complication, how many of these remaining tasks do you think will be hit by similar complications?”
I remained silent, as the implications of his words sank in.
“Come back to me tomorrow, with an updated plan,” my supervisor said. “Do that and I’ll get you the extra funds to cover the extension.”
External observers tend not to discount previous relevant experience and are far more likely to use a past failure to achieve a goal as a likely predictor of future performance.
In addition, observers are unaware of the team members’ future commitments and cannot construct future scenarios with confidence. This means that their assessments are not tainted with undue optimism.
We may not like the story that an external observer is telling us, but such an observer is able to give us a far more realistic picture of the likely future trajectory of our project.
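My supervisor’s reasoning amounts to a simple outside-view calculation: instead of explaining away each past delay, derive an overrun factor from the tasks already completed and apply it to everything that remains. A minimal sketch, with entirely hypothetical task names and numbers:

```python
from statistics import median

# Hypothetical completed tasks: (planned_weeks, actual_weeks).
history = [(2, 4), (3, 6), (1, 2), (4, 8)]

# Remaining tasks with their original, plan-based estimates, in weeks.
remaining = {"results chapter": 4, "discussion": 3, "conclusions": 2}

# Outside view: assume the remaining tasks will overrun like the completed
# ones did, rather than treating each past delay as a one-off.
factor = median(actual / planned for planned, actual in history)

revised = {task: weeks * factor for task, weeks in remaining.items()}
print(sum(remaining.values()), "->", sum(revised.values()))  # 9 -> 18.0
```

The arithmetic is trivial; the hard part, as the anecdote shows, is accepting that the factor applies to our own future tasks.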
Another way to reduce your vulnerability is through using a red team. This is a team put together to challenge an organisation’s orthodox thinking around a subject, such as a major project or product launch. A red team involves far more work than simply including an external observer in the estimate process. However, it does offer additional benefits, such as uncovering major flaws in a plan that are not revealed by simply adding an observer.
For example, consider a desktop upgrade project at a major oil company. Bringing in an independent observer may challenge an ambitious, i.e. unrealistic, estimate and cause you to re-evaluate the schedule and resources required.
However, a red team with a wider brief is more likely to uncover serious hidden issues. For example, you may be horrified to discover that your energy traders are using spreadsheets in an Excel version that is over 10 years old and full of security vulnerabilities. What’s more, they have no intention of moving to a desktop that cannot support their old spreadsheets, and there is little you can do about it.
Positive illusions and other benefits
If the planning fallacy is bad for our plans, why does it exist within us?
This is a valid question, as any behaviour that is universally bad for us should have been eliminated through natural selection. In fact, the planning fallacy provides three areas of potential benefit to us.
First, the planning fallacy provides us with a positive illusion – we think things are better than they really are. This illusion promotes our psychological well-being and encourages us to continue when discouraged. Those rose-tinted glasses really are for our own benefit.
Second, although the planning fallacy causes us to take on more than we can realistically achieve, it also pushes us to accomplish more than we otherwise would. That ridiculously long to-do list of weekend tasks may never get completed, but it does help us get more done.
Third, the planning fallacy encourages us to initiate a large endeavour which, although beneficial, we would never have started had we realised how truly gargantuan it was.
Conclusion
Almost any worthwhile project is going to be large enough that it needs planning out, which means we are always vulnerable to the planning fallacy. One reason we find this fallacy so seductive is that the alternative is so unpalatable – it means that either we cannot achieve our goal or our past performance was less than impressive.
We can use planning games to raise our awareness of this fallacy, as well as reduce our vulnerability to it. However, perhaps the best antidote is to always ask an independent, outside observer for their honest opinion and accept what they say, rather than becoming defensive about it.
A final word of caution: the planning fallacy serves a useful purpose for us. If you find yourself too discouraged by the outside observer’s dose of reality, you may be better off remaining deluded.
References
- Facts and Fallacies of Software Engineering. Robert L. Glass
- Exploring the "Planning Fallacy": Why People Underestimate Their Task Completion Times. Roger Buehler, Dale Griffin, and Michael Ross. Journal of Personality and Social Psychology, 1994, Vol. 67, No. 3, 366-381
- An Economic Model of The Planning Fallacy. Markus K. Brunnermeier, Filippos Papakonstantinou. 2008
- How a Video Game Helped People Make Better Decisions. Carey K. Morewedge. Harvard Business Review.
- Positive Illusions and Well-Being Revisited: Separating Fact From Fiction. Shelley E. Taylor and Jonathon D. Brown. Psychological Bulletin, 1994, Vol. 116, No. 1, 21-27
- Red Team: How to Succeed By Thinking Like the Enemy. Micah Zenko