Key Takeaways
- Software project estimation is not dead. In fact, estimation remains a very valuable practice, even in organizations that rely on agile development methodologies.
- Teams tend to be overly optimistic in their estimates, fail to re-estimate when something changes, and produce point estimates instead of ranges.
- There are several best practices that stakeholders can use to get their software estimation processes back on track toward adding value to their organizations. These include taking a top-down approach to estimation, focusing on five core metrics, and estimating size through a “ballpark” approach.
- Remember that an estimate is just that – an estimate. Teams should give themselves room to breathe instead of pinning themselves down.
- Software estimation does not have to be difficult, onerous, or ineffective. Done right, it can be a highly effective tool that can help project managers provide value to their organizations.
In a world trending away from traditional waterfall and toward agile development methodologies, it would be understandable to assume that there is no longer a need for software project estimation. Many agile practitioners feel there’s no value in estimation, since they are already working with smaller increments and sprints and grooming their backlogs.
However, that assumption would be wrong.
In a recent interview, Ken Schwaber and Jeff Sutherland, the founders of Scrum, were asked about the #NoEstimate movement. Schwaber believes a more appropriate term may be #NoMeaningfulCommitments. He feels that people often confuse estimation with commitments and that, in fact, estimates should be used in making commitments. Sutherland mentioned a recent Rally (now CA) survey that asked members of 70,000 scrum teams about the estimation techniques they used and then correlated those techniques with speed of delivery. They found that those that eschewed estimates altogether yielded some of the slowest delivery times, while those that employed scope-based estimation delivered the fastest results.
These findings indicate that estimation may very well be more important today than ever before, even to agile practitioners. Many development projects are becoming larger and more complex, making it increasingly challenging for teams to ascertain and meet realistic deadlines. Meanwhile, senior executives are demanding accurate cost estimates to help them define their annual budgets and determine whether or not a project is feasible and matches with business needs. All of this is true regardless of the software development methodology being used.
Instead of abandoning software estimation, organizations should focus on estimating better. Practicing top-down, macro-level estimation and focusing on a small set of core metrics can help managers gain more control of their projects and develop realistic development scenarios. This will result in projects that meet managers’ expectations and are delivered on time and on budget.
Before taking a closer look at these best practices, however, it would be helpful to understand why teams continue to struggle with software estimation in the first place. As it turns out, there are a number of reasons why project managers have a difficult time with the process.
They tend to be overly optimistic.
Project managers and software developers tend to have very sunny views of the projects they are working on. There is a natural tendency to overestimate their ability to create, deliver, and get projects right the first time around. They assume that everything will go great, and that no speed bumps will get in their paths.
Unfortunately, roadblocks inevitably will pop up. There will be interruptions, unanticipated schedule changes, and other unforeseeable conditions that will impact development.
Thus, managers make two mistakes.
First, they fail to leverage historical data from similar projects, particularly with regard to staffing, which can be useful in accurately estimating a project’s size and schedule. Our research shows that when managers rely solely on development methodologies, instead of understanding and optimizing their available resources, projects end up over-staffed, over-budget, and with significantly more defects. QSM's 15-year study of agile performance has consistently shown that good planning, not development methodology, is key to successful software development - and historical data plays an integral role in effective planning and estimation. That's why we created, and continue to update, our Software Project Database - to provide our customers with the highest-quality historical information and trendlines to make well-informed decisions in the software development process, regardless of which methodology they are using.
Second, they tend to confuse estimates with commitments, target values or goals. A target value is what someone would like to have happen. Conversely, an estimate is based on a quantitative analysis of what is likely to happen. These two things should never be confused. Instead, targets and estimates should be evaluated independently to see if they align. That will help teams maintain more realistic levels of optimism. Estimates should always be used as input before making cost and schedule commitments.
They fail to re-estimate when things change.
As mentioned, inevitably, something will happen to change the course of the development effort. For instance, a team may find the need to adjust their project’s scope and requirements in the midst of the development cycle. These changes will likely impact the schedule and could also impact other factors, such as the number of team members needed to work on the project, the overall cost, and more. In such cases, managers must be prepared to re-estimate to get a more accurate measure of their project’s status.
Unfortunately, many people do not do this. They continue on with their original estimates. When they miss these estimates – their projects fall behind schedule, or run over budget, etc. – they blame the software estimation process. However, it’s not the estimation process that has failed; it’s simply the fact that the team has remained steadfastly beholden to their original estimate.
Project managers must remember that estimates are not set in stone. They can and should be adjusted accordingly as changes take place during the development process. This is something that agile practitioners learned a long time ago and is the reason why they build the potential for changes into their development methodologies.
They produce point estimates instead of ranges.
When it comes to estimation, managers have a tendency to apply point estimates, or single values, to their projects. “Our project will be delivered on May 1, 2018,” they definitively pronounce, leaving no wiggle room whatsoever.
But estimates are inherently uncertain. They are driven by what we do not know – about size, scope, and productivity, for example. When we estimate, we are really attempting to develop a timeline to strive for, rather than one to hit without question. Teams that tout a particular date are more likely to subject themselves to potential failure.
Instead of producing point estimates, project managers should develop estimated ranges. For example, “Our project is likely to be delivered no sooner than April 15 but no later than May 1,” or “Our project will not cost less than $1 million but no more than $3 million.” This provides the team some room to breathe without holding them to commitments they may not be able to keep, while still giving management a good idea of what to expect.
Consider hurricane prediction models - meteorologists never use a thin line to predict the hurricane’s path. Instead, they present it as a cone of uncertainty that gets wider over time - and although the model is not exact, it is good enough to determine what population centers should consider evacuating. This idea is exactly the same in modeling and predicting software project outcomes. Just as meteorologists use the cone of uncertainty to account for possible hurricane course changes, project managers can use estimated ranges to generate reasonable risk buffers, while still being able to give management realistic expectations.
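To make the idea of ranges concrete, here is a minimal sketch (not taken from the article) of turning uncertain inputs into a delivery window rather than a single date. The remaining scope, velocity figures, sprint length, and start date are all illustrative assumptions; a real team would calibrate them from its own historical data.

```python
import random
from datetime import date, timedelta

# Illustrative assumptions (not from the article): remaining scope and sprint
# velocity are uncertain, so each is modeled as a (low, most likely, high) range.
SCOPE_POINTS = (180, 220, 300)   # remaining work, in story points
VELOCITY = (18, 25, 30)          # points completed per two-week sprint
SPRINT_DAYS = 14
START = date(2018, 1, 8)

def draw(low: float, likely: float, high: float) -> float:
    # random.triangular takes (low, high, mode)
    return random.triangular(low, high, likely)

def simulate_finish_date() -> date:
    """One possible outcome: scope / velocity -> sprints needed -> finish date."""
    scope = draw(*SCOPE_POINTS)
    velocity = draw(*VELOCITY)
    sprints = scope / velocity
    return START + timedelta(days=round(sprints * SPRINT_DAYS))

# Run many trials, then report a range ("no sooner than X, no later than Y")
# instead of a single point estimate.
trials = sorted(simulate_finish_date() for _ in range(10_000))
p10, p50, p90 = (trials[int(len(trials) * p)] for p in (0.10, 0.50, 0.90))
print(f"Likely delivery window: {p10} to {p90} (median {p50})")
```

The output is the software equivalent of the hurricane cone: a window that is wide when little is known and that narrows as the inputs are refined.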
The best practices that can help software estimation thrive
Now that we have examined the mistakes that are being made, let’s take a closer look at how software developers can revive their software estimation processes to deliver exceptional value to their organizations.
Take a “top-down,” macro-level approach.
Traditional project management involves allocating people, and the hours they are expected to work on specific tasks, at the very beginning of a project. This usually happens before teams have figured out the specific requirements of each task and before they have estimated the total duration and effort of the entire system. This “bottom-up” approach to software project estimation often results in inaccurate guesswork and re-planning, costing organizations additional time and money.
Implementing a top-down, macro-level approach to estimation can be much more effective. Top-down estimation takes into account the entire project from the very beginning. It employs historical and empirically based models to accurately estimate size, cost, effort, and other factors. Managers can run “what if” scenarios to account for various challenges that may occur throughout the course of development (i.e., “What if we run over budget?” or “What if we have to do some re-work?”). They can then make adjustments as necessary before work commences, potentially saving significant time and money in the long run.
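As a hedged illustration of what macro-level “what if” analysis can look like, the sketch below plugs a few invented scenarios into the well-known Putnam software equation (Size = C × Effort^(1/3) × Duration^(4/3)). The equation is used here purely as a stand-in for whatever historically calibrated model an organization actually uses; the productivity parameter and size figures are made up for the example.

```python
# Illustrative top-down "what if" analysis. The Putnam software equation,
# Size = C * Effort^(1/3) * Duration^(4/3), stands in for whatever empirically
# calibrated model a team actually uses. The productivity parameter C and all
# scenario inputs below are invented for the example.

def effort_person_years(size: float, duration_years: float, productivity: float) -> float:
    """Solve the software equation for effort, given size, schedule, and productivity."""
    return (size / (productivity * duration_years ** (4 / 3))) ** 3

BASELINE_SIZE = 50_000   # hypothetical size, e.g. source lines or equivalent units
PRODUCTIVITY = 20_000    # hypothetical process productivity parameter

scenarios = {
    "baseline: 12-month schedule":       (BASELINE_SIZE, 12 / 12),
    "what if scope grows by 20%?":       (BASELINE_SIZE * 1.2, 12 / 12),
    "what if we compress to 10 months?": (BASELINE_SIZE, 10 / 12),
}

for name, (size, years) in scenarios.items():
    effort = effort_person_years(size, years, PRODUCTIVITY)
    print(f"{name}: roughly {effort:.0f} person-years of effort")
```

Even with invented numbers, the point of the exercise comes through: modest scope growth or schedule compression can change the effort picture dramatically, and it is far cheaper to discover that before work commences.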
Focus on five core metrics.
Project managers do not have to capture a lot of different metrics to ensure the success of their estimation processes. Simply focusing on five core metrics – duration, effort, size, productivity, and reliability – can deliver very accurate and credible estimates.
My father, Larry Putnam, Sr., initially posited this theory in his book Five Core Metrics: The Intelligence Behind Successful Software Management. In the book, my father argued that simplifying the estimation process and tuning into the areas that truly matter can help managers better assess risk factors, anticipate and respond to changes, and successfully re-plan their projects as necessary. It addresses the common misperception that software project estimation is too hard and complicated. As my father showed, that certainly does not have to be the case.
Size & Scope measures.
Out of those five core metrics, project sizing – which takes into account the amount of functionality in a given software release – tends to create the most headaches. Teams often find it extraordinarily difficult to quantify the size of their projects – even more so if those projects involve large-scale software development efforts. Teams will often need to use different sizing methods depending on where the project is in its lifecycle and the information they have at hand, making sizing a complex task.
And yet, sizing is also extremely important. Without it, stakeholders will not be able to determine how long a software project will take to complete, how much it will cost, how many people they will need to complete it and how productive they will be, or how many defects they can expect to find during testing. Ignoring size leads to bad estimates.
While counting exact units of work is great, it’s not always possible. In these cases, teams would be well served to ballpark their size estimates. These can be as basic as “big,” “medium,” “small,” or some variation of those terms. They can also employ high level measures, such as “business requirements” or “number of user stories or epics” to estimate size.
Estimation tools can be used to complement these initial frames of reference and account for any uncertainties that may exist. After running some initial estimates, managers can get a good and reliable idea of the size of their projects. That estimate can then be checked against industry trends to ascertain the projects’ overall costs and schedules.
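For illustration only, here is one way a team might turn ballpark labels into a rough size range before feeding it into an estimation tool or comparing it against historical trendlines. The bucket boundaries and epic names are hypothetical.

```python
# A minimal "ballpark" sizing sketch: map coarse labels to illustrative
# story-count ranges, then roll them up into an overall size range.
# All bucket boundaries and epics below are hypothetical.

SIZE_BUCKETS = {         # (low, high) user stories per epic, by ballpark label
    "small": (5, 10),
    "medium": (10, 25),
    "large": (25, 60),
}

backlog = {              # hypothetical epics with rough ballpark labels
    "customer onboarding": "large",
    "reporting dashboard": "medium",
    "audit logging": "small",
    "payments integration": "large",
}

low = sum(SIZE_BUCKETS[label][0] for label in backlog.values())
high = sum(SIZE_BUCKETS[label][1] for label in backlog.values())
print(f"Ballpark size: roughly {low} to {high} user stories")
```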
Estimation is not dead. It remains essential.
Software estimation does not have to be difficult, onerous, or ineffective. On the contrary: done correctly, estimation can be absolutely essential to the timely development and delivery of projects. It can help teams gain a better understanding of how much time, effort, and money it will take to deliver solutions that are of value to their organizations. It can also provide project managers with the information their stakeholders demand to show how their investments are being managed.
In short, software estimation is far from dead. It is, in fact, very much alive. With a little bit of attention, organizations can use it to its fullest extent to deliver valuable insights for projects of all methodologies and sizes.
About the Author
Lawrence H. Putnam Jr. is co-CEO of QSM, a leader in software process improvement and systems development estimation. Larry's primary area of responsibility is to oversee the strategic direction of QSM’s products business. This includes meeting revenue goals, strategic product direction, customer care and research. Larry has over 25 years of experience in software measurement, estimating and project control. He joined QSM in 1987 and has worked in every aspect of the business, including business development, customer support, professional services and now executive management.
Community comments
Forecasting over estimating
by Johnny FromCanada
Self-reporting surveys are not science (regardless of how large). (Ironic, since Agile is derived from empirical process control.)
Instead of sinking effort into estimating (which is dominated by other factors), consider forecasting via Monte Carlo statistical analysis. Check out Troy Magennis' excellent presentation:
Forecasting Using Data
www.infoq.com/presentations/forecast-data-analysis
Use of estimates
by Eric Zimmerman
In the interview you cite with Schwaber and Sutherland, they talk about the importance of estimating in order for teams to get their arms around what work they can do in a sprint, in order to drive a high level of commitment for completing the work in the sprint. They go on to cite Rally data to show the importance of estimating for sprint planning purposes.
What they do not say in the interview is that estimating is important for achieving predictability of long-term outcomes for software products. Herein lies the crux of the estimation problem. Organizations desperately want to believe that they can achieve predictability through estimation. This is why estimates become commitments. If estimates aren't commitments, then how can an organization know when the project will be complete, and how much it will cost, before it commits to the project? This is how you end up hearing, "The project is falling behind schedule" or "The project is not on budget."
Organizations have a terrible time getting their heads wrapped around the fact that software is inherently complex and unpredictable, and that agile methods like Scrum help you work, and yes, even flourish, in that reality.
Re: Forecasting over estimating
by Lawrence Putnam Jr
If the “Self-reporting surveys are not science” comment refers to the CA (Rally/SEI) study, I think there was more to it than that. Here is a link to an InfoQ article, “Quantifying the Impact of Agile Software Development Practices,” that interviews the authors, where they describe their approach. I believe this was the study that Schwaber and Sutherland were referring to: www.infoq.com/articles/quantifying-impact-agile
I totally agree with the incorporation of proven statistical methods like Monte Carlo simulation into the estimation/forecasting processes. In fact, we have had these built into our estimation solutions from the very beginning. Below are some links:
www.qsm.com/blog/2014/modeling-uncertainty-soft... - Don sums it up nicely: In the background, SLIM-Estimate uses uncertainty ranges around estimation inputs to perform Monte Carlo simulation to determine the width of the uncertainty ranges for estimation outputs (cost, effort, and project duration).
www.qsm.com/about-us/timeline : 1979 - QSM introduces a mainframe timesharing version of SLIM-Estimate® – the first tool to introduce and use Monte Carlo simulation to perform risk analysis and linear programming for optimized resource planning.
At QSM we differentiate forecasting from estimation. Estimation is something you do early on; forecasting is something you do as the project progresses through execution. QSM actually built a data-driven solution (SLIM-Control) around forecasting that leverages empirical progress data coming off the project.
Re: Use of estimates
by Lawrence Putnam Jr
I would differentiate estimates from commitments. Estimation is an analytical process of figuring out when (schedule) and for how much (cost) a team might be able to deliver, assuming a given scope, efficiency, staffing, and uncertainty level. A commitment is a business decision that is (hopefully) based on an estimation scenario that accounts for how much risk the organization is willing to accept given the estimation uncertainty level. As I mentioned in my reply to the previous question, forecasting during project execution can help further mitigate risks.
All Lean-Agile initiatives are on-time and on-budget
by Johnny FromCanada
There still seems to be a prevailing misinterpretation / misunderstanding of the fundamental difference between Lean-Agile and traditional project management planning.
All Agile initiatives are on-time and on-budget - the scope is what varies, and hence what is forecast. In a creative / complex problem or solution space, the easiest variables to constrain are the schedule and budget, while the scope is almost always fuzzy and not yet known.
This choice of constraints inverts the classic constraints of the Iron Triangle. Traditionally a (very fuzzy and unknown) scope is (perceived to be) constrained. Then the team (or worse, some level of indirect lead, analyst, or manager) is required to estimate / forecast the budget and schedule. Then, those all get codified in a document, and penalties apply if anything varies. It inevitably gets gamed, and almost always results in loss of another variable - usually quality. :-)
Also the use of "commitment" in Lean-Agile is a commitment by a team to collaborate toward a goal, not guarantee some kind of old-school ROI or Earned Value. If the Lean-Agile commitments of a certain forecast scope are not met within the (iterative & incremental) time and budget constraints, then the ROI becomes the learning and adaptation of the policies, processes, tools, environment, etc., (via team Retros).
For more info on metrics that make a difference in Lean-Agile, check out Vacanti's excellent book:
Actionable Agile Metrics for Predictability
www.actionableagile.com/publications
Re: All Lean-Agile initiatives are on-time and on-budget
by Lawrence Putnam Jr
I do understand the concept of fixed times and costs and designing to the scope that will fit within those boundaries. The trick here is for the team to work with the product owner to try and figure out, as best they can, what the minimum “consumable” value is that the customer would accept and be happy with. Once the team has an idea of what this minimum value represents (epics, user stories/points), they can figure out how many fixed iterations (sprints) it will take to get there. One of my colleagues put together a 2-part webinar series that goes into these concepts in detail - www.qsm.com/webinar/agile-estimation-beyond-myt....
Yes, in agile the team makes commitments to collaborate towards the goal of delivering value to the customer. But the team must also make commitments to the customer via the PO on what capabilities will be delivered within an agreed time frame and budget given the uncertainty inherent in its definition and reprioritization.
Thanks for the steer on Vacanti’s book! I’ll give it a read.
Re: All Lean-Agile initiatives are on-time and on-budget
by Johnny FromCanada
If your domain is static, not creative / complex, and has a stable Cone of Uncertainty, and your reference class of historical estimates-actuals data is large enough, then extrapolating from throughput (velocity) into the future can be valid.
But many software domains these days are highly dynamic, creative / complex, have more of a fibrillating "tunnel" of uncertainty, and very unreliable historical estimates-actuals data.
If you are going to try to enforce "commitments" on (especially otherwise Lean-Agile) technical engineers, then you better be ready to handle the myriad concomitant dysfunctions.
Thanks for the link - I will check it out when I can. Got a new baby to raise - talk about creative / complex! :-)
Re: All Lean-Agile initiatives are on-time and on-budget
by Lawrence Putnam Jr
Thanks for the comments and insightful dialog. Congrats on your new arrival! Kids help us put everything in perspective. :)