5 Factors That Could Be Making Your Project Estimates Go Wrong
Why do our estimates for a project or a testing phase so often turn out wrong?
Improving our ability to estimate accurately matters, because repeated failures to deliver to an estimate erode the business's confidence in IT projects and teams.
What is puzzling is that not only do we make estimation errors, but they are systematic errors of underestimation. If errors were simply random noise, we should see overestimates just as often as underestimates. In practice, we almost always underestimate, which suggests that something is systematically biasing our estimates.
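The distinction between random noise and systematic bias can be made concrete with a minimal simulation. The sketch below uses hypothetical numbers (a 100-hour task, normally distributed noise, an assumed 20 percent bias factor); the point is only that symmetric noise splits over- and underestimates roughly evenly, while even a modest systematic bias makes underestimates dominate.

```python
import random

random.seed(1)
true_effort = 100  # hypothetical task effort, in hours

# Purely random noise: estimates scatter symmetrically around the true effort.
noisy = [true_effort + random.gauss(0, 10) for _ in range(10_000)]
under_noisy = sum(e < true_effort for e in noisy) / len(noisy)

# Systematic bias: each estimate is scaled down by an assumed 20% before noise.
biased = [0.8 * true_effort + random.gauss(0, 10) for _ in range(10_000)]
under_biased = sum(e < true_effort for e in biased) / len(biased)

print(f"underestimates with pure noise:      {under_noisy:.0%}")
print(f"underestimates with systematic bias: {under_biased:.0%}")
```

With pure noise, roughly half the estimates come in low; with the bias factor, nearly all of them do, which matches the pattern we observe in real projects.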
Systematic underestimation is not the only effect we observe. Typically, the eventual delivery date is not just after the estimated date, but far outside the range of predicted delivery dates.
And whatever causes underestimation, we clearly do not learn from experience: we repeatedly make the same estimation errors despite feedback showing our previous ones. It’s a chronic problem.
What could be driving these errors? Here are five potential factors.
New technology
IT projects frequently deal with new technology, which often is not fully understood and may cause estimation errors.
Intentionally manipulated estimates
Estimates may be intentionally manipulated, possibly to receive project funding or due to a belief that teams are more productive when kept under pressure.
Gold-plating
Gold-plating occurs when time saved by finishing a task early is consumed by completing the task to a higher standard than required. Because early finishes are absorbed while overruns are not, gold-plating can cause projects to deliver late even if the average estimate across all tasks is accurate.
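The asymmetry behind gold-plating can be sketched with a short simulation, using hypothetical numbers (fifty tasks, ten days each, unbiased noise). Actual task effort scatters symmetrically around the estimate, so the estimates are accurate on average; but if any time saved is spent polishing, no task finishes early, overruns still propagate, and the total slips past the plan.

```python
import random

random.seed(1)
n_tasks = 50
estimate = 10.0  # hypothetical per-task estimate, in days

# Actual effort is unbiased: symmetric noise around the estimate.
actual = [estimate + random.gauss(0, 2) for _ in range(n_tasks)]

# Gold-plating: time saved by finishing early is spent polishing,
# so no task completes before its estimate; overruns remain.
delivered = [max(a, estimate) for a in actual]

print(f"planned total:         {n_tasks * estimate:.0f} days")
print(f"unbiased actual total: {sum(actual):.0f} days")
print(f"with gold-plating:     {sum(delivered):.0f} days")
```

Even though the estimates are unbiased, the gold-plated total always exceeds the plan, because only the unfavorable half of the noise survives.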
Adverse selection
Adverse selection occurs when the undesirable members of a group are preferentially selected. For example, an all-you-can-eat buffet attracts the customers who eat to excess. Adverse selection probably occurs in software projects as well, because underestimated projects get funded in preference to accurately estimated ones.
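This selection effect can be illustrated with a minimal sketch, under the simplifying assumption that twenty candidate projects all have the same true cost and each submits a noisy but unbiased estimate. If the business funds the cheapest-looking proposals, the funded set is systematically underestimated, even though no individual estimator was biased.

```python
import random

random.seed(1)
true_cost = 100  # hypothetical true cost, identical for every candidate

# Twenty candidate projects, each with an unbiased but noisy estimate.
estimates = [true_cost + random.gauss(0, 15) for _ in range(20)]

# The business funds the five cheapest-looking projects.
funded = sorted(estimates)[:5]

avg_all = sum(estimates) / len(estimates)
avg_funded = sum(funded) / len(funded)
print(f"average estimate, all candidates: {avg_all:.0f}")
print(f"average estimate, funded:         {avg_funded:.0f}")
```

The funded projects' average estimate sits well below the true cost, so overruns are built in before any work begins; selecting on the estimate selects the estimation error.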
However, underestimation still occurs even on the (admittedly few) projects where all of the above causes are absent. We must ask what other effects could contribute to estimation errors, and to find an answer, we need to shift our gaze away from external and project factors and look inside ourselves.
Cognitive biases
Most of us jump to hasty conclusions, think that we are smarter than average, and believe that we would have spotted others’ past mistakes. All these effects are the result of cognitive biases. There are many such biases, but those most relevant to underestimation errors are the anchoring effect, optimistic bias, planning fallacy, and overconfidence effect.
Teams attempting to improve their estimation techniques tend to focus on the known external drivers, such as technology uncertainty, while ignoring the human biases, and are then disappointed when they continue to miss estimates.
If we wish to improve our estimates, perhaps we need to spend more time trying to understand our human biases and how they impact our estimates.
Andrew Brown is presenting the session Improve Planning Estimates by Reducing Your Human Biases at STARWEST 2018, September 30–October 5 in Anaheim, California.