Discussion about this post

Shaun:

In my short experience, I find the estimation process incredibly arbitrary. I work for a software agency (though today is my last day before moving on), and it’s crazy to see how estimates differ between projects and clients.

One project that stands out to me right now is one that faced huge amounts of unknown unknowns, yet none of the estimates changed. Granted, the estimates were for FE and BE work broken down to a coarse level of granularity, and that work will still exist, but resolving the unknowns produced additional work items that would either augment or invalidate other work items. I wonder if that’s just that particular client, however, as they’re very fond of the vanity metrics in Jira.

Ralph Case:

I wasn't familiar with Hubbard, but this reminds me of my own experience.

In my first management role, I was on a team with a big estimation problem. There were many unknowns, and it was frustratingly difficult for the team to know when they would be done. On the other hand, I was struck by how much depended on us being right. The Marketing folks needed to buy ad space in advance. The Sales team needed to plan their activities and reach out to major customers. The Manufacturing team had to plan when to buy parts inventory and ramp up production. None of these stakeholders would be satisfied with the engineers saying "It's too hard to say."

Instead of asking the teams for estimates and holding them accountable for meeting them, we asked the teams for ranges of dates that they felt comfortable with. When will you be done if things go well? When will you be done if there are unforeseen problems? We didn't try to measure the confidence to 50% or 80%. We just let each team determine their own range. Some teams were much more confident in their abilities to estimate their work. They had done similar things before. They had narrower ranges. Other teams working on newer features were more wary about what could go wrong. They had larger ranges. The project managers had to think differently, but they were able to pick a ship date with an acceptable level of risk.
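
This range-based approach lends itself to a simple simulation. Here's a minimal sketch of how per-team ranges could be combined into a ship date at a chosen risk level, assuming (purely for illustration) that each team's finish date is uniformly distributed across its stated range and that the project ships when the slowest team finishes; the team names and numbers are invented, not from the project described above:

```python
# Hypothetical sketch: combining per-team date ranges into a ship date.
# Team names, ranges, and the uniform distribution are illustrative
# assumptions, not details from the comment above.
import random

# (optimistic_weeks, pessimistic_weeks) until each team is done
team_ranges = {
    "platform": (6, 8),      # done similar work before: narrow range
    "new-feature": (5, 14),  # newer territory: wide range
    "integration": (7, 10),
}

def simulate_ship_weeks(trials=100_000):
    """Sample finish times; the project ships when the slowest team finishes."""
    samples = [
        max(random.uniform(lo, hi) for lo, hi in team_ranges.values())
        for _ in range(trials)
    ]
    return sorted(samples)

weeks = simulate_ship_weeks()
for pct in (50, 80, 95):
    print(f"{pct}% chance of shipping within {weeks[int(len(weeks) * pct / 100) - 1]:.1f} weeks")
```

The wide range on the newest work dominates the tail, which mirrors the point above: the project managers don't need each team's confidence calibrated to 50% or 80%, just honest ranges they can aggregate.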

The estimates could have been better, but that would have taken more time. That time would be better spent doing the work and learning from what went wrong than doing more planning to anticipate better what might go wrong.

The overall project was large - months, not days. So, the next step was to have the teams think about indicators in their work. How can we tell now whether we're tracking toward the short end of the timeline or the long end of the timeline? The closer they get to done, the more confidence they should have in the estimated dates.
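
Continuing the sketch above, an indicator-driven reassessment would just be narrowing a team's range and re-running the simulation (again, the numbers are invented):

```python
# Hypothetical update: mid-project indicators suggest new-feature is
# tracking toward the long end of its range, so narrow it accordingly.
team_ranges["new-feature"] = (11, 14)
weeks = simulate_ship_weeks()
print(f"Revised 80% ship week: {weeks[int(len(weeks) * 0.8) - 1]:.1f}")
```

If the revised 80% date is no longer acceptable, that's the trigger for the re-scoping discussions described next.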

Regularly looking at the indicators and reassessing the risk allowed us to make the re-scoping changes to the project plan needed to reduce it. Should we move a component from one team to another that had similar skills and was running ahead of schedule? Should we cut a feature with an unacceptable level of risk? Should we skip some "low-priority" testing? (I know, I know…) Should we switch to the simpler design that was rejected because of issues less important than shipping on time? It's critical that these discussions be blameless and focused on delivering the whole product. But they're exactly the discussions that were needed.
