Practical Blog

Knowledge Platform

Revising Estimation

Many teams feel the need to go over finished stories and update their story points to reflect the amount of effort actually needed to complete them. The general notion is that it's a good idea to fix the original estimates in order to reflect the "true" velocity of the team, which later on will result in better estimates.

However, as much as this sounds reasonable, actually doing it is counterproductive.

To explain why, we first need to go back to basics and better understand the purpose of velocity tracking and the estimation process as a whole.

Velocity

First, the definition (taken from the Agile Alliance website):

At the end of each iteration, the team adds up effort estimates associated with user stories that were completed during that iteration. This total is called velocity.

Knowing velocity, the team can compute (or revise) an estimate of how long the project will take to complete, based on the estimates associated with remaining user stories and assuming that velocity over the remaining iterations will remain approximately the same. This is generally an accurate prediction, even though rarely a precise one.
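The arithmetic behind this definition is simple enough to sketch in a few lines of Python. The story names and numbers below are hypothetical, just to make the mechanics concrete:

```python
import math

# Velocity: sum of the estimates of stories *completed* in an iteration.
completed = {"login page": 5, "password reset": 3, "audit log": 8}
velocity = sum(completed.values())  # 16 points per iteration

# The remaining duration is extrapolated by assuming velocity stays roughly the same.
remaining_points = 40  # sum of estimates still in the backlog
iterations_left = math.ceil(remaining_points / velocity)

print(velocity, iterations_left)  # 16 points/iteration, 3 iterations left
```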

 

Reading this definition, it's important to notice that the notion of "productivity" isn't mentioned at all. Indeed, using velocity to measure a team's productivity is a misuse of the metric, especially if one tries to compare teams based on their velocity data. The reason is that the actual numbers are controlled by the team itself, so they are not an objective measure.

Ask yourself this:

If one team has a measured velocity of 20 and another a measured velocity of 40, does that have to mean that team B is working twice as fast as team A? Or is there a chance that team B just gives larger estimates? For such a comparison to be valid, the estimates of each team would need to be normalized to some standard unit of measure, and both teams would need to conform to that unit. Usually that is not done.

How about this: if a team's velocity has just increased from 10 to 20, does it mean the team has become twice as productive? Maybe the team (consciously or not) is just encountering difficulties in getting things done and reacts by increasing the estimates it gives to the same tasks (I've seen that happen), while still completing more or less the same amount of work.

Velocity should only be used as a predictive tool. It's a good technique for extrapolating how much a team can finish in a given amount of time, helping to answer how much work can be finished in the allocated time (or how much time we need to allocate). This rests on two assumptions. The explicit one is that the team's velocity will not change drastically within the time frame we have. The hidden one (which is also important to understand) is that while most teams are bad at estimating, they do tend to be consistent about it. That is, a team will in general either overestimate or underestimate (or average out), but will do the same for all stories. So if we measure the team's velocity and leave the estimates as they are, the error factor in the team's estimates is baked into the velocity itself, and we can safely extrapolate how much work is going to be finished.
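A small sketch (hypothetical numbers) shows why a consistent estimation error does not hurt the forecast. Suppose the team consistently estimates every story at twice its "true" size; since velocity is measured from the same inflated estimates the backlog uses, the factor cancels out:

```python
import math

def sprints_needed(remaining_points, velocity):
    """Forecast how many sprints are needed to finish the backlog."""
    return math.ceil(remaining_points / velocity)

true_sizes = [3, 5, 2, 8]                # actual effort, in some ideal unit
estimates = [s * 2 for s in true_sizes]  # what the team actually records (2x)

velocity = sum(estimates)                # points completed last sprint: 36
backlog = [10, 6, 20]                    # remaining stories, same 2x scale: 36

print(sprints_needed(sum(backlog), velocity))  # 1 sprint, the correct answer
```

The forecast comes out the same as it would with perfectly accurate estimates (18 remaining points against a velocity of 18), because the error appears on both sides of the division.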

Changing the Past

So, going over past estimates and "fixing" them is actually counterproductive. Once we do that, we break the hidden assumption of a consistent error. Estimates for the finished stories now reflect an error factor caused by our ability to track past information (i.e. to know how much effort was invested in stories), while future stories reflect a different error factor (i.e. the accuracy of the team's estimates). In most cases those two are not the same. (I think it's reasonably safe to assume that teams are better at tracking effort invested than at estimating future work.)
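Continuing the hypothetical numbers from before, here is what "fixing" the past does to the forecast. The revised past estimates now match actual effort, but the backlog still carries the team's usual 2x inflation, so the two error factors no longer cancel:

```python
import math

true_effort = [3, 5, 2, 8]   # actual work done last sprint
revised_past = true_effort   # estimates "fixed" to match reality: 18 points
backlog = [10, 6, 20]        # future stories, still estimated at 2x: 36 points

velocity = sum(revised_past)                      # measured velocity drops to 18
forecast = math.ceil(sum(backlog) / velocity)

print(forecast)  # 2 sprints, although only one sprint of real work remains
```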

So no, revising past estimates is not a good idea.

Reflecting on Estimation Errors

All that being said, I do think that reviewing estimation errors is very useful. In fact, I suggest that teams periodically (every 2-3 sprints) invest some time during the retrospective to focus on a few (3-4) stories where the estimate was very far off, and do a root cause analysis. The main goal of this practice is not to get better at estimation, but general improvement. As it happens, stories with very wrong estimates hold the highest potential for learning.

A wrong estimate is usually caused by something very unexpected: something new the team encountered, something unexpected that happened, or just something the team failed to recognize during the initial planning. Whatever the cause, it's always an excellent opportunity for learning. Maybe the new things represent a new technology the team needs to learn. Maybe the unexpected things are caused by changes in the context which are going to repeat themselves and therefore require some thought on how to deal with them. Or maybe it's just things the team needs to learn to pay attention to so they won't get missed. The nice thing is that when the mistake is big, it's usually easier to pinpoint its root cause, and therefore easier to learn from and fix.
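Picking the retrospective candidates can be mechanical. A hypothetical helper (the story data below is made up) that surfaces the stories whose actual effort diverged most from their estimate, in either direction:

```python
def worst_estimates(stories, n=3):
    """Return the n stories with the largest estimate/actual divergence."""
    def error_ratio(story):
        est, actual = story["estimate"], story["actual"]
        # Symmetric ratio, so big over- and underestimates both rank high.
        return max(est, actual) / min(est, actual)
    return sorted(stories, key=error_ratio, reverse=True)[:n]

done = [
    {"name": "search", "estimate": 5, "actual": 21},
    {"name": "export", "estimate": 8, "actual": 8},
    {"name": "signup", "estimate": 3, "actual": 13},
    {"name": "emails", "estimate": 2, "actual": 3},
]

for story in worst_estimates(done, n=2):
    print(story["name"])  # signup, then search
```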

So remember: it is important to reflect on your past and try to learn from it. But, as always, trying to cover up your past (estimation) mistakes is counterproductive.

 

 

Image by vectorjuice on freepik.com