On agile projects, we use a team’s historical velocity to plan how much work they can get done in upcoming iterations. Velocity is the rate at which a team is able to deliver software in a period of time. Delivered software is measured in Story Points, the same unit of measure that we use to estimate the work items in the first place.
For a particular iteration, the velocity is computed as the sum of the Story Points for all the work items that were completed in that iteration. Note that there is no partial credit. Work items either satisfy all their acceptance criteria or they do not.
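As a minimal sketch of that rule (the data shapes here are my own illustration, not VersionOne’s data model), an iteration’s velocity counts only fully accepted items:

```python
# Hypothetical work items for one iteration: each carries its Story Point
# estimate and whether it satisfied ALL of its acceptance criteria.
work_items = [
    {"points": 5, "accepted": True},
    {"points": 3, "accepted": True},
    {"points": 8, "accepted": False},  # incomplete: earns no partial credit
]

def iteration_velocity(items):
    # Sum Story Points only for items whose acceptance criteria all passed.
    return sum(item["points"] for item in items if item["accepted"])

print(iteration_velocity(work_items))  # 8, not 16: the 8-point item counts for 0
```

The all-or-nothing rule is what keeps the metric honest: a nearly-done 8-point item contributes exactly as much as an untouched one.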
The team’s velocity is the trailing average of the three previous iterations, and it is what we use to make commitments for the next iteration. Using the trailing average results in a more reliable commitment because it washes out the peaks and valleys of productivity variance, whether those come from under- or over-estimated work items or from circumstances beyond the team’s control.
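The trailing average itself is simple arithmetic. A minimal sketch (the function name and the sample numbers are illustrative, not taken from the chart below):

```python
def target_velocity(velocities, window=3):
    """Trailing average of the most recent `window` iteration velocities."""
    recent = velocities[-window:]      # last `window` completed iterations
    return sum(recent) / len(recent)

# Illustrative history with a spike in the final week: the average dilutes
# the outlier, so the next commitment isn't anchored to one unusual week.
history = [40.0, 44.0, 46.0, 60.0]
print(target_velocity(history))  # 50.0 -- versus naively committing to 60
```

Committing to the trailing average (50 points) rather than the last raw velocity (60 points) is exactly the smoothing effect the paragraph above describes.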
Continuously measuring a team’s velocity is a critical component in being able to reliably commit to delivery of a scope of functionality based on what the team has historically demonstrated that they can accomplish.
The following VersionOne velocity trend diagram shows the week-over-week velocity for an agile development team for an 8-week period. Each vertical bar represents the amount of work (total Story Points) that the team completed in a week.
The target estimate is the total number of Story Points the team should be able to deliver based on the trailing average of the previous 3 iterations.
What can we learn from a picture like this? And how can we use it to make smarter management decisions?
The short answer is that this is a very information-dense picture that tells us a lot. The longer answer is that it doesn’t tell us the whole picture. We’ll have to ask the team some targeted questions to know the full story, but this gives us a great starting point to begin the analysis of the team’s health.
One observation about this chart is that while each iteration’s velocity may vary widely, the target estimates still cluster relatively tightly around the overall average velocity across all the iterations. That indicates that while the team may need to turn the dial up and down to deal with changing project demands, they have some nominal rate at which they can reasonably be expected to sustain delivery over time. It also affirms that the trailing average of velocities is far more reliable than simply using the velocity of the previous iteration.
Another thing that stands out is the major spike in velocity in Iteration 2.4. This is even more interesting considering that average velocity had been trending downward to that point. What happened here? The target velocity for Iteration 2.4 was 49.5. This particular team happened to be coming up against a deadline imposed by their client. Even though the team was working at a sustainable pace, there were still several items in the backlog which the client stated they wanted by that deadline. So the development team had two choices: (1) ask the client to prioritize the 49.5 points which they were confident they could deliver, or (2) turn up the dial and try to get it all done. The team chose Option 2. They added some team members and worked extra hours. The result of their hard work is shown on the chart: they were able to crank out a whopping 75.5 points in one week’s time! But what was the cost of this spike in productivity?
The cost seems obvious: the team paid for the spike with diminished velocity in the iterations that followed. In Iteration 2.5 they were only able to muster 39.5 points. That dip was fairly predictable, but what’s perhaps more interesting is what happened after that. The Iteration 2.4 velocity pushed the target velocity up from 49.5 to 56, resulting in the largest negative deviation from target in the measured history of the project! But it also raised the bar on what the team was expected to accomplish. Notice that in Iteration 2.6 the team makes an attempt to match the new target, getting close at 51 points against a target of 54. But that gain is quickly wiped out in Iteration 2.7, when the team attains 48.5 points against a target of 55.
What does this mean? It means a velocity of 55 is not sustainable for this team. At least not right now. Not by simply working harder or adding more team members. This team will need to make changes to the way they work in order to see sustainable velocity gains.
What I found really interesting was what happened in Iteration 2.8. By now, the anomalous Iteration 2.4 velocity of 75.5 no longer factors into the trailing 3-iteration average, but the subsequent Iteration 2.5 velocity of 39.5 factors in prominently, bringing the target velocity down to 46.5. Coincidentally (?) the team actually matched this number exactly for that sprint. By then the team was settling back into their normal sustainable flow of production.
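To make the arithmetic of the last few paragraphs concrete, we can replay the trailing average against the velocities cited above. This is a sketch: the chart’s reported targets (55, 46.5) appear to be rounded, so the computed values land within a fraction of a point rather than matching exactly.

```python
def target_velocity(velocities, window=3):
    """Trailing average of the most recent `window` iteration velocities."""
    recent = velocities[-window:]
    return sum(recent) / len(recent)

# Velocities cited in the article for Iterations 2.4 through 2.7.
v_2_4, v_2_5, v_2_6, v_2_7 = 75.5, 39.5, 51.0, 48.5

# Target for Iteration 2.7: trailing average of 2.4-2.6 (article: 55).
print(round(target_velocity([v_2_4, v_2_5, v_2_6]), 1))  # 55.3

# Target for Iteration 2.8, once the 75.5 outlier rolls off the window:
# trailing average of 2.5-2.7 (article: 46.5).
print(round(target_velocity([v_2_5, v_2_6, v_2_7]), 1))  # 46.3
```

Running the numbers shows the mechanism plainly: one outlier week inflates the target for exactly three iterations, then drops out of the window and the target snaps back to the team’s sustainable level.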
At some level, all of this is common sense. When you work a team unnaturally hard, they will not be able to maintain that level for very long. Using a quantitative management approach and agile development methods allows us to actually measure the impact of management decisions on the team’s output. In this case, the decision to overwork the team in Iteration 2.4 was a bad one. Everyone knew it at the time, but they did it anyway. In the end, it didn’t really benefit the client all that much. There were still plenty of things the client wanted in the following iterations that they just didn’t get because of the team’s diminished capacity and recovery from Iteration 2.4.
The really good news is that this team now has the tools, methods and experience to make smarter decisions moving forward.