One of the oldest conversations on record is the discussion of how to measure effective software development. One of the most used, most abused and least understood metrics is “velocity.” Think Corvettes versus Volkswagens.
Just to keep terms straight: velocity is the sum of the estimates of the delivered and accepted features per iteration. Velocity is measured in the same units as the feature estimates, whether that is story points, days, ideal days or hours.
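The definition above can be sketched in a few lines of code. This is a minimal illustration, not a tool; the story list and point values are made up for the example.

```python
def velocity(stories):
    """Sum the estimates of stories that were delivered and accepted
    this iteration. Each story is a (points, accepted) pair."""
    return sum(points for points, accepted in stories if accepted)

# Hypothetical iteration: estimates in story points, plus acceptance status.
iteration = [
    (5, True),
    (3, True),
    (8, False),  # not accepted this iteration, so it does not count
    (2, True),
]

print(velocity(iteration))  # → 10
```

Note that the 8-point story contributes nothing: only delivered and accepted work counts, which is exactly why redefining “done” (discussed below) can move the number.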
On the one hand, velocity is a very simple measure of the speed at which teams deliver business value, and it can provide tremendous insight into a project’s progress and status. Velocity tends to stabilize over the lifecycle of a project unless the project team varies or the length of the iteration changes, which makes it a valuable tool for planning. If you, as a planner, accept that project goals and teams may change over time, velocity can be used to plan releases deep into the future. Velocity proponents claim many managers over-think the idea of velocity and add too much complexity to it.
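The planning use described above is simple arithmetic: once velocity has stabilized, divide the remaining backlog by the average recent velocity to project how many iterations are left. A minimal sketch, with hypothetical backlog and velocity numbers:

```python
import math

def iterations_remaining(backlog_points, recent_velocities):
    """Project how many more iterations the backlog needs, using the
    average of recent iteration velocities as the planning rate."""
    avg = sum(recent_velocities) / len(recent_velocities)
    return math.ceil(backlog_points / avg)

# 120 points of backlog left; velocity has stabilized around 20.
print(iterations_remaining(120, [18, 22, 20]))  # → 6
```

The projection is only as good as the assumption behind it: if the team or the iteration length changes, the historical average no longer predicts anything.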
On the other hand, like many metrics that are attractive because of their simplicity, velocity has a dark side, observes Esther Derby in a recent post. She notes that velocity is easy to manipulate or misuse: you can alter the definition of “done” and finish more stories, and managers can use velocity as a metric to compare, reward and/or punish teams.
Velocity emphasizes the wrong attributes. It implies that if velocity isn’t continuously increasing, or is erratic, there’s a problem with the development team. That may sometimes be true, but many of the factors that affect velocity are out of the development team’s hands. The very fact that velocity tends to stabilize over time, which makes it a good predictive tool, can also punish teams whose managers expect continuous improvement. There might also be issues with how work flows to the team, or the team might be interrupted periodically with support calls and other activities that disrupt its work.
Some managers shift the focus of velocity to measure the rate at which a project is moving forward (versus how much a team is producing). This also creates issues, because it incents the team to neglect the hygienic aspects of the project in favor of maximizing the amount of code that’s written.
Still other managers use velocity to measure the rate at which a team completes work. In practice this measures how encumbered the team is by circumstances such as its knowledge levels, bureaucratic overhead, technical debt, external vetoes and other issues, and it leads to people doing busy-work just to generate a high level of activity.
It’s also easy to manipulate velocity. If you want velocity to go up, just redefine the estimates: what used to be a 2-point story is now a 5-point story.
Managers should remember that velocity does not equal productivity. Velocity measures how much a team is getting done; it doesn’t tell you whether the team should be getting that work done faster. The most effective way to gauge that is to use industry standards, such as function points, to measure productivity.
A team could continuously deliver the same amount of software and still be unproductive. Why? Because the team might find out later that the code it delivered didn’t provide any value. At the end of the day, it’s the satisfaction of the customers that ultimately determines success.
An important part of measuring value is analyzing the quality of the software developed. Introducing transparency into application development, maintenance and sourcing helps ensure the most effective and productive software development process. It also reduces the potential for software-related business disruption and risk, while reducing IT costs.
And while the Corvette might get the girls, the Volkswagen is often the better solution over the long haul.