What Agile Metrics Should We Report?
What metrics should be reported to management in an Agile software development environment?
On the Scrum Development mailing list, Charles Bradley asks:
A department has 3 dev teams that all report to one Senior Manager. That Senior Manager reports along with a few others to a VP.
What metrics, if any, would you suggest to report from the team to the Sr. Mgr?
What metrics, if any, would you suggest to report from the Sr. Mgr up to the VP?
George Dinwiddie proposes a straightforward solution: ask management what metrics they would find valuable. In this case, however, management doesn't yet know what kinds of metrics it wants. Managers are wary of asking for metrics that break the principles and spirit of Scrum, and they are asking Bradley for advice.
Are there any kinds of metrics that should definitely not be reported to management in an Agile development environment? George Dinwiddie writes:
In general, don't report raw numbers. Those removed from the work often won't have the context to interpret them reasonably. And, upper managers don't have the time to do the analysis for themselves.
Specifically, according to Dinwiddie, team velocity is one number that should not be reported to management. Estimates about the future should be sent up rather than the velocity number itself.
Ron Jeffries writes that any metric whose intent is to compare productivity between teams is a bad idea:
Whatever you measure here, even if it were revenue dollars received, cannot (I do not mean should not) be used to assess the teams' comparative "productivity". It will not work. It cannot work. I mean cannot in the sense that a rock cannot hang in the air unsupported. I mean cannot in the sense that a cat cannot build a bridge.
A car salesman sells 30 cars a month. A realtor sells three houses a month. Which is more productive, and what should the other one do to become as productive as the first?
So what kinds of metrics should be sent up through the management chain? One approach, suggested by "karatasfamily", is to consider the questions that management will have about any software development project, Agile or not, then provide metrics that give insight into those questions. Some sample questions might be:
- What are you doing?
- When will you be done?
- Are you on schedule?
- Are you on budget?
With this philosophy, the fact that a project is an Agile project is largely hidden from management. The metrics reporting structure for Agile projects is designed to conform to management's current reporting formats, rather than management having to learn a whole new way of doing business. This attitude towards metrics is similar to the one described in Ken Schwaber's book Agile Project Management with Scrum.
The least-wasteful metric, though, is the one that never has to be created in the first place. In his book How to Measure Anything, Douglas W. Hubbard says,
If a measurement happens at all, it is because it must have some conceivable effect on decision and behavior. If we can't identify what decisions could be affected by a proposed measurement and how that measurement could change them, then the measurement simply has no value.
By this standard, every metric proposed—Agile or not—should be challenged with the question, "What is the decision this metric is supposed to support?" If no decisions would be affected by the metric, then the metric is useless and should not be reported.
Two further guidelines can be used to screen any candidate metric:
1. Is it an outcome or an activity metric? Outcome metrics should be preferred.
2. What can I learn from the metric to help me improve? A good metric is one which generates knowledge for improvement.
John Seddon's Freedom from Command and Control is a good source of further information on these guidelines.
You shouldn't report progress metrics
A request for metrics to keep an eye on the progress of an iteration is a clear sign that the iteration is too long.
Every morning the manager doesn't show up at the stand-up meeting, send a snapshot of the task board taken with your phone. That would be sufficient.
Don't forget code quality metrics
Though I am loath to call these "agile" metrics, they are quality metrics. However, try adding features to a highly coupled code base (or have a member of your team do it) and tell me that coupling and cohesion aren't the true measure of your codebase's, and hence your team's, agility. Test code coverage and the like are all useful, but not directly indicative of the software's quality (ease of maintenance, the ability to change code in a library without breaking other code through coupling's unintended consequences, readability, robustness). I'd recommend javacss for readability, and jester for robustness.
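The coupling the comment warns about can be measured cheaply, even without the tools it names. Below is a minimal sketch (the module names and sources are invented for illustration) that approximates each module's efferent coupling, i.e. how many distinct modules it depends on, by counting its imports:

```python
import ast

def efferent_coupling(sources):
    """Approximate efferent coupling (Ce): for each module, count the
    distinct modules it imports. `sources` maps module name -> source text."""
    coupling = {}
    for name, code in sources.items():
        deps = set()
        for node in ast.walk(ast.parse(code)):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        coupling[name] = len(deps)
    return coupling

print(efferent_coupling({
    "billing": "import db\nimport email\nfrom models import Invoice",
    "utils":   "import math",
}))  # → {'billing': 3, 'utils': 1}
```

A real tool would also track afferent coupling (who depends on you) and watch the trend over time; a module whose count keeps climbing is exactly the "hard to change" code base the comment describes.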
In short, I think a lot of the metrics people focus on now miss the point. Most management doesn't understand what it means to have quality code, or that your team is constrained by the quality of the code it works on.
This increasing code quality is what will allow your team to become more effective over time, beyond merely becoming more acquainted with a code base. Not just during a project, but month by month and year by year, in a way that will help keep an organization efficient relative to competing organizations.
Cue the criticism of being too close to the code base... Just trying to put some of that good old-fashioned computer science back into software development.
Two types of metrics
The first type is internal metrics, like burndown charts, number of failing tests, quality index, etc. These are shared with senior management, and the goal of these numbers is to monitor, evaluate, and improve the R&D processes.
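As an illustration of the first type, a burndown series is just running subtraction; this sketch (with made-up point totals) computes the work remaining at the end of each day of an iteration:

```python
def burndown(total_points, completed_per_day):
    """Return the remaining story points at the start of the iteration
    and after each day, given points completed per day."""
    remaining = total_points
    series = [total_points]
    for done in completed_per_day:
        remaining -= done
        series.append(remaining)
    return series

print(burndown(40, [5, 8, 0, 7, 6]))  # → [40, 35, 27, 27, 20, 14]
```

Note that the flat segment (day 3) is the interesting part of the chart: the point of an internal metric like this is to prompt a conversation about what blocked progress, not to grade the team.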
We then have external metrics, based on surveys that ask our customers what they think about parts of the product and about the product as a whole. We ask questions about usability, quality, usefulness, etc., and we share the results of these surveys with upper management. This is a great way to measure what really matters: what the customer thinks about your product.
Start with goals/questions
George Dinwiddie proposes a straightforward solution: ask management what metrics they would find valuable.
George almost got it right, but instead of asking what metrics management would like to see, one should start with goals: that is, why management wants to see metrics at all, and what it would like to do with that data (e.g. track product quality, improve processes, ...). Once the goals are defined, the next step is to compile a list of questions whose answers would make it possible to track achievement of the goals. Only then should one start thinking about metrics.
This is a general outline of the metrics framework known as Goal-Question-Metric (GQM).
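As a rough illustration of the GQM idea (the goals, questions, and metrics below are invented examples, not prescriptions), the hierarchy can be sketched as a nested structure in which a metric is only ever reachable through a question, and a question only through a goal:

```python
# Goal -> questions -> metrics. A metric with no parent question has
# no place in the structure, which operationalizes Hubbard's test:
# a measurement must be able to affect a decision.
gqm = {
    "Improve release predictability": {
        "How stable is our forecast?": [
            "forecast vs. actual delivery dates",
            "scope change per iteration",
        ],
    },
    "Track product quality": {
        "Are escaped defects trending down?": [
            "defects reported by customers per release",
        ],
    },
}

def metrics_for(goal):
    """List every metric that serves a given goal."""
    return [m for metrics in gqm.get(goal, {}).values() for m in metrics]

print(metrics_for("Track product quality"))
# → ['defects reported by customers per release']
```

Traversal always starts from a goal, so a request for "a metric" with no goal behind it simply returns nothing, which is the point of the framework.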
Another very useful "start-up" guide is Linda Westfall's "12 Steps to Useful Software Metrics".
The one-liner to take away here: first decide why you need to see metrics at all, and only then decide which particular metrics would satisfy those needs.
Sarah Howe Jul 06, 2015