Creating Better Metrics

A recent article in The Economist pays tribute to three of the finest graphics from the last two centuries: Florence Nightingale's graphic displaying the effect of disease during the Crimean War; Charles Joseph Minard's map showing Napoleon's disastrous invasion of Russia; and William Playfair's chart comparing the price of wheat to the "weekly wages of a good mechanic" over 250 years. It was Playfair who essentially invented the use of graphics to display statistical information:
[William Playfair] was the first in a series of economists, statisticians and social reformers who wanted to use data not only to inform but also to persuade and even campaign—and who understood that when the eye comprehends, the heart often follows.
The article also references the work of Edward Tufte, who has been enormously influential in the field of statistical graphics. In his book Beautiful Evidence, Tufte lays out six "fundamental principles of analytical design":
Show comparisons
The fundamental analytical act in statistical reasoning is to answer the question “Compared with what?”

Show causality

Show multiple variables
The world we seek to understand is profoundly multivariate.

Integrate evidence
Completely integrate words, numbers, images, diagrams.

Document the evidence
Provide a detailed title, indicate the authors and sponsors, document the data sources, show complete measurement scales, point out relevant issues.

Content Counts Most of All
Analytical presentations ultimately stand or fall depending on the quality, relevance, and integrity of their content.
Knowing the principles behind good graphics, can agile practitioners apply them to present metrics more effectively? Better yet, can the principles be used to help identify and create more meaningful metrics? In chapter eight of The Visual Display of Quantitative Information, Tufte applies his design principles to identify patterns of good graphics; one of these is the "small multiple" - a large number of similar images placed next to one another to enable comparisons. A useful application of this pattern shows end-of-sprint burn-up charts for multiple scrum teams, all working together on a single product:

Small multiple chart: three teams, three sprints


Assuming that these teams have roughly the same sizes, sprint schedules, and assigned story units for each sprint, this graphic reveals a wealth of insights: unsurprisingly, all the teams struggled during the first sprint; Team A has consistent velocity, but consistently over-commits, requiring stories to be moved to later sprints; Team B has inconsistent velocity; Team C had some trouble early on, but now appears to be on track. Evaluating the performance of a team is often a difficult task for managers, but such apples-to-apples comparisons can help leaders within an organization ask, and find answers to, the all-important question: "Compared with what?"
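The same comparisons can be made numerically before they are drawn. The sketch below is purely illustrative - the team names, point values, and the summarize helper are all invented for this example - but it shows how per-team sprint histories (points committed, points completed) might be reduced to the two figures a small-multiple chart makes visible at a glance: typical velocity and how close the latest sprint came to its commitment.

```python
# Hypothetical end-of-sprint data for three teams on one product:
# (points committed, points completed) per sprint. All numbers are invented.
sprints = {
    "Team A": [(20, 12), (20, 15), (20, 15)],  # consistent velocity, over-commits
    "Team B": [(15, 8), (15, 14), (15, 9)],    # inconsistent velocity
    "Team C": [(18, 7), (18, 13), (18, 17)],   # rough start, now on track
}

def summarize(history):
    """Return (mean completed points, completion ratio of the last sprint)."""
    mean_done = sum(done for _, done in history) / len(history)
    committed, done = history[-1]
    return round(mean_done, 1), round(done / committed, 2)

for team, history in sprints.items():
    mean_done, last_ratio = summarize(history)
    print(f"{team}: mean velocity {mean_done}, last-sprint completion {last_ratio}")
```

A table like this is a starting point, not a verdict; as with the chart itself, the numbers should prompt questions ("Why does Team A never finish what it commits to?") rather than answer them.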

Of course, metrics are simply a tool, and - like any tool - can be misused. But the innovative application of graphical design principles can help agile teams create better metrics which, when taken with a grain of salt, can enable agile leaders to find and fix problems within their project teams.



Community comments

  • Quite some assumptions

    by Martin Schapendonk,

Hi, you make a lot of assumptions (single product, same team size, sprint schedule, assigned story units). I doubt you will ever find a situation that allows for a meaningful comparison between teams.

Furthermore, aren't you afraid of suboptimization? The business value (= the product) is delivered by all three teams combined. Suboptimization at the team level might not lead to the desired result from a product owner's perspective.

  • Re: Quite some assumptions

    by Kurt Christensen,

    Thanks for the critical comments. There are indeed a lot of assumptions rolled into the above comparison; let me give two different responses to your questions:

    1) I'm currently working with a group of teams on a large product, and in this case all the assumptions are actually valid: roughly the same team sizes working in functional silos that have roughly the same level of complexity. Of course, comparing teams in this way can give insights that are not always obvious. If Team C has a bad sprint, was there a problem across *all* the teams? Does one team seem to consistently have a problem (e.g., not finishing stories) that isn't a problem for the other teams? Charts like this can help identify issues, but they should be the start - not the end - of an investigation.

    2) This is just one example of a "small multiple" chart; my real intention here was just to show how Tufte's work can be applied to invent new, more meaningful metrics for software development teams. I would *love* for other readers to post examples of metrics that they've created which go beyond the simple burndown chart.

  • Re: Quite some assumptions

    by Martin Schapendonk,

Thanks for the extra info. I don't have any other examples at hand, but I'll let you know if I come across any.
