The Prioritization Divide: With Numbers or Without?

Posted by Alex Adamopoulos and Paul Dolman-Darrall on Nov 19, 2012

How do you prioritize when you start development? Do you assign each story a \$ value, based on expected revenue or savings? This may work when you’re discussing a major feature, but when you drill down to moving a dialog box to aid user experience, how do you determine its value?

Or do you select stories according to Must, Should, Could, Won’t and then assign them a relative value with story points or labels? It’s not difficult to do, but does it really select the most critical or highest value out of all the stories that are crammed into Must?

There are many methods, but a basic divide runs through the heart of prioritization – do you do it with numbers, or without? There are arguments for and against both positions, but instead of examining these, people tend to fall into one camp or the other naturally. Once there, they can become quickly entrenched in the belief that the other camp is foolishly mistaken.

Those who criticise the numbers approach say: ‘Those number-crunchers, they spend so much time finessing their estimates they don’t get any actual work done. By the time they’ve calculated a Cost of Delay they’re delayed already.’

Those who prefer numbers mock the alternative: ‘High Value / low effort? What kind of subjective, gut-feel way is that to run a business? Why not just throw the cards into the air and develop them in the order you pick them up?’

Consider:

1. What prioritisation methods have you used on different projects? List the main advantages and disadvantages as they appeared to you.

The most common model today

Although there are numerous prioritization models in play, one of the most commonly mentioned on discussion boards and in interviews involves assigning relative points. Essentially the team get together and assess the value of a whole batch of stories. The lowest-value story on the table becomes the baseline, and other stories are given points relative to it. Next the developers assign relative effort points: easiest story on the table, 1 point; next story three times as hard? 3 points. Some teams use T-shirt sizes, Fibonacci numbers and planning poker, but these are just frills.

The method makes it easy to create a relative priority order. Take the values, divide them by effort and voila! you have your order. As the team begin to work, they establish a velocity. Meeting overheads go down because they can simply pick from the pre-agreed list according to their velocity. Every few weeks the team can re-evaluate whether the order is correct or if values and estimates have changed based on the team’s performance.
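The value-over-effort ordering described above is trivial to automate. A minimal Python sketch (the story names and point values are invented for illustration):

```python
# Minimal sketch of the relative-points ordering described above.
# The stories and their point values are invented for illustration.

def prioritize(stories):
    """Sort stories by value points per effort point, highest first."""
    return sorted(stories, key=lambda s: s["value"] / s["effort"], reverse=True)

backlog = [
    {"name": "search filters",  "value": 3,  "effort": 3},   # ratio 1.0
    {"name": "one-click login", "value": 10, "effort": 2},   # ratio 5.0
    {"name": "export to CSV",   "value": 4,  "effort": 1},   # ratio 4.0
]

for story in prioritize(backlog):
    print(story["name"], story["value"] / story["effort"])
```

Every few weeks the points can be revisited and the list simply re-sorted, which is exactly why meeting overheads fall.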

Great! Doesn’t this sound like the perfect combination? You get a comparative figure but it’s nice and simple so everyone can grasp and enjoy using it …

There are a couple of basic problems with relative figures – both of which stem from the fact that they are not based on ‘real’ numbers.

My value of 10 points divided by 2 effort points gives me 5. This is clearly better than a value of 3 points divided by 3 points, which gives me 1. But nothing in this relative term tells me that my value means anything. If a value of 10 equates to only \$20, while 2 effort points equates to two days of a developer’s time then the project is not going to make me any money. In fact I will go bust very fast.
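To make the trap concrete, here is the same arithmetic with real figures plugged in, using the \$20 value from the example above and an assumed (hypothetical) \$400 fully loaded cost per developer day:

```python
# Hedged sketch: the relative ratio (value points / effort points) says
# "5", but plugging in real figures shows the story loses money.
# The $20 value is the article's own illustration; the $400/day
# developer cost is an assumed rate.

DEV_DAY_COST = 400          # assumed fully loaded cost of one developer day

value_points, effort_points = 10, 2
relative_score = value_points / effort_points   # 5.0 -- looks great

real_value_dollars = 20                  # what 10 value points actually equals
real_cost_dollars = 2 * DEV_DAY_COST     # 2 effort points == 2 developer days

profit = real_value_dollars - real_cost_dollars
print(relative_score, profit)            # 5.0 -780: high score, negative return
```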

Secondly, relative value hides a whole host of assumptions, which are never laid out for us to examine. Is this figure based on revenue? Does it take into account urgency? Does it ignore risk? Because the label hides the work that should go into deciding whether something is valuable or not, it turns out to be just as subjective a measure as saying ‘it’s all important’ or ‘the customer is going to love this, I just know he will’.

Observe:

1. If you have used this method, go back to a previous project and find the original value and effort labels. Now look for the real figures. On a very granular level this can be difficult, but at a feature or epic level, there are probably user figures and thus \$ values assigned against a part of a product. Compare these and look for any large variations – a feature considered valuable but which turned out not to be used, for example.
2. What assumptions lay behind the original value assignation? Would it have been different if these had been explicit or you had a real \$ figure at the beginning?
3. What lessons can you draw from the comparison? What actions can you take to improve accuracy in the future? Don’t forget actions you can take now: if a feature is not being used, delete it!

The Value of Numbers

‘When benefits are not quantified at all, assume there aren’t any.’¹

That’s the numbers attitude in a sentence. If you’re putting a feature into the backlog, it should have a proper justification, and that justification needs to take account of the different elements that make up a label like ‘value’: future and present revenue, savings, urgency, risk and learning. How you express the figure – as a \$ quantity, cost of delay or cost/benefit ratio – is up to you.

Sometimes the calculation is simple. Our new system will automate data entry, so we will save the salaries of the 5 data entry clerks currently doing it manually. Their salaries form the \$ value of our system to the company.

You can usually make a good estimate, even if you have to make a few assumptions. Good design is often referred to as ‘intangible’, but improvement to user experience should be measurable. Do people get to the registration page and then fail to complete registration? Do we think that redesigning this page will improve registration by 50%? How much is each registration worth on average to the company? With these questions we can assign a \$ value to a redesign – even if the change is quite granular (moving a dialog box or changing filters).
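That back-of-envelope calculation might be sketched as follows; every input (traffic, completion rate, uplift, \$ per registration) is an assumption to be recorded and later tested against reality:

```python
# Sketch of the registration-redesign calculation in the paragraph above.
# All inputs (visitor counts, completion rate, uplift, value per
# registration) are illustrative assumptions, not real figures.

def redesign_value(monthly_visitors, completion_rate, uplift, value_per_reg):
    """Extra monthly $ value from improving registration completion by `uplift`."""
    current = monthly_visitors * completion_rate
    improved = monthly_visitors * completion_rate * (1 + uplift)
    return (improved - current) * value_per_reg

# 10,000 visitors reach the page, 20% complete today, the redesign is
# expected to lift completion by 50%, each registration is worth $8.
extra = redesign_value(10_000, 0.20, 0.50, 8)
print(extra)
```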

Look in these places to try and find what type of value your product is delivering:

• Increasing revenue

• Savings (these might be direct or equivalent, i.e. if we don’t have this feature it will cost us \$x to do it another way)

• Protecting revenue (i.e. without this feature we can expect revenue to drop)

• Protecting against costs (i.e. without this feature we will be exposed to certain costs)

Why use numbers?

• Numbers make the argument more objective. Rather than arguing about whose idea is better or more important, numbers help change the conversation. Now it’s clear that ideas are competing against one another based on their value to the organisation.
• Numbers can speed decision-making. Once you have an economic framework, many trade-offs can be expressed as decision rules. For example, you might decide that if the cost of delay is more than two times greater than the cost of the resource required to avoid that delay, the team should be empowered to incur that cost. That might mean hiring more staff or investing in automation. Power has been devolved to the team, but managers retain overall control because they set the framework within which decisions are made.
• Numbers don’t need to be difficult. There’s no need to strive after a spurious exactitude. The point is to avoid big errors, not to sweat a decision between a project with a \$3,000 cost of delay and one with a \$3,100 cost of delay.

In general, where you can come up with a number, it is worth trying to do so.
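The cost-of-delay decision rule mentioned above can even be written down as a few lines of code; the two-times threshold comes from the example, while the weekly figures are illustrative:

```python
# Sketch of the decision rule described above: if the cost of delay is
# more than twice the cost of the resource that would avoid the delay,
# the team may incur that cost without further approval.
# The threshold factor and the weekly figures are illustrative.

THRESHOLD = 2.0

def team_may_spend(cost_of_delay_per_week, avoidance_cost_per_week):
    """True if the pre-agreed rule lets the team incur the cost themselves."""
    return cost_of_delay_per_week > THRESHOLD * avoidance_cost_per_week

# Delay costs $15,000/week; a contractor who removes it costs $4,000/week.
print(team_may_spend(15_000, 4_000))  # team may hire without escalating
print(team_may_spend(6_000, 4_000))   # below threshold: escalate to management
```

Because managers approved the threshold in advance, the team can act the moment a delay appears rather than waiting for a meeting.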

Act:

1. For a current project, pick the top three items and try to work out a real value for them. Don’t forget to consider four factors that make up ‘value’ – financial (revenue and savings), cost, learning and risk.
2. Check the figures with business owners and keep them; you will need to compare them with reality once the product has launched.

3. If you can come up with a figure for what the feature or project is worth to the business, then you can also calculate a cost of delay. Are there any decision rules that this might help you make? For example, might this justify overtime or investment in automation? Try to get the decision rule approved in advance so that you can act swiftly if anything occurs to delay the product.

The problem with numbers

It seems simple: use numbers more often. But it’s not quite that straightforward, because there are a few very real disadvantages to using numbers of which you should be wary.

• Numbers feel like unassailable ground. That spreadsheet of future earnings looks so convincing and it took you so long to set up (plus it has these nifty little macros)… People often fall in love with their projections – so much so that they are reluctant to take new information on board. Instead they cling to the spreadsheet like a drowning man and refuse to adapt.
• It’s easy to game the system. Most development teams know what is required for a project to win approval at a phase gate. It does not take much to just tweak the assumptions (increase conversion by 0.5%, reduce costs by 5%) in order to help the project over the hurdle.
• All numbers are based on assumptions. Your figures are only as good as your estimates and guesses – and there may be serious flaws in your assumptions.

Fortunately, two practices mitigate these weaknesses:

• Transparency and review. You have to record your assumptions and share them widely. You thought that your system would save the salaries of 10 people, but the HR Manager has pointed out some redundancy costs that you need to take into account… You need to review your assumptions as you go, inputting real data as it arrives.
• Test early, test often. The only way to gather real data is to get feedback on your model. This doesn’t mean just pushing out a prototype and seeing if people like it. Instead you need to test the assumptions on which your business model is built. Many companies at the moment assume they need to be on Facebook. They rarely ask what it will do for them – increase customer involvement? Reach a new audience? Even more rarely do they try to quantify these to see if the cost of having a staff member permanently responding to Facebook posts is justified.

Should you always use numbers?

There are only a couple of situations in which numbers are not the most useful technique. Unfortunately, these occasions tend to be fairly common in software development…

Innovation

When you’re launching a well-understood product into a familiar, established market, you can have fairly firm assumptions. If you’re about to set up a deli in Manhattan, then your model is going to be pretty much like every other sandwich shop’s. That doesn’t mean you’ll succeed (maybe you don’t make tasty sandwiches), but it does mean you have a fairly good idea of operating margin, daily sales and likely costs.

In an entirely new market your assumptions are so uncertain as to be almost valueless. When Eric Ries set up IMVU, the team had no idea how many people would want a 3D avatar. Customers themselves didn’t know, which meant that focus groups and surveys were utterly useless. At this point it makes more sense to record your hypotheses and then test them one by one, using numbers or not. These need to include a business element as well as a technical one. Ries, for example, had assumed that users would not want to bother setting up new contacts and that therefore the software needed to integrate with existing platforms. This turned out to be wrong. Testing that assumption was more important than the fact that sales were not as healthy as Ries had hoped, but it was the low numbers that had alerted the team to the problem.

The team don’t own the numbers

Many commentators talk about the process of estimation as waste; the more time that estimation takes, the bigger the waste. There are some circumstances in which this is true – and unfortunately those circumstances are not uncommon in IT…

Picture this: a project management team spend three months creating a list of requirements and then assign a numerical value to each. They hand the list over to the development team and ask how long the project will take. After due consideration the development team come back and announce they think a good estimate would be two years. The project manager flings her hands up in horror: ‘That’s way too long! We need it in 6 months!’

In this example, the estimation process is a waste. The team needed to establish the true constraint up front – a six-month timeframe. This is not very unusual, but it permits the team to do something very useful – establish what the fixed cost of delay is and then make decisions based on that, whether they choose to reduce scope or increase capacity.
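Under the scenario above (a two-year estimate against a six-month constraint), the scope-versus-capacity trade-off is simple arithmetic, assuming, simplistically, that delivery scales linearly:

```python
# Sketch of the scope-vs-capacity trade-off above. With a fixed six-month
# deadline and a two-year estimate at current capacity, the team can
# compute how much scope must go, or how much capacity must be added.
# The linear scaling of delivery with capacity is a simplifying assumption.

estimate_months = 24
deadline_months = 6

scope_that_fits = deadline_months / estimate_months   # fraction deliverable in time
extra_capacity = estimate_months / deadline_months    # capacity multiplier needed

print(f"{scope_that_fits:.0%} of scope fits the deadline")
print(f"{extra_capacity:.0f}x current capacity needed to keep full scope")
```

Either answer becomes an economic decision once a fixed cost of delay is attached to missing the six-month date.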

Apply learning:

• For a new project think hard about which prioritisation approach would be the most effective. If you decide to go without numbers then ensure you have a suite of tests which will provide early feedback on your assumptions. Write those assumptions down and share them with the team. Keep them visible as the work progresses and change them as you go. If there are no changes then this is a sign that you could have used numbers up front because you had firm assumptions. Start assigning \$ or cost of delay values and check your prioritisation – it’s a good, rigorous discipline.
• If you choose to use numbers then make your assumptions explicit and have them visible for review. Keep testing your figures and your assumptions. A willingness-to-buy study can be as simple as emailing all your contacts to describe the product and ask them to reply if they’re interested. This is free to do, and although it’s almost certainly an over-estimation of interest (these are warm contacts and they’re not being asked to input credit card details) it’s better than a guess.

Conclusion: love the figures of failure

Numbers help – except when they don’t. Oh what a helpful statement to take away with you. Let’s try and boil it down to something you can use as a rule of thumb – most teams should be translating their ‘value’ labels into real \$ figures more often than they do. Why?

A cost of delay, a revenue projection or a predicted cost saving may not be right. Indeed, you might get them wildly wrong. But their clarity is designed to help you focus your efforts on a visible, explicit and objective set of assumptions, which you then test through early and frequent feedback.

If you believed that 1 in 10 people would buy your product, but in a test only 1 in 100 agree to try it, you know that there is something very wrong. You might decide the product is a bad idea and kill it. You might learn something important and pivot to take the product in a new direction. Or you might decide that the 99 people just didn’t understand your genius, continue anyway and raise the risk level…

The numbers don’t tell you what to do. Their purpose is to provide an objective check on your assumptions. If you receive 10 emails saying people love your product, it’s easy to feel that everything is going well. Only when you compare that number (10 out of 1,000) to your sales projections, can you place the good feedback in context.
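That comparison is a one-liner; the 10-out-of-1,000 observation and the 1-in-10 projection are the figures from the example above:

```python
# Sketch of the comparison in the paragraph above: raw counts of positive
# feedback only mean something when set against the projection.
# The observed counts and the projected rate come from the article's example.

def conversion_gap(positive, reached, projected_rate):
    """Observed rate minus projected rate; negative means under-performing."""
    return positive / reached - projected_rate

# 10 enthusiastic emails out of 1,000 people reached, vs a 1-in-10 projection.
gap = conversion_gap(10, 1_000, 0.10)
print(round(gap, 2))  # -0.09: nine percentage points below projection
```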

Transparency, feedback and objective decision-making are helped by numbers.

That’s why they might just help you avoid the painful sight of a roomful of managers looking around for someone to blame when the ‘game-changing project’ sinks like the lead balloon you always feared it might be.

1. DeMarco, T., Lister, T., 2003. Waltzing with Bears: Managing Risk on Software Projects. Dorset House Publishing.

Paul Dolman-Darrall is an IT director known for developing people and successfully leading large global teams across various change programs for some of the largest companies in the world, and has contributed to government strategy. At Emergn, in his role as Executive Vice President, he has helped launch Value, Flow, Quality (VFQ) Education, a work-based learning program to help practitioners achieve immediate business results through the application of skills in practice. The program is designed to help IT departments and business leaders who rely on technology to put in place smarter, more effective work practices to facilitate change, generate significant return on investment and inspire innovation in practice.

Alex Adamopoulos is an executive with more than 25 years’ experience in global services organizations. He has extensive international experience with a deep understanding of culture, work, and life ethics especially in relation to establishing alignment and crossing cultural barriers. Over the years, Alex has brought know-how and practical business experience to companies that want to excel and compete globally. With a focus on performance measurement, business value and bottom-line profitability, Alex has successfully applied working models and practices to accelerate the solutions and strategies of companies to drive results.

Comment by Seb Rose: Yes and no.

I'm very surprised about your claim for "the most common model today." It's certainly not one that I've ever seen in use, nor have I seen it advocated widely. However, I agree with your conclusion that it "hides a whole host of assumptions, which are never laid out for us to examine." I think you could be more clear: Don't do it this way!

I can agree with your conclusion, though. Make some guesses. Use them to guide your development. Gather feedback/metrics early and often to compare with your guesses. Use this to modify your planning. All popularised under the banner of "Lean Startup."

And as DeMarco and Lister said in the same book: “If there is no risk in your next project, don’t do it.”