
Larry Maccherone on Cognitive Bias, Decision Making and Metrics


1. Thank you for coming to talk to InfoQ. Can you firstly give me your name and tell me a little bit about yourself?

Thank you for having me, Katherine, and thank you to InfoQ. I am Larry Maccherone, the Director of Analytics and Research for Rally Software. I ended up in that position via a circuitous route, actually, one that goes through the 2009 version of this very conference. I gave a talk there on Agile metrics, nervously, worried that people would throw me out because the Agile community had largely thrown out metrics at that point, but it did not turn out that way. I was actually given three job offers as a result of that talk, and that is how I ended up where I am now, at Rally.


2. Where did your interest in Metrics come from?

I started my first business while I was still an undergrad and took it to 80 employees and 20 million a year in sales. When I got to about 25 employees, I realized that my intuition, my qualitative insight into what was really going on, was insufficient to predict what the best choices would be, so I started to use quantitative means to complement my qualitative insight. Essentially, I fell in love with that. I actually did a spin-out from my first company that was very much oriented around the way we grew that first company.


3. Fascinating. We were discussing earlier this whole idea of decision-making which is something that interests you. So, firstly, just give me a quick summary of what interests you about decision-making. What are the concepts you are thinking about there?

The concepts. So, as humans we have evolved a certain way of making decisions that is very pattern-matching oriented, and this is a necessity for survival. If you were to analyze, before crossing the street, the physics of the cars moving and where you should place your first foot and all the rest, by the time you figured it all out, "boom", you would be run over and you would be dead. So we, as humans, pattern-match for almost all of the decisions we make, and we don't do "best fit" pattern matching, we do "first fit" pattern matching, and we do it on about 5% of the information, because it would actually take longer to absorb the other 95%. And this leads to cognitive biases. Cognitive bias frequently leads us to look for evidence that reinforces the opinion we already had to begin with and not actually see past it. Now, the good news is that with the right tooling, training and understanding, you can actually get past this in a business context. Social psychologists say that as we live our lives it is nearly impossible to get past cognitive biases, but businesses can put structures in place that allow them to do so.

Katherine: You said there was an example, a tragic example, that perhaps you could elaborate on to give us some idea of how this works.

Right, and it is a bit sensitive because people's lives were lost because of the bad decision in this case, and I don't want to make it sound like every decision you make in the business world is a life or death decision, but it could be a life or death decision for your business. So it is important to make good decisions. This example is the space shuttle Challenger launch, the one that blew up shortly after launch. Many people don't know that the night before that launch a group of engineers put together a presentation for management to try to convince them not to launch. The visuals and the approach they used in that presentation were largely responsible for their failure to convince management to make the right decision, because the data was there to back up their worry; they just didn't present it in a way that was convincing enough.

So, another aspect of good decision-making, especially good decision-making with data, is storytelling and the way you present it. I have this little mnemonic, "What? - So what? - Now what?", and I actually got it from one of my employees, Sean Melody. He brought it with him: I was describing the way I go about putting together an effective visualization and he said, "Oh, I know what that is called," and gave me this mnemonic. Anyway, the way it applies to the space shuttle Challenger situation is this: the "What?" was that O-rings had failed, seemingly at random, in previous launches, and they had records of this. Those records were chronological, and that is the way they presented them. It turns out that they also had temperature information on the visualization they were presenting, and that was the key element they needed to bring to management's attention. It was not the chronology, it was the temperature.

So, rather than organize the data chronologically, they needed to organize it by temperature, and once you did that, you could clearly see the trend: on colder days, O-ring failure was more likely. They were predicting a temperature of 31 to 35 degrees at launch, which is very rare for Cape Canaveral, Florida, basically where we are here, and that very low forecast for the next day was why they thought it was unsafe to launch the Challenger. So, the "What?" in that case was that data. The "So what?", which they failed to convey effectively, was that at lower temperatures there was a higher likelihood of failure. And then the "Now what?" would have been what you do with that information: what decision do you make? If you forecast that it is going to blow up, then there is one obvious choice. In business it is not quite so obvious.


4. What are the common misunderstandings or the patterns of ways ... with this concept? I assume that you have been trying to pass this on and ... right. How do you try to teach this, and what are some of the common misunderstandings you notice in others when you are training this particular concept?

Well, there are two ways that I try to pass this, or the learning from this, along. One is: in the products that we build at Rally Software, we evaluate our products in this "What? - So what? - Now what?" way. So, for instance, telling you that your velocity is 20 is not very useful. Is that good? Is that bad? You need to know what it compares to, so "Compared to what?" is essentially the number one question we ask in the "So what?" category. "Compared to what?", in this case, could be "What was it last month or last week?", in other words, "What's the trend?". "Compared to what?" can also be "What is it compared to the industry in terms of productivity?", and that's not necessarily a safe comparison with story points, so you would have to be careful about that particular one. But there are others where you could compare to the industry. So, "Compared to what?" is a very good way to get an understanding of the significance of a metric. Then the "Now what?" is forecasting.

So we're adding things like Monte Carlo simulation and other capabilities that really allow you to do "What if?" analysis and evaluate multiple scenarios. The quality of a decision is driven more by the quantity of alternatives you consider than by the reason you give for the choice you made. People want to back up and defend the decision. They say, "Oh, the reason I decided this is because of x, y and z," and that goes in one ear and out the other for me. I have trained myself to ignore that. What I want to know is what other alternatives you considered and what forecasting methodology you used to predict what those alternatives would result in; it's the evaluation and comparison of those forecasted results that will lead you to the best decisions. So, A: we have baked this concept into the product by evolving our metrics capability, and B: at conferences like this we talk about it; we have blogs, white papers and materials, and we actually have a very large group of coaches that go out. We just launched a workshop that essentially delivers this content and helps people make better decisions with their data, especially in a software engineering environment.


5. If you didn’t have the tool and you were in a small team and you were trying to think about a process to use to bring about good decisions, what would be some core activities or a little something that they could do in that environment to improve the decision-making that they have got?

So, as I mentioned, the quality of your decision is frequently a function of the quantity of alternatives you consider. So, have a safe time where you can just throw alternatives on the table; no alternative is too crazy. The other thing that's really key, and this is a subtle nuance, is that it is very important to understand what your values are. Great products frequently come out of extremes of values. For instance, the original iPhone didn't have multitasking, and everyone predicted that multitasking was a necessary feature for a smartphone and that without it the phone would fail. We all know the iPhone was a huge success. The extreme thing that they valued in that product, though, was battery life. They actually did the work to create multitasking on the iPhone but determined that it would be a negative for battery life, and so they didn't ship it in the product.

They explicitly removed it from the product because they extremely valued battery life; they understood that the value of battery life was so much more important than the value of the checkbox of this feature called multitasking. So, really understand what you value, and don't be afraid to have extreme values. One of the problems with group decision-making is that when you get people together, they tend to put milquetoast, mediocre values on the table, and you really have to work hard to say, "No, this one thing is so much more important, or so much less important, than these other things."


6. How do you determine when you are heading into that beige area and how do you then push yourselves to extremes a little in that process? What would be some of the techniques that you might suggest for a smaller team?

The first step is to be aware of this risk. The second step is to not use a linear scale when you are evaluating this. Saying that battery life is a ten and saying that multitasking is an eight makes it sound like there is only a 20% difference between those two but it would be better to use an exponential scale and say that it is ten to the ten and ten to the eight in that case.
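The arithmetic behind that point is easy to check. A quick sketch, reusing the battery-life and multitasking scores from the iPhone example as purely illustrative numbers:

```python
# Linear scale: battery life scores 10, multitasking scores 8.
# The two values look close together.
linear_ratio = 10 / 8        # 1.25x

# Exponential scale: the same scores become exponents, 10**10 versus 10**8.
# Now the same two scores express a 100x difference in importance.
exp_ratio = 10**10 / 10**8

print(linear_ratio, exp_ratio)  # 1.25 100.0
```

The scores themselves stay small and easy to assign; only the interpretation changes, which is what makes the extreme valuation visible.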


7. When you go through the process initially you said ‘get alternatives’, are you looking from a team perspective where people come up with ideas or are looking for trying to pull out data? Which one would you suggest or is it a combination?

Well, as a data person it is hard for me to say this, but you have to start without the data sometimes. I think of quantitative insight as complementary to qualitative insight, and in the case of identifying alternatives, creativity and hearing from multiple people are the key. Then, when you go to evaluate the forecast that comes from each of those alternatives, that's when good quantitative analysis capabilities come into play.


8. And what are some of the criteria you might use to give more of a quantitative analysis on the qualitative stuff that you have developed?

Well, there are some risks associated with using quantitative analysis. So one of the criteria, the most important one actually, is to "do no harm", so to speak. I have this content called "The Seven Deadly Sins of Agile Measurement", and I am giving most of that content in the first talk I am giving this morning. Essentially, it is a way to avoid the evil side of metrics. Any time you use quantitative analysis, there's a tendency for some bad habits to come into play; folks will game the metrics. So you really need to avoid going to the dark side of metrics. That's one criterion that I have, and it is probably the most important one: do no harm.

The next one is to get folks to understand that a little bit of data, a little bit of number crunching, is better than no number crunching. Just because the data isn't perfect, or you think you don't have enough of it, doesn't mean you should hold back; you almost always have enough of it to gain some insight, and you have to get comfortable with doing that. We are very comfortable with taking partial qualitative information into account in decision-making, but we inherently resist having anything but near-perfect models in a quantitative analysis, and that's wrong. You should move forward even with imperfect models, because the decisions you make with them are largely better than the ones you make with no quantitative analysis.


9. I think you can see the hesitancy that people might have: not wanting to take an extreme position, and getting worried about a light metrics approach because you might make some error in your perception. So, can you argue a little for why you really do believe that a little is better?

Let me give you an example. This comes from Douglas Hubbard, actually, the author of "How to Measure Anything", so I'm borrowing his example. Let's say you have a crate in front of you, and you know that the percentage of green balls versus red balls in that crate could vary uniformly from 1% green to 100% green, and I ask you this question: "Is it mostly green?" Without any information, you have a 50-50 chance of getting that right. Now, what if I allow you to pull one ball from the crate and see its color? Let's say that color is green and you say, "It's mostly green." With one bit of information, you've now gone from a 50-50 chance of getting that question right to a 75% chance of getting it right. I actually have a little Monte Carlo simulation (it is nine lines of CoffeeScript code, up on JSFiddle, and I can share it with anyone who wants it) that shows you have improved your chances from 50-50 to 75%. Making 75% decisions in the business world is much better than making 50% decisions.
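Larry's nine-line CoffeeScript isn't reproduced in the interview, but the same Monte Carlo check can be sketched in Python (this is my own re-implementation of the idea, not his code):

```python
import random

random.seed(42)

def one_trial():
    # The true fraction of green balls is uniformly distributed over (0, 1).
    p_green = random.random()
    # Draw a single ball; it is green with probability p_green.
    drew_green = random.random() < p_green
    # Guess "mostly green" iff the drawn ball was green.
    # The guess is correct when it matches the true majority color.
    return drew_green == (p_green > 0.5)

trials = 100_000
hit_rate = sum(one_trial() for _ in range(trials)) / trials
print(hit_rate)  # close to 0.75, versus 0.50 with no sample at all
```

The 75% also falls out analytically: the chance of a correct guess is the integral of max(p, 1 - p) over p from 0 to 1, which is 3/4.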


10. That is a good point. So, of your seven deadly sins of metrics, you may have mentioned this already, but what would be the one that is the biggest focus?

So, sin number 1 is sort of a meta-sin. C.S. Lewis said that courage is essentially the meta-virtue, because at the testing point all virtues are essentially determined by courage. Think about it: honesty is a virtue, right? Well, when it's hard to be honest, it takes courage to actually be honest. Sin number 1 is like that for us. It is using metrics as a lever to directly drive behavior; instead, metrics need to be entirely "feedback for self-improvement". And I will give you a couple of examples, if you want, to elaborate on this. So, this is a fitness tracker. It's actually a Fitbit-brand fitness tracker, but this would apply to any fitness tracker. Have you ever worn one of these?

Katherine: No.

You are completely thin and you would never need one.

Katherine: You don’t need one for yoga.

I have a little bit of a spare tire to work off. So, actually, here at Disney I have been traveling around and I've got over 20,000 steps a couple of days, so it's great. But when I'm at home and I'm not at Disney, I tie it to the dog's tail. He is a very energetic puppy, right? I get lots of steps that way, right? And that's really helping me work off my belly, right? No, it's not. You laugh and consider that absurd, and that's the point: the purpose of even wearing this thing is getting metrics that help me improve, that help me get better and guide my decision-making. Do I sit down on the couch, or do I go for a walk? Well, I have only had 10,000 steps today, I'd better go for a walk. Tying it to the dog's tail defeats the purpose of even carrying the thing, of buying the thing in the first place. But in the business world we live with that. We actually tolerate it, and that's completely unacceptable, and so that's sin number 1. When you get to the point where you feel like the metric is what matters, as opposed to the behavior improvement, then you have already committed sin number 1. You might as well start over.


11. Do you think the courage requires something like equanimity in the sense that having the bravery to face that data and what it says?

Yes. So you picked up on the meta there a little bit; I used a metaphor that could be applied in a couple of ways. Yes, it does take courage, especially considering the history of metrics in software engineering. There have been a lot of incidents, a lot of examples, of really bad applications of metrics that have led to discord in organizations and led to all of your best people leaving, and we definitely don't want to do that. So it takes courage to reintroduce metrics to an Agile world that had largely rejected them, for good reason.


12. [...] How would they present it if they’re surrounded by a highly political environment, say, in media?

Katherine's full question: In the space of, say, you’ve got the smaller team and they have decided they are going to have this kind of courage to see things for what they are and use data to do so. Often, when you try to bring that to management’s attention, managers may be in a very political environment, and they can sometimes see metrics as something that you use to support your position as opposed to drive your decision. So, how would you advise teams to present the metrics that they have discovered? If they have gone through that process you suggested and they have got something that is quite valuable, how would they present it if they’re surrounded by a highly political environment, say, in media?

Back to your world, yes. Great, great, great question. There are two aspects to that answer. First, you must recognize as a team that your boss, your managers, your stakeholders have a job to do. You have to have faith that they want to do what's right and do well at their job, and that is almost always the case. They want to do well and they want to do better. Occasionally you have an evil manager, and this doesn't apply in those circumstances. Once you accept that, your mindset becomes, "Well, let's help them do it in a way that isn't damaging to us," and then you work with them. Management has a right to see what is going on; they have a right to visibility and transparency on your part, and in exchange for giving them that, you get back autonomy and self-direction. It's a great trade-off for a team if they're willing to make it. And it isn't a one-way street; if you think of it as a one-way street ("we have autonomy, you have no right to see what is going on"), then you've already asked for more than you deserve, essentially.


13. [...] Is there anything generic that you could give as advice?

Katherine's full question: So, the way that you are approaching the managers in this scenario that I have invented randomly here, you are coming back to that story you talked about earlier, where the engineers didn’t present the data in a particular way to management, and then management weren’t able to make the key decisions or the right kinds of decisions. In this little scenario, tracking it back to a software engineering environment, a small team has discovered something. What advice would you give on how they would present the information, or what techniques could they use to present the information in an effective way? Is there anything generic that you could give as advice?

Well, for that example I do not know enough about the social and org-chart dynamics of the situation to really comment; that example was really given for visualization. But I think I can give you what you are looking for, in a slightly different way. So, 1. Understand that you want to work with management and give them transparency so they can make better decisions. 2. There's a way to create, to visualize, to think about your own team's metrics that gives management this visibility but doesn't give them the sense that they should use it as a club to beat you up with. Let me give you a very specific example involving Monte Carlo simulation; this is better done with a visual, but we will try to work without it here.

So, think about a burn-up chart in an Agile environment.

It has a burn-up series, whose slope is essentially the velocity, and then there is a scope series that you're trying to hit. If you make a projection with the average velocity over time, through the last point on the burn-up series, to where it hits the scope series, then that is the forecast of when you are likely to finish. Well, that immediately gets you into an exchange of a date. You give management a date, and they assume all risk of achieving that date is transferred to the team. It is immediately an adversarial relationship, right then and there. So, rather than do that, take that exact same visualization, that exact same data, and rather than give a single date, give a probability distribution of when the work is likely to finish.

And you need to use a technique like Monte Carlo simulation; the one that we are prototyping at Rally also has Markov chain analysis and explicit risks, so it is a pretty evolved model. But anyway, the simple model just gives a probability distribution of when the work is likely to finish, and the math and science behind that is not the cool part. The really cool part is that instead of delivering a single date, by delivering a probability distribution you fundamentally change the nature of the relationship between the team and the stakeholders. The manager says, "I want a date," and the team says, "We do not know when it is going to finish," which is the truth. "Well, you are the team; you know better than me. If you know, tell us." "We do not know either." "So what am I supposed to do with a probability distribution? I need a date." "Well, we have lots of dates. What risk can you tolerate?" "I can tolerate no risk" would be the original answer. And the team says, "Well, we have that date. It is this one way out here, in the tail, way out there in the future." "Well, I don't like that date." "Well, now you have some levers that you can play with. How much of that risk can you really tolerate?" "I do not really know. Let's go talk to management together, or marketing together, to figure out what missing a target delivery date is going to cost them. Will they have ramped up marketing around it, or a conference around it, so we really have to hit that date, or is it OK if we miss it? How much risk can we actually tolerate?" So it changes the nature of the relationship, so that management and the team are working together to essentially narrow this probability distribution and move it to the left.
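A toy version of that simpler model can be sketched as follows. The historical velocities and remaining scope below are invented for illustration, and this is not Rally's actual implementation (which, as noted, also layers in Markov chain analysis and explicit risks); resampling past velocities is just one common way to run such a simulation:

```python
import random

random.seed(7)

# Assumed inputs: velocities observed in past iterations, and the
# remaining scope in story points (both made up for this sketch).
past_velocities = [18, 22, 15, 25, 20, 17]
remaining_points = 200

def sprints_to_finish():
    # Resample historical velocities until the remaining scope is burned up.
    left, sprints = remaining_points, 0
    while left > 0:
        left -= random.choice(past_velocities)
        sprints += 1
    return sprints

# Run many simulated futures and sort them to get a distribution.
runs = sorted(sprints_to_finish() for _ in range(10_000))

# Instead of one date, report a finish estimate at several risk levels.
for pct in (50, 85, 95):
    n = runs[int(len(runs) * pct / 100) - 1]
    print(f"{pct}% confident: done within {n} sprints")
```

The output is exactly the lever described above: a stakeholder who can tolerate no risk reads off the far tail, while one who can tolerate more risk picks an earlier, less certain date.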

Katherine: So it is encouraging negotiation and conversation and collaboration.

It is discouraging what I would call negotiation and encouraging collaboration.


14. Good point. So, you mentioned a bit of what Rally’s exploring in that space because I was going to ask what’s next for you or what is next for ?

We just recently released a product around the Software Development Performance Index framework. We did research last year to extract performance metrics out of data that's gathered in the course of using an ALM tool like Rally, and we published some very interesting research correlating behaviors, attitudes and practices with outcomes using that framework. So we now have a tool in the product, called Rally Insights, that essentially allows you to conduct your own experiments and do your own research in your own environment using this framework. We have a lot of work planned for that. Right now it covers only productivity, predictability, quality and responsiveness, and those all come from the use of ALM tools. We are adding two new dimensions via a survey capability, so that you can survey your stakeholders and get stakeholder satisfaction or even customer satisfaction, plus employee satisfaction, happiness or employee engagement as some people like to call it. So we will have six dimensions in the framework to give you a better picture of what is going on.


15. And so personally what is your next adventure in this whole Lean Agile space?

I am drifting towards something along the lines of overcoming cognitive bias. I think we have done a really good job of the "What?" and the "So what?" now, and we can actually give people a very good picture of why the metrics they're looking at matter, compared to what. We have the industry's only benchmarking capability baked into the product, which really gives you a feel for how good you are compared to your peers, other organizations. The problem then is the "Now what?" How do you get people to make the right decisions based upon that data? So I'm really focused on cognitive bias; I'm reading up on it and trying to understand how to overcome it in the business world.

Katherine: Well, I look forward to interviewing you about that when you have done a bit more exploring. Thank you very much.

Thank you, Katherine. It is a pleasure being here.

Oct 17, 2014
