Q&A with Larry Maccherone on joining AgileCraft, Large Data Sets and Monte Carlo Forecasting

 

Larry Maccherone is a researcher who has focused on collecting and presenting real metrics for agile teams, and on using analytics to help teams get better at forecasting in uncertain environments. He is recognized as a thought leader in analytics and metrics for software engineering in general and agile software development in particular.

He regularly presents the results of research into areas such as the impact of cognitive bias on decision making in software projects, and the measurable outcomes that organizations get from adopting agile development practices. Some of his presentations can be found here on InfoQ.

He recently moved from Rally Software and joined tools vendor AgileCraft as their Director of Analytics. He discussed the move with InfoQ:

InfoQ: Please tell us about AgileCraft and why you joined them.

Larry Maccherone: The concept for AgileCraft is to sit on top of tools like Jira, Rally, VersionOne, HP and TFS. The advantage there is that a lot of organizations already have a heterogeneous tool environment: some teams use Jira, some teams use Rally, and it's very difficult to switch everyone over to one tool. But without switching everyone over to one tool, you don't get the visibility that you need for cross-team projects, for seeing the whole portfolio and so on. AgileCraft basically exists to solve that problem. We pull data out of those tools to give you burn charts, Monte Carlo forecasts, dependency graphs and all the other information you would want in order to manage the whole portfolio. In addition to that, we also have program, portfolio and enterprise planning capability in the tool, so you can do that work directly in AgileCraft and then push the pieces that are relevant to the teams down to them, so that they have visibility of what the plan is and you can maintain alignment. It's a push-down/roll-up model.

InfoQ: The products you listed are the major agile lifecycle management tools. Does it work with others as well? Does it have an API-driven interface?

Larry: RTC is one of the latest ones we are adding, and we have a limited integration with LeanKit at this point as well. RTC is going live with a customer this week, so we'll have that one listed pretty soon.

InfoQ: That’s the product; but you are known as the numbers man – why did you join them?

Larry: I am here because at Rally I could only look at Rally data; I could only look at users using Rally, and that was a very specific population. In order to take metrics to the next level, I need to be able to look at a variety of different tools, because there's a different culture of users that uses Jira, or LeanKit, or Rally. LeanKit and Rally are probably opposite ends of the spectrum of Kanban versus Scrum: we saw very few Kanban practices in use with Rally, so any time I spoke at a Lean Kanban conference I had to put an asterisk in front of my analysis. This move gives me access to all of that data.

Also, and this is probably the biggest one eventually, at Rally I pretty much focused on collecting and analyzing data at the team level, because Rally is primarily a team-level tool, even though it now has some portfolio management and program-level capability, just like VersionOne does, and even Jira lists their Jira Portfolio product. But nobody has done any deep research trying to quantify program performance. We did team performance analysis at Rally, and did it really well, and we are going to duplicate that here at AgileCraft. But we are also going to go beyond the team level: we will try to identify program-level performance, and eventually even portfolio- and enterprise-level performance.

There are different things that seem to matter at the program and portfolio level: the value of alignment weighed against the cost of coordination. That's the single most important formula for the performance of a portfolio, or at least for the processes a portfolio is using. When the cost of coordination is high, you have dependencies that slow you down and information not making it to the places it's needed; if you don't have continuous integration, then coordinating between an API and its callers is very costly, with lots of tests and lots of experiments that need to happen. All of that comes into play when you have multiple teams. The value of alignment comes from the fact that you can't produce a product of any significant size with just a single team; if the teams are not all working in the same direction, then you have work happening at cross purposes. So making sure everybody is working towards the same goals, and that you all swarm on the goal that's going to produce the most value first, before you start on other things, to keep your WIP low, is a big factor in the value of alignment.

The value of alignment and the cost of coordination are sort of the two biggies.

InfoQ: You’ve also been working on some other analysis – can you tell us about that?

Larry: The thing I have been working on is the softer side of using data to make decisions, and I've been focused in two different areas. The first, and this is the big talk that got lots of great reviews and drew a standing-room-only crowd at the Agile Conference, I titled "What? So What? Now What?" to represent the idea. Data is just the "what", but it doesn't really help you gain any insight unless you know what it compares to and why it matters; that's the "so what". And even then a visualization is not valuable until it helps you make decisions, and that's the "now what". Good visualization escalates you through those three stages. If you want to overcome the cognitive biases of your executives, or overcome the inertia of a team or an organization, then you need a way to convince them that you are on the right path. There are a lot of interesting techniques available to help with this, and I used some of the information from Douglas Hubbard's book "How to Measure Anything". In Hubbard's work on overcoming cognitive biases there are a couple of really awesome little trainings (I don't think he invented them; I think he got them from someone else) that help you overcome certain kinds of biases. I do one of these live in one of my talks, and it's pretty revealing to folks: you pretend you are betting money instead of just making a decision. That switches on different parts of your brain, so you actually make a more rational decision than you would otherwise.

InfoQ: Would this be things like the Buy a Feature Innovation Game?

Larry: Yes, the same concept is definitely in play there. When you have money on the line, or even pretend money on the line, you are using a different part of your brain, and you avoid cognitive biases that way. That's essentially what the Douglas Hubbard training is: an exercise very similar to the Buy a Feature exercise that teaches you how to avoid those biases.

The other interesting thing is this concept that every decision is a forecast. The reason I say that is that by picking alternative A, for instance, you are forecasting that alternative A has a better outcome for you than alternatives B, C, D and E. You may not think you are forecasting; you might be using a very simple mental model like "the last time we did Scrum it didn't work out so well for us" to make your decision. But you really are making a forecast: you are forecasting that it's not going to work out well for you this time either, and that's why you are not going to do Scrum, that's why you chose alternative B. What you really need to do is get good at identifying alternatives, and then get good at evaluating how those alternatives will lead to the impacts, results and outcomes you are interested in. I have a framework for essentially building both the decision framework and the measurement system, so to speak.

And that brings me to the second topic: it's all about probabilistic and risk-based thinking. When you make decisions with qualitative information, you are perfectly willing to ignore the quality of that information; the source of that information is basically just your own experiences. You are never going to challenge that data; when it's time to make a decision, you just make the decision. But as soon as someone says "let's put some numbers around this", you immediately start to challenge it: "Oh, those numbers are not completely representative, we've got to throw them out." We work with imperfect data in our heads for qualitative information all the time, so why can't we work with somewhat imperfect, maybe even seriously flawed, data and models? They are still frequently much, much better than just winging it on your own. I use this example of how a single data point can improve your chances from fifty-fifty to seventy-five/twenty-five. Here's a perfect illustration: you have a box with a hundred marbles in it, and every ratio is equally likely, from zero green marbles and 100 red marbles, to 100 green marbles and zero red marbles, or anything in between. If I were to ask you to tell me whether the box has more red or more green, you'd have a fifty-fifty chance of getting that right. But if I allow you to pull one marble out, a single bit of information, and you guess that same color, your chances immediately go up from fifty-fifty to seventy-five/twenty-five. That's the sort of improvement that models can give you.

They are not going to give you a hundred-percent-sure answer, but getting people comfortable dealing with these somewhat flawed models and data, which are still better than your own mental models, is what I have been talking about for the last couple of years, and a lot of it has to do with very simple, easy-to-understand probabilistic concepts. The Monte Carlo simulation that proves the improvement is nine lines long. I write it live in front of the audience and explain how much better off you are with one bit of information. Later I do another, twenty-three-line Monte Carlo that evaluates two alternative portfolio investments; it's very much like Buy a Feature, except that I use a Monte Carlo simulation to guide people through it.
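
As a rough illustration, here is a minimal Python sketch of the marble simulation he describes; his live-coded version isn't reproduced in the interview, so the structure and names here are my own:

```python
import random

TRIALS = 100_000
wins = 0
for _ in range(TRIALS):
    greens = random.randint(0, 100)              # every green/red ratio is equally likely
    drew_green = random.random() < greens / 100  # pull a single marble out of the box
    majority_green = greens > 50                 # the answer we are trying to guess
    if drew_green == majority_green:             # strategy: guess the color you drew
        wins += 1

print(f"Win rate guessing the drawn color: {wins / TRIALS:.2f}")  # ~0.75
```

Without the drawn marble, any guess wins fifty percent of the time; the simulation shows the single draw lifts that to roughly seventy-five percent, as claimed.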

InfoQ: What are some of the bits of data in a portfolio that people would look for to help them make better decisions?

Larry: Well, I propose a very simple probabilistic distribution for evaluating the value of your portfolio investments. Assigning a single specific value to a portfolio item is unrealistic; it's much more feasible to break it down into three different probabilities. There's a fifty-percent-likely outcome in the middle: what's the median value you think that portfolio item is likely to bring? Then, what's the worst-case scenario, with twenty-five percent probability, and what's the best-case scenario, with twenty-five percent probability? These give you risk profiles, essentially. Some things have a very low worst case, maybe negative, where you lose money, and a very high best case. So how do you compare one portfolio investment that is middle of the road, where you are guaranteed to make a little bit of money but there is no way to make a lot of money, against one where there is a twenty-five percent chance to make a ton of money but a seventy-five percent chance to make less than if you went with the first alternative?

And it’s a very simple formula and I have a lot of examples that show how you do it, in fact one of the examples is a football example, so coaches rarely go for 4th down but there’s a couple of coaches who in the last few years have been using statistical models to sort of help them decide whether to go for on 4th down. Those coaches go for it at least four times as often as coaches who aren’t using the statistical model. And they are better off with this they are some of the best coaches in American football Nick Saban and Bill Belichcik are two examples of coaches you may have heard of.

The exact same formula I was just talking about, with the fifty percent likely case and the twenty-five percent worst and best cases, is the formula used to calculate the ideal times to go for it on fourth down, and you come up with much better decisions. So it's not one bit of information, it's a little more than one bit; you have to make a little more of an investment in investigation.

InfoQ: You have to look at the likely revenue or likely benefit and the likely cost, compare them, and come up with the three options: worst case, likely case, best case. Then what do you do with that?

Larry: Take a two-investment example. For the first investment, the worst case is that the million you invest makes no money and you lose the whole million. The likely case is that you get two million dollars back, which means you make another million on it. And the best-case scenario, if you hit that tiny window and beat everyone else to market, is eight million.

The second investment is much more conservative: there's no way we'll make more than three million in income, which is a two-million-dollar profit, but there's also no way we are going to lose on it, because we've done it before. It's just not a huge market.

So apply the simple formula to the first investment: the probability of the worst case, 0.25, times the value of the worst case, negative one million, equals negative 0.25 million. Then 0.5 times one million is 0.5 million, and 0.25 times eight million is two million. That sums to 2.25 million. The second investment comes out to only 1.75 million. Fear-based decision making would tell you to go with the more conservative approach, strategy two, but most of the time strategy one is going to turn out to be better.
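
Here is a sketch of that calculation in Python, using the figures from this example. The strategy-two profile is inferred from the quoted $1.75M total, so treat those numbers as an assumption:

```python
import random

# Three-point profit profiles in $M: (worst, likely, best),
# weighted 25% / 50% / 25% as described above.
STRATEGY_1 = (-1.0, 1.0, 8.0)  # risky: can lose the whole million, big upside
STRATEGY_2 = (1.0, 2.0, 2.0)   # conservative: assumed values matching the $1.75M total

def expected_value(profile):
    worst, likely, best = profile
    return 0.25 * worst + 0.50 * likely + 0.25 * best

def monte_carlo(profile, trials=100_000):
    # The same comparison done by sampling, as in the live demo.
    samples = random.choices(profile, weights=(0.25, 0.50, 0.25), k=trials)
    return sum(samples) / trials

print(expected_value(STRATEGY_1), expected_value(STRATEGY_2))  # 2.25 1.75
print(monte_carlo(STRATEGY_1))                                 # ~2.25
```

The closed-form expected value and the Monte Carlo agree here; the sampling version becomes more useful once the outcome distributions are more complicated than three points.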

This is a different kind of game theory. In general, when people talk about game theory they talk about positions and moves, getting from one position to another, the cost of the moves and the value of the positions.

This is more like strategy game theory. Here is an image from the strategy game Settlers of Catan.

The formula is baked into this tool. To evaluate the value of this spot on the board, you have a ten, which has three dots, a five, which has four dots, and an eight, which has five dots. Three plus four plus five is twelve, so the value of this spot is twelve. The dots represent the probability that you will roll those numbers, and then the value is essentially how valuable brick versus wood versus sheep is for you. Sheep is usually not worth a lot in Catan, because everybody usually has more sheep than they can possibly use; frequently there is very little brick or very little wood in the game, which makes those more valuable. All of these factors matter in evaluating the strength of that one position by adding those dots up.
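
A small sketch of that evaluation, in Python; the resource weights below are illustrative assumptions, not values from his tool:

```python
# Dots on a Catan number token = ways to roll that number with two dice.
DOTS = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}

# Hypothetical resource weights: scarce brick and wood worth more than plentiful sheep.
WEIGHT = {"brick": 1.5, "wood": 1.5, "sheep": 0.5, "wheat": 1.0, "ore": 1.0}

def spot_value(tiles):
    """tiles: (token_number, resource) pairs adjacent to a settlement spot."""
    return sum(DOTS[n] * WEIGHT[r] for n, r in tiles)

# The spot from the example: a 10 (3 dots), a 5 (4 dots) and an 8 (5 dots).
# Unweighted, the dots sum to 12; weighting by resource scarcity shifts the value.
print(spot_value([(10, "wood"), (5, "brick"), (8, "sheep")]))  # 13.0
```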

InfoQ: Larry, thanks for taking the time to talk to InfoQ today – some really interesting points about how we make decisions and tradeoffs.

About the Interviewee

Larry Maccherone is an accomplished author and highly-rated speaker. He serves as AgileCraft's Director of Analytics and Research. Prior to that, he led the Insights product line at Rally Software. His core area of expertise is drawing interesting insights from data that allow people to make better decisions.
