
Facilitating the Spread of Knowledge and Innovation in Professional Software Development


The Decision Buy-In Algorithm


Summary

John Riviello covers key aspects for success, lessons from early mistakes, signs that the decision-making process used is working effectively, and how to leverage the Analytic Hierarchy Process (AHP) to make decisions.

Bio

John Riviello created his first hypertext document on the Internet in 1996 and has been obsessed with building for the web ever since. He spends his days as an engineering Fellow at Comcast, where he leads the development of the Xfinity app. He is also a LinkedIn Learning course author and a Google Developer Expert in Web Technologies.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Riviello: The topic of my talk is something that I'm sure is applicable to everyone, because we all need to make big decisions. Then, how do we get teams to rally around those decisions? We can make a decision, but if we don't actually apply it and get others to apply it, then what's the point of making the decision in the first place? I have a lot of experience to share around this. I've been in the industry for over 25 years.

Let's start things off by playing a little game called, Let's Make a Decision. I've been writing code professionally for over 25 years at this point. A question I often get asked by software engineers who are new to the industry, or by people thinking about getting involved, going to college, or getting another degree to learn how to code, is: what programming language should I learn first? What do you think about this question? How would you respond to it? What programming language do you recommend someone learns first? I struggle with this because the answer, to me, really is: it depends. The answer from any staff engineer to any question is, it depends, and especially for this question about what programming language a new engineer should learn. That's not really useful to a new engineer, though: I need to learn something, I need to write in some programming language, I just want to try it out. I need to give an answer to that question. I've had thoughts in the past, but now it's the age of AI. I can just go to ChatGPT and ask, what programming language should I learn first as a new software engineer? ChatGPT of course will give me the correct answer, because that's what it does. The answer was, "There are several programming languages that a new software engineer could learn first, depending on their goals and interests. Here are some of the most popular options." Then it went on and gave five options. I said, "ChatGPT, I asked for one option, I didn't ask for five, please give me the best overall one." Then ChatGPT, the chatbot, says it's difficult to pick just one. I'm like, really? You're supposed to solve all my problems, ChatGPT.

I couldn't use ChatGPT to get this answer, so I went out and talked to other humans, and I made my own decision. I have decided that the best programming language for a new programmer to learn first, without a doubt, is Python. Yes, exactly. There's the response. I put a language up there and everyone had a response. You felt something. We got boos. I want you to think about how you just felt when I put Python on the screen. Maybe you felt happier, like Python's a good answer, John, good work. Maybe you're like, it's all right, but I'm suspicious of this guy standing up there thinking Python's great. Or you're just straight mad, you're booing me, I appreciate that. These are the different kinds of reactions you could have. This actually isn't my answer. This was the answer from an Ivy League professor. I'm just like, if you say that, I'll communicate that out. My answer is, it depends.

Background

I'm John Riviello. I've been at Comcast for 16 years at this point. Before that, I was doing other engineering work, but most of my work has been in web development, and recently I've been doing a lot more work in native development as well, iOS and Android, those kinds of applications. I currently lead the development of the Xfinity app at Comcast. If you are a Comcast subscriber and you're watching TV or watching Stream, that's Leslie's world. Everything else comes through the Xfinity app, and that's the world that I work in. There are a lot of teams I work with that bring their work into the app that I work on every day, the Xfinity app.

Decisions, Big and Small

These decisions that I need to make and you need to make can be decisions both big and small. That's what I want to talk to you about. Whenever these decisions present themselves, there's always the question of how to decide what the right decision is. What's the process for making the decision? And who's the one who gets to decide? There are a few approaches to this. One is to just give the most senior engineer the power to make that decision. Usually, that's me, so that's good for me. It's probably you. Just because I'm the most senior person in the room, that doesn't mean that I'm always right, though. You might always be right, but I certainly know that I'm not. Everyone's input is valuable. I was writing a lot more code as a junior engineer, so if the decision is more code oriented, I'm going to trust those that are in the trenches, writing that code day to day, eight hours a day, as opposed to me doing it a few hours here and there a few times a week. There's another option, which is an equal democratic vote. Everyone gets a say and the majority wins. This, of course, means that we're likely not going to have a unanimous decision unless it happens to turn out that way. Some people are going to be upset if the choice they voted for was not the decision in the end. There's also the question, when you're voting, of whether some people's votes should be weighted more than others, kind of like a meritocracy, but a meritocracy has its own set of issues.

There's also the concept of the wisdom of the crowds to make a decision. This has been written about for thousands of years; Aristotle was writing about it over 2,000 years ago. It seems to actually work really well. Remember when "Who Wants to Be a Millionaire" was on TV, perhaps it's still on TV? You'd ask the audience, and that was often the right answer. There's been a great deal of research around the wisdom-of-the-crowds way of making decisions and determining the best way to solve a problem. I came across a number of pieces of research. One that I liked in particular was an article in Harvard Business Review written a few years ago. It talked about how the key to this is that you need to have multiple independent, diverse judgments, and then merge them to form the best overall decision. When you do that, it is often found to be more accurate than one individual's opinion, even if it is an expert's opinion. This article went beyond just whether you get the right answer, asking what more we can uncover around this idea of the wisdom of the crowds. The researchers found that there were actually some hidden costs to blindly adopting the approach of just letting the crowds decide.

To help demonstrate what they found, we'll do another exercise. This question comes from a post on the Google testing blog from over 10 years ago. I'll caveat this by saying that I'm taking the few sentences I'm going to show you entirely out of context, but it does a great job of helping to understand this concept. In the post, there's a situation where a programmer asks a wise teacher named Testivus, "I'm ready to write some unit tests for my code. What code coverage should I aim for?" To which Testivus replies, "80% and no less," in this stern voice. He pounds his fist on the table: "The answer is 80%." Now think about what your answer would be if you were asked the same question by an engineer. This actually came up recently in my own work, with teams trying to hit new code coverage numbers and asking what the number should be. The point of the blog post is that the actual answer is much more nuanced than just picking a number. What happens when someone else gives an estimate first, especially an expert like this wise Testivus here? How does it impact others' decisions when they see that first? This is what the researchers wanted to understand in this study. They took some participants and asked them to look at someone else's decision first, and then come up with their own decision. When they did that, the participants' own estimates were pulled towards that initial estimate they saw, and this is often referred to as anchoring. This is a very common phenomenon. If you've ever bought a house, or tried to sell a house, you know that the initial listing price has a large influence on the final sale price of the home. This is what this study looked at further.

The study states that when someone forms his or her opinion impacts how they evaluate the opinions of others. If an opinion is presented upfront, then this anchoring occurs. This is not good, because we know about the wisdom of the crowds and the value it brings, and anchoring works in the opposite direction. So forming opinions first, before evaluating the opinions of others, could potentially be one way to go; it's the opposite of anchoring, which we just said is bad. The problem with this approach is that it only works if everyone agrees. More than likely, there's going to be some disagreement. Disagreement is not necessarily a bad thing; you need diverse judgments, diverse opinions. That's what actually makes the wisdom of the crowds work well. But in order for disagreement to be effectively leveraged, it has to be correctly interpreted. The article points out that this means recognizing that when there's disagreement, at least one party is wrong, and possibly everyone is.

They went ahead and did another study. They dug into this further and asked people to make a decision before seeing the decision of another participant, selected at random from the study. Participants were presented with a question, made a decision, and then saw other decisions. Some participants saw a peer decision, after they made their own, that was very closely aligned with their own decision. Other participants saw a decision that was wildly different from their own. Then they were asked to evaluate the quality of that peer decision versus their own. What the study found was that as the disagreement level increased, people evaluated the other individual's decision more harshly, even though they didn't know that person. They were thinking, that other decision is wrong. Meanwhile, their opinion of their own decision didn't budge, whether they saw one that was similar or one that was wildly different. This means that participants interpreted the disagreement to mean that the other person was wrong, not themselves. This shows that forming opinions first carries social costs along with it. Participants in the study actually thought less of the other person's estimates, and in some cases thought the other person was less ethical and intelligent. This is just like everyone who booed me when they saw Python up there. You felt something negative about a decision. Think about that. We know that we want to have these multiple diverse, independent judgments, but if you have an opinion presented upfront, anchoring occurs. If you wait, there's this social cost, where you feel differently about the individuals who gave other opinions. What do we do in this case? Neither one is ideal.

Basically, the study determined a few ways to approach this. First, to maximize accuracy, you want to form independent opinions in advance. Again, this allows you to avoid the anchoring issue. Next, the group should pre-commit to a decision-making strategy; this helps protect the group from negative social consequences when there is disagreement. Hopefully, there will be disagreement, to get some discussion going. The last thing they said was, for quantifiable questions, try to remove human judgment as much as possible. That sounds great, but how do you actually do that? I was in a situation a few years ago where the team I was leading was building a web app. It was clear at the time that we needed to pivot from the JavaScript framework that we were using to a new one. I explained the situation to my manager and why we had to do this at the time. He said, John, I trust you and the team to make this decision and make the right one, but how do you know that the process you're going to use to get to that decision is effective and the right way to go about it? He asked me to research how we would go about that process first, before we made the decision ourselves.

Analytic Hierarchy Process (AHP)

I went off into the internet to determine how other groups, other teams, other companies, other industries make decisions. During that process, I came across a number of academic papers that were trying to answer a question totally unrelated to tech, and much larger than anything I've ever had to deal with. These papers were focused on what to do when a state or a country needs to build a new power plant. This is critical infrastructure: when you make this decision, you're going to live with it for the next 40 to 60 years. You do not want to mess up this decision. There are lots of options: solar power, wind power, hydro, nuclear, fossil fuels. If you go with nuclear, there are different types of nuclear power plants. In all these papers, the tool they used to arrive at the answer was something called the analytic hierarchy process. It was created over 40 years ago by Thomas Saaty. Why do I like this enough to talk to you about it here? Basically, it gives structure to your decision making and it removes the emotion from the analysis, which is exactly what that study was talking about: how to get the human aspect out of it. It also reduces the impact of the loudest or most senior person in the room, and instead helps to determine what is actually best for the group. Again, it's been around for over 40 years. If it can help make decisions like picking a power plant, then surely it can help all of us in our everyday tech decisions.

When you read about the analytic hierarchy process, which I'm going to refer to as AHP from now on, the example often used to introduce it is choosing the most suitable leader. The setup is that some company, for whatever reason, needs to choose a new leader. It doesn't matter why, but now there's a need for a new leader, and the example works well for that. I'll have a little more fun than just picking a new leader for a company. Instead, let's pretend we are transported to some mythical world with dragons, and our people need to come up with a new leader. We have some different people stepping up to be our next possible leader. In the running we have Astrid from How to Train Your Dragon, Princess Fiona from Shrek, and Eep Crood from The Croods. All of these wonderful characters have decided they can help lead our people. We're going to determine who is the best one to lead us.

Here's how AHP works. You have three components. The first is the goal, the second is the criteria, and the third is what AHP refers to as alternatives, which are really just the choices in this case. Our goal is choosing the most suitable leader. For our criteria, we're going to use these four options: I've chosen experience, strength, charisma, and integrity as the criteria we'll use to evaluate our different alternatives. As I mentioned, the alternatives in this group are Astrid, Fiona, and Eep. This is all the data we'll be doing analyses on to determine who's the best leader to lead our group into the future. Once you've determined all this, you're going to do what are known as pairwise comparisons, which just means taking two things and comparing them to each other with respect to something. This is how we'll determine who is the best for each criterion. In this case, we've initially got Astrid and Fiona, and we'll compare the two of them with respect to their experience. You'll then do this with Fiona and Eep, Astrid and Eep, and continue on with this process. Then we do this for each individual criterion: for strength, for charisma, and for integrity. There are a bunch of pairwise comparisons we're going to do. Then we'll do the same process of pairwise comparisons to evaluate the criteria with respect to the goal. We'll evaluate experience versus strength: which is more important, in this case? We'll do the same for strength versus charisma, experience versus charisma, strength versus integrity, so on and so forth, you get the idea. Those are all the pairwise comparisons we're going to do.

To look at what the pairwise comparison process actually looks like in detail, let's take this example of experience. We're going to do all the pairwise comparisons for our group of three potential leaders. For each pairwise comparison, you use this scale. The important thing to know is that if the two things being compared are for all intents and purposes equal, then they both get a value of 1. If the two things being compared are so wildly different that one is extremely better or more important than the other, then the winner gets a 9. You can also use even numbers, and you can use decimals if you want. That's the scale used for this evaluation. Let's see what the actual scoring exercise looks like. There'll be a little bit of math in this. Again, we're going to look at these alternatives with respect to experience. For our first comparison, we'll do Astrid and Fiona. We'll say that Fiona is more experienced than Astrid. How much more experienced, on that 1 to 9 scale? She is more experienced at a level of 4. On the table, we'll give Fiona, the winner, a 4, which is moderate to strong on the scale I showed earlier. Then Astrid, the loser in this case, gets a 1 in this scoring exercise. We'll then do this for Astrid versus Eep. We'll say Astrid compared to Eep is a 4, and Eep is a 1. Then we'll do Fiona and Eep, and clearly Fiona is much more experienced in this case, because Fiona is more experienced than Astrid, and Astrid is more experienced than Eep. We'll give Fiona a 9, saying this is a wide difference in experience. Again, the loser always gets a 1, so Eep gets a 1.
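To make that concrete, here is a minimal sketch in Python (my own illustration, not part of the talk or its tool) of how those three judgments might be recorded; the names and the data structure are assumptions for the example.

```python
# A minimal sketch (not the speaker's tool): recording the three pairwise
# judgments from the example above, on Saaty's 1-9 scale. Only the winner's
# score is stored; the loser gets the reciprocal when the matrix is built.
judgments_experience = {
    ("Fiona", "Astrid"): 4,  # Fiona is moderately-to-strongly more experienced
    ("Astrid", "Eep"): 4,    # Astrid is moderately-to-strongly more experienced
    ("Fiona", "Eep"): 9,     # Fiona is extremely more experienced
}
```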

Now that we've scored them and done that first set of pairwise comparisons, we'll put this information into a table, and it's going to look like this. The first thing we do is fill in some 1s, because someone compared to themselves is, of course, totally equal; on the diagonal of this table, they all get 1s. Then I'll take the numbers we gave previously. I said Fiona compared to Astrid is a 4. The other person gets the reciprocal of Fiona's score, so Astrid gets a score of 1/4. We'll do the same here: Astrid compared to Eep, Astrid won that with a 4, in the top right. In the bottom left, which is also Astrid and Eep, Eep was the loser, so she gets the 1/4. We've got these reciprocals in there for the ones that lose. We'll fill in the last one: Fiona versus Eep, Fiona was a 9, so Eep versus Fiona is 1/9. Basically, it's simple: take the number, and the loser gets the reciprocal.
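Continuing the sketch above (again, an assumed illustration rather than the talk's actual implementation, reusing judgments_experience from the previous block), building that reciprocal table programmatically might look like this:

```python
import numpy as np

def comparison_matrix(names, judgments):
    """Build the reciprocal pairwise comparison matrix described above:
    1s on the diagonal, the winner's score in the (winner, loser) cell,
    and the reciprocal in the mirrored (loser, winner) cell."""
    n = len(names)
    index = {name: i for i, name in enumerate(names)}
    matrix = np.ones((n, n))
    for (winner, loser), score in judgments.items():
        matrix[index[winner], index[loser]] = score
        matrix[index[loser], index[winner]] = 1 / score
    return matrix

names = ["Astrid", "Fiona", "Eep"]
experience = comparison_matrix(names, judgments_experience)
# experience ->
# [[1.    0.25   4.   ]
#  [4.    1.     9.   ]
#  [0.25  0.111  1.   ]]
```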

Now we have these values, and we can calculate what is referred to as the priority for each of these candidates, in this case with respect to experience. This is where the math formulas of AHP really come into play. For this dataset, the calculations are what I've shown right here on this slide. These calculations are based on vector math, and in this case, it's the principal right eigenvector. I didn't know what that was before I learned about AHP, and it's really not necessary to know that level of detail. You just need to know how to get these initial numbers to fit into the formula. If you do enjoy vector math, there's a separate six-page paper that Thomas Saaty wrote, with a bunch of math proofs that explains all of this. I've tried to read it multiple times, and my main takeaway is one thing: human judgment is inconsistent, and the principal right eigenvector helps to mitigate that impact. Again, 40 years of peer review say this is correct. It's been working. Let's go ahead and trust him on that.
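For the curious, computing that priority vector takes only a few lines with numpy. This continues the sketch above and shows the standard eigenvector method, not necessarily how the speaker's tool implements it:

```python
def priorities(matrix):
    """Saaty's method: the priorities are the principal right eigenvector
    of the comparison matrix, normalized so the entries sum to 1."""
    eigenvalues, eigenvectors = np.linalg.eig(matrix)
    principal = np.argmax(eigenvalues.real)  # the largest (Perron) eigenvalue
    vector = np.abs(eigenvectors[:, principal].real)
    return vector / vector.sum()

experience_priorities = priorities(experience)
# Roughly [0.22, 0.72, 0.07] for Astrid, Fiona, Eep with respect to experience
```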

We have our numbers, and they go into the line diagram we have up here in this graph. We then need to repeat this process for strength, for charisma, and for integrity. Then we need to do the same process for our criteria with respect to the overall goal. As we go through that, I'll show an example of what a full table looks like when we've gone through the entire process. Here is basically the magic AHP calculation for the priority value of each of these on the right, which are the numbers we're going to plug into that graph I showed previously. Now that we have all this data, we can march towards computing our final decision. The math at this point is simple: no proofs, no eigenvectors, no vector math, just multiplication and addition. Again, we'll deal with the experience aspect first. I've got my priority value for experience, and I'm going to put that in the graph between experience and the goal. Then we've got our priority values for the individuals when we compare them with respect to experience, and I'm going to put those in the graph at the bottom, so we've got Astrid, Fiona, and Eep's values there. You can see they connect up the graph to experience, and those numbers all line up.

We're going to take those values at the bottom and put them in this table. We've got experience with respect to the goal, 0.547. That number is going to be the same for each one of those individuals. Then, to get our priority with respect to experience, we just multiply the priority of each individual by the priority of the criterion itself. That gives us the priority with respect to experience for these individuals. I can take these numbers and put them in what is our final table. The first column there is priority with respect to experience, and I'll put the numbers we just calculated in that column. Then we do the same process for the other criteria: with respect to strength, charisma, and integrity. I've got all the numbers right there. Then all I need to do at the end is add these up, and we get our final answers. In this case, we see that Princess Fiona is going to be the best person to lead us. Congratulations, Princess Fiona. We're excited to follow you and look forward to how you lead our dragon-bearing group in the future.
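That final synthesis step is plain arithmetic: multiply each alternative's per-criterion priority by that criterion's weight, then add across criteria. Here is a hedged sketch; the 0.547 experience weight comes from the transcript, while every other number below is a made-up placeholder purely to show the arithmetic:

```python
def synthesize(criteria_weights, alternative_priorities):
    """Final score per alternative = sum over criteria of
    (criterion weight w.r.t. goal) * (alternative priority w.r.t. criterion)."""
    scores = {}
    for criterion, weight in criteria_weights.items():
        for alternative, p in alternative_priorities[criterion].items():
            scores[alternative] = scores.get(alternative, 0.0) + weight * p
    return scores

# 0.547 for experience is mentioned in the transcript; the other weights and
# priorities are hypothetical placeholders, not the talk's actual numbers.
criteria_weights = {"experience": 0.547, "strength": 0.127,
                    "charisma": 0.270, "integrity": 0.056}
alternative_priorities = {
    "experience": {"Astrid": 0.22, "Fiona": 0.72, "Eep": 0.07},
    "strength":   {"Astrid": 0.40, "Fiona": 0.20, "Eep": 0.40},
    "charisma":   {"Astrid": 0.30, "Fiona": 0.60, "Eep": 0.10},
    "integrity":  {"Astrid": 0.34, "Fiona": 0.33, "Eep": 0.33},
}
print(synthesize(criteria_weights, alternative_priorities))
# The highest total wins; with these placeholders, Fiona comes out on top.
```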

The Journey, and the Destination

If you follow AHP by the book, you're supposed to collect these numbers via a survey, so that responses are not influenced by others, which makes a lot of sense. When I actually tried to apply this process at Comcast, we took a slightly different approach. We did those analyses upfront: think about what your numbers are going to be in advance. Then we came together, and what we did was similar to agile story pointing and the way we estimate our user stories. We would compare two items, and everyone would throw out a number. They would hold up their hand and say, ok, this one's a 2, a 3, a 5, a 7, whatever it may be. If there were different numbers, we would have a discussion: why do you say this is a 3? Why do you say this is a 5? Why do you say this is a 2? We found that these discussions were even more valuable than the calculations that the tool did for us.

I find this similar to the Hana Highway in Maui, Hawaii. It's this long, winding drive along the north shore of Maui, with beautiful places to stop at along the way. It can take you hours to do the drive. Then you get to Hana, and the only reason you know you're there is that there's a sign that says, welcome to Hana. There's a beach there, a pretty unassuming beach by Hawaiian standards. When people talk about driving the Hana Highway and going to Hana, they mention that it's the journey, not the destination. The great thing about talking through these scores for the AHP pairwise comparisons, while you're trying to reach your decision, is that you get the benefits of both the journey, which is those discussions, and the destination, which is the actual decision you make in the end.

How Did Our Team Decide?

Before I pressed the button to spit out our final decision on what JS framework we were going to rewrite our code in at Comcast, we as a group already knew what the answer should be, based on those discussions. If the tool had given us a different answer, we would have ignored it and said those discussions were enough for us to come up with the answer ourselves. So far, both that time with the JS framework and every other time we've gone through this process at Comcast, our gut feeling after those discussions has matched the answer the tool gave us in the end. You might be wondering what we decided when our team had to do this evaluation. Here's how we evaluated our JS frameworks at the time, and these are the criteria we used. It's important to note that these are the criteria that were important to our team; yours will most likely be different. Also, with AHP, it's important not to have more than eight criteria, otherwise it can take you a very long time. Think about the fact that you need to do every one of those pairwise comparisons; if you've got more than eight criteria, the number of comparisons you need to do grows rapidly, as the quick calculation below shows. We had these seven criteria and three different frameworks we were choosing between, and it took us a little over 4 hours to go through the exercise. We actually started with more criteria, about 10 or so. Then we looked at them and realized some would score essentially the same across all the alternatives, and we were able to eliminate those.
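Here is that back-of-the-envelope count (my own arithmetic, using the seven criteria and three frameworks mentioned in the talk). The number of pairs grows quadratically with the number of criteria, which is why capping the criteria list matters:

```python
from math import comb

criteria, alternatives = 7, 3  # the team's setup from the talk

criteria_vs_goal = comb(criteria, 2)                         # C(7,2) = 21
alternatives_vs_criteria = criteria * comb(alternatives, 2)  # 7 * C(3,2) = 21
total = criteria_vs_goal + alternatives_vs_criteria          # 42 pairwise discussions
print(total)
```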

We went through the exercise, and here's what our weights ended up looking like for the different criteria. You can see that our group particularly valued community and performance; those were the things most important to us as a group. Developer productivity was basically the second most important thing after those. Other things were less important. Then we went ahead to see what our decision was. The nice thing about this tool is that we could see how the different criteria influenced our final decision. You can see that our option one was the one that won, and because community, performance, and developer productivity were most important, they take up a bigger portion of that final bar. You can actually try this exercise with your teams today. We've open sourced the tool I used to make these charts, if you want to try it out. It's available at https://comcastsamples.github.io/ahp-tool/. When you go there, you'll see something like this. You're first presented with entering your criteria, so you enter those values. Then you enter your options, or your alternatives in this case, and then you do those pairwise comparisons.

As you're going through the process, I found it was sometimes confusing to determine which one is the winner when filling in those values. I put in a green bar to show the actual winner and a pink bar for the one that's losing. In case you're colorblind, we also put a little trophy emoji there, so it's very clear which one is the winner when you're entering those values. You continue on and do those comparisons for the different options. Then, at the end, there is a giant calculate button. You press that, it runs the calculations, and it gives you these nice pretty graphs showing what you did. One thing to note, if you use this right now: there are some additional adjustments I'd like to make so it can save your values during the process. If you're going to do this exercise, I recommend also doing it in a spreadsheet, so that if you were to lose your values on the page, you still have them stored somewhere else. Redundancy is always a good thing. Once you have those values, enter them into the tool, and it will give you these nice charts you can share with your teams.

AHP Retrospective

Now I'd like to do a retrospective on AHP, based on the many times we've used it at Comcast. What I'm going to talk about can really apply to any decision-making framework that you put in place. AHP has worked for us, and I believe it can work for you and your teams, but if you have a decision-making framework that works for you, and your team believes in it, then stick with that. Just think about what I'm going to say in terms of why this has worked for us, and the different things we've learned while going through this process. Like any retro, I'll start with what went well, and we'll talk about what could be improved as well. Many things did go well; otherwise, I wouldn't be here talking about it with you. The most notable thing is that many teams at Comcast have now used this and found it useful. My team has personally used it many times, and for other teams I've talked to, I've either helped them through the process or explained how it works and told them to try it out. The feedback has always been positive. I always tell groups: here's the process, it's worked so far, but I want to hear from you whether it works or doesn't work. If it doesn't work, that's good feedback as well. Everyone has come back to me and said, actually, this has been a great process. It was very valuable to us and helped us make a decision and get that buy-in that was so important.

Another useful aspect is that it's very good for capturing documentation. If your team does ADRs, architecture decision records, these charts are a great way to capture the motivation behind why you made a particular decision. In my role at Comcast, being a Fellow there with expertise in web development, many times when teams are trying to determine, ok, I need to build a new app, they come to me and say, John, what framework should I use? I always point them to our ADR, where we decided on the framework we were going to use when we rewrote that one app I mentioned earlier. I always say to them: this is the process we went through; don't just trust what our team decided, because it might not be the right fit for you. I want you to look at this, think about what criteria matter to your group, and go through the process yourselves. Use that for your team instead of blindly adopting what my team decided at that time, because that decision made sense at a point in time, and it might not make sense today.

Another piece of positive feedback I received was when I was helping a couple of teams use AHP to decide how they were going to build the next generation of their system. These were two totally separate teams, one in Philly, one in London. They had built two separate systems, and they were deciding: do we build this new system using one of the two existing systems as a starting point, or do we go greenfield? If you ask a bunch of engineers, do we use a legacy system or do we go greenfield, the answer is greenfield. You don't need to go through this process to determine that the answer is greenfield. We wanted to go through the process anyway. What we found was that by going through it, these teams that hadn't talked to each other before learned so much about each other's systems, and about each other as engineers, that even though all the engineers went in knowing they wanted to go greenfield, it was a tremendously valuable onboarding experience. They determined the strengths of each system and the strengths of the engineers on the other team, so that now, as they build this new greenfield system, they already know what they want to take from the previous systems and what they've learned from those previous experiences. The feedback I was given was that this was basically the best icebreaker possible for this group to make this decision.

One other thing to note from this particular decision is something I've done a couple of times in the past: we had separate groups do different parts of the scoring. What I mean by that, going back to our previous example, is that you can have one group do the pairwise comparisons of your criteria versus your goal, and, if they're not experts in the different alternatives, have a separate group do the alternatives versus the criteria. This is totally fine. What I've done in the past, when trying to make a decision for our product, is have product managers define the criteria and do the weighting exercise for the criteria with respect to the goal. Then there are technical options that are better or worse for those criteria, and the better group to make those evaluations is the engineers. So we've had the engineers evaluate the alternatives with respect to the criteria, while product does the criteria with respect to the goal. I've even had different product owners for different parts of the app do their own analysis of criteria with respect to the goal, and we use different technologies in different parts of an app because of those evaluations. You can slice and tweak this for whatever makes sense for your group.

AHP Aspects Worth Improving

Now, of course, nothing is perfect. There are definitely things that can be improved. One thing happened early in the days when I was using this. After it was successful a few times, we had another very big decision to deal with, and I was asked to help lead the decision-making process using AHP. It was clear that some people were going to be upset with the decision; there were some wildly different groups involved. No matter how much talking we did, a small group of people was going to make the decision, or be involved in the process of making it, and it was going to affect a large number of engineers, so some people were not going to be thrilled with the final decision. We went through the process, but because it was so sensitive, leadership wanted to make some adjustments to how we went about it. They wanted to announce the decision, but not share the data. We would just say we went through the process, John helped lead it, we used AHP, and here's the answer. Leadership wanted this because they wanted all the engineers to essentially disagree and commit, and they felt that hiding the data would effectively allow that. This backfired; the opposite happened. People were upset that they couldn't see the decision making behind it. They said, show us the data, we want to see the data. Eventually, we did share the data, but it was too late at that point. Why this was the case, I learned later on: it goes back to the concept of Nemawashi. Nemawashi is a Japanese concept that comes from the world of Japanese gardening, where, if you're going to transplant a tree, you need to prepare the roots in a certain way first; that's what the name refers to. In business in Japan, Nemawashi is a way of building consensus openly without forcing consensus. This is incredibly powerful when it comes to getting buy-in for your decisions.

We can look at the process with Nemawashi. You start with an idea, a concept, or a problem statement. You then identify the different groups of people you want to target in pre-conversations. These are the deciders, the people that have the power to drive or enforce a decision; the makers, who will do the work once the idea is decided to move forward; the blockers, people that might block the decision or the idea from moving forward; and, perhaps the largest group, those that are affected by the idea, either directly or indirectly. In these many pre-conversations, your objective is to inform them, gather feedback, and improve the idea, with the goal of making the idea better for everyone. These conversations can be very open and transparent. Most of the time, it's some mix of informal private discussions and some open ones, depending on who's involved. The key is that you're not trying to forcibly change people's minds. The cycle continues until everyone is in agreement with the idea. This leads to a meeting where the decision is presented, and since everyone was already involved and prepared in advance, it's a low-stress meeting. Everyone nods their heads and says, yes, this is a good idea, let's move forward with it. This maps really well to the AHP process. You've got the idea, which is the thing we're trying to decide on. Then there are the deciders, makers, blockers, and those that are affected, and with them we do those pairwise comparisons. We inform them, go through the process, and come up with the decision in the end, which is hitting that button on the tool that I created.

One other mistake I made with AHP originally, the first time we used it for that JS decision: I was a little too excited about how well the process went. I went to my manager and said, "Great news, we went through this process, and we came up with our answer. The answer is option 1." I brought the whole team to share this great news with him. He was like a deer in the headlights: I thought you were going to pick option 2. The lesson I learned from that was: make sure you communicate this stuff up the chain. You don't want to surprise anyone in a group setting, especially with other individuals there to witness the experience. The best part of Nemawashi, if you're like me, as I alluded to at the beginning, is that I don't spend all day coding; I do spend some time coding, but I also spend a lot of time in meetings, talking to other people. Nemawashi helps make whatever meetings you have more effective: the small meetings become more productive, and the larger, previously more stressful or intense meetings become much lower stress, basically a formality at that point. I'll conclude this part of the retrospective by saying, please learn from our mistakes. AHP will not make hard decisions less painful for those involved. Going back to that question at the top about what programming language you should use: think about the feeling you had. Others are going to have a similar feeling when they are impacted by a decision. Have empathy for those that are impacted, and do not hide the data. Hiding it hurts the effectiveness of the decision and does not save time or help the team move forward. Instead, it actually leads to more time spent on the issue.

I do want to note that, in my effort to help spread the word about AHP, other companies have tried it out and found success as well. The New York Times has tried it out; it was actually their identity team. They tried it because they needed to pick a user ID format for their centralized identity platform, and they went through the process. David Wheeler, a staff engineer on that team, blogged about his team's entire process of using AHP. It's available on The New York Times open website here: https://open.nytimes.com/collective-decision-making-with-ahp-3ef819e5bc2a. You're welcome to give that a read to see how another team, separate from me, went through the process, and to get another perspective on how to use AHP.

The last thing I'll mention is something that happened a couple of years ago, and it's what inspired me to put together this talk in the first place. I was in a meeting where a small group was presenting a decision they had made. They talked about the options they had considered and the pros and cons of each. They said, we picked this one option, and it's what we're going with. After that presentation, they asked for questions, and a coworker of mine raised his hand and asked, what was the process you used to come up with that decision? The leader of the group said, we looked at the pros and cons of each option and felt this was the right way to go. We did not use the analytic hierarchy process in this case. The decision pretty much died right there, and they abandoned it. For me, and for any decision-making process you've put in place: if a group's failure to use the decision-making process is enough to kill a decision, that's a clear indicator that the process is working for you. People have bought into it and believe it can actually help you make decisions.

The AHP Algorithm, and Human Imprecision

Riviello: I wanted to demonstrate why AHP works well. Again, it's an algorithm. AHP can detect when something is slightly off from what mathematically makes more sense; if it's 1/4 and 1/4, it should be 1/8. I actually wanted to use values that were slightly off from that, because humans are imprecise. The algorithm AHP uses takes that into account when it does those calculations. It shows that you can be imprecise but in the right ballpark and still get the right answer in the end. It worked out perfectly.
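Textbook AHP quantifies this tolerance for imprecision with Saaty's consistency ratio: the closer the principal eigenvalue is to the matrix size n, the more consistent the judgments. Here is a sketch of that standard check (this is standard AHP theory, not necessarily what the speaker's tool implements):

```python
import numpy as np

# Saaty's published random indices for matrix sizes 1 through 8.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90,
                5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def consistency_ratio(matrix):
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1).
    A CR below roughly 0.10 is conventionally considered acceptable."""
    n = matrix.shape[0]
    if n <= 2:
        return 0.0  # 1x1 and 2x2 reciprocal matrices are always consistent
    lambda_max = np.max(np.linalg.eigvals(matrix).real)
    ci = (lambda_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n]
```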

Questions and Answers

Participant 1: I was wondering how much, or how well, decisions hold up. It sounds like there's a lot of buy-in in this process, going by your last example. What if someone new comes to the organization, or someone who didn't participate in the process? If they know about the process, do they just trust it, like in that last example? Or do they need to see the process play out, or just the data, or something like that?

Riviello: The data behind it is really important. That's why I mentioned ADRs, architecture decision records. I find those very valuable, because they give the context for someone new coming in as to why a decision was made. I remember being at a conference where someone mentioned they were new to a company and asked, why aren't we using Jenkins? They wanted to see the details behind why the company used one CI system rather than another. There's always a story behind a decision that made sense at the time. The point of this is, it helps you make the right decision at the right time. If you need to reevaluate it, you've got a process to do that with. The records and the details behind the decision help people understand why it made sense then. Any decision can be reevaluated at any point. That's not an AHP thing; that's just a software engineering thing. The things we said were the best thing to do 10 years ago are different today in most cases.

Participant 2: One of the things I was wondering about, regarding criteria that have explicit numbers: with something like experience, you can treat the number of years as something you compare directly, for example 1 year versus 4 years. How do you put numbers that are already absolute, for example experience or performance, into this framework? Do you compare them, or do you just normalize the absolute numbers that you already have?

Riviello: That's where the scale comes into play. With absolute numbers, it's easy in a sense, because it's math. We're engineers, we love math. We can apply it, it's simple, it makes sense in our brains. That's where the numbers and the descriptions of the importance levels are useful. That's why, again, I found those discussions really valuable: when we were doing, for example, the exercise on developer productivity, it's tough to quantify, and everyone has different opinions on it. I would use the scale as a guide. The discussions are going to be the most important aspect, helping you come up with the actual number you plug into the algorithm.
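One possible way to seed that discussion when a criterion does have absolute measurements (my own illustration, not something from the talk) is to start from the ratio of the two values, clamped to the 1-9 scale, and let the group adjust from there:

```python
def ratio_to_scale(winner_value, loser_value, cap=9.0):
    """Map the ratio of two absolute measurements (winner_value >= loser_value)
    onto Saaty's 1-9 scale as a starting judgment for discussion."""
    return min(winner_value / loser_value, cap)

# e.g. 4 years vs. 1 year of experience -> 4.0 as a starting judgment
print(ratio_to_scale(4, 1))
```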

Participant 3: Let's say your alternatives are maybe novel to your group, and you're not sure how to effectively evaluate them against your criteria. Is AHP still a usable framework for you, or when you can't evaluate your alternatives properly does it fall apart?

Riviello: For that, similar to how I mentioned you could have different groups do different pairwise comparisons, I would take that kind of approach. If you're going to make a decision with those as inputs, you need to evaluate them in some way. If they're novel, you need to come up with some way to do that evaluation, which you can then plug into this. Again, it's a tool that gives you a framework for making decisions. In the end, it's really the group and those involved who need to come together and either find the expertise within the group to make the decision, or go elsewhere to seek it out, to help come up with those evaluations. That has to be done regardless of the process. I think it actually would fit well here.

Participant 4: How do you decide that a decision is big enough to apply this? Because I think it could be good for every decision. What's the thought process behind that?

Riviello: We've mostly done this for large decisions, but I remember we were once in a conference room with a whiteboard and we had two choices. We'd been talking about it for 10 minutes, and we said, we can just do AHP real quick, because it's small: 2 choices and about 3 criteria. We did it in about 10 minutes. I've probably only gone through the full process about 10 times, so it's usually for bigger decisions. Anytime a group is struggling to make a decision, you can apply it, and it can be a small thing. That one example was just picking a JavaScript framework for a tool for demo pages. We asked, which one should we use? It was a very quick process; it wasn't 4 hours, it was really 10 minutes, like I said. Know the tools available to you. It can be small, it can be large, whenever it makes sense. And it's definitely great for the larger decisions where you need to get buy-in.

 


 

Recorded at:

Mar 19, 2024
