Getting Technical Decision Buy-In Using the Analytic Hierarchy Process

Key Takeaways

  • The Analytic Hierarchy Process (AHP) can be used to make technical decisions, both large and small, and it is particularly beneficial for critical decisions.
  • AHP’s approach to weighting alternatives (options) against criteria and the criteria against a goal helps to remove emotion from the analysis.
  • AHP uses pairwise comparisons of the alternatives and criteria to calculate the final weights. The resulting visual charts clearly demonstrate the impact of each alternative’s strength and each criterion’s weight on the final decision.
  • The results of AHP are valuable to include in Architecture Decision Records (ADRs) to help explain why a decision was reached given the alternatives and criteria valued by the group at the time.
  • AHP maps well to the concept of "nemawashi," which helps to facilitate buy-in.
  • When following AHP, sharing the final analysis graphs to help explain why a decision was reached is essential.

Overview

Making significant technical decisions is a critical aspect of a senior individual contributor’s role. Given the broad impact these decisions can have, it is essential to make the correct decision. It is even more vital that the decision be made and communicated in a way the team trusts and buys into; otherwise, even the best decisions will never realize their full potential when executed.

This article examines how to employ the Analytic Hierarchy Process (AHP), a decision-making framework developed in the 1970s, and adapt it for making technical and non-technical decisions, both large and small.

Where other decision-making processes fail

There are some common approaches to decision-making that you may have used in the past. One is to give the most senior engineer the power to decide, but sometimes, that individual is not close enough to the problem’s impact to provide the best guidance for the team.

Another option is an equal democratic vote: everyone gets a say, and the majority wins. This works well if the decision is unanimous, but that is not the most likely outcome, especially with a larger group. Some people will be upset if the choice they voted for is not the final decision.

A variation on an equal democratic vote is a voting process where those with the most knowledge in the given domain have a larger impact on the final decision, similar to how a larger shareholder has more votes in company matters. It is difficult, though, to accurately weight the so-called experts’ opinions against everyone else’s.

There is also the concept of the wisdom of crowds in decision-making. This has been studied for centuries, and research has shown that a crowd’s aggregate judgment is often more accurate than one expert’s opinion.

The article "The Right Way to Use the Wisdom of Crowds" by Brad DeWees and Julia A. Minson highlights three keys to making better decisions:

  1. Independent opinions should be formed in advance (to avoid anchoring to the decisions of others).
  2. Decision-making groups should commit to a decision-making strategy in advance.
  3. Teams facing quantifiable questions should aim for decision-making strategies that, as much as possible, remove human judgment from the process.

These are three excellent guidelines, but what is the best way to execute them?

Enter the Analytic Hierarchy Process (AHP)

The Analytic Hierarchy Process (AHP) has three components: a goal, criteria, and alternatives (i.e., the options being evaluated).

For example, let’s assume our goal is choosing the most suitable leader. We’ll be evaluating the potential candidates on four leadership criteria. For AHP, each candidate is considered one of the alternatives. We will do a series of pairwise comparisons, which involve comparing two candidates to each other with respect to each criterion. Then, we will go through the same pairwise comparison exercise with each possible pair of criteria with respect to the goal:

Let’s take the experience criterion for a group of three potential leaders to further detail the pairwise comparison process. In a pairwise comparison, if the two items being compared are essentially equal, both are assigned a value of 1. If they are so vastly different that one is extremely better or more important than the other, the "winner" is assigned a 9. This table helps to guide which number to choose:

It is important to note that even numbers and decimals are also valid. However, decimals tend to be valuable only in the range of 1.1 to 1.5; once you reach 2 and beyond, such minute differences are not worth quantifying with decimals.

Once you have all of these values, you fill out a table similar to this one where the "winner" gets the value you chose, and the "loser" is assigned a 1:

At this point, a comparison matrix is set up: each pairwise comparison’s winner gets the chosen value, and the loser gets its reciprocal. This is where the first set of AHP calculations is performed, resulting in what is referred to as the priority of each alternative with respect to the given criterion (i.e., a weighted value for each candidate on that criterion), with all priorities adding up to 1:

We then repeat this process for strength, for charisma, and for integrity.
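
To make the arithmetic concrete, here is a minimal Python sketch of the per-criterion priority calculation. It is not the tool we built; it uses the common normalized-column-average approximation rather than Saaty’s principal-eigenvector method, and the comparison values (and the candidate names other than Fiona) are made up purely for illustration:

    # Approximate AHP priorities for one criterion via normalized column averages.
    # All comparison values below are hypothetical; only Fiona's name comes from
    # the example in this article.

    def ahp_priorities(matrix):
        n = len(matrix)
        # Sum each column, normalize every cell by its column sum,
        # then average across each row to get that alternative's priority.
        col_sums = [sum(row[j] for row in matrix) for j in range(n)]
        return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
                for i in range(n)]

    # Pairwise "experience" comparisons for three candidates (rows and columns in
    # the order Tom, Dick, Fiona): the winner's cell holds the chosen 1-9 value
    # and the loser's cell holds its reciprocal.
    experience = [
        [1,   1/2, 1/3],   # Tom
        [2,   1,   1/2],   # Dick
        [3,   2,   1],     # Fiona is moderately stronger than both
    ]

    print(ahp_priorities(experience))  # roughly [0.16, 0.30, 0.54]; sums to 1

The same function can be applied, unchanged, to the criteria-versus-goal comparisons later in the process; only the input matrix differs.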

Then, we do the same for each pair of criteria with respect to the overall goal. Here is an example of what a full criteria priority table looks like after going through that process:

These criteria priorities are then used to calculate how each criterion’s weight influences each alternative’s priority with respect to that criterion. For example, the criterion "experience" has a priority of 0.547, which is used to calculate how much each candidate’s level of experience contributes to the overall decision:

To get each candidate’s weighted priority for experience, we multiply that candidate’s priority with respect to experience by experience’s priority with respect to the overall goal:

We repeat this process for each criterion, producing the numbers for our final table of calculations, where we add up each candidate’s weighted priorities across all criteria to determine the final priorities and, with them, our definitive answer:

We see that Fiona will be the best person to lead us in this example.
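
For readers who want to reproduce that final synthesis step, here is a minimal sketch under the same caveats: only the 0.547 priority for experience appears in the example above, while every other number (and every candidate name other than Fiona) is hypothetical and serves only to show the weighted-sum arithmetic:

    # Combine per-criterion priorities with criterion weights to rank alternatives.
    # Only the 0.547 weight for experience appears in the article; the rest of
    # these numbers are hypothetical.

    criterion_weights = {
        "experience": 0.547,
        "strength":   0.076,
        "charisma":   0.127,
        "integrity":  0.250,
    }

    # Each candidate's priority with respect to each criterion
    # (for every criterion, the three values sum to 1).
    alternative_priorities = {
        "Tom":   {"experience": 0.16, "strength": 0.55, "charisma": 0.20, "integrity": 0.30},
        "Dick":  {"experience": 0.30, "strength": 0.25, "charisma": 0.50, "integrity": 0.15},
        "Fiona": {"experience": 0.54, "strength": 0.20, "charisma": 0.30, "integrity": 0.55},
    }

    def overall_priority(candidate):
        # Weighted sum: each per-criterion priority times that criterion's weight.
        return sum(criterion_weights[c] * alternative_priorities[candidate][c]
                   for c in criterion_weights)

    for name in alternative_priorities:
        print(name, round(overall_priority(name), 3))
    # Fiona comes out highest (about 0.49) with this made-up data.

Because the per-criterion priorities and the criterion weights each sum to 1, the final priorities also sum to 1, which makes the resulting charts easy to compare.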

AHP in practice

When following AHP as originally prescribed, it is suggested to collect the numbers from multiple individuals via a survey in advance, so that others do not influence responses, and then calculate the mean value of each comparison across all responses. At Comcast, we took a slightly different approach. We did ask people to do their analyses in advance, but we instead came together and discussed our values for each pairwise comparison. When the numbers differed, we discussed them until we reached a consensus on the group’s official number.

We found these discussions even more valuable than the calculations themselves. The first time we went through this approach, we collectively knew what our decision should be before we calculated the AHP results. We went so far as to say we would ignore the AHP calculations if they did not align with our agreed-upon decision (it turned out the two were perfectly in sync).

The first time we used AHP, the decision at hand was choosing a new JavaScript framework for a legacy web app we were responsible for. The criteria we used for that decision were:

  • Community
  • Performance
  • Redux compatibility
  • Web Components support
  • Localization features
  • Developer productivity
  • Webview support in a hybrid native mobile app

It’s important to note that these criteria were critical to our team at that time; if your team were to go through this same exercise today, your criteria would most likely be different. Also, with AHP, it’s essential not to exceed eight criteria or eight alternatives; otherwise, doing all the necessary pairwise comparisons could take a very long time. We were choosing among three possible JavaScript frameworks, and with these seven criteria, it took us a little over five hours to go through the exercise.
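
To put that guidance in perspective, here is a quick count of the pairwise comparisons our exercise required (a small Python sketch; the counts simply follow from the combinatorics):

    from math import comb

    # Our JavaScript framework exercise: 3 alternatives and 7 criteria.
    alternatives, criteria = 3, 7
    alternative_comparisons = comb(alternatives, 2) * criteria  # 3 pairs x 7 criteria = 21
    criteria_comparisons = comb(criteria, 2)                    # 21 pairs of criteria
    print(alternative_comparisons + criteria_comparisons)       # 42 pairwise discussions in total

Each of those comparisons was a group discussion, which is why staying under the eight-item guideline matters.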

Here is what our weights ended up looking like for the different criteria:

Community and performance were most important to our group, with developer productivity following closely behind.

Here is what our final AHP decision graph looked like:

You will note that I do not mention which framework we chose because this article aims to help your team make the right decisions, not to copy a decision we made years ago. To help your team perform the AHP calculations and visualize the results the same way I have in this article, the tool we created to generate these charts has been released as open source software on GitHub.

An AHP retrospective

AHP has worked well for us and for other companies such as The New York Times (see "Collective Decision-Making with AHP" for how the NYT Identity team tried the Analytic Hierarchy Process to select a user ID format), and I believe it can work for you and your teams. Since our original JavaScript framework exercise, our team and others at Comcast have made many decisions with the help of AHP. Some were quick 10-minute exercises, while others were multi-day discussions. The constant across all of them was that the teams found the experience of using AHP valuable.

AHP results are also very useful to capture in documentation. If your team generates Architecture Decision Records (ADRs) or other similar documents, the charts from our AHP tool are a great way to capture the motivation behind a particular decision.

Another piece of positive feedback about AHP came when I was helping two teams, working on two different continents, decide how they would come together to build the next generation of a system they were going to collaborate on. Each team had its own existing system that fulfilled part of the overall requirements, but not all of them, and they were deciding whether to adopt one of those systems as the basis for the project or begin a greenfield project for the new system.

One could argue that you do not need AHP to determine whether two groups of engineers want to commit to an existing legacy system or build a new one together, and you would likely be correct that the greenfield option is always the more attractive one. However, the two teams went through the process, and by doing so, they learned a great deal about each other’s existing systems and about the strengths of the individual engineers on each team. It was a tremendously valuable ice-breaker and team-onboarding experience, and it let them strategically plan how they would build their greenfield system based on the experience of the engineers who built the previous two systems.

It is also worth noting that it is certainly valid and, in some cases, more effective to have a separate group handle the pairwise comparison scoring for the alternatives with respect to the criteria and another group score the criteria with respect to the goal. We have done this on multiple occasions, having product managers define the various criteria and do the pairwise comparison exercise of each criterion with respect to the goal. Then, engineers handle the pairwise comparisons for each alternative with respect to each criterion. You can always tweak which individuals or groups make sense to evaluate each aspect of AHP as you see fit, given the decision.

Getting buy-in with nemawashi

Once, we were faced with a decision that would have a rather significant impact and that we knew would meet resistance from some people, depending on the outcome. Leadership appropriately identified this as a case where AHP would be helpful and asked a small group of us to go through the exercise, but requested that, in the end, we share only the final decision, not the data behind the process. The hope was that this would help accelerate the overall impacted group’s willingness to "disagree and commit" rather than debate the numbers the smaller group used in the AHP calculations.

Instead, this approach backfired, and people were more upset that they were denied access to the decision-making data. We eventually released the full data set, but by that point, it was too late.

The Japanese concept of "nemawashi" helps explain why hiding the data behind a decision is ineffective. In business, nemawashi is a way of building consensus openly without forcing consensus, which is incredibly powerful when it comes to getting buy-in for decisions.

The nemawashi process starts with an idea, a concept, or a problem statement. You then identify different groups of people you want to target in pre-conversations. These groups include:

  • deciders: people who have the power to drive or enforce the idea
  • makers: those who will execute the idea
  • blockers: those who may have the power to stop an idea from moving forward
  • affected individuals: potentially the largest group, encompassing those affected by the idea, either directly or indirectly

In the pre-conversations, your objective is to inform people, gather feedback, and improve the idea so it is better for everyone. These conversations can be very open and transparent. Most of the time, they are a mix of informal private discussions and more open ones, depending on who is involved. The key is that you should not try to force people to change their minds. This cycle of conversations, feedback gathering, and idea improvement continues until everyone agrees with the idea.

This leads to a meeting where the idea is formally presented. As everyone was already involved and prepared in advance, the result is a low-stress meeting in which everyone nods in agreement and decides to move forward with the idea.

Nemawashi maps well to the AHP process. The idea in nemawashi is analogous to the decision you are trying to make with AHP. The groups targeted for pre-conversations are the same groups you need to talk to as part of AHP to make sure ideas are properly represented. The smaller group making the pairwise comparisons carries out a similar inform, gather, and improve cycle. Then, the final idea is analogous to AHP’s final decision.
