
To Dare or Not to Dare: the MVA Dilemma

Key Takeaways

  • Technology Radars are a popular way of characterizing the risk of technology adoption.
  • Technology Radars can help teams form experiments about the solution they are building as well as its architecture.
  • Every product release is, or should be, an experiment about both the value that the team is delivering and the sustainability of their solution.
  • These experiments must balance both business and technical risks in a way that business stakeholders can understand and support.
  • Releases should be scoped to maximize learning, not the number of features or the depth of technology delivered; releases whose experiments become too large or too numerous become “too big to fail” and cease to be experiments.

Teams developing a new increment of a product, also known as a Minimum Viable Product (MVP), are typically in a tough spot: they have a short period of time in which to develop and deliver what they hope is a valuable product increment. They also need to develop a Minimum Viable Architecture (MVA) for that MVP so that it meets its quality goals, also known as Quality Attribute Requirements (QARs).

The tension between these two forces creates a dilemma: does the team rely on tried-and-true technologies that may not perfectly meet their needs, or do they explore new and unfamiliar technologies that may be a better fit but are riskier to implement? Teams, and organizations, that consistently stick with the tried-and-true tend to minimize risk in the short run but increase it in the long run, by staying on old technologies that are no longer well-suited to the challenges the organization faces.

Technology Radars are a popular way of characterizing the risk of technology adoption

A Technology Radar (TR) is a way for organizations to synthesize and communicate their experiences with various technologies, as shown in Figure 1.

Figure 1: An example Technology Radar from Thoughtworks

In this popular representation, a TR shows four technology areas as quadrants of a circle, although more or fewer technology areas can be presented. Within the circle are rings that recommend, based on the experiences of the preparer, whether teams should (see the sketch after this list):

  • Adopt the technology because it has been indisputably proven to be generally useful;
  • Trial the technology because, although it has been used successfully by some teams, each team will need to make their own decision based on their context;
  • Assess the technology because, while the technology looks interesting, it has not been widely used enough to warrant a recommendation; and
  • Hold using the technology because there are better solutions in the technology area. Products in this ring may have, at one time, even been recommended for adoption but may have slipped in their recommended usage.
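To make the ring semantics concrete, here is a minimal sketch of a radar entry as a data structure, in Python. It is illustrative only: the quadrant names and the example entry are hypothetical, not taken from any published radar.

    from dataclasses import dataclass
    from enum import Enum

    class Ring(Enum):
        """The four recommendation rings described above."""
        ADOPT = "proven generally useful"
        TRIAL = "used successfully by some teams; decide in context"
        ASSESS = "interesting, but not yet widely used enough to recommend"
        HOLD = "better solutions exist in this technology area"

    @dataclass
    class RadarEntry:
        name: str
        quadrant: str   # e.g. "Tools", "Platforms", "Techniques", "Languages"
        ring: Ring
        rationale: str  # the preparer's experience backing the placement

    # Hypothetical entry: a promising tool seen in one pilot project.
    entry = RadarEntry("AutoML toolkit X", "Tools", Ring.ASSESS,
                       "Good results in one pilot; too early to recommend.")
    print(entry.name, "->", entry.ring.name)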

An organization’s appetite for risk also affects technology decisions, as shown in Figure 2.

Figure 2: Technology adoption varies according to team skills and risk tolerance

This representation maps technology maturity to organizational risk tolerance using Everett Rogers’ diffusion of innovations model. Viewed this way, if your team thinks a particular technology would be useful when building an MVA, you need to ask yourself, "Does our organization fit the profile of adopters of this technology?" For example, if you want to use Automated Machine Learning (AutoML), does your organization really fit the "Innovator" profile? Can your organization attract people knowledgeable in the technology? Is your organization open to experimentation when those experiments may not pay off?

Every product release is, or should be, an experiment

In a previous article, we discussed how each product release is an incremental Minimum Viable Product (MVP) with an associated Minimum Viable Architecture (MVA) that ensures the value of the MVP can be sustained over time. In this model, each MVP is an experiment that explores what customers find valuable, and each MVA is an experiment about how that value can be sustainably supported.

These experiments give a team opportunities to try out new technologies; indeed, achieving the goals of a release may require them. The challenge is balancing technical experiments with business experiments: business stakeholders won’t accept a release that consists only of technology experiments, and they may be nervous about putting their business experiments at risk if a technology experiment fails. Teams have to negotiate their way through this.

BUT… the technology experiment may be necessary to enable the business experiment. Sometimes the business experiment fails and drags down the technology experiment with it, but if a technology experiment does not satisfy a compelling business need, it will never succeed either.

Either way, teams who want to include technology experiments in their releases need to have challenging discussions with business and operations stakeholders. Business stakeholders must understand the benefits of technology experiments in terms they are familiar with: how the technology will better satisfy customer needs.

Operations stakeholders need to be satisfied that the technology is stable and supportable, or at least that stability and supportability are part of the criteria that will be used to evaluate the technology.

Wholly avoiding technology experiments is usually a bad idea: it forgoes opportunities to solve business problems in better ways, which leads to less effective solutions and, over time, growing technical debt.

As we wrote in a recent article, incurring Technical Debt (TD) is a way to learn things and to avoid over-investing in solutions to problems you may not yet fully understand. Intentionally incurring TD may not be a bad thing when introducing a new, unfamiliar technology as part of an MVA. Suppose the new technology successfully meets the team’s needs and enables the MVA to meet or exceed its QARs; the technology may then also apply to other teams trying to solve similar problems, and as a result the increase in TD may never need to be “repaid”.

However, if the new technology fails to live up to its promises and does not meet the team’s needs, the experiment should be quickly terminated, thereby eliminating the TD issue. The danger here is assuming that spending more time and effort on the new technology will turn things around; the experiment should be terminated as soon as it is clear that the MVA won’t meet its QARs. Beware of falling into the “confirmation bias” trap!
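One way to make this stop rule concrete is to agree up front on measurable QAR thresholds and to check trial measurements against them as they come in. The following sketch is purely illustrative; the QAR names, thresholds, and measured values are hypothetical.

    # Agreed QAR thresholds the MVA must meet for the trial to continue
    # (hypothetical names and values).
    qar_thresholds = {
        "p95_latency_ms": 200.0,  # performance QAR
        "error_rate_pct": 0.1,    # reliability QAR
    }

    # Measurements gathered during the trial (hypothetical values).
    measurements = {
        "p95_latency_ms": 450.0,
        "error_rate_pct": 0.05,
    }

    def failed_qars(thresholds, observed):
        """Return the QARs the trial technology is currently failing;
        a missing measurement counts as a failure."""
        return [name for name, limit in thresholds.items()
                if observed.get(name, float("inf")) > limit]

    failures = failed_qars(qar_thresholds, measurements)
    if failures:
        # Terminate early rather than hoping more effort turns things around.
        print("Terminate the experiment; QARs not met:", failures)
    else:
        print("Continue the experiment; all QARs currently met.")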

In addition, it’s not an experiment if it’s too big to fail, or if failure of the experiment would be considered a bad thing. Experiments only fail if they don’t provide any useful information; learning that a technology, or even a feature, does not deliver the desired result is not a failure, it is simply information.

A mistake that teams make in this regard is investing too much effort in implementing a technology without knowing whether it will produce the desired results. When they are not sure, they should scope the release down into something smaller that they can deliver more quickly, so that they get feedback sooner.

Similarly, both the development team and their business stakeholders should avoid betting on untested assumptions about value; they should break a complex and costly solution into smaller chunks that they can evaluate more quickly and with less effort. In other words, if an “experiment” is too big to fail, they need to break it into smaller experiments.

Balancing business versus technical risk

Teams and their stakeholders first have to reach at least a preliminary agreement about the business scope of the MVP/release. Once they do, the development team has to forecast how much work it will take to meet the release goals. This is where they must start making technology choices, since different technologies change the amount and nature of the work the team needs to do.

Using the Technology Radar illustrated in Figure 1, a team would first investigate the technologies listed in the “Adopt” category and assess whether any of them could help achieve the release goals. Their work is generally simpler and less risky if they only need technologies in the “Adopt” category. Still, these more “proven” technologies don’t always do everything the team needs, so the team may need to consider technologies in the “Trial” category, and so on.
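This ring-ordered search can be sketched as follows, assuming a team moves outward from “Adopt” only when the more proven rings leave a gap. The radar contents and the meets_needs predicate are hypothetical.

    # Rings in order of increasing adoption risk.
    RING_ORDER = ["Adopt", "Trial", "Assess", "Hold"]

    # A hypothetical radar, keyed by ring.
    radar = {
        "Adopt": ["PostgreSQL", "Terraform"],
        "Trial": ["AutoML toolkit X"],
        "Assess": ["Graph database Y"],
        "Hold": ["Legacy framework Z"],
    }

    def choose_technology(radar, meets_needs):
        """Return the first (ring, technology) pair, in ring order, that
        satisfies the team's needs; None means a new assessment is needed."""
        for ring in RING_ORDER:
            for tech in radar.get(ring, []):
                if meets_needs(tech):
                    return ring, tech
        return None

    # Hypothetical usage: nothing in "Adopt" or "Trial" handles graph data,
    # so the search lands on a riskier "Assess" technology.
    print(choose_technology(radar, lambda tech: "Graph" in tech))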

In making these decisions, teams balance a variety of technical risks, as shown in Figure 3.

Figure 3: Teams balance several kinds of risks with respect to technologies


These trade-offs are constrained by two simple truths: the development team doesn’t have much time to acquire and master new technologies, and it cannot put the business goals of the release at risk by adopting unproven or unsustainable technology. This often leads the team to stick with tried-and-true technologies, but that strategy has risks of its own, most notably of the hammer-and-nail kind, in which old technologies are applied to novel problems they are unsuited for, as when a relational database is used to store graph-like data structures.
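To see the mismatch, consider a minimal, hypothetical sketch of graph-like data (a dependency graph) forced into a relational table. The traversal works, but it needs a recursive query that a graph database would express far more directly.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    # A graph stored as an edge table: workable, but the shape of the
    # data fights the relational model.
    conn.executescript("""
        CREATE TABLE edges (parent TEXT, child TEXT);
        INSERT INTO edges VALUES ('a', 'b'), ('b', 'c'), ('c', 'd');
    """)

    # Find every node reachable from 'a': a recursive common table
    # expression in SQL, versus a one-line traversal in most graph
    # query languages.
    rows = conn.execute("""
        WITH RECURSIVE reachable(node) AS (
            SELECT child FROM edges WHERE parent = 'a'
            UNION
            SELECT e.child FROM edges e
            JOIN reachable r ON e.parent = r.node
        )
        SELECT node FROM reachable
    """).fetchall()
    print(sorted(row[0] for row in rows))  # ['b', 'c', 'd']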

Designing effective experiments helps teams balance risk

The starting place for planning any release is to agree on the goals for the release. The most important of these decisions is how fast the team or organization needs feedback on its experiments, and answering that requires understanding what experiments they need to run. There are always at least two kinds of experiments to balance:

  • They need to decide what experiments they need to run about business value. These become the focus of the MVP. Put another way, business stakeholders have ideas about what their customers or users need or will find valuable, but these are usually unproven. Many times the only way to test these ideas is to build and release something.
  • They need to decide what technical decisions they need to make to sustainably support the business value experiments, should those succeed. Part of these decisions may involve adopting new technologies, and Technology Radars similar to Figures 1 and 2 can help inform them. For each candidate technology, they need to ask themselves (see the sketch after this list):
    • Is the organization comfortable with the risks associated with the new technology?
    • Is the business case for the business experiment and the related technology experiments reasonable? How will the organization know?
    • Is the technology supportable and sustainable given the organization’s expertise and resources? How will the organization know?
    • Does the technology do what the team needs? How will it know?

For each of these kinds of experiments, the team and their stakeholders need to agree on how they will know if their experiments succeed. Also, as noted above, if the experiments grow too large or numerous, the team and its stakeholders will need to reduce the scope of the release to stay within the feedback cycle-time goals for the release.

Conclusion

Teams developing an MVP have a short period of time to develop and deliver what they hope is a valuable product increment, as well as to create an MVA to support that MVP. They can either rely on tried-and-true technologies that may not perfectly meet their needs, or explore new and unfamiliar technologies that may be a better fit but add technology risk. Technology Radars are a popular way of characterizing that risk and can help teams form experiments about both the solution they are building and its associated MVA.

Product releases are experiments about both the value that the team is delivering and the sustainability of the solution. These releases should be scoped to maximize learning, not the number of features or the depth of technology delivered. The experiments embodied in a release must balance business and technical risks in a way that business stakeholders and the development team can understand and support.

Releases whose experiments become too large or too numerous become "too big to fail" and cease to be experiments. Fast and early feedback is essential to preventing over-investment in things that customers or users don’t want or need, as well as preventing the product’s architecture from becoming bloated and ineffective and from relying on unsustainable technology.
