
Service Delivery Review: The Missing DevOps Feedback Loop?

Posted by Matthew Philip, reviewed by Manuel Pais, on Oct 28, 2018. Estimated reading time: 11 minutes

Key Takeaways

  • Tech organizations need not only the ability to change (agility) but to change in the right direction (fitness for purpose).
  • Organizations need a mechanism to be able to continuously measure how well they are fulfilling the customer's reason for choosing them.
  • Several reliable feedback loops exist for understanding product fitness, but not many exist for service-delivery fitness.
  • In a service-driven age, service-delivery fitness is an increasingly important selection criterion for customers.
  • The service-delivery review is a feedback loop that facilitates a quantitatively-oriented discussion between a customer and delivery team about the fitness for purpose of its service delivery.

In today’s digital-service economy, IT organizations need not only the ability to change but to change in the right direction. That is, they need to be able to sense and respond to feedback in order to continuously recognize and measure gaps in their understanding of the customer’s view of their fitness for purpose.

Certainly, the standard agile feedback loops -- product demo, team retrospective and automated tests -- provide valuable awareness of product health and fitness. Yet many teams and stakeholders struggle to find a reliable way to understand an important area of feedback: the fitness of their service delivery.

This article introduces the service-delivery review as the forum for this feedback.

Digital organizations need the ability to sense and respond to customers

As Jeff Sussna writes in his book Designing Delivery, service providers must make promises about listening and responding as much as about making and delivering. The marketplace changes so rapidly that what worked five years or even five months ago may be obviated without notice. Before you know it, your team or organization is no longer fit for the purpose it once served (see Blockbuster, Kodak, etc.). Above all -- beyond well-intentioned mission and purpose statements -- Sussna notes that organizations exist in order to continue to exist. That is, we're in business to stay in business. Therefore, organizations must continuously seek to understand why their customers are choosing them. But we shouldn't simply assume that our customers -- who can be internal as well as external -- choose us for the quality of our products. How can we assess the less tangible aspects of our work?

What are Services? Seeing your organization with the “Kanban” lens

Part of the difficulty stems from seeing our organizations only in terms of product. But organizations usually have a service component as well. Even something as ordinary as a coffee shop is more than a product: in audience polls during my conference talks, an overwhelming majority of people consider Starbucks neither a product nor a service business but both.

The same is true of technology organizations. This becomes easier to understand if we apply what Rodrigo Yoshima and Andy Carmichael call the "kanban lens". The kanban lens is a way to "see" your work, specifically:

  • Work as flow
  • Knowledge work as a service
  • Organizations as networks of services

When you "put on" the kanban lens, as if you were wearing a special set of superpowered goggles, you move from seeing the traditional org chart to seeing services everywhere in your organization -- from people ops to customer-facing delivery work!

Services are everywhere, if we only have the lens to see them. Regrettably, we often notice them only when they are dissatisfying. Not long ago, I "discovered" an internal service in my organization: my team created a presentation for leadership, and we wanted it to look polished. Unfortunately, none of us had visual-design chops, so we asked someone from our design team to help. The reply was "Is there a due date?" We didn't have a deadline (yet), but we also had no idea when our understandably busy colleagues would be able to turn it around. This is clearly a (design) service for internal customers who have an idea of what makes it fit for their purpose -- in this case, a reliable turnaround time.

We all make requests of individuals and teams all the time. But without a mutual exchange of information -- for example, expected delivery speed -- we're going to pad our requests with extra time or fake deadlines. In the absence of any quantitative feedback about the performance of our service delivery, arbitrary due dates and artificial boundaries will always persist. In the story of my organization's design service, our exchange would have been much easier -- and questions about expectations would have been more productive -- if we had had transparent and quantitative delivery data. This, in turn, would have fostered trust.

Thinking about service delivery in terms of fitness for purpose

What does fitness for purpose of service delivery mean?

First, do the team and its stakeholders know what their customer values about their service?

Second, do they know how to measure it?

Third, is there a regular feedback loop to assess service fitness in the eyes of the customer? 

Asking the fitness question once -- why have you chosen us, and what do you value about the service we provide? -- is good, but the answer may change over time. More often than not, teams have no ongoing way to measure whether, and how well, their service aligns with those selection criteria over time.

Dimensions of Delivery 

I’ve often used the restaurant metaphor to describe the difference between product and service delivery: when you dine out, you care not only about the food and drinks (product) but also about how the meal is delivered to you (service delivery). That “customer” standpoint is one dimension of the quality of these components — we might call it an external view. The other is the internal view — that of the restaurant staff. They, too, care about the product and service delivery, but from a different perspective: is the food fresh, kept in proper containers, cooked at the right temperatures? And do the staff work well together, complement each other’s skills, treat each other respectfully? 

So we have essentially two pairs of dimensions: component (product and service delivery) and viewpoint (external customer and internal team).

In software delivery, we have feedback loops to answer three of these four questions. We also have colloquial terminology for the internal-external dimension ("build the thing right" and "build the right thing"). Can you guess which one is missing?

Feedback loops like retrospectives and standup meetings provide valuable feedback on the internal workings of our teams. But these often occur without reference to the customer's concerns. I've worked with countless teams who got on well with each other and enjoyed working together, but otherwise were clueless as to how the customer expected them to deliver. Retrospectives tend to turn inward and focus on "not-enough-muffins" problems that concern only the team. Having enough muffins is important, to be sure (!), but the customer couldn't care less.

The problem is that we typically don't have a dedicated feedback loop for properly understanding how fit for purpose our service delivery is. And that's often the most vital concern for our customers -- sometimes even more important than the fitness of the product itself. We may touch on things like the team's velocity in the course of a demo, but we lack a lightweight structure for having a constructive conversation about the customer's concerns with the service.

The Service Delivery Review as the missing feedback loop

I first came across the idea of the service delivery review in the Kanban method, which includes it as one of its seven feedback loops to drive evolutionary change. I've been incorporating it to help teams that have been unable to hold a conversation about their service-delivery fitness, and in some contexts it appears to provide what they need.

I define the service delivery review much in the same way as David Anderson did in 2014, with minor tweaks:

A regular (for example, fortnightly) quantitatively-oriented discussion between a customer and delivery team about the fitness for purpose of its service delivery.

During the review, teams and customers might discuss any and all of the following:

  • Delivery speed: How fast are we delivering work items? Scatterplot charts can show the delivery times (aka cycle times, time in process) of recent work. And how predictable are we? A delivery-time distribution can quantify our predictability.

Figure: scatterplot chart example

Figure: delivery-time distribution example

  • Delivery throughput: How much work are we delivering? For example, is our typical range of three to five user stories per week acceptable?
  • Mix of work: Is everyone satisfied with the allocation we’re giving to various work types? For instance, is a 10% allocation of effort to removing technical debt acceptable?
  • Policy changes: What kind of treatment do we give to various types of requests? Do different stakeholders expect differing treatment? What policies are we following that we haven’t made explicit? Are our various classes of service supporting the expected speed and predictability thresholds? 
  • Due-date performance: For those items that are truly deadline-oriented, how well have we done meeting those dates (fixed-date misses)? What is an acceptable success rate, what do we need to do to achieve that and what is the cost of that level of performance?
  • Front-line data: Input from fitness surveys (for example, the F4P box score), front-line staff reports and social media.
  • Obstacles: What stands in the way of meeting our service-delivery expectations? One way to quantify this is blocker clustering, a technique popularized by Klaus Leopold and Troy Magennis that uses a kanban system to identify and quantify the things that block work from flowing; the review can then cover the results and remediations.
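To make the delivery-speed and throughput discussion above concrete, here is a minimal sketch of how a team might compute the numbers behind a service-delivery review. The work items and dates are made-up illustrative data, and the nearest-rank percentile helper is one simple choice among several; the article does not prescribe a particular calculation.

```python
from collections import Counter
from datetime import date

# Hypothetical recent work items: (started, delivered)
items = [
    (date(2018, 9, 3), date(2018, 9, 7)),
    (date(2018, 9, 4), date(2018, 9, 14)),
    (date(2018, 9, 10), date(2018, 9, 13)),
    (date(2018, 9, 11), date(2018, 9, 25)),
    (date(2018, 9, 17), date(2018, 9, 21)),
    (date(2018, 9, 18), date(2018, 10, 2)),
    (date(2018, 9, 24), date(2018, 9, 28)),
    (date(2018, 9, 25), date(2018, 10, 1)),
]

# Delivery time (aka cycle time): calendar days from start to delivery
delivery_times = sorted((done - start).days for start, done in items)

def percentile(sorted_data, p):
    """Nearest-rank percentile of an already-sorted list."""
    k = max(0, min(len(sorted_data) - 1, round(p / 100 * len(sorted_data)) - 1))
    return sorted_data[k]

# A predictability statement a team could bring to the review:
print("85% of items were delivered within",
      percentile(delivery_times, 85), "days")

# Throughput: items delivered per ISO calendar week
weekly = Counter(done.isocalendar()[1] for _, done in items)
print("Items delivered per week:", dict(sorted(weekly.items())))
```

Data like this turns "are we fast enough?" into a discussion of concrete thresholds, such as "85% of items within 14 days" -- exactly the kind of quantitative expectation the review is meant to surface.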

These are not performance areas that teams typically discuss in existing feedback loops like retrospectives and demos. Yet they're powerful and important for reaching a common understanding of what customers value most.

In my experience, they will uncover some of the most (unnecessarily) painful misunderstandings. For instance, are we producing the amount of work expected? If it's too much, we might consider moving some capacity elsewhere. If it's not enough, why might that be, and is this an opportunity to segment our delivery differently? For example, we might intentionally decide to accept lower service thresholds as a tradeoff for other business investments, such as capability building, while doing the opposite for high-demand services.

Moreover, because they are simultaneously quantitative and fitness-oriented, service-delivery reviews help teams and customers build trust together and proactively manage toward greater fitness.

Getting Started with Service Delivery Reviews

Service-delivery reviews are relatively easy to do, and in my experience provide a high return on the time invested. The prerequisites are:

  • Know your services
  • Discover or establish service-delivery expectations

In her Kanban cadences presentation, Janice Linden-Reed helpfully outlined the practical aspects of the meeting -- participants, questions to ask, and inputs and outputs -- which is a fine place to start with the practice.

I’ve also developed a customizable canvas that provides a template for inputs and outputs of the meeting. The specific implementation is less important than being clear about the purpose of the meeting, required audience and facilitator, inputs, outputs and outcomes. In my experience, the canvas can also include probabilistic forecasts of completion times, risks and blockers. 
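A probabilistic forecast of completion times, mentioned above as a possible input to the canvas, can be produced with a simple Monte Carlo simulation. The sketch below is illustrative: the backlog size and the historical weekly throughput samples are invented numbers, and the approach (resampling past throughput until the work runs out) is one common technique, not something the article prescribes.

```python
import random

def forecast_weeks(remaining_items, weekly_throughput_samples,
                   trials=10_000, seed=7):
    """Monte Carlo forecast: for each trial, repeatedly draw a past
    weekly throughput at random until the remaining work is done,
    and record how many weeks that took. Returns the sorted outcomes."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        left, weeks = remaining_items, 0
        while left > 0:
            left -= rng.choice(weekly_throughput_samples)
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return outcomes

# Hypothetical history: mostly three to five stories per week
history = [3, 4, 5, 4, 3, 5, 2, 4]

runs = forecast_weeks(remaining_items=20, weekly_throughput_samples=history)
p85 = runs[int(0.85 * len(runs))]
print(f"85% of simulated futures finished the 20 items within {p85} weeks")
```

A statement like "there's an 85% chance we finish within N weeks" gives the customer a quantitative expectation to react to, which is far more useful in a review than a single-point estimate.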

If you're starting at a more fundamental level -- discovering those fitness criteria, for instance -- you might even try a "Yelp review," a fun activity I've conducted with customers to get them thinking in both product and service terms: ask them to write a Yelp-style review from the future based on their experience with the team. For instance, one stakeholder discovered and shared his own unspoken interest in being contacted and brought in when work took longer than expected. In the same way that a futurespective helps teams by visualizing possible scenarios, writing his "review" in advance gave the team an understanding of his unvoiced fitness expectations, which they then managed in service delivery reviews.

The benefits of service-delivery reviews

I’ve worked with many high-performing teams who deliver amazing digital products and yet are surprised when their customer is dissatisfied. It’s often because they had either a) no sense of what made their service delivery fit in the eyes of their customer or b) no feedback loop to regularly and quantifiably measure that fitness. One executive that I worked with even noted that he would rather attend a service delivery review than a product demo, because the service delivery was something that he and the team could more directly improve through team composition and other organizational changes.

Specifically, the service delivery review benefits organizations because it:

  • Forces you to focus on customers and become fit for the purpose for which they chose you. Story points don't represent customers' fitness criteria; no one hired your team because of its amazing velocity
  • Sets clear standards and achievements
  • Generates feedback with (meaningful) data
  • Helps you understand why you fail and then align improvement efforts
  • Builds customer trust and loyalty

Additionally, many organizations are undergoing some kind of so-called "agile transformation", sometimes simply adhering to ceremonies. Andy Carmichael encourages organizations to measure agility by fitness for purpose rather than by adoption of practices. The service delivery review is a feedback loop that explicitly looks at this. Multiple service delivery reviews then feed upward into a regular operations review, which takes service-delivery input and gives managers a higher-order decision-making viewpoint: Based on our organizational (or departmental) goals, do some services need more capacity, and if so, which need less and can supply it? What system-level patterns are we seeing that we can resolve for multiple services? Some organizations have answered the scaling question by installing frameworks like SAFe; the combination of service-delivery reviews, operations reviews and fitness-for-purpose thinking is an alternative that lets organizations continually improve each service toward greater fitness while creating a mechanism for ongoing sensing of the customer's fitness expectations.

Ultimately, organizations and teams need some way to sense and respond to their customers, both external and internal. In their book Fit for Purpose, David Anderson and Alexei Zheglov assert that:

“The tighter you make your feedback loops, the greater agility you can exhibit as a business, the faster you can sense and respond.”

About the Author

As a capability cultivator, organizational fitness coach and workplace activist, Matt helps organizations and teams continuously become fit for their purpose. He is especially passionate about building learning organizations and creating humanizing and engaging work environments. You can follow him at @mattphilip.
