# Stop Measuring Turn Around Time

| Posted by Wes Higbee on Aug 04, 2016. Estimated reading time: 10 minutes |

## Key takeaways

• Timeliness rarely matters
• Turnaround time is often used as a surrogate measure of success
• If you focus on time you'll neglect outcomes
• People measure time when they don't know what really matters
• Satisfaction is subjective

Recently I had trouble with an order that I placed on Amazon. I emailed the seller to let them know that they only sent one of the two items that I had ordered. It wasn’t that big of a deal to me; I assumed that they would fix things eventually.

I got the standard automated response that they would be in touch soon to help me with my problem.

And then an hour later I received an email with profuse apologies for not getting back to me fast enough. That felt odd. An hour is not a long time to wait for the problem that I had. I figured that I’d be happy if they resolved my problem within a couple of weeks.

I’m sure there are people that would be incensed at waiting an hour, but not me. And there are plenty of people just like me that would be perfectly happy with a two-week turnaround time.

So, I ignored the email and went about my day. About an hour later I received another email from them, this time apologetic about my missing item. But the email had an odd request, something I wasn’t expecting.

They asked me to take a picture of the product that I did receive, the package it came in, and the packing slip.

This was a $5 purchase! They wanted me to take the time to take three separate pictures and attach them to an email before they could assess my situation further. By this time I had already thrown away the packaging and wasn’t about to dig it out of the trash. I was flabbergasted. Why in the world would they ask me to do this? What exactly would the pictures help them do? Determine whether I was lying?

To me this was simply a matter of trust. They had all the information on their end to look up my order and verify it. So either they believe me and send me the missing item, or they need to call me a liar. I followed up asking why they needed multiple pictures. I point-blank asked them if they thought I was lying. And I told them that taking all these pictures and filling out the paperwork was a waste of my time. I’d rather just go on Amazon and order another one from another seller; it’s only $5.

We went back and forth over several emails. They insisted that they needed the pictures. Refused to explain why. They kept sending the same inquiry, time and again, almost as if a robot was on the other end. They never even acknowledged my question “why?”

Ultimately they closed my case without doing anything for me. All within a few hours of my initial complaint.

All along the way they made sure to send emails informing me that they would be in touch quickly. That I wouldn’t have to wait long.

It seems to me that they were more worried about how fast they could resolve my problem. And by resolve I mean how fast they could close my complaint.

## Is expediency expected?

Common sense seems to suggest that people want responses quickly. And in some cases people do care about timeliness.

But there are many cases where people don’t care about the turnaround time. And regardless of the expected turnaround time, the one thing people do want is to legitimately have their problem resolved.

People want people that care.

It’s tempting to fall into the trap of believing that speed matters universally. It doesn’t always, and it matters far less often than you would think.

How many times have you had a request for an urgent change to the software that you support? How many times do you get things updated quickly, only to find that the change goes unnoticed, unused?

What’s the last feature you developed where turnaround time was a deal breaker?

How many times have you sacrificed something to be expedient?

## Turnaround == results?

Unfortunately, turnaround time is something that’s easy to measure. Results on the other hand aren’t. In many environments it’s tempting to measure turnaround time and use it as a substitute for measuring results. Results that are often intangible.

Turnaround time becomes a surrogate measure of success.

It seems that the company I was working with measured turnaround time. It’s very possible that they treat many of their customers exactly the same way that they treated me. Expeditiously but without a care about the outcome. I’m sure they’ve got great turnaround time.

But obviously turnaround time doesn’t tell the whole story. According to their issue-tracking system, they had amazing turnaround time resolving my problem.
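To make the surrogate-measure problem concrete, here’s a minimal sketch with invented ticket data (the numbers and field names are hypothetical, not from any real system). The turnaround dashboard looks great; the outcome column, which the dashboard never shows, tells another story:

```python
# Hypothetical ticket data: hours to close each ticket, plus whether the
# customer's problem was actually solved (which the dashboard never sees).
tickets = [
    {"id": 101, "hours_to_close": 3, "problem_solved": True},
    {"id": 102, "hours_to_close": 2, "problem_solved": True},
    {"id": 103, "hours_to_close": 4, "problem_solved": False},  # closed fast, never resolved
]

# The metric that goes up on the wall.
avg_turnaround = sum(t["hours_to_close"] for t in tickets) / len(tickets)

# The outcome nobody measures.
solved_rate = sum(t["problem_solved"] for t in tickets) / len(tickets)

print(f"avg turnaround: {avg_turnaround:.1f} hours")  # looks impressive
print(f"actually solved: {solved_rate:.0%}")          # one customer left angry
```

My $5 ticket is ticket 103: closed within hours, counted as a win, problem never fixed.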

How do you judge the success of your software development projects? What do you measure? What do you display in your radiators? What do you pat yourself on the back for?

Chances are you measure turnaround time. We see it extolled in many of the “modern” development practices, to name a few: velocity tracking, burndown charts, story points, planning poker, sprint planning, time-boxing and continuous everything. It’s all about time, and often about minimizing time.

It’s even in the Agile manifesto: “Delivering working software frequently” and “working software is the primary measure of progress.” These two ideas combined are a recipe for negligence.

We may quickly develop software, and we may quickly release it as working software. But, what impact does that software have? Have we simply delivered working software, quickly, that doesn’t provide much value?

Are our customers stuck wondering why we don’t believe them and just ship them the $5 replacement?

Now you might be thinking it’s still important to roll up turnaround time: to measure it on an individual, case-by-case basis, then aggregate the data and use that as a metric to pat yourself on the back. And that you’ll be safe so long as you find other metrics to put in place to complement turnaround time.

Unfortunately, there aren’t many metrics that can tell you what you would need to know. What you need to know is if people are satisfied. It’s not easy to meaningfully measure how people feel.

You could ask customers how they feel. I’ve seen the automated emails that ask me if I was happy with an inquiry. A simple yes or no answer, often a link to click to reply. It’s easy to set this up.

But then a problem arises when you aggregate the responses. How do you roll up yes and no responses? How do you aggregate how many people feel, and tie it back to anything meaningful?

A yes/no question often lacks context. Customers are often irrational. An unhappy customer isn’t necessarily unhappy because of something you did wrong. So you have to figure out how to incorporate this into your measurements.

On the flip side of things, some customers won’t tell you when they’re unhappy, or even happy. They may simply not respond to your request. Or they may be intimidated for whatever reason, not wanting to upset somebody, so they don’t reply honestly.
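Here’s a minimal sketch of that rollup problem, using invented survey replies (the data is hypothetical). A simple yes/no tally produces a tidy percentage while silently discarding everyone who never answered, which may be most of them:

```python
from collections import Counter

# Hypothetical survey replies: "yes", "no", or None for customers who
# never answered (perhaps unhappy, perhaps just busy, we can't tell).
replies = ["yes", None, "no", None, None, "yes", None]

# The naive rollup only counts people who replied.
counts = Counter(r for r in replies if r is not None)
satisfaction = counts["yes"] / (counts["yes"] + counts["no"])

print(f"satisfaction: {satisfaction:.0%} (of those who replied)")
print(f"never answered: {replies.count(None)} of {len(replies)}")
```

The headline number covers three people; the four silent customers, the ones most likely to be intimidated or indifferent, vanish from the metric entirely.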

Last year I remember calling the phone company about a problem I had with my bill. I received a survey after the call. I wasn’t happy, so I gave low ratings.

Within minutes I had a phone call from a supervisor, asking me about the situation. That’s not necessarily a bad thing. But, what was problematic was that the supervisor told me that the support rep was in tears.

I felt bad. The support rep that I talked to couldn’t do anything about my problem. My rating had nothing to do with the support rep’s performance. It had to do with the fact that I was unhappy about what had happened with my bill. And I was also unhappy that I had to call multiple times to get the problem resolved.

I answered honestly and that backfired.

And if you think this—being told that you made someone cry—wouldn’t affect how most customers respond to surveys, you might want to think again. This type of interaction does affect how people give ratings in the future.

It makes me angry that a company would not be able to factor in that I was upset with the result and not the person that was trying to help.

So, another problem is how you react to these responses.

These problems are inherent with measuring and aggregating statistics about performance. It’s so often disconnected from reality that it’s useless at best. And more likely than not leading to undesirable consequences.

Measuring turnaround time is more likely to result in a support rep being berated for trying to help than it is to make a customer happy.

What else do you measure as an indication of progress in developing software? Is it sufficient to remove the problems inherent in measuring time, to close the gap between measuring time and judging results?

What’s the biggest project you’re working on right now? What’s the value of the software to users? To the business? To you? Do you know? How could you find out? Will measuring speed of delivery have an impact on this value?

## You Get What You Aim For

No matter what industry you’re in, no matter what type of work you’re doing, you shouldn’t measure turnaround time. Don’t put it up on the wall, as tempting as that might be.

You can calculate it if you want, on a case by case basis, perhaps use it to find things that have been neglected. But when you’re dealing with human beings you need to understand what matters to individual human beings.

Turnaround time is often not that important. If you prioritize it, it’s what you’ll focus on. You’ll end up thinking you’re doing well when you’re probably not.

When you put it up on walls in front of everybody, you foster the mentality that speed is universally important. You’ll likely find yourself incentivizing fast turnaround time at the expense of results.

It’s far more worthwhile to develop the ability to understand what matters to individuals. To develop individual relationships with individual customers. If you focus on this, you’re much more likely to make your customers happy and be successful as a business.

In the process you’re going to find out that making this a reality requires decentralizing the responsibility, and authority, to understand what customers value.

No number on the wall will outperform the mentality that value is subjective. And in business, success is predicated upon creating things that people appreciate. Do the math.

When it comes to software, do you really care how long it takes to make? How fast you roll out features? Or would you rather know that people are satisfied with the software? Software that’s providing tremendous value to the organization. Value that you’re aware of, focused on, and working to maximize.

As a consultant, Wes helps people eradicate emotional blind spots with technology and produce rapid results. His career has been a journey. He started out as a software developer. In working closely with customers, he realized there are many needs beyond the software itself that nobody was taking care of. Those are the needs he addresses today, whether or not technology is involved. Along the journey Wes has had a passion for sharing knowledge. With 15 courses and counting, he has helped thousands of people improve. He works with both Pluralsight and O’Reilly. He’s been a speaker at countless local meetups, community organizations, webinars and conferences. And he speaks professionally to help organizations improve.



## Comments

### Agile frequent delivery != fast delivery per se

You made the statement that the Agile manifesto’s “delivering working software frequently” and “working software as the primary measure of progress”, when combined, are a recipe for negligence. I find that a bit hard to match with your statement that turnaround time is overrated (or even the root of many evils).

Frequent delivery is not equal to fast delivery. Working software means just that: working software (with the emphasis on "working"). The latter statement gives the development team the power to say "NO" when the business urges that the software be deployed NOW. Frequent delivery means that the team can demand room in the schedule of the service provider to bring something into production (because they have declared it working and ready for production). Declaring something ready for production is not something that the development team decides on its own; the business has to agree with that (and that is done through the single representative, the product owner).

All this is to avoid all kinds of issues with business representatives having non-technical, non-development agendas and political grounds for not wanting something in production. That is speeding things up, but not for the sake of fast turnaround times; it is for the sake of optimizing the business value of the product being placed in production. That is measured in return on investment, not in turnaround time.

All that being said, I do agree with your sentiment that too much stress is put on turn around times (and perhaps other SLA type measurements) and too little on satisfaction.

But we should not forget that there is some value in a reasonable turnaround time. Nobody wants to wait five weeks for a response to a request and six more weeks for an answer to that one. As you state: it should be done on a case-by-case basis.

### Process, process, process

You really touch upon multiple subjects here, though it all does boil down to process.

First, agile sprint cycles are not the same as support-ticket turnaround. The former ensures that the software is in working order on a frequent basis, whereas the latter insists that all new features be completed on a frequent basis.

Routines: receiving your support ticket is not a defined process, because you are not following one when you post your ticket. So the first stage of dealing with your ticket is to convert it into an existing, defined process. Each process, of course, has its own input parameters.

In this, the turnover rate can do a lot for the first-line support agent. So part of the “package did not contain all books” process might require some documentation, such as the tracking ID of the delivery you actually received. And this is where things seem to go wrong.

They might have legit reasons for wanting these photos, such as matching what you got with the information they received from the book store you bought it from. Or in the case of robot packaging, information about the format of the packaging or your book. This is stuff that helps THEM find out where their own packaging and delivery process fails.

That said, the support agent failed at THE most important criterion for support: customer satisfaction. In this case, you had a legitimate request for information about the process, and you met a wall instead. The agent could have answered honestly: "I don't know why they need these photos, but it's part of the process." The vendor could have put up a generic page about the process at hand. Instead, the agent walled you in with repeated requests to comply with their processes, forgetting that the agent's job is to be the bridge between customer and process.

I am certain the agent was never trained to think like this, however.

As for the photos, it would make sense if submitting them were part of the Kindle app. So even if you buy a physical book, you could get your order information, create your support ticket directly from the app, and add photos. All the information you wanted from the agent about why these photos are necessary could have been right there in the app. Both you and the agent would have saved a lot of time and suffering.

