Stop and Refactor?

Joshua Kerievsky started a discussion on the Refactoring Yahoo! Group with the following post:

In recent years I've heard some folks say that one should only refactor when one is working on a User Story. I've never agreed with that notion, since I think there are times when you simply need to pay down technical debt. For several days now, my colleagues and I have been refactoring our eLearning code. There is no User Story driving this. We have simply incurred more technical debt than we would like and now is a good time to pay it down. There has been a pernicious Singleton that has played a central role in our code and is now being killed off, because that will open up our code for many more design improvements. This work feels good. It will position us to have a much easier time working on the User Stories that are coming.

Meanwhile, we continue to ship each week -- little fixes and lots of refactorings. As always, our automated microtests and storytests give us a great deal of confidence and courage.

Anyway, I thought I'd share this as it may lead to an interesting discussion.
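Josh doesn't show the code itself, but the change he describes -- killing off a central Singleton -- is a well-known refactoring: the globally fetched instance is replaced by a dependency that callers receive explicitly, which is what lets microtests substitute fakes and opens the design up for further improvement. The sketch below is purely illustrative; the names (CourseCatalog, LessonPlayer) are hypothetical and a Java codebase is assumed.

    // Before: a collaborator reaches out to a global Singleton (hypothetical names).
    class Lesson {
        final String id;
        Lesson(String id) { this.id = id; }
    }

    class CourseCatalog {
        private static final CourseCatalog INSTANCE = new CourseCatalog();
        static CourseCatalog getInstance() { return INSTANCE; }
        Lesson find(String id) { return new Lesson(id); } // stand-in for a real lookup
    }

    class LessonPlayer {
        Lesson load(String id) {
            // Hidden dependency: a microtest cannot substitute a fake catalog here.
            return CourseCatalog.getInstance().find(id);
        }
    }

    // After: the dependency is an interface handed to the player by its caller,
    // so a microtest can pass in a fake catalog and the Singleton can be retired.
    interface Catalog {
        Lesson find(String id);
    }

    class RefactoredLessonPlayer {
        private final Catalog catalog;
        RefactoredLessonPlayer(Catalog catalog) { this.catalog = catalog; }
        Lesson load(String id) { return catalog.find(id); }
    }

Once collaborators receive the catalog instead of fetching it, the Singleton's remaining call sites can be removed one at a time, which is how this kind of cleanup can proceed alongside weekly releases.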

Dale Emery tried to get clarity on Josh's context:

Dale: I suspect that the general advice exists to discourage technical folks from making business decisions. The decision to pay down technical debt requires a solid understanding of both the business and technical impacts. If the deciders lack that solid understanding, the decision becomes unreliable. Yours is a special case that reduces the danger of that.

Josh: Yes, our case is special. However, I'd say it is generally a very good thing for business and technical folks to work closely together so that these kinds of technical debt decisions are shared by everyone. And yes, we'd not want developers to just decide to spend several weeks refactoring without any real collaborative decision making happening within the larger project community.

Dale: It seems to me (correct me if my assumptions are mistaken) that your Customer is highly technical, and many if not all of your technical folks intimately understand your business. Further, your Customer is you. When you make the decision to pay down technical debt, you're doing that with full knowledge of the business impact, and probably /because/ of your full knowledge of the business impact.

Josh: Yes, the decision to refactor away the technical debt now is driven by

  • timing -- we just finished a very important release to our biggest client and it's time to step off the "feature train" for a bit.
  • future -- we have more features coming and hands-on experience with this code says that the technical debt will only slow us down.
  • ubiquitous language -- we have a wonderful music metaphor in our code and...a few remnants of our older metaphor (books) still lingering around.

With this understanding of Josh's context (and even more throughout this very long thread), Adam Sroka suggested:

So. You aren't working from fixed iterations. You develop incrementally and release as often as possible. Sometimes you have a set of user stories you are working on and you are trying to release those to customers quickly. Other times you are just trying to improve the design of what you have to make it better. You are both the Customer and a programmer.

This sounds exactly like every good open source project I have ever encountered, and very little like any Scrum or XP project. I don't think there is anything wrong with what you are doing, but I'm not sure it is terribly useful to people who are trying to understand how to do refactoring in an Agile context.

One of the core concepts in Scrum and XP is that we are gaming the needs of non-technical business folks against the needs of the technical team. We want to make sure that technical things are done proficiently, but we also want to maximize the control that business has over what is produced and when it is produced.

What you are describing is an environment where this dichotomy doesn't exist. You are free to decide what features you want to add, when you want to add them, and when to take a break from delivering features and focus on purely technical issues.

So Adam suggested that Josh's context is not applicable to most projects, the most important distinction being the absence of the usual technical/non-technical struggles over communication, understanding, and priorities.

Ron Jeffries suggested that the amount of refactoring we should do is a business decision, and that refactoring is an investment with no immediate payoff. He also objected to framing this as a binary choice between "no refactoring" and "stop and refactor":

There is an assumption here, that needs to be made explicit. That assumption is that it is somehow, sometimes better to stop or slow "forward" progress and clean up.

It seems obvious to a lot of people that such a thing can happen, that the code can get so bad that the only thing to do is to clean it up while halting or reducing feature progress.

It does not seem obvious to me. The numbers just don't hang together. When we clean up the code, the cleanup we make will only pay off at some future time. Some may pay off tomorrow, and some may not pay off for weeks or months. None of it pays off now.

All refactoring that slows feature progress is an investment in the future. What needs to be figured out is whether, how, and when, such an investment is really worth making.

Ron then continues to suggest a way to determine when refactoring is an investment worth making:

When is it better [to refactor], and why? There are many possible paths to the future, showing feature value growing over time. We can describe two of them:

1. Don't refactor. Feature value grows more and more slowly, levels off, perhaps even starts to decline as error injection swamps feature injection.

2. Stop feature development and refactor. Feature value DOES NOT GROW, for a while. After a while, it starts to go up again. We assume that since the code is now better, it will climb more rapidly than it did before. However, feature value [on the no-refactoring path] has grown, and there will be some interval before we catch up. After that, we assume, we'll start pulling ahead.

So what can we conclude, comparing just these two? First, not refactoring provides MORE feature value until some time after refactoring ceases. Second, to know when refactoring starts to win we need to know some numbers: how long will refactoring take, and what will be its impact on velocity? And, how long will it be until the code gets crufty again, which will bring the numbers back down?

It is entirely possible to stop, refactor, be a little behind on features, slam in a few features, not catch up but screw up the code, and loop forever, never getting benefit. We hope that's unlikely ... and it is, if people are sufficiently skilled ... which is part of my point above, that your advice is good for experts.

However, these two end points just show that the stop and refactor strategy can itself fail. Is there another strategy that might work better? There is.

Let's imagine a kind of "Refactoring Accelerator", RA. In case 1 above, it was at 0.0, no refactoring. In case 2, we have set it to 1.0, flat out. What happens with settings in between?

First of all, what happens to feature value as a function of RA? There is some number x, 0 < x < 1, such that if RA < x, feature value growth always declines. We are not refactoring enough, the code deteriorates, we lose more and more. With RA = x, feature value growth stays constant. We don't speed up, but we don't slow down either. We are holding our own at whatever velocity we then have.

Now, if we set the Refactoring Accelerator above x, RA > x, what happens? Can we go faster, or do we always slow down? We know that at one point, RA = 1.0, we do slow down, to zero (but we can speed up later).

The answer depends on a kind of response curve of velocity in relation to code cleanness. We know that higher cleanness leads to higher velocity, and (I think) we know that the earliest bits of cleanness have a disproportionate good effect, while the last bits of polishing don't add much at all.

What I /believe/ can happen ... and there is absolutely no proof that it cannot happen ... is that by pushing RA just a bit above x, we can make faster and faster feature progress. If that's true, then this strategy will deliver value uniformly faster than the stop to refactor strategy.

Therefore, if this is true -- and I'm quite sure it is true -- the stop to refactor strategy is never the best one for a team skilled enough in refactoring.
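Ron's break-even question -- how long the cleanup takes, what it does to velocity, and when the two curves cross -- can be made concrete with a back-of-the-envelope model. The toy simulation below is only an illustration of his arithmetic; every constant in it (a 5% weekly slowdown from cruft, a three-week cleanup, a fixed post-cleanup velocity) is invented for the example, not taken from the discussion.

    // Toy model of the two strategies Ron compares; all numbers are invented.
    public class RefactorBreakEven {
        public static void main(String[] args) {
            double noRefactor = 0, stopAndRefactor = 0;
            double decayingVelocity = 10;        // features/week, eroded by growing cruft
            final int refactoringWeeks = 3;      // weeks of zero feature progress
            final double improvedVelocity = 12;  // assumed velocity on the cleaned-up code

            for (int week = 1; week <= 20; week++) {
                noRefactor += decayingVelocity;
                decayingVelocity *= 0.95;        // cruft slows the un-refactored code

                if (week > refactoringWeeks) {
                    stopAndRefactor += improvedVelocity;
                }
                if (stopAndRefactor >= noRefactor) {
                    System.out.println("Stop-and-refactor catches up in week " + week);
                    return;
                }
            }
            System.out.println("No break-even within the 20-week horizon");
        }
    }

With these made-up numbers the refactoring path catches up around week ten; change the constants and the crossover moves or disappears, which is exactly Ron's point that the decision hinges on numbers the team usually does not know.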

The key point Ron makes here is that the "stop and refactor" strategy is NEVER the best choice for a team competent in refactoring.
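Ron's "Refactoring Accelerator" can be sketched the same way: treat RA as the fraction of each week spent refactoring, let code cleanliness drift down under feature work and up under refactoring, and make velocity a diminishing-returns function of cleanliness. The model below is a speculative illustration of that argument -- the response curve and the constants are assumptions, not measurements.

    // Toy model of a steady refactoring rate RA; curve and constants are invented.
    public class RefactoringAccelerator {
        // Assumed response curve: early cleanliness helps a lot, later polish adds little.
        static double velocity(double cleanliness) {
            return 10 * Math.sqrt(cleanliness);
        }

        static double totalValue(double ra, int weeks) {
            double cleanliness = 0.5, value = 0;
            for (int week = 0; week < weeks; week++) {
                value += velocity(cleanliness) * (1 - ra);   // only non-refactoring time ships features
                cleanliness += 0.1 * ra - 0.03 * (1 - ra);   // refactoring cleans, feature work crufts
                cleanliness = Math.min(1.0, Math.max(0.05, cleanliness));
            }
            return value;
        }

        public static void main(String[] args) {
            for (double ra = 0.0; ra <= 1.0; ra += 0.25) {
                System.out.printf("RA = %.2f -> feature value after 50 weeks: %.1f%n",
                        ra, totalValue(ra, 50));
            }
        }
    }

In this toy model the extremes (RA = 0, never refactor, and RA = 1, refactor flat out) both do poorly, while a modest steady rate wins -- the shape of the result Ron argues for, though the real curve for any team would have to come from its own experience.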

This lengthy report is but a glimpse of a very engaging discussion. As Josh notes at the start of the conversation, this is not a new question. In fact, this reporter wrote an editorial on the subject, Refactoring is a Necessary Waste, two years ago, and the topic of refactoring has been covered by InfoQ on multiple occasions. The community has not reached consensus on this topic.
