Kent Beck Suggests Skipping Testing for Very Short Term Projects

by Mark Levison on Jun 18, 2009 · Estimated reading time: 4 minutes

Kent Beck, author of “Extreme Programming Explained” and “Test Driven Development: By Example”, suggests software, like golf, is both a long and a short game. JUnit is an example of a long game project – lots of users, stable revenue (sadly, at $0 for all involved) – where the key goal is to just stay ahead of the needs of the users.

When I started JUnit Max it slowly dawned on me that the rules had changed. The killer question was (is), “What features will attract paying customers?” By definition this is an unanswered question. If JUnit (or any other free-as-in-beer package) implements a feature, no one will pay for it in Max.

Success in JUnit Max is defined by bootstrap revenue: more paying users, more revenue per user, and/or a higher viral coefficient. Since, by definition, the means to achieve success are unknown, what maximizes the chance for success is trying lots of experiments and incorporating feedback from actual use and adoption.

JUnit Max reports all internal errors to a central server so that Kent can be aware of problems as they come up. This error log helped find two issues. For the first he was able to write a simple test that reproduced the problem and verified the fix. The second problem was easily fixed, but Kent estimated it would take several hours to write a test for it. In this case he just fixed it and shipped.
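A regression test of the "reproduce the reported error, then verify the fix" kind Kent describes might look like the following sketch. The `parseVersionMajor` helper, its bug, and its inputs are all hypothetical (nothing here is from JUnit Max), and plain Java assertions are used so the example runs without the JUnit library:

```java
// Minimal sketch of a "reproduce, then verify the fix" regression test.
// parseVersionMajor and its inputs are hypothetical, not from JUnit Max.
public class RegressionSketch {

    // Hypothetical helper: returns the major version number from a version
    // string. The imagined bug: it threw NumberFormatException on qualified
    // versions like "1.0-beta". The fix strips the qualifier first.
    static int parseVersionMajor(String version) {
        int dash = version.indexOf('-');
        String core = dash >= 0 ? version.substring(0, dash) : version;
        return Integer.parseInt(core.split("\\.")[0]);
    }

    public static void main(String[] args) {
        // 1. Reproduce the input from the error report; the fix must hold.
        if (parseVersionMajor("1.0-beta") != 1) throw new AssertionError("regression");
        // 2. Verify that the unqualified case still works.
        if (parseVersionMajor("2.3") != 2) throw new AssertionError("regression");
        System.out.println("ok");
    }
}
```

The point of such a test is exactly what the article describes: it is cheap to write when the failing input is already in the error log, and it keeps the fix verified on every future run.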

Kent goes on to say:

When I started Max I didn’t have any automated tests for the first month. I did all of my testing manually. After I got the first few subscribers I went back and wrote tests for the existing functionality. Again, I think this sequence maximized the number of validated experiments I could perform per unit time. With little or no code, no tests let me start faster (the first test I wrote took me almost a week). Once the first bit of code was proved valuable (in the sense that a few of my friends would pay for it), tests let me experiment quickly with that code with confidence.

Whether or not to write automated tests requires balancing a range of factors. Even in Max I write a fair number of tests. If I can think of a cheap way to write a test, I develop every feature acceptance-test-first. Especially if I am not sure how to implement the feature, writing a test gives me good ideas. When working on Max, the question of whether or not to write a test boils down to whether a test helps me validate more experiments per unit time. If it does, I write it. If not, damn the torpedoes. I am trying to maximize the chance that I’ll achieve wheels-up revenue for Max. The reasoning around design investment is similarly complicated, but again that’s the topic for a future post.

Ron Jeffries, author of Extreme Programming Installed, replies: “I trust you, and about three other people, to make good short game decisions. My long experience suggests that there is a sort of knee in the curve of impact for short-game-focused decisions. Make too many and suddenly reliability and the ability to progress drop substantially.”

Johannes Link, Agile Software Coach, says: “I have seen a couple of developers who were able to make reasonable short-term / long-term decisions. I have yet to see a single team, though, let alone an organization.”

Michael O'Brien by contrast commented: “A great article and the right decision, I think. It’s too easy to get caught up in beauty and consistency when you’re writing code, and forget what you’re writing code for. I write tests because it makes writing code easier and gives me confidence the code does what I think it does. If writing a test isn’t going to help me achieve that, I say skip it.”

Olof Bjarnason thinks that: “one relevant idea Kent brings up is feedback flow. If we focus on getting that flow-per-unit-time high, we are heading in the right direction. For example, he mentions short-term untested-feature-adding as a maximizer of feedback flow at the beginning of the JUnit Max project, since the first test was so darn hard to write (it took him over a week). He got a higher feedback flow by just hacking it together and releasing; his ‘red tests’ were the first few users and their feedback.”

Guilherme Chapiewski raises the concern that sometimes you think it’s a short game but it’s not. In Guilherme’s case, he decided to write a project without any tests as a proof of concept. It flew, and people started to use it, quickly finding a few bugs that couldn’t be fixed. In the end he concluded the code was rotten and untestable. He threw it away and started again from scratch.

Kent replies to many of the comments saying: “I agree that confusing the practices and the principles leads to problems. And that tests lead to better designs. That’s why I have ~30 functional tests and ~25 unit tests (odd balance because Eclipse apps are so hard to test). I do almost all of my new feature work acceptance-test-first. It helps reduce the cycle time.”

Does this idea safely scale beyond one or two people? Aside from Kent Beck, do many people have the judgment to pull this off?

policy vs. practice / prototype vs. production by will gage

it doesn't scale as a policy beyond 1 or 2 people, though i would say that it's a valid approach for any 1 or 2 people to use when prototyping an idea. in the prototyping phase, you're better off not interfering with your flow too much. if you feel like it's natural to write a test while prototyping, do it. if not, keep doing what you're doing. but keep yourself honest when it comes to bringing that code into a production build by enforcing some coverage thresholds in your build process. these days, production code for me is very well covered by tests, but that doesn't mean i always write tests first.
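The coverage-threshold gate mentioned in the comment above can be wired into a build in several ways; one common approach is the JaCoCo Maven plugin's `check` goal. This is a sketch only, and the 0.80 minimum is an illustrative value, not a recommendation:

```xml
<!-- Sketch: fail the Maven build if line coverage drops below 80%
     (illustrative threshold, tune to the project). -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>check-coverage</id>
      <goals><goal>check</goal></goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.80</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

A gate like this fits the commenter's point: prototype however you like, but the threshold keeps untested prototype code from slipping into a production build unexamined.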

Re: policy vs. practice / prototype vs. production by jon kern

It's like many things in software -- a judgement call. If you are in a highly exploratory phase... if you do excellent design without requiring TDD (which makes the assumption that TDD somehow always yields better design)... then be pragmatic, NOT dogmatic. Use your brain, not a blind allegiance to a process for process' sake. More here

Re: policy vs. practice / prototype vs. production by Mike Bria

Generally, I agree - pragmatism rules. That said, I've seen (incl. at my very own hands) instances where an "experiment"/"spike" yielded code that was way more robust than an experiment should've produced.

Then ya sit, with a bunch of (apparently) working code - but no tests.

Advertised law (for TDD practitioners anyhoo) is to throw out that code and start fresh, driving with tests. But, somewhere along the way in the spiking, my "god this code repulses me, I simply must refactor else I'll shrivel and die" bone spasmed and alas the code ends up being much prettier (and well-factored) than "throw-away/experiment" code deserves. Wrong to do? Prolly, but it is what it is.

So, what do you do? Really throw it all out? I usually can't bring myself to and end up burning a ton of time backfilling tests around this code, which (for me anyway) is a rather unpleasant endeavor.

In fact, I literally just found myself in such a predicament this week.

I suppose my point is that I agree going forward test-less has its place in certain cases, but it does not come without a certain cost, and without an inherent risk of falling back to old bad habits, even for disciplined folks, and especially for those in sad pressure-cooker environments demanding "deliver, deliver, deliver".


ps// My longer rants here

Re: policy vs. practice / prototype vs. production by jon kern

yea, it's all in the setup/context. if you do production code when you are supposed to be doing exploratory stuff that is not to be used, well, then you may suffer.

i am careful not to let exploratory go too far... needs to be clear goals that you are seeking to compare and contrast competing ideas against. or even if you are exploring one idea to see if it meets your needs. point is, do the minimum required and NO MORE.

take your knowledge from the exercise and apply it to the project, which generally means from scratch.

if it were easy and cut-n-dried, a machine would do it. or google.

-- jon

Re: policy vs. practice / prototype vs. production by Peter Williams


Agreed. Process for process' sake has never worked in my experience. Steve McConnell warned about it 10 years ago with his Cargo Cult Software Engineering articles.

The same rigorous sense of empiricism that leads people to develop processes where progress is measured by tests should naturally lead them to validate the processes they develop.

Re: policy vs. practice / prototype vs. production by Mark Levison

Jon - I agree, with a big but. My concern, as Kent wrote this article, is that he chose a very poor headline that can do a lot of damage. Many people will read the article and not get the subtle point he was trying to get across. My fear is that people will read the headline and a little bit of the article and see "Kent gave me permission not to test". Also missing from this conversation is the context in which it occurs. If you've followed Kent on Twitter you will know that much of the problem in testing was the Eclipse framework. Eclipse is a wonderful IDE, but the packaging framework makes it very difficult to test the outside edges. Context is everything.

Let's hear it for cowboy programming by Will C

Let's hear it for cowboy programming.

'Good enough' really can be good enough. _So long as_ whatever it is is simple or short-lived.
For example; a stick is good enough to dig a small hole in the ground, and saves me walking back to my garage to get the spade. But if I want to dig a big hole, or several, the spade is the way to go.

Another factor is how well you can cowboy. My limit is a lot lower than Mr Beck's, because he has spent so long using good analysis and design practices that he thinks that way as a learned reflex. I can cowboy for a day, perhaps three, before I sink in the quagmire. I know people who can't even get that far. But intuitive good design can be learnt, and forgotten, in the same way as an intuitive grasp of trigonometry or matrices.

Reference? by Frank Calfo

Mark - you seem to be quoting from some kind of article posted by Kent Beck. Can you provide a link to the original source so we can see Kent's complete discussion?

Skip tests? Really? by Frank Calfo

The title of this post does not seem to be consistent with the quotes taken from Kent Beck. Kent does not say that he did not ever write any tests for a short term project. He says he did not write tests initially but did write them later. I think the title of this post is misleading.

Are we talking about production software or beta? by Frank Calfo

Kent says "what maximizes the chance for success is trying lots of experiments and incorporating feedback from actual use and adoption."

He seems to be saying that he needs to skimp on initial quality and crank out a product as soon as possible so he can get user feedback.

If the users have agreed to accept a lot of bugs initially, report them, and patiently wait for a fix (i.e., they've signed up for beta software) then I see no problem with this practice.

But if they did not agree to a beta program, then this approach sounds like it's just pushing the QA responsibility on to the users which I don't think is right.

Again, it's difficult to make universal statements about software development. Every idea requires a context.

