InfoQ Article: Using Logging Seams for Legacy Code Unit Testing

by Floyd Marinescu on Aug 03, 2006

In his book "Working Effectively with Legacy Code," Michael Feathers talks about finding seams in legacy code to facilitate testing. He defines a seam as "a place where you can alter behavior in your program without editing in that place." The types of seams available vary among languages; for Java he identifies two relevant categories: "link seams," where ordering the classpath differently allows different classes to be substituted for testing, and "object seams," where calls to constructors or methods are made on a subclass or mock implementation rather than the originally expected class.
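As a rough illustration of an object seam, the sketch below routes object creation through an overridable factory method, so a test can substitute a recording fake without editing the method under test. All class and method names here are hypothetical, invented for the example; they do not come from Feathers' book or the article.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical legacy collaborator that is hard to use in tests.
class MailSender {
    void send(String msg) { /* imagine this talks to a real SMTP server */ }
}

class OrderService {
    // The seam: object creation goes through an overridable factory method.
    protected MailSender createSender() { return new MailSender(); }

    String confirm(String order) {
        createSender().send("confirmed: " + order);
        return "OK:" + order;
    }
}

// Test-side fake that records what was sent instead of sending it.
class RecordingSender extends MailSender {
    final List<String> sent = new ArrayList<>();
    @Override void send(String msg) { sent.add(msg); }
}

public class ObjectSeamDemo {
    public static void main(String[] args) {
        RecordingSender fake = new RecordingSender();
        // Behavior is altered at the seam, without editing confirm() itself.
        OrderService svc = new OrderService() {
            @Override protected MailSender createSender() { return fake; }
        };
        System.out.println(svc.confirm("42"));
        System.out.println(fake.sent);
    }
}
```

The point of the seam is that `confirm()` is exercised unmodified; only the creation point is overridden in the test.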

Ian Roughley, a committer on the WebWork project, has found that the logging seam can be quite useful: with it you can easily create unobtrusive unit tests around classes without needing to edit class logic or risk changing behavior. Read more in InfoQ's latest article, Utilizing Logging Seams to Confidently Create Unit Tests around Legacy Code.
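One way to read "logging seam" is that the log output itself becomes the observable behavior a test asserts on, so the legacy logic stays untouched. The sketch below shows that idea using `java.util.logging` and a capturing `Handler`; the legacy class and its log messages are invented for illustration and are not taken from the article.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Hypothetical legacy class: dependencies are tangled, but it already
// logs its key decisions, which gives us a seam to test through.
class LegacyOrderProcessor {
    private static final Logger LOG =
            Logger.getLogger(LegacyOrderProcessor.class.getName());

    void process(int quantity) {
        if (quantity > 100) {
            LOG.info("bulk discount applied");
        } else {
            LOG.info("standard pricing applied");
        }
        // ... many more lines of legacy logic we dare not edit yet ...
    }
}

// A Handler that records messages so a test can assert on them.
class CapturingHandler extends Handler {
    final List<String> messages = new ArrayList<>();
    @Override public void publish(LogRecord record) { messages.add(record.getMessage()); }
    @Override public void flush() {}
    @Override public void close() {}
}

public class LoggingSeamDemo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger(LegacyOrderProcessor.class.getName());
        CapturingHandler handler = new CapturingHandler();
        logger.addHandler(handler);

        new LegacyOrderProcessor().process(150);

        // The captured log output is the behavior under test.
        if (!handler.messages.contains("bulk discount applied")) {
            throw new AssertionError("expected the bulk discount path");
        }
        System.out.println("captured: " + handler.messages);
    }
}
```

A safety net like this can be built before refactoring and thrown away once the code is clean enough for ordinary unit tests.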


Are they really "unit tests"? by Eric Torreborre

Hi Ian,

Having worked today on both legacy code and logging (and a nasty bug,...), I like your idea a lot!

I can see two use cases for this technique: refactoring and bug fixing. The first is the one you describe: setting up a safety net prior to refactoring.

Refactoring

My question is: "how do you know it's enough?" How many probes should be set in the code to be confident about refactoring? I feel that instrumenting one class alone cannot make you confident enough to refactor it. Did you have to instrument a lot of classes around the one you wanted to refactor before actually doing it? Was the result satisfactory?

The second use case is bug fixing.

Bug fixing

This can be a nice alternative to "not doing anything" when you encounter a bug. You fix it, and you need to create a test for it, but you may not have time to break dependencies to write a unit test for it (or you can't take the risk). [I have posted a blog entry regarding opportunities to refactor legacy code in an agile process and will update it to include your wonderful idea (etorreborre.blogspot.com).]

How to name those tests?

In any case, I think those tests would be better considered "integration tests": they can serve as regression tests, but I wouldn't call them "unit tests," since they don't remove the dependencies the class under test relies on.

I am especially sensitive to the question of "integration tests," since they are often slow and fragile. I tend to prefer a mix of the following tests: acceptance, unit, smoke, and exploratory. Those four testing approaches look like the most cost-effective way of testing our applications.


Thanks for your fine article,

Eric.

Why? by Srivaths Sankaran

There are a couple of prefacing statements that are the basis for this technique:
But, as you start modifying the code to break dependencies and enable testability, this can lead you into more trouble

and
Examples are large methods and classes, complex branching (...) and objects that are instantiated directly in the code when they are needed. Under these circumstances (...) the usual testing toolkit doesn't work well.

I question this basic premise. Why won't the "usual testing toolkit" work? I don't see why one needs to "open" the code under test simply in order to test it. The one instance where I can see this technique being of value is when the code under test is highly coupled.

Re: Are they really "unit tests"? by Ian Roughley

You're right, both these use cases are valid.

What I do is refactor the code using a logging seam. Once I know the refactoring is successful and nothing is broken, I use more common JUnit or mock-object testing techniques to test the code and fix the defect.


My question is: "how do you know it's enough?" How many probes should be set in the code to be confident about refactoring? I feel that instrumenting one class alone cannot make you confident enough to refactor it. Did you have to instrument a lot of classes around the one you wanted to refactor before actually doing it? Was the result satisfactory?

I find that when I use this technique, the code is usually in one large method of one class. I do as much instrumenting as necessary to feel confident that the changes I make are working correctly; sometimes that's two or three logging statements, sometimes twenty.

I was conflicted about whether to call these unit or integration tests. As I mentioned above, it's usually one large method that I am working within, so "unit tests" makes more sense. Had I used the technique across multiple classes, I would probably have called it integration testing.

