Beck introduces the method he has dubbed the "Saff Squeeze" by drawing a metaphor to an American football occurrence known as "The Sandwich", in which the ball carrier is hit simultaneously by two tacklers, one hitting him "high" (up near the shoulders) and the other hitting him "low" (at the waist or legs). He explains that the "Saff Squeeze" is similar in that one addresses a failing high-level unit test (the "high tackler") by recursively replacing it with more and more specific unit tests (the "low tacklers") until a test exists that directly identifies the problem code (i.e. until the defect can be "tackled").
Beck's summary description of the method:
The Saff Squeeze, as I call it, works by taking a failing test and progressively inlining parts of it until you can't inline further without losing sight of the defect. Here's the cycle:
- Inline a non-working method in the test.
- Place a (failing) assertion earlier in the test than the existing assertions.
- Prune away parts of the test that are no longer relevant.
- Repeat.
In the brief article, Beck walks through this process step by step, showing the test code at each stage and finally a "squeezed" test that spotlights the actual defect in all its glory.
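To make the cycle concrete, here is a minimal JUnit sketch of what one squeeze iteration might look like; the Parser and Tokenizer classes and the planted defect are hypothetical illustrations, not code from Beck's article.

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class SaffSqueezeSketch {

    // Hypothetical code under test: split() is supposed to respect quotes,
    // but naively splits on every comma (the planted defect).
    static class Tokenizer {
        List<String> split(String line) {
            return Arrays.asList(line.replace("\"", "").split(","));
        }
    }

    // Hypothetical higher-level class that delegates to the Tokenizer.
    static class Parser {
        List<String> parse(String line) {
            return new Tokenizer().split(line);
        }
    }

    // Step 0: the original failing, high-level test (the "high tackler").
    @Test
    public void parsesQuotedField() {
        assertEquals("a,b", new Parser().parse("\"a,b\"").get(0)); // fails
    }

    // After one squeeze cycle: Parser.parse() has been inlined into the test,
    // a failing assertion has been placed earlier, and the parts of the test
    // that no longer matter have been pruned away.
    @Test
    public void tokenizerKeepsQuotedCommasTogether() {
        assertEquals(1, new Tokenizer().split("\"a,b\"").size()); // still fails, but closer to the defect
    }
}
```

Each pass leaves the test a little smaller and the assertion a little closer to the suspect code, which is what lets you stop as soon as the defect is in plain sight.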
He compares this approach with the more traditional one of stepping through the code in a debugger, concluding the following:
One key difference between the two processes was that after debugging I knew where the defect was, but after squeezing I had a minimal unit test for the defect as well. That concise test is a handy by-product of the process.
Beck clarifies that he doesn't see this as an addition or change to the TDD development cycle for new code, but rather as a tool to be used for defect resolution:
It would work as the heart of a disciplined approach to identifying and fixing defects:
- Reproduce the defect with a system-level test.
- Squeeze.
- Make both tests work.
- Analyze and eliminate the root cause of the defect.
Read through the article to see what this looks like in a real example, along with more of Beck's thoughts on the technique's applicability and a gripe about Eclipse's inlining capabilities.
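Continuing the same hypothetical Tokenizer example from above, the end state of that discipline might look roughly like the sketch below: the system-level reproduction and the squeezed unit test both stay in the suite, and fixing the root cause (quote-aware splitting) makes them both pass. The classes and the fix are assumptions for illustration only.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class QuotedFieldDefectTest {

    // Hypothetical root-cause fix: track whether we are inside quotes
    // instead of splitting blindly on every comma.
    static class Tokenizer {
        List<String> split(String line) {
            List<String> fields = new ArrayList<>();
            StringBuilder field = new StringBuilder();
            boolean inQuotes = false;
            for (char c : line.toCharArray()) {
                if (c == '"') {
                    inQuotes = !inQuotes;
                } else if (c == ',' && !inQuotes) {
                    fields.add(field.toString());
                    field.setLength(0);
                } else {
                    field.append(c);
                }
            }
            fields.add(field.toString());
            return fields;
        }
    }

    static class Parser {
        List<String> parse(String line) {
            return new Tokenizer().split(line);
        }
    }

    // 1. The system-level reproduction of the defect, kept in the suite.
    @Test
    public void parsesQuotedFieldEndToEnd() {
        assertEquals("a,b", new Parser().parse("\"a,b\"").get(0));
    }

    // 2. The minimal unit test produced by the squeeze, also kept.
    @Test
    public void doesNotSplitInsideQuotes() {
        assertEquals(1, new Tokenizer().split("\"a,b\"").size());
    }
}
```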
Do you see this helping you free yourself from your debugger? Do you think the approach runs any risks? Do you have stories about how you've done something similar, or taken a different approach altogether? Add to the discussion here.
Community comments
Debugging BiProducts
by Tobin Harris,
This reminds me of something I was thinking of this morning as I was listening to a few developers debugging at the desk next to me. I could hear them talking: "Add a watch to that variable there..." and "just step back to that line of code there...".
I was thinking how you really don't get anything repeatable from a debugging session. If their debugger bombed, they'd have to manually re-create the steps and watches again.
So, any TDD practices that help move away from debugger are welcome IMHO :)
DeltaDebugging
by Werner Schuster,
Reminds me of Andreas Zeller's Delta Debugging idea:
www.st.cs.uni-saarland.de/dd/
Not quite the same, but basically both are methods of homing in on the bug.
Of course, ya still need to watch your "Behavior-meter"
by Mike Bria,
I've actually taken this approach myself in the past. One word of caution: don't let it fool you into writing (or at least persisting) "implementation tests".
In other words, one often-missed, not-so-explicit-but-ever-so-important rule of good TDD (and, more explicitly, of "BDD") is to keep your tests invoking and checking only the observable behavior of the objects under test - not testing the internals of your object's implementation.
Assuming your object-under-test is already factored appropriately (cohesively), taking this "squeeze" approach is in essence going to break this "stay outta your implementation" rule - and that's largely the point of it.
So, the warning is to make sure that, at most, the only test kept around once the squeeze is completed is the final iteration that directly tests the defect (and, of course, the original high-level test).
More to the point, if this test does not adhere to the "observable behavior only" litmus test, that is a flag that maybe your code is not factored well. More specifically, that this micro-behavior shouldn't be an "implementation detail" of the class it's now in, but rather should be the observable behavior of another, new class.
This in fact is often the thought process I find fundamental to TDD of new code, the one which most allows me to use TDD as my micro-design tool of choice. So while this approach is not a "new code" tool but rather a "defect resolution" tool, that does not mean we can't still follow and benefit from the core rules of "good TDD".
Cheers
MB
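A minimal sketch of the refactoring Bria describes above, with made-up names: if the squeezed test ends up asserting on what is really an implementation detail of a bigger class, one option is to pull that micro-behavior out into its own small collaborator, so the kept test exercises observable behavior instead of internals.

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class ObservableBehaviorSketch {

    // Hypothetical: this rule used to be a private helper buried inside a
    // larger Order class, reachable only by testing Order's internals.
    // Extracted into its own class, the squeezed test can target its
    // public, observable behavior directly.
    static class DiscountPolicy {
        boolean qualifies(int itemCount) {
            return itemCount >= 10;
        }
    }

    @Test
    public void tenItemsQualifyForDiscount() {
        assertTrue(new DiscountPolicy().qualifies(10));
    }
}
```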
Can be scary in ruby / NetBeans
by Kevin Rutherford,
I like to encourage the teams I coach to try to live without the debugger; the Saff Squeeze gives them a tool to help with that shift. I just tried it for real on a bug in reek and it definitely works better than any alternative I have.
I'm working in NetBeans, which means I have no automated support for method inlining. So some of the moves were slow and risky, and a few times I made mistakes. But in the end I had a neat 4-line test of a case that hadn't been covered before, and finding the fix at that point was trivial.
Thumbs up from me!
Re: Can be scary in ruby / NetBeans
by Niclas Lindgren,
I really don't understand the dislike for debuggers. They are great tools and should be used. Proper techniques will find the bug you are looking for faster than any other approach in almost any instance, and they will be much less intrusive on your code. And while observing the inner workings of your application looking for the bug, you might actually discover more problems (that have no test case attached to them, obviously) or you might find dead or unclean code.
Newly developed code should, in my opinion, always be stepped through in the debugger (after being TDD developed!), checking the intention of each and every line of code, as a final code review. It forces you to study your code, think about all the state changes, and make sure the code is clean and minimal and has no smells. Many people who write failing unit tests and then make them work too often forget to refactor the code until it is actually readable and clean; they are too happy that all is green and move on.
Proper development techniques with a debugger will also help get rid of all the unnecessary logs that often crop up because the developer wants to observe the code path - this is what the debugger is meant for. Using the debugger for this inspection will instead make the developer focus on logs that will actually have a use in a diagnostic scenario in a live application.
That is, instead of e.g. "distance too small" it will say "distance too small (%d < %d)". The first log is obvious to the developer when it happens but totally useless when looking for a bug in production code.
I find it odd that anyone would discourage the use of a debugger. Use it to find the offending code (which is a fairly fast process), devise the minimal test case with that knowledge, put it into the test suite (as you should for _any_ and all bugs you find) and be done with it.
Sometimes you will need to script up new test cases to find a fault, but you still employ the debugger to observe the application; you might actually find more faults while observing the code than you will by just minimizing or squeezing a test case.
TDD is not about unit tests IMHO, they are just a side effect; it is more about thinking ahead of time about the expected outcome, whatever that means, and how to test it. So in this case, as soon as you find the problem in the debugger, your next thought should not be "how do I fix this?" but "how do I test this now that I know what is wrong?". Once that is in place you start fixing it (still observing the code path in the debugger as you run your test case).
The debugger isn't evil, much like unit tests aren't evil (provocative). Both can be abused and misused (and unit tests more than the debugger!).
I can't count the number of times stepping through my code has shaved off a critical bug even though the test coverage is good and the test cases test the intention. It can be "simple" things such as thread-safety issues, reentrant state problems, etc., that are easily forgotten when unit tests are run. But sometimes it is actually embarrassingly obvious that you have a bug in the code once your state of mind is at that particular place in the code, just because the state of the world (through the debugger) is thrown in your face and you can see the forest despite all the trees. Such bugs usually go undetected by unit tests: resource leaks, races, temporal problems (timers started, stopped, restarted as they should be), the wrong algorithm (linear search instead of binary), the wrong data structure, wrong hash keys (leading to a linear hash table!), and so on. Many of these problems only manifest with enough input data; unit tests tend to test the 1 case and not so often the 1+n case, even though they of course should. But data structure problems in particular are obvious in the debugger.
Stepping through the code will also make sure you capture all the corner cases of your code, since stepping each line forces you to think about its testability and coverage (is this line fully test covered, not just passed?).
Why was there no unit test for the bottom leaf class in the example above already? There should have been, if it was the intention of that class to handle this. Using a debugger to find the fault should have no impact on the result, which should be a test case that captures that specific problem in the lowest possible class.
Also, a unit test (the upper one that fails) that digs so deeply into the code base is likely not a suitable unit test, as what you are doing is integration testing; JUnit is not the natural tool for this, although it can be (ab)used, but that is a never-ending debate...
So in my world you devise an automated integration test (or, on an even higher level if possible, an automated acceptance test case) that fails, then figure out what is wrong and add a unit test to the offending class(es).
Co-lead developer
by David Saff,
David Saff is at best co-lead developer of JUnit, Kent Beck remaining an active lead.
Re: Co-lead developer
by Mike Bria,
Got it. Sorry if I misrep'ed!