
Presentation: Steve Freeman about Test Driven Development

by Abel Avram on May 31, 2008

In this presentation, filmed during QCon 2007, Steve Freeman, an independent consultant, talks about TDD, explains why it is helpful, and walks through an example of doing it. Steve says a team that used TDD on a non-trivial project reduced software defects by 50% compared with an ad-hoc unit testing approach and completed the product on time. A research paper shows that students who write tests first end up writing more code and being more productive. TDD is also credited with reducing product delivery time and raising developers' confidence in their work.

TDD is a software development method. The developer starts with a set of features to be implemented and works progressively in small steps, testing every step he makes. This gives him confidence that he is doing the right thing, along with knowledge of and complete control over the code. He is no longer afraid to introduce a change, because the tests will tell him right away whether he broke the program. TDD repeats the following cycle: the test result is first red (failing), then it is made green (passing), and then the code is refactored as desired while making sure the test stays green.
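
As a minimal sketch of this cycle in JUnit 4 (the PriceCalculator class and its discountedPrice() method are hypothetical, invented here for illustration), the test below is written first and stays red until the class beneath it is implemented:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {

        // Red: this test fails (it does not even compile) until
        // PriceCalculator exists and discountedPrice() is implemented.
        @Test
        public void appliesTenPercentDiscountFromOneHundredUp() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(90.0, calculator.discountedPrice(100.0), 0.001);
        }
    }

    // Green: the simplest code that makes the test pass. From here the
    // developer refactors as desired, keeping the test green.
    class PriceCalculator {
        double discountedPrice(double price) {
            return price >= 100.0 ? price * 0.9 : price;
        }
    }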

TDD is also a design technique, according to Steve, because the developer makes design decisions while writing the code and its corresponding tests. The final code should reflect the initial set of domain features.

The entire presentation is 54 minutes long.


Question at end about user stories and architecture by David Karr

I listened to the entire presentation and through most of Steve's answer to the question about user stories and architecture. Then I hit the Reply button, not realizing it would abort the rest of the presentation (I guess that comes from thinking Ajax :) ), so I'm not sure how he finished answering the question. (There appears to be no way to fast-forward to the end; using the slider hangs the presentation.) However, it seems to me that that question addresses the crux of any reservations I have with TDD, and I believe many other people feel the same way.

Sometimes I wish presentations about TDD would avoid spending the first 40 minutes on the usual trivial standalone-class test case and instead focus on how you do this in real life.

Steve may disagree with this, but this is likely the perception of most non-TDDers, so it doesn't matter whether you think I'm wrong; the issue is figuring out how to present TDD in a way that convinces people it can actually work. Frankly, testimonials don't do that for me.

Re: Question at end about user stories and architecture by Romano Silva

I think the whole evangelism over making the tests fail first and fixing them later kind of blinds people to real answers to real problems. Agile came to fill a hole in development, where the new tools and practices did not fit the waterfall approach.

I think TDD is very good, but the last question is highly relevant. Bringing TDD to a company will not force the end of architecture and design. You can work out a great design beforehand and do the whole development based on that. That's how I view a perfect component based development methodology. A component design is a rough rock and the tests (with the component implementation) are the final polish touch. The design can be done focusing on testing scenarios (which I believe is a test driven design - I do not think that defining the design while coding the tests is the correct definition of test driven design). I would avoid, for example, designing singletons which cannot be reset in a tearDown() method, because I know it would give developers a hard time testing, for example, failure scenarios or the loading of different configurations (if that's the singleton's scope).
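
To illustrate the singleton point with a minimal sketch (the Configuration class and its resetForTesting() hook are hypothetical, invented here for illustration), a singleton designed with testing in mind exposes a way to clear its state in tearDown():

    import org.junit.After;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical singleton with a reset hook added for the tests' sake.
    class Configuration {
        private static Configuration instance;
        private final String source;

        private Configuration(String source) { this.source = source; }

        static Configuration load(String source) {
            if (instance == null) {
                instance = new Configuration(source);
            }
            return instance;
        }

        String source() { return source; }

        // Without this hook, every test after the first would see stale state.
        static void resetForTesting() { instance = null; }
    }

    public class ConfigurationTest {

        @After
        public void tearDown() {
            Configuration.resetForTesting(); // each test starts from a clean slate
        }

        @Test
        public void loadsProductionConfiguration() {
            assertEquals("prod.properties", Configuration.load("prod.properties").source());
        }

        @Test
        public void loadsTestConfiguration() {
            assertEquals("test.properties", Configuration.load("test.properties").source());
        }
    }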

Bottom line: I think it is not so important to see your tests failing first; what matters is reaching good coverage with well-documented testing scenarios.
The title of the presentation is 'How Do We Know When We’re Done?', but the focus was TDD for beginners. I was expecting to see something about having a TDD team and how that team would progress through construction of the application during an iteration, getting its 'Features Done' and approved using TDD.

Tests are a great measure of quality, but 'How Do We Know When We’re Done?' is a question a PM would like answered at every status meeting. How can the PM (who is sometimes not so technical) take advantage of test results and test scenarios in his day-to-day work? I think the closest this presentation came to that was showing that the test names are actually use case scenarios.
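
For instance (a hypothetical sketch; the CheckoutTest class and its scenario names are invented here for illustration), test names that read as use case scenarios let a test report double as a feature status list:

    import org.junit.Test;

    // Each test name reads as a scenario a non-technical PM can follow;
    // the test bodies are elided here.
    public class CheckoutTest {

        @Test
        public void customerCanPayWithASavedCreditCard() { /* ... */ }

        @Test
        public void outOfStockItemsAreRemovedBeforePayment() { /* ... */ }

        @Test
        public void receiptIsEmailedAfterSuccessfulPayment() { /* ... */ }
    }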

Thanks,
Romano Silva
EDS Brazil

Re: Question at end about user stories and architecture by Kenrick Chien

I think the whole evangelism over making the tests fail first and fixing them later kind of blinds people to real answers to real problems.


I have to disagree: at the macro level, the tests are often written according to specific requirements, and at the micro level, they are written to make sure classes and methods behave as expected.


Bringing TDD to a company will not force the end of architecture and design. You can work out a great design beforehand and do the whole development based on that. That's how I view a perfect component based development methodology.


This goes against agile practices and thinking. Having a "big up-front design" (BUFD) usually, if not always, will require refactoring further down the road, unless one truly codes a perfect design the first time around, and honestly, how often does that happen? Sure, it may be a good first attempt, but more often than not it is not the ideal approach. After a while, you may look at some code and think, "Wow, I could move this method here, or this algorithm could be broken up like this," etc.

Many times, coworkers will then reply "No, don't change it -- if it's not broke, don't try to fix it!"

As the quote from Beck and Gamma shown in the presentation puts it, TDD reduces fear and allows you to refactor mercilessly. Often, a clearer design becomes apparent only after coding the entire solution.



A component design is a rough rock and the tests (with the component implementation) are the final polish touch. The design can be done focusing on testing scenarios (which I believe is a test driven design - I do not think that defining the design while coding the tests is the correct definition of test driven design).


Adding the tests as a "final polish touch" would not be TDD; wouldn't that be adding tests at the end? If so, most programmers would not add the tests afterward, either because it's too boring or because they would say there's no time, especially with pressure from management to just deliver the product already.

Not only that, but adding them as a "final" polish touch would not result in a testable design in most, if not all, cases. Writing the tests before you code ensures a testable design; you are coding to your tests, not the other way around. How many times have you tried to write tests after the fact and simply given up? One class (let's call it "A") is often coupled to another class, "B", and "B" requires access to a database, file system, or other external resource, making unit testing "A" extremely difficult. On the other hand, if the tests had been written before the design of "A", "A" would be easy to test, because the design would have been shaped by the tests from the start. :)
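
A minimal sketch of that situation (the GreetingService and CustomerRepository names are hypothetical, invented here for illustration): written test-first, "A" asks for its collaborator through an interface, so the test can substitute a fake for the database-backed "B":

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Interface extracted so "A" no longer depends on a real database.
    interface CustomerRepository {
        String findName(int customerId);
    }

    // "A": written test-first, it receives its collaborator instead of creating it.
    class GreetingService {
        private final CustomerRepository repository;

        GreetingService(CustomerRepository repository) {
            this.repository = repository;
        }

        String greet(int customerId) {
            return "Hello, " + repository.findName(customerId) + "!";
        }
    }

    public class GreetingServiceTest {

        @Test
        public void greetsCustomerByName() {
            // A hand-rolled fake stands in for "B", the database-backed class.
            CustomerRepository fake = new CustomerRepository() {
                public String findName(int customerId) { return "Alice"; }
            };
            assertEquals("Hello, Alice!", new GreetingService(fake).greet(42));
        }
    }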


Bottom line: I think it is not so important to see your tests failing first; what matters is reaching good coverage with well-documented testing scenarios.


I think having failing tests helps design. The reason is that the code is not in place yet, but your expectations of the code are already "documented", in a sense, by your assertions. The failing part comes from having an empty or non-existent method that doesn't yet do what your assertions state; until it is coded correctly, your tests remain "red".

Also, coverage usually builds up at the same time you are writing your tests and code. With tools such as Clover and EclEmma, you get constant feedback that your tests are covering all code paths. The tests themselves serve as great documentation, showing typical usage of the code and demonstrating that it handles a variety of inputs (boundary conditions) and errors/exceptions.
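
As a minimal sketch of assertions documenting behaviour before it exists (the LeapYears class and isLeapYear() method are hypothetical, invented here for illustration), the test below stays red while isLeapYear() is an empty stub and goes green once the rule is coded:

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    public class LeapYearTest {

        // The assertions state the expected behaviour, including the
        // boundary conditions, before the method body is written.
        @Test
        public void centuryYearsAreLeapOnlyWhenDivisibleByFourHundred() {
            assertTrue(LeapYears.isLeapYear(2000));   // boundary: divisible by 400
            assertFalse(LeapYears.isLeapYear(1900));  // boundary: divisible by 100 only
            assertTrue(LeapYears.isLeapYear(2004));   // ordinary leap year
            assertFalse(LeapYears.isLeapYear(2001));  // ordinary common year
        }
    }

    // Once implemented correctly, the test turns green.
    class LeapYears {
        static boolean isLeapYear(int year) {
            return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
        }
    }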

