Steve Freeman On TDD: How Do We Know When We’re Done?

Summary

Writing a test makes you clarify your ideas about what needs to be done, and making the test pass means that you know that you've added a little more functionality today. Having a comprehensive suite of tests gives you the confidence to get on with things because you can tell when you've broken the system, and tests that are difficult to write show you where you need to improve.
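
As a minimal illustration of that cycle (an invented example, not taken from the talk), the test is written first to pin down what "done" means for one small piece of behaviour, and then just enough code is written to make it pass:

    // Hypothetical example: PriceCalculator is not from the presentation.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Written first: the test states what "done" means for one small behaviour.
    public class PriceCalculatorTest {
        @Test
        public void appliesTenPercentDiscountToOrdersOverOneHundred() {
            assertEquals(108.0, new PriceCalculator().priceFor(120.0), 0.001);
        }
    }

    // Just enough production code to make the test pass.
    class PriceCalculator {
        double priceFor(double amount) {
            return amount > 100.0 ? amount * 0.9 : amount;
        }
    }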

Bio

Steve was a pioneer of Agile software development in the UK and has given training courses in Europe, America, and Asia. Previously, he worked in research labs and software houses, earned a PhD, and wrote shrink-wrap software for IBM. Steve also teaches in the Computer Science department at University College London.

About the conference

QCon is a conference that is organized by the community, for the community. The result is a high-quality conference experience where a tremendous amount of attention and investment has gone into having the best content on the most important topics presented by the leaders in our community. QCon is designed with the technical depth and enterprise focus of interest to technical team leads, architects, and project managers.

Recorded at: May 31, 2008

Community comments

  • Question at end about user stories and architecture

    by David Karr,

    I listened to the entire presentation and through most of Steve's answer to the question about user stories and architecture, but then I hit the Reply button, not realizing it would abort the rest of the presentation (I guess that comes from thinking Ajax :) ), so I'm not sure how he finished answering the question (there appears to be no way to fast-forward to the end; using the slider hangs the presentation). However, it seems to me that that question addresses the crux of any reservations I have about TDD, and I believe many other people feel the same way.

    Sometimes I wish presentations about TDD would avoid spending the first 40 minutes on the usual trivial test case for a simple standalone class and instead focus on how you do this in real life.

    Steve may disagree, but this is likely the perception of most non-TDDers, so it doesn't matter whether you think I'm wrong; the issue is figuring out how to present TDD in a way that convinces people it can actually work. Frankly, testimonials don't do that for me.

  • Re: Question at end about user stories and architecture

    by Romano Silva,

    I think the whole evangelism over making the tests fail first and fixing them later kind of blinds people to real answers to real problems. Agile came to fill a hole we had in development, where the new tools and practices did not fit the waterfall approach.

    I think TDD is very good, but the last question is highly relevant. Bringing TDD to a company will not force the end of architecture and design. You can work out a great design beforehand and do the whole development based on that. That's how I view a perfect component-based development methodology. A component design is a rough rock, and the tests (with the component implementation) are the final polish touch. The design can be done focusing on testing scenarios (which I believe is test-driven design; I do not think that defining the design while coding the tests is the correct definition of test-driven design). I would avoid, for example, designing singletons that cannot be reset in a tearDown() method, because I know I would give developers a hard time testing, for example, failure scenarios or loading different configurations (if that's the singleton's scope).
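
    For example (an invented Configuration singleton, just a sketch of what I mean), I would rather design the singleton so a test can swap it out and reset it in tearDown():

    import org.junit.After;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical singleton designed so tests can install and reset it.
    class Configuration {
        private static Configuration instance;
        private final String databaseUrl;

        // Package-private so tests can build alternative instances.
        Configuration(String databaseUrl) { this.databaseUrl = databaseUrl; }

        public static Configuration getInstance() {
            if (instance == null) {
                instance = new Configuration("jdbc:production-db");
            }
            return instance;
        }

        // Test hooks: without these, every test is stuck with the production setup.
        static void setInstanceForTesting(Configuration testInstance) { instance = testInstance; }
        static void reset() { instance = null; }

        public String getDatabaseUrl() { return databaseUrl; }
    }

    public class ConfigurationTest {
        @After
        public void tearDown() {
            Configuration.reset(); // each test starts from a clean slate
        }

        @Test
        public void usesTheConfigurationInstalledByTheTest() {
            Configuration.setInstanceForTesting(new Configuration("jdbc:test-db"));
            assertEquals("jdbc:test-db", Configuration.getInstance().getDatabaseUrl());
        }
    }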

    Bottom line: I think it is not so important to see your tests failing first as it is to reach good coverage with well-documented testing scenarios.
    The title of the presentation is 'How Do We Know When We’re Done?', but the focus was TDD for beginners. I was expecting to see something about having a TDD team, how that team would progress on the construction of the application during an iteration, and how it would get its 'Features Done' and approved using TDD.

    Tests are a great measure of quality, but 'How Do We Know When We’re Done?' is a question a PM would like answered at every status meeting. How can the PM (who is sometimes not so technical) take advantage of test results and test scenarios in his day-to-day work? I think the closest this presentation came to that was showing that the test names are actually use case scenarios.
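
    To give a made-up example of what I mean: a report of nothing more than the test names, green or red, already reads like a status of the use case for that meeting:

    import org.junit.Test;

    // Invented example: the test names read as use case scenarios, so a plain
    // pass/fail report of them doubles as a feature checklist for the PM.
    public class TransferFundsTest {
        @Test public void transfersMoneyBetweenTwoAccountsOfTheSameCustomer() { /* body omitted */ }
        @Test public void rejectsTransferWhenTheSourceAccountHasInsufficientFunds() { /* body omitted */ }
        @Test public void notifiesTheCustomerByEmailWhenTheTransferCompletes() { /* body omitted */ }
    }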

    Thanks,
    Romano Silva
    EDS Brazil

  • Re: Question at end about user stories and architecture

    by Kenrick Chien,

    I think the whole evangelism over making the tests fail first and fixing them later kind of blinds people to real answers to real problems.


    I have to disagree; at the macro level, often the tests are written according to specific requirements, and at the micro level, they are written to make sure classes/methods behave as expected.


    Bringing TDD to a company will not force the end of architecture and design. You can work out a great design beforehand and do the whole development based on that. That's how I view a perfect component-based development methodology.


    This goes against agile practices and thinking; a "big up-front design" (BUFD) will usually, if not always, require refactoring further down the road, unless one truly codes a perfect design the first time around, and honestly, how often does that happen? Sure, it may be a good first attempt, but more often than not it is not the ideal approach. After a while, you may see some code and think, "Wow, I could move this method here, or this algorithm could be broken up like this, etc."

    Many times, coworkers will then reply "No, don't change it -- if it's not broke, don't try to fix it!"

    As the quote from Beck and Gamma shown in the presentation puts it, TDD reduces fear and lets you refactor mercilessly. Often, a clearer design becomes apparent only after coding the entire solution.



    A component design is a rough rock, and the tests (with the component implementation) are the final polish touch. The design can be done focusing on testing scenarios (which I believe is test-driven design; I do not think that defining the design while coding the tests is the correct definition of test-driven design).


    Adding the tests as a "final polish touch" would not be TDD -- wouldn't that be adding tests at the end? If so, most programmers would not add tests afterward, because it's too boring, or they would say there's no time, especially with pressure from management to just deliver the product already.

    Not only that, adding them as a "final" polish touch would not result in a testable design in most, if not all, cases. Writing the tests before you code ensures a testable design -- you are coding to your tests, not the other way around. How many times have you tried to write tests after the fact and simply given up? One class (let's call it "A") is often coupled to another class, "B", and "B" will require access to a database, file system, or other external resource, making unit-testing "A" extremely difficult. On the other hand, if the tests were written before the design of "A", "A" would be easy to test because the tests are already in place! :)
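
    To make that concrete with an invented example (the names are mine, just a sketch): written test-first, "A" ends up asking for "B" through an interface, so a fake can stand in for the database-backed implementation in a unit test:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // "B" hidden behind an interface, so no real database is needed in a unit test.
    interface CustomerRepository {
        int countActiveCustomers();
    }

    // "A": written test-first, it is handed its collaborator instead of creating it.
    class ReportGenerator {
        private final CustomerRepository repository;

        ReportGenerator(CustomerRepository repository) { this.repository = repository; }

        String summary() {
            return "Active customers: " + repository.countActiveCustomers();
        }
    }

    public class ReportGeneratorTest {
        @Test
        public void reportsTheNumberOfActiveCustomers() {
            // A hand-rolled fake stands in for the database-backed implementation.
            CustomerRepository fakeRepository = new CustomerRepository() {
                public int countActiveCustomers() { return 42; }
            };
            assertEquals("Active customers: 42", new ReportGenerator(fakeRepository).summary());
        }
    }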


    Bottom line: I think it is not so important to see your tests failing first as it is to reach good coverage with well-documented testing scenarios.


    I think having failing tests helps design -- the reason is that the code is not in place yet, but your expectations of it are already "documented", in a sense, by your assertions. The failing part comes from having an empty or non-existent method that doesn't yet do what your assertions state. Until it is coded correctly, your tests remain "red". Also, coverage usually happens at the same time you are writing your tests and code. With tools such as Clover and EclEmma, you get constant feedback that your tests are covering all code paths. The tests themselves also serve as great documentation, showing typical usage of the code and that it handles a variety of inputs (boundary conditions) and errors/exceptions.
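
    For instance (an invented example): the tests below compile against a stub that does nothing useful yet, so they run "red"; the assertions document the expected behaviour, including a boundary condition and an error case, until the real code turns them green:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // The method body is still a stub, so all of these tests fail ("red") at first.
    class WithdrawalService {
        double withdraw(double balance, double amount) {
            throw new UnsupportedOperationException("not implemented yet");
        }
    }

    public class WithdrawalServiceTest {
        // The assertion states the expected behaviour before any real code exists.
        @Test
        public void reducesTheBalanceByTheAmountWithdrawn() {
            assertEquals(60.0, new WithdrawalService().withdraw(100.0, 40.0), 0.001);
        }

        // Boundary condition: withdrawing the entire balance is allowed.
        @Test
        public void allowsWithdrawingTheEntireBalance() {
            assertEquals(0.0, new WithdrawalService().withdraw(100.0, 100.0), 0.001);
        }

        // Error case documented as a test: overdrawing is rejected.
        @Test(expected = IllegalArgumentException.class)
        public void rejectsWithdrawalsLargerThanTheBalance() {
            new WithdrawalService().withdraw(100.0, 150.0);
        }
    }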
