00:20:57 video length
Bio Hakan Erdogmus is a senior research officer at the National Research Council in Ottawa, Canada. He’s also an adjunct professor in the University of Calgary’s Computer Science Department. Hakan is a member of the IEEE, the IEEE Computer Society, and the ACM. He holds an MSc in computer science from McGill University, and a BSc in computer engineering from Boğaziçi University, Istanbul.
1. So here at Agile 2007 we received in our conference package a copy of IEEE Software, and I noticed that you are the editor in chief and that this issue is devoted to Test Driven Development. Why did you focus a whole issue on Test Driven Development in particular?
That's a good question. One of the reasons is that it was extremely timely to address this more or less undercovered topic. If you look at the program of the current conference, you'll see that it is very prominent and has been getting increasingly prominent, with no serious in-depth coverage in professional magazines. Even in the research literature it's very sparse; people rarely talk about TDD. So we thought we could actually make up an issue out of solid articles looking into what TDD is about, how it is applied, and what the emerging evidence is about its effectiveness, that sort of thing. And that's how it came to be.
That's right, and they are two great leaders, very knowledgeable about Test Driven Development in the Agile community, and we are very happy to have recruited them for this project; they have done a great job. What they did was solicit articles from prominent people in the industry and, as with peer-reviewed publications, go through several iterations of revisions. They received something like forty papers and accepted only six of them. They also stitched together a very nice introduction explaining what TDD is about and summarizing the literature on its effectiveness, especially the work coming from academia, looking at experiments done in controlled settings and case studies in industry, and what the results were.
That's a very good question, because TDD is one of the most confused practices: it has the word “Testing” in it, and it also has the word “Development” in it. But it has a specific meaning, and it doesn't span the whole world of testing, obviously. TDD is basically an incremental, in-process development technique that relies on tests to steer all development activity. In-process, incremental, and steer are the key words here. When you look at how testing is applied and what different testing strategies and techniques exist, there is the testing that you do during the process, integrated into the process, and there is after-the-fact testing: for example, the routine QA type of testing, or the deep testing that often uses exploratory techniques to reveal extraordinary behavior. That is not what we are talking about. When we talk about TDD, we mean the testing that is integrated into the process and done incrementally. And that has a lot of variations. If we go one level down within in-process testing, you have testing that is done by developers, which people sometimes call developer testing, and testing that is done or fulfilled by other roles in the team, for example acceptance testing, sometimes written by customers, sometimes written by business analysts in collaboration with the developers. So TDD can be applied at both levels. And if you go one level further down, you have the specific dynamics. The textbook “vanilla” TDD is the one that relies on a test-first dynamics, and when Agile gurus talk about TDD, they specifically refer to that variety.
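The vanilla, test-first dynamics described above can be sketched in a few lines. This is a hypothetical illustration (the Stack class and test names are invented for this example, not taken from the interview): the test is written first and run to fail (red), then just enough production code is added to make it pass (green), and finally the code is cleaned up while the tests pin down the behavior (refactor).

```python
import unittest

# Step 1 (red): the test is written before any production code exists,
# and it fails until Stack is implemented.
class StackTest(unittest.TestCase):
    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())

    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

# Step 2 (green): just enough production code to make the tests pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

# Step 3 (refactor): restructure the code as needed; rerunning the
# tests (e.g. with `python -m unittest`) confirms nothing broke.
```

The point is that the tests steer the activity: each small task starts from a failing test, so the developer always knows what the next bit of work is and when it is done.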
But there is also the incremental variety that a lot of people apply without saying so, and without relying on the test-first dynamics: a little bit of coding and a little bit of testing, to make sure that what you wrote is what you intended. And then there is the variety that takes bigger chunks: a lot of coding and a bunch of tests, a lot of coding and a bunch of tests. Both are incremental, but they diverge from the originally intended test-first dynamics. Now, whether the dynamics is important or not is another question, and maybe it is not, but what matters is that there is some in-process testing done incrementally throughout the process. We can talk about that a little more later.
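The coarser "code a bit, test a bit" variant might look like this hypothetical Python sketch (the `word_count` function is invented for illustration): the production code is written first and a test follows immediately behind it, so the testing is still in-process and incremental, but not test-first.

```python
# A chunk of production code is written first...
def word_count(text):
    """Count how often each word appears in a whitespace-separated string."""
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# ...and a test follows right behind it, confirming the code does what
# was intended before the developer moves on to the next chunk.
def test_word_count():
    assert word_count("to be or not to be") == {
        "to": 2, "be": 2, "or": 1, "not": 1
    }

test_word_count()
```

Either way, the tests accumulate as the code grows, which is what distinguishes these in-process variants from after-the-fact QA testing.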
That's exactly right, because a lot of people will say, “Well, we use TDD,” and if you inquire a little deeper, a lot of the time they mean they do some form of testing, whether it is in-process testing that looks more like after-the-fact testing, or some kind of QA activity done completely after the fact. And a lot of the time they refer to TDD as a QA testing practice per se, whereas its original intention is a development practice, where quality assurance is a kind of side effect. The idea is to be able to steer your activities relying on tests, not to have a focused quality assurance program in place.
Well, when you apply it at the acceptance-test level, that is precisely what it is: understanding the customer requirements and being able to express them in terms of things that you can execute. When you apply it at the unit-testing level, as a developer testing practice, you could call the tests micro-requirements, basically formalizations of the tasks you're executing, so it gives you task focus: “What am I going to do in this next little bit of activity, and how will I know that I fulfilled my goal?” I suppose you can look at those as micro-requirements if you want, but I would prefer to reserve the word requirements for higher-level activities that relate more to the needs and desires of customers, rather than to how those needs and desires are implemented by the developers by splitting them into small implementation tasks. Perhaps I can summarize it in one phrase by saying that TDD is meant to be a development practice rather than a quality assurance practice, and it is not meant to be a substitute for a separate quality assurance activity.
I think it really does make a difference, and we can talk later about the empirical evidence, which is not absolutely conclusive but is emerging. The theory behind TDD, whether you are using the vanilla form or some variation of it, is that it aims at three things: productivity, quality, and adaptability, meaning ease of change. Quality is supposed to be a side effect rather than the main purpose. I have the feeling that the original intention was aimed at productivity, through two things: task focus and visible progress. If you have both of those, the implication, or the assumption, is that you can actually go faster, because you know exactly where you are and you know the next task that you are going to execute. The quality benefit is a side effect: by having test assets in place you are automatically increasing quality, because you are catching certain defects that you would probably not otherwise detect during the development phase. So the product that you ship supposedly has fewer defects than it would have had without the developer tests in place from the beginning. The other aspect is, of course, adaptability or ease of change: obviously, when you have an arsenal of test assets, you are less fearful about changing the code base, and less fearful about accommodating customer needs when requirements churn and change. So those were the three things it originally addressed, but my feeling from the early literature on TDD, from the initial proponents of TDD, is that it actually aimed at productivity more than at quality or at adaptability and ease of change.
Of course it has pitfalls, and one of the major ones is that you have extra baggage. You don't only have the source code; you have, in addition, a bunch of tests that you have to carry around and maintain. So you have to know about good code design because you are a developer, but now you also have to know about good test design, good test patterns, test practices, and so on, so that you can maintain your tests while you are maintaining your code. That is one of the major pitfalls. And a lot of studies show that when you apply Test Driven Development faithfully, your test code is actually forty to sixty percent larger than your production code: if you are writing a hundred lines of production code, you may write a hundred forty to a hundred sixty lines of test code.
There is the research answer and there is the personal answer. The personal answer, of course, is that that is the only way I can write code right now; that's the way I program. The research answer is that there is some emerging evidence that TDD has moderate to significant quality benefits. The verdict on productivity is still out. And you have to qualify these statements very, very carefully. Ron and Gregory, in the intro piece they did for this IEEE Software special issue on TDD, looked at the literature in terms of what kind of evidence is emerging, and they found eighteen relevant studies evaluating TDD in various environments, from controlled settings to industrial settings, with case studies. Of those eighteen studies, eleven reported medium to significant quality increases in the production code that came out of the process, with a caveat: there was also a medium to significant productivity loss. Now, two studies out of the eighteen found a moderate productivity advantage for employing TDD and no significant quality advantage. Why would you get that kind of discrepancy? Because if you look at the eleven studies that found a quality advantage for TDD, they mostly treated TDD as a testing practice. Most of those studies compared TDD to optional testing, which in a lot of instances meant no testing. In effect, you are comparing some kind of in-process testing with no in-process testing.
And what would you expect out of that? Of course, when you have tests you will increase quality and get some payoff, but you will create some overhead, because writing those tests takes time. Now, the two studies that found a productivity advantage compared the test-first variety, the vanilla variety of TDD with the originally intended dynamics, with the alternative of writing the tests after the fact, where testing was not optional: the code had to ship with developer-produced unit tests. Those studies found that the test-first dynamics actually helped productivity, but you can't generalize from just two studies and claim that the test-first dynamics leads to productivity gains. So the verdict is not very clear and the empirical evidence is not strong, and a number of studies are in progress, but it looks like if you are doing some form of in-process testing versus no in-process testing, you will get a quality advantage. Whether that is ultimately economically viable is yet to be determined. But my feeling is that the quality advantage compensates for the productivity loss through a reduction in downstream rework.
TDD is a technical practice, and that poses problems. If you look at Scrum, for example, it has overtaken XP, and they are not exactly addressing the same types of problems: XP has a lot of technical practices, while Scrum is a wrapper, a team project management kind of process. Scrum is the low-hanging fruit, and it gives you a really steep step function in the beginning, because you get a lot of payoff by putting those practices in place early, and it is visible. And because it is a team practice, a management practice, it's easier to implement. With technical, individual practices you have less control: when somebody learns TDD, you don't really know exactly what they are doing. One person may be writing a few tests, another may be writing a lot of tests; one may be applying it one way, another in another way. Also, the dynamics required for the vanilla variety of TDD is difficult to teach and master, and for some developers it's very counterintuitive; I suspect that developers have different approaches to problem solving, and it just doesn't agree with some of them. That doesn't mean they are inferior developers: I know people who are excellent developers, they just can't work with it, and they still write excellent code. When that is the case it becomes very problematic, because you can't really enforce it, especially in an Agile environment, and it boils down to an all-or-none proposition. Management also has misunderstandings about TDD: they think of it as an overhead process, and optional; they don't know or are not sure about the return on investment, so they don't have an incentive to shove it down people's throats. And that is not the right approach, of course. So that makes it different from something like Scrum, and when it is not applied by everyone, it is problematic.
Well, I didn't want to put it that way, actually. You can still apply it as a personal practice, but it's not going to be as effective if you can't carry your tests forward with the project, because if you are the only one doing it, you have to use your tests for your own purposes and then throw them out, and that is problematic. An anecdote: a former student of mine, a very skilled developer who was very enthusiastic about TDD, went to a government department. He was on a team that was supposed to be using extreme programming, and they didn't use TDD. He was the only one writing these wonderful tests, checking them in with his code, but people were breaking them left and right. So eventually he gave up; it was not useful for him anymore. He couldn't even apply TDD as a personal practice, because he couldn't use his tests to regress the system when other people were changing things underneath him without paying attention to his tests. So it didn't work, and he unfortunately abandoned it. In an environment like that it becomes problematic. When you have people who “click,” who buy into the same philosophy of developing code, then of course it works, and you can have perhaps wonderful results. But that environment is very difficult to achieve.
11. So you are talking about a technical practice, Test Driven Development, and yet I have noticed that some of the obstacles you have mentioned are human obstacles: the perception of management, personal working style, resistance to change, and also teamwork. Do you have any suggestions for approaching that?
I don't have a silver bullet for that. I think training is probably the most important thing that will improve the penetration of TDD into the mainstream. However, I also accept that training may not be sufficient; management support is also going to be really important, because if management thinks that in-process testing is optional, then it is definitely going to fall through. And when it falls through, it falls through badly, because it is basically an all-or-none proposition: everybody does it and benefits from it; if a few people don't do it, it may spoil the whole process.
Absolutely, I strongly recommend it for everyone.