
Cédric Beust discusses Designing for Testability


1. This is Floyd Marinescu at QCon here with Cedric Beust. Cedric, would you tell us a bit about yourself and what you are up to?

Sure. My name is Cédric Beust and I work on mobile software at Google, but I am here to talk about TestNG and testing. I just finished a presentation at QCon called "Designing for Testability".

   

2. How do you design for testability?

It's a lot harder than people think, and it took me a while to realize that. The first thing I found a bit sad is that it's hardly ever covered, whether in schools, at work, or in books and articles. A lot of people say it's very important to test, but they don't really give you practical answers on how to do it and, most of all, they don't really teach you how to think about it. They teach us how to write code, they teach us what object-oriented programming is, but they don't tell you how to write good object-oriented code that is also easy to test, and that can also be tested automatically, which is another important thing. Testing is good, but automatic testing is even better, and making your code easy to test automatically is a big challenge, especially if you started writing your code without thinking about it. In my experience it takes forethought: you need to start thinking about it at the very beginning, you need to keep testing in mind all the way through as you are writing your code, and then when your code is done, when it's feature complete and everything, there are plenty of other things you can think of to add to the test coverage. It's really a process that just never ends.

   

3. What does an architecture design for testability look like?

I would say there are a few things you need to realize if you want your code to be easy to test. There are also some preconceptions, things we've taken for granted, that we need to question. In my presentation I mentioned a couple of those: one of them, for example, is the use of statics. Statics in code carry a very heavy penalty; they're a bit like the new global variables. Not only do they make things like multithreaded testing hard, but on top of that every static you put in your code makes that code harder to test. So in the presentation, and in the book that Hani and I wrote, we give several ways you can replace those static approaches with other things like dependency injection.

Some of the things that I've learned over the past few years is that we need to question some of the principles we've taken for granted. For example, the Design Patterns book that came out more than ten years ago, and that even now is still the foundation on which a lot of the code we write is based, contains quite a few things that may be good from a design perspective but that adversely impact testing. A couple of those are the Singleton and the Abstract Factory; the typical implementation of those, again, uses statics or other means that don't really make the code easy to test. If I had a shot at rewriting such a book, or some of the other books that show you how to write object-oriented programs, I would also add the requirement that your code needs to be easy to test, and I am sure we would see very different fragments of code if we started considering this now.
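The examples from the talk and the book are not reproduced here, but as a rough sketch of the kind of rewrite being described, consider the following Java fragment. The `Clock` and `InvoiceService` names are invented for this illustration: a hard-wired static call is replaced by a small interface passed in through the constructor, so a test can substitute a fixed value.

```java
// Hypothetical example: a hidden static dependency on the system clock
// makes this class hard to test deterministically.
class InvoiceService {
    boolean isOverdue(long dueDateMillis) {
        return System.currentTimeMillis() > dueDateMillis;  // hard-wired static call
    }
}

// The same logic with the dependency injected through the constructor.
interface Clock {
    long now();
}

class InjectedInvoiceService {
    private final Clock clock;

    InjectedInvoiceService(Clock clock) {  // passed in, not looked up statically
        this.clock = clock;
    }

    boolean isOverdue(long dueDateMillis) {
        return clock.now() > dueDateMillis;
    }
}

public class StaticsVsInjectionDemo {
    public static void main(String[] args) {
        Clock fixedClock = () -> 2_000L;  // a test can pin time to any value it likes
        InjectedInvoiceService service = new InjectedInvoiceService(fixedClock);
        System.out.println(service.isOverdue(1_000L));  // prints "true", deterministically
    }
}
```

Wiring the dependency by hand like this is the manual form of the dependency injection that the frameworks discussed below automate.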

   

4. You mentioned replacing static member variables?

Static members and also static methods if you can.

   

5. So you mentioned replacing static members and methods with dependency injection. Tell us a bit more about that.

Dependency injection is a fancy name for something that has been lurking in programmers' minds for the past 5 or 10 years. It's a bit like design patterns, in the sense that it was in the general consciousness of everyone but nobody had really put a finger on it. The general idea is that instead of having all these hardwired dependencies inside your code, you pass them to the methods. There are various ways you can do this: the easiest is through parameters, but you can also do it with reflection and a bunch of other techniques which are not very interesting to cover right now. What's interesting is that we've seen a few frameworks emerge over the past 3 or 4 years that make this dependency injection very, very easy.

And I think the two most important ones are Bob Lee's Guice, which is purely a dependency injection framework; that's the only thing it does, it's very small but also very clever in many ways; and there is also Spring, which has been doing dependency injection for a little while with a slightly different philosophy. So I would recommend to anyone who is finding it hard to test their code because of these statics or these dependencies to start looking at the idea of dependency injection, and then it depends on which framework they are already using: if they are using Spring, then by all means they should use Spring to do dependency injection as well, because Spring does many things.

So maybe they're just using a subset of Spring, but not its dependency injection. If they are not using Spring or any other framework, I would suggest they look at Guice and start thinking about writing their code with all those dependencies injected by Guice. Once you have this, your code becomes completely abstracted from the hard dependencies. What makes it easier is that your production code will pass in objects that are actually tied to real databases or real websites or real IP addresses or data centers, whereas when you're testing you're injecting mocks or pseudo-classes or empty classes that are just going to fulfill the contract for you. As an application programmer you will be shielded from all those dependencies and your code will be a lot more robust.
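As a rough illustration of that last point, here is a minimal Guice sketch. The `PaymentGateway`, `OrderProcessor` and `TestModule` names are invented for this example; the Guice calls themselves (`AbstractModule`, `bind(...).toInstance(...)`, `Guice.createInjector`, `getInstance`, `@Inject`) are standard. The test module binds the interface to a stub that merely fulfills the contract, while a production module would bind the real implementation.

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import org.testng.Assert;
import org.testng.annotations.Test;

// Hypothetical dependency that, in production, would talk to a real system.
interface PaymentGateway {
    boolean charge(String account, long cents);
}

// Application code declares what it needs; Guice supplies whatever is bound.
class OrderProcessor {
    private final PaymentGateway gateway;

    @Inject
    OrderProcessor(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean checkout(String account, long cents) {
        return gateway.charge(account, cents);
    }
}

// Test module: bind the interface to a stub that just fulfills the contract.
class TestModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(PaymentGateway.class).toInstance((account, cents) -> true);  // always "succeeds"
    }
}

public class OrderProcessorTest {
    @Test
    public void checkoutSucceedsWithStubGateway() {
        Injector injector = Guice.createInjector(new TestModule());
        OrderProcessor processor = injector.getInstance(OrderProcessor.class);
        Assert.assertTrue(processor.checkout("acct-42", 999L));  // no real payment system touched
    }
}
```

Swapping in the production binding is then just a matter of creating the injector with a different module, leaving `OrderProcessor` untouched.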

   

6. In the spectrum of TDD zealousness we have people like Bob Martin who say "you're not a good developer if you're not doing TDD" and we have Jim Coplien saying "if you're doing TDD something's wrong with you". Where do you fit on that spectrum?

Pretty much in the same place you'll find me whenever you ask me a question like this, which is probably somewhere in between: fairly skeptical overall, but, I want to hope, constructively skeptical. I am certainly a bit turned off by all the hype and all the absolute statements that have been thrown around for the past few years about whether you should be doing TDD or not, where the answer seems to be "yes", and if you don't do it there is something wrong with you, and TDD is the next way of testing and everyone will be doing it eventually.

My experience is a bit different. I have taken a look at the way I program and I found that while I have done TDD for certain things, most of the time I don't, and this kind of conflicts with everything I'm hearing around me, so it puts me in a position where I am wondering: "Am I really doing as well as I think I am, or am I mistaken?" So looking back I started putting together a little list of reasons why I think TDD might be a problem, or might adversely impact your productivity. I've come up with a few reasons, or at least a few justifications, for why I didn't do it, and maybe it will make a few people who are not doing TDD and feeling bad about it feel a bit better.

   

7. Why are you skeptical of TDD? Give us some reasons.

I've had a hard time personally applying TDD, and here are some examples of where I was not finding it a good fit for my way of programming. When you do TDD you tend to promote micro-design over macro-design. Micro-design means you are focusing heavily at a very fine level, which is the method. You write a test for a method that doesn't exist yet; the test doesn't compile, it doesn't build, then you implement the method and make the test pass. The problem with that is that you're really building from the ground up.
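To make the cycle he is describing concrete, here is a minimal test-first sketch using TestNG; the `Slugifier` class and its `slugify` method are invented for this illustration. The test is written before the method exists, so at that point the code does not even compile, and the implementation below is the smallest thing that makes it pass.

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class SlugifierTest {

    // Step 1 ("red"): this test is written before Slugifier.slugify exists,
    // so at that moment the code does not even compile.
    @Test
    public void lowercasesTitleAndReplacesSpacesWithDashes() {
        Assert.assertEquals(Slugifier.slugify("Designing For Testability"),
                            "designing-for-testability");
    }
}

// Step 2 ("green"): the simplest thing that could possibly work; refactor
// afterwards while the test keeps passing.
class Slugifier {
    static String slugify(String title) {
        return title.toLowerCase().replace(' ', '-');
    }
}
```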

You're not even starting with a class, you're starting with a method, and then a second method, and then maybe those turn into a class, and maybe in turn that turns into several classes. But what I found is that after a few years of programming you start getting a good feel, an intuition, right away when you're solving a problem, for what these extra classes are going to be, what the architecture, the design and the inheritance will look like, and how they will interact with each other. TDD prevents you from going too far ahead; it really wants you to focus on the simplest thing that could possibly work.

And I think that is a bit dangerous, a bit myopic, because sometimes it leads you to create things that are so small that you end up throwing them away: they are really a first iteration, then you modify them, then you rewrite your tests. So to me that's micro-design. Macro-design is more when you start thinking: "I'm going to need this interface, I might as well write it now; I'm going to need this extra class, I'm going to write it now, it's going to be empty for now." Those are things that are frowned upon by test-driven development practices, and that bothered me a little bit.

Another thing I found a bit annoying is that when you're doing TDD you're dealing with code that doesn't compile and doesn't run, and by doing so you're negating the importance and the benefits of an IDE. Most Java developers these days use IDEs, and they use things like auto-completion and browsing and quick fixes, but with TDD you spend a good part of your time with code that just doesn't compile and has plenty of errors in it. I found that a bit counter-intuitive; it's not really the normal mental process, and it feels like we're back to using Emacs or Notepad to write code.

That's a bit extreme, but I just want to counterbalance what we've been hearing, which is that TDD is the only way to write code. I'm not completely hostile to TDD. I think it's great for junior programmers, people who are fresh out of school and haven't been exposed to methods and practices that can make their code more testable. But I also think that once you have a little bit of experience, you should follow your intuition, and if your intuition tells you to go in a certain direction, or tells you it's OK to write the code first and the tests after, then if you're a conscientious, professional developer you will write those tests anyway, and whether you write them first or last is not going to make a huge difference in the quality of your code.

   

8. In your opinion what is the best level to apply your testing, like how much testing?

That's a really tough question. What I like about TDD, after saying bad things about it, is the idea of an exit criterion. When you write your code you never really know for sure when you're done, and I think TDD gives you a nice milestone to hit; when you reach it you know you've made good progress, you've crossed a certain level of functionality. So I like this idea; again, I don't think you really need to write the test first. You can have this in your mind, you can have it on a piece of paper, but setting these criteria as you move forward, and keeping them in small chunks so that you don't look too far ahead, is, I think, a good way to plan your testing. But again it depends: are you all by yourself on a little project, are you in open source, how many programmers are there, do you use source control, do you use code reviews... All these factors change how you're going to approach testing.

   

9. You wrote a book called: "Next Generation Testing" and I assume the framework TestNG also stands for Testing, the Next Generation, like Star Trek?

Yes, it was inspired by Star Trek. I am obviously not very good at picking names and you'd be surprised how long it took me to come up with one; even this one I ended up with out of frustration, saying "I'm done with it, it's going to be called this," and I don't even remember why. Star Trek was clearly the inspiration, which is kind of strange because I am not really a Star Trek freak. Anyway, with "Next Generation Testing" the implication is that there is a current generation of testing: that's the generation JUnit created. The JUnit generation is the current generation, and as I mentioned earlier there are a few shortcomings I found with JUnit, so Hani Suleiman and myself, the two authors of this book, decided to capture what we have learned over the past few years on the TestNG mailing list, everything users have asked us to implement and all the creative usages of TestNG. We thought we would put this in a book and call it "Next Generation Testing", because we are really trying to take testing to the next level: not just focusing on unit testing, which is what JUnit does, but encompassing everything from functional and system testing to enterprise testing of big software.

We're talking hundreds of thousands, millions of lines of code, like banks and all that. These people are doing testing under extremely hard conditions. But we also want to provide with this book a set of testing design patterns, which is something we haven't found anywhere. We've seen micro design patterns on how to architect your classes and all that, but not really bigger things that will tell you exactly: "Well, if you are going to be testing a servlet, or if you're going to be testing a Spring application, here are a few ways you can go about doing it." That's what we tried to address in this book.

   

10. What is some knowledge in testing that your book gives that no one can find anywhere else?

We tried to focus on giving a few recipes for things that we haven't found any concrete or formal way to do elsewhere. Here's an example: performance testing. A lot of applications have very strict performance requirements, and it doesn't just mean that this method should return within 10 milliseconds; it can mean that over time the response of the system should never go over a hundred milliseconds, or that when we call this servlet the response time should grow linearly with the data. This is extremely prevalent in the industry, I've never found any really concrete way to test it, and it is very important.

If at some point somebody submits to Subversion, or whatever source code repository you are using, a change that doesn't break anything, all the tests still pass, but suddenly your entire software is 10% slower, that's a big deal. It's the kind of change that shouldn't make it into the codebase in the first place, or at least that should be flagged right away, and the only way you can flag it is if you have the relevant tests for this kind of thing. And it's really hard to do; you can't easily measure how your performance evolves depending on how the data is changing. So we have a section on this in the book, which we've heard a lot of good things about; people were really surprised to find this kind of advice, and there are things they hadn't thought about.
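The book's actual recipe is not reproduced here; the sketch below only shows TestNG's built-in `timeOut` attribute on `@Test`, which fails a test method that runs longer than the given number of milliseconds (TestNG also offers `invocationCount` for repeating a call). The `searchCatalog` method is a hypothetical stand-in for the servlet or query whose latency you want to guard.

```java
import org.testng.annotations.Test;

public class CatalogPerformanceTest {

    // Hypothetical call standing in for the endpoint or query under test.
    private void searchCatalog(String query) {
        // ... exercise the real servlet or database query here ...
    }

    // TestNG fails this test if the method takes longer than 100 ms, so a
    // change that quietly makes the system slower is flagged by the build
    // instead of slipping into the repository unnoticed.
    @Test(timeOut = 100)
    public void searchStaysWithinItsLatencyBudget() {
        searchCatalog("testability");
    }
}
```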

A lot of the book is also basically a repository of everything we've heard from users, all the things they've come up with that I would never have thought of, as much as I like to think I was creative in the way I architected TestNG. I suffer from short-sightedness myself: I see what TestNG can do, but when it comes to applying it to the real world, TestNG itself is not the real world; it's up to the users to do that. Some of them have come up with very interesting ways to solve problems that really made me look at their message and say: "Wow, that's pretty amazing. Do you mind if I include this in the book? I think there is a whole discussion to have here and it would be fantastic for everyone."

Most of them have been happy to contribute and help us out by providing this kind of kernel for the contents, and from there Hani and I started analyzing those various approaches, extending them, weighing the pros and cons, and trying to give basically a response to every possible problem you might be faced with when you're trying to test big enterprise software.

   

11. When you think of testing of big enterprise applications and small to medium-sized applications, how are they different? What are some of the different practices that you do?

I think the main challenge with these applications is that most of them were, by accident, designed not to be tested. I don't think anyone meant to make them untestable; it's just that testing was not at the forefront of their thoughts when the code was written. A lot of this is due to legacy. Some of them might be running on very old systems that really had no way of doing this; others maybe run on mainframes where you can't really test what's inside, you can only test what you put in and what you get out.

But the challenge is that, even if you're lucky enough to be able to modify that code, you're still going to have a very hard time refactoring it, because you don't have tests covering everything you might break. The purpose of the book is not really to show you that; there is, by the way, an excellent book written by Michael Feathers that covers exactly these things, how to refactor that kind of hard-to-change code. Our approach was more: assuming you have some freedom, that you can write those tests or possibly modify the code, here is how you can test it easily, here is a recipe you can apply for enterprise testing. And it's not just enterprise, I keep mentioning enterprise, but it's not just about that; we cover a lot of different things.
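The Feathers book calls this kind of safety net a characterization test: before refactoring, you write tests that pin down what the code does today, right or wrong. The sketch below is not taken from either book; the `LegacyTaxCalculator` class is invented purely to illustrate the idea with TestNG.

```java
import org.testng.Assert;
import org.testng.annotations.Test;

// Stand-in for an old, hard-to-change class (invented for illustration).
class LegacyTaxCalculator {
    long taxFor(long amountCents) {
        long tax = amountCents * 20 / 100;
        if (amountCents > 5_000L) {
            tax += 150L;              // surcharge buried deep in the old code
        }
        return tax;
    }
}

public class LegacyTaxCalculatorCharacterizationTest {

    // Pin down today's behaviour before touching the code. The expected values
    // are copied from the current output, not from a spec, so any refactoring
    // that silently changes behaviour fails immediately.
    @Test
    public void recordsCurrentBehaviourBeforeRefactoring() {
        LegacyTaxCalculator calculator = new LegacyTaxCalculator();
        Assert.assertEquals(calculator.taxFor(10_000L), 2_150L);
        Assert.assertEquals(calculator.taxFor(0L), 0L);
    }
}
```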

   

12. The last chapter of the book is called "Digressions" where it seems that you and Hani go into personal rants. And Hani is famous for being on the BileBlog. What are some of the rants you go into and how did that chapter come to be?

Yes, that chapter was a bit controversial even between ourselves. We definitely felt that we wanted to write it; we were just wondering whether we should. But after a quick informal survey of readers and reviewers they said: "Absolutely, you need to keep this chapter in," so we decided to keep it. What this chapter is, is basically... like you said, Hani has a tendency to be vocal about things he doesn't like. I have that tendency too, although I think I'm a bit better at keeping it in, and we tried to play to those strengths and weaknesses and create a chapter where we cover a lot of the issues and rants and problems we went through as we were writing the book, and also our experience over the past few years. We touch on topics such as test-driven development and test coverage and all kinds of things where we basically just say what our opinion is.

This chapter comes with a big disclaimer at the very beginning, where we say it's clearly personal opinion, as opposed to the rest of the book where we tried to be more authoritative and a bit more objective. But it was a fun chapter to write, and so far reviewers and readers are enjoying it, and most of it is still relevant to testing. We really tried to stay on focus, but it's more of a fun chapter to wind down from the rest of the content, which is pretty heavy and doesn't have lots of pictures.

   

13. So what are some of your opinions?

I'll just refer you to the book, but if you want to know what we think, as I said, about test-driven development or Maven or test coverage and all these things, just browse through the book and you will see some interesting and colorful positions on all those topics.

Apr 28, 2008


Community comments

  • Conclusions about TDD a bit questionable?

    by Mike Bria,


Watching this interview with Cedric, I found myself somewhat thrown off by some, certainly not all, of Cedric's statements. My viewpoint/experience differs on at least 3 of the points made.



First, Cedric appears to assert that [paraphrased] "TDD frowns upon thinking about the 'macro-design'". Not so. TDD never explicitly prohibits nor prescribes that programmers think in terms of the big picture up-front; what TDD stresses is to not get too hung up on the "predicted" designs and to let the tests prove to you what the design really wants to be. In other words, go ahead and get an idea up-front, just don't spend terribly long on it, be flexible to it changing, and then let the red-green-clean cycle take you the rest of the way.



Second, Cedric states that "with TDD you spend a good part of your time with code that just doesn't compile and has plenty of errors in it", implying further that TDD negates the benefits of modern IDEs. This is absolutely false. TDD allows for only one failure (compiler- or assertion-based) at any given time; as soon as there's a single failure, eliminate it. Further, for many languages, it's the IDE that tells me immediately of a compiler failure and gives me great tools (auto-XYZ) to easily fix it. It's the existence of the passing unit test(s) that then allows me to leverage the IDE's refactoring tools to clean up the design.



Third, Cedric states that "TDD is great for junior programmers", implying TDD is less necessary for experienced programmers. TDD is a method of specifying behavior and thus driving design; it's a cognitive activity that helps drive you not only to write testable code (which it does, I agree with Cedric there), but to drive out expressive, well-factored, intention-revealing, use-driven code. I'm an experienced programmer and I find that 99 times out of 100, if I "design, code, then test", I simply don't achieve the same level of cleanliness as if I had "tested (behavior design), coded, then refactored (structure design)".



    A good debate occurred between Jim Coplien and Bob Martin along the same lines as this discussion ("TDD a must or not?"), it can be viewed here.



    I hope my remarks are seen as constructive, they're intended that way.



    Best,

    --MB
