Virtual Panel: Code-to-Test Ratios, TDD and BDD


Over the last couple of months, several online discussions have taken place about test-first vs. test-last development, code-to-test ratios, and whether BDD is really just TDD. InfoQ asked renowned TDD and BDD experts to give us their take on the use of TDD, BDD and testing ratios.

The panelists:

  • J. B. Rainsberger - Consultant and TDD expert, blogs at The Code Whisperer
  • Dan North - Lean Technology Specialist at DRW Trading Group, coined the term Behaviour Driven Development (BDD)
  • Gojko Adzic - Consultant, author of 'Specification by Example' and 'Bridging the Communication Gap'
  • Ron Jeffries - Independent consultant in XP and Agile methods, coached the original XP project
  • Steve Freeman - Agile trainer and consultant, author of 'Growing Object Oriented Software, Guided by Tests'

The questions:

  1. What criteria do you think should guide the decision to follow TDD and/or BDD (or none) for the development of a given project?
  2. There seems to be a general understanding that TDD = unit testing and BDD = acceptance testing (regardless of tools). To what extent is this correct, and where do other types of testing, such as component and system integration, fit in this picture?
  3. Over the years TDD has been referred to as a (code) design discipline, a testing discipline or a communication tool. In which ways do these distinct goals affect the design and the present vs future value of tests in TDD?
  4. Do you recommend some upfront criteria for unit vs integration vs acceptance testing ratios, taking into consideration automation and maintenance costs?
  5. Life-critical systems aside, there seems to be general agreement today that 100% test coverage should not be a goal or an ideal in itself when developing software. Do you think code-to-test size and effort ratios can promote more focused and/or effective testing?
  6. TDD, BDD, ATDD, test-first, specification by example and so forth mean different things to different people and confuse the rest. Are we missing a ubiquitous language for describing our software development methodologies and growing a shared understanding of context-driven good practices?
  7. Any last words about these topics?

InfoQ: What criteria do you think should guide the decision to follow TDD and/or BDD (or none) for the development of a given project?

JB: I feel uncomfortable making generalisations on this topic, so instead, let me describe why and when I use TDD/BDD.

I first encountered TDD because I wanted a way to stop swimming in mistakes (or defects or bugs, whichever you want to call them). The number of mistakes I made in my programs robbed me of any sense of completion: no matter how much I had done, I never felt closer to completing my work. I guessed at the time that if I tested my own code, I would find most of the silly, simple mistakes, and fix them myself. I wanted this, not to avoid looking stupid, but rather to avoid a false sense of completing my own work. It helped me, and after a few years I began to realise that TDD helped me not only avoid mistakes in behavior, but also mistakes in design. When I learned about BDD years later, I eventually realised that it helped me avoid mistakes in choosing and completing features. Over time, I came to believe that mistakes cost me more time, effort and energy than doing good work, so TDD and BDD became core practices for me.

To return to the question, I encourage people to articulate why they want to do TDD, and do it for that reason. Beyond the typical reasons to practise TDD/BDD -- better designs, fewer defects, more valuable features, less wasted effort -- I encourage people to look at the personal reasons to practise TDD/BDD and focus on those. I wanted to feel more confident in my claims that I had finished a given piece of work, because the alternative caused me too much anxiety. I think these reasons differ wildly from person to person.

Dan: Firstly let me provide a couple of loose definitions. TDD is a programming technique that causes a programmer to think about their code from the perspective of other code that will use it, which promotes emergent design. You write a test to describe how the next piece of code will be used, then you write the code to make that test pass. This technique is the exclusive domain of programmers and when it is applied properly (I'll expand on that in a moment) it has a number of subtle benefits. Writing the tests helps you understand the domain and can help with naming things. A test can expose a gap in understanding ("What do you think it is supposed to do in this case?") and of course a suite of automated tests can help identify regression defects.
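
To make that red-green rhythm concrete, here is a minimal sketch in JUnit 4; the Discount class and its pricing rule are invented for illustration, not taken from any panelist:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountTest {
    // Red: written first, this test fails until Discount exists and behaves as described.
    @Test
    public void appliesTenPercentDiscountToOrdersOfOneHundredOrMore() {
        assertEquals(90.0, new Discount().apply(100.0), 0.001);
    }

    @Test
    public void leavesSmallerOrdersUntouched() {
        assertEquals(99.0, new Discount().apply(99.0), 0.001);
    }
}

// Green: the simplest code that makes both tests pass; refactoring comes next.
class Discount {
    double apply(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}
```

The test doubles as documentation of how the next piece of code will be used, which is Dan's point about thinking from the perspective of the calling code.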

I don't think there should ever be a decision to "follow or not follow TDD for a given project." You can use TDD in almost any project. However I would suggest only using it where the programmers find it helpful. There are other ways than TDD of doing design, of exploring and modelling a domain, and of reducing regression risks. Sometimes it is the most effective way of doing each of these things and sometimes it isn't. Having said that, I think TDD is such a useful technique that I believe all programmers should be aware of it and know how to do it. I'd say the same thing about refactoring: there is no excuse not to know about it, but experience can help you choose when and where to apply it, and when to do something else instead.

BDD is a development methodology, closer in form to something like XP than just the programming practice of TDD. It is a means of getting stakeholders and delivery folks with different perspectives onto the same page, valuing the same things and having the same expectations. If you don't have that problem to solve, it might not be the right thing for you. Certainly I've not been needing it much recently because I've been on very small teams with very rapid stakeholder feedback, so the impact of delivering the wrong thing is greatly reduced. It usually comes down to "Can you change that thing please?" or "That's not what I meant. Here's an example."

BDD starts with a business goal and describes how that goal gives rise to features and stories. It has an opinion about how you structure your acceptance criteria and how you turn those into automated tests that define code behaviour. The decision to invest in BDD is therefore a project-level one (although arguably you could apply BDD to just a subset of a project), and should involve the whole team.
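
For a sense of the structure Dan means, here is a sketch using Cucumber-JVM step definitions in Java; the stock-and-returns scenario and its wording are invented for illustration:

```java
// The scenario itself lives in a plain-text .feature file:
//
//   Scenario: Returned items go back into stock
//     Given a stock of 10 black jumpers
//     When a customer returns 2 black jumpers
//     Then the stock should be 12 black jumpers
//
// Each step binds to a step definition that other scenarios can reuse:
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertEquals;

public class StockSteps {
    private int stock;

    @Given("a stock of {int} black jumpers")
    public void aStockOf(int initial) {
        stock = initial;
    }

    @When("a customer returns {int} black jumpers")
    public void aCustomerReturns(int returned) {
        stock += returned; // a stand-in for calling the real application
    }

    @Then("the stock should be {int} black jumpers")
    public void theStockShouldBe(int expected) {
        assertEquals(expected, stock);
    }
}
```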

I would argue that any programmer, or pair of programmers, has the right to deliver software however they want, provided they respect the team and the codebase. If they want to use TDD then they should be allowed to and in fact encouraged to. If they want to do something different that should be OK too, so long as they have the buy-in of their teammates.

Gojko: That depends on your definition of those things. It seems that over the last few years the meaning of TDD was narrowed down to only include driving technical design with unit tests, and that BDD became the catch-all phrase for driving business functionality with examples and business-oriented tests. Using Brian Marick's Agile Testing Quadrants terminology, as published in Agile Testing by Lisa Crispin and Janet Gregory, this would put TDD in Quadrant 1 and BDD in Quadrant 2. I don't really agree with this, but from the way you asked the question I assume that your definition of TDD excludes BDD and the other way around as well. Assuming that relationship, I would look for three things:

- Is the project a one-off throw-away? Is it a technical spike aimed at derisking technical uncertainty and helping the team discover, from a technical perspective, what they really want to do? In such a case, having a large body of automated tests of any kind is probably waste, and there is most likely a small number of relatively simple use cases that the team is actually aiming for. Writing even technical tests upfront might be an issue because we don't really know the underlying technical constraints. I would probably write a few guiding technical tests, but not be dogmatic about driving every design decision with technical/unit-level tests. This goes into the realm of the large system tests that Steve Freeman and Nat Pryce wrote about in Growing Object Oriented Software.

- Is the key complexity of the project technical? Do we know what we want to achieve technically, with little business risk or uncertainty? Are all the people working on the project technical and able to read code? Examples of this would be building a web framework, a database platform, a queuing system, or a cloud deployment platform. Most open-source projects fall into this category. If so, technical TDD is probably sufficient: I would drive business scenarios and technical design using a unit testing tool. I might still use examples on whiteboards to ensure a shared understanding (a core part of BDD), but I wouldn't waste time on putting those examples in an executable specification/non-technical automation tool such as Cucumber.

- Is there also business complexity that needs to be addressed, or a risk of unclear requirements that have to be communicated to people who can't read code? If so, I would divide and conquer: use examples to explore business requirements and ensure a shared understanding across business and delivery, build specifications out of those examples and automate them into living documentation using Cucumber, FitNesse or similar, then drive the design of my code using technical unit-level tests.

Ron: I would use these approaches on every project I've ever done in my half-century of software development. Keep in mind that I didn't invent them: I'm just an early student. They provide me with a confidence that no other approach ever has. They reduce my defect count profoundly. And they let me improve the design freely as I learn better what it should be.

That's pretty hard to beat.

Steve: Do you need to write tests for your system? If so, then why not write them first so it actually gets done, and so that you can see what hooks you need to make testing easier? You're unlikely to know all the tests that need to be written, so it's easier to do that as you go along.

InfoQ: There seems to be a general understanding that TDD = unit testing and BDD = acceptance testing (regardless of tools). To what extent is this correct, and where do other types of testing, such as component and system integration, fit in this picture?

JB: I prefer to divide the two this way: TDD gives me feedback about my design, while BDD gives me feedback about my understanding of the product we want to build. While I started practising TDD as a testing technique, I have grown to see TDD next as a design technique, and after that as a technique for learning deeply the principles of good, modular design. I started practising BDD as a way to remind me also to see the product from the point of view of the real users and business stakeholders, but I have grown to see BDD as a technique for improving communication between business and technical workers. As you can see, testing plays just a small, though still significant, part in all this.

Since I don't see TDD and BDD as testing techniques, I see testing strategies as orthogonal to TDD and BDD. Whether I do TDD or BDD or none, I expect to think about microtesting, system testing and usability testing on the average project.

Dan: This is an unfortunate artefact of history. In fact each of them is both, and more. The way Kent Beck describes TDD in Extreme Programming Explained - and later in his TDD book - is the same way Nat Pryce and Steve Freeman describe it in Growing Object-Oriented Software, namely that it works at multiple levels of granularity. You write user-level functional tests or low-level unit tests for the same reason, which is to illustrate how you want the code to behave. TDD is in the motivation rather than the activity: you can write automated tests for no other reason than to increase your automated test coverage, and that isn't TDD. Similarly you can test-drive the design of non-functional requirements like concurrency, latency, failover or throughput.

The distinction between user- and code-level tests in BDD is more explicit. User-level tests take the form of scenarios that are structured in a particular way that lends itself to clarity and automation. Also their component steps can be reused in other scenarios. Code-level tests are called examples (or specs, but I'm not a fan of that) and are closer to what people think of as TDD tests. Over time this has led to different tooling emerging, with the notable examples being language-agnostic tools like Cucumber for user-level scenarios and language-specific tools like RSpec, NSpec and others for code-level examples. My own experience has been that I tend to use whatever the team is comfortable with, so most of the BDD code I've written uses the venerable JUnit along with the Hamcrest matcher library from JMock. I'm currently using py.test for Python and nodeunit for node.js, which are similar in style to JUnit, even though "BDD-style" frameworks exist for both. An example is just code; it's up to you how you structure it.
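
As a small illustration of that JUnit-plus-Hamcrest style (the Basket class is invented for the example):

```java
import org.junit.Test;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;

public class BasketTest {
    @Test
    public void totalsThePricesOfItsItems() {
        Basket basket = new Basket();
        basket.add("tea", 150);
        basket.add("milk", 80);
        // Hamcrest matchers read almost as a sentence, keeping the example close to prose.
        assertThat(basket.totalInPence(), is(equalTo(230)));
        assertThat(basket.itemNames(), hasItems("tea", "milk"));
    }
}

class Basket {
    private final Map<String, Integer> items = new LinkedHashMap<>();

    void add(String name, int priceInPence) { items.put(name, priceInPence); }
    int totalInPence() { return items.values().stream().mapToInt(Integer::intValue).sum(); }
    Set<String> itemNames() { return items.keySet(); }
}
```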

Gojko: Again this depends on your definition of TDD, BDD and the related concepts. My understanding of Kent Beck's work is that customer tests and unit tests all belong to TDD. My understanding of Nat Pryce and Steve Freeman's approach in Growing Object Oriented Software is that TDD includes system tests, component tests and unit tests. The way I explain things in Specification by Example is that good living documentation decomposes business concepts, and where you automate it is a matter of risk coverage - an example can be validated against a single Java method if that is where most of the risk is, or run against a live system with 30 web services and 100 databases if that is where the risk is.

Ron: BDD started out to be a variant description of TDD. Now, in the hands of people like Chris Matts and Liz Keogh and others, it has morphed in the direction of feature description and acceptance testing, to the extent that I really understand what they're doing.

As for other forms of testing, they can of course still be valuable. I would add user experience testing to your list, in particular. As for component and integration testing, the best Agile projects use continuous integration, so that one rarely sees separate components all by themselves, and the integrated system therefore bears more of the testing weight. Often these old distinctions blur, and what one has instead is various suites of tests that are run at various intervals and in response to various events, such as the arrival of a new library or component, or a new build. These suites may contain all kinds of tests that would have been described a couple of decades ago as unit tests, acceptance tests, and so on.

The essence is to test what needs testing, and to test it as close to the moment of its creation, or its changing, as possible. That's how we prevent most defects and discover the bulk of the rest promptly.

Steve: This is not a division I recognise. It appears to have been imposed after the fact, based on misunderstandings about TDD. A fundamental question in TDD is "If this worked, how would I know?" -- that applies at all levels, including for the business/organisation.

I'm also not convinced that slicing up testing into silos really helps; the idea is to build a continuum of testing that together gives the team confidence that the system works.

InfoQ: Over the years TDD has been referred to as a (code) design discipline, a testing discipline or a communication tool. In which ways do these distinct goals affect the design and the present vs future value of tests in TDD?

JB: I think this depends heavily on the practitioner. When I started practising TDD, I found most value in the testing discipline aspect, probably because I wanted to improve in that specific way. Only after I made drastically fewer mistakes did I start to notice how TDD gave me feedback to guide improving my designs. All this points to a highly personal perception of the present value of the tests one writes while practising TDD: one will probably get the benefit one seeks from those tests.

I find the present value of the tests much higher than the future value. I have even toyed with the idea of throwing the tests away after a few months, and rewriting them only if I needed to change something, although I never tried that.

I haven't benefited much from tests that I haven't written, and I don't know how much to attribute that to the projects I've worked on and how much to attribute to the general level of discipline among TDD practitioners. I find this troubling, as it reminds me of contractors walking into a house and starting off by bad-mouthing whoever did the previous renovation.

I have claimed, and read claims, that TDD-style tests act as a pool of change detectors (to use Cem Kaner's term) to reduce the cost of changing code later. I have seen that benefit, although I've never measured it carefully. I have also seen the claim that these tests can describe the system/API well to people who haven't seen it before. Alas, this benefit remains mostly hypothetical for me.

Dan: TDD is a design discipline. Everything else is a side benefit. The word "test" in "test-driven" is unfortunate. The examples-you-write-to-describe-behaviour aren't tests until after the fact. All the time you're writing the code they are simply examples of desired usage. In conjunction with practices like Continuous Integration these examples become a suite of regression tests once the code is written. But they don't replace the need for testing, especially the kind of skilled, directed exploratory testing advocated by the likes of Brian Marick and James Marcus Bach. Another characteristic of TDD tests is their determinism. This is a strength in a regression suite but a weakness with regard to discovering dark corners. Randomised testing techniques can flush out myriad subtleties in complex systems, which you can then use TDD to eliminate. A great example of this is the Haskell tool QuickCheck and its Scala port, ScalaCheck.
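
The idea can be sketched without QuickCheck itself. This hand-rolled property check in plain Java (the naive mid function is an invented example, not from any panelist) probes a midpoint calculation with random inputs and flushes out the classic integer-overflow bug that hand-picked examples routinely miss:

```java
import java.util.Random;

public class MidpointPropertyCheck {

    // Code under test: a naive midpoint. Bug: low + high can overflow int.
    static int mid(int low, int high) {
        return (low + high) / 2;
    }

    public static void main(String[] args) {
        Random random = new Random(42); // fixed seed so a failure is reproducible
        for (int i = 0; i < 100_000; i++) {
            int a = random.nextInt(), b = random.nextInt();
            int low = Math.min(a, b), high = Math.max(a, b);
            int m = mid(low, high);
            // Property: the midpoint always lies between the bounds.
            if (m < low || m > high) {
                System.out.printf("Counterexample: mid(%d, %d) = %d%n", low, high, m);
                return;
            }
        }
        System.out.println("Property held for 100,000 random cases.");
    }
}
```

Once a counterexample surfaces, the fix (for instance low + (high - low) / 2) can be test-driven in the usual way, which is the elimination step Dan mentions.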

As for communication, that's one of the primary purposes of TDD, specifically communication to yourself and other programmers about the intent of the code. In my article Introducing BDD I describe how helpful it is to give tests intention-revealing names; otherwise, when a test fails, you don't know what it's telling you. You should be able to read TDD tests as a narrative, and the test names should provide a degree of functional documentation.
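
A minimal sketch of what an intention-revealing name buys you (the refund domain here is invented):

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;

public class RefundPolicyTest {
    // Opaque: if a test named "test1" fails, the name tells you nothing about what broke.
    // Intention-revealing: the name below reads as a line of functional documentation,
    // so a failure immediately says which behaviour is wrong.
    @Test
    public void refundIsRejectedMoreThanThirtyDaysAfterPurchase() {
        RefundPolicy policy = new RefundPolicy();
        assertFalse(policy.isRefundable(31));
    }
}

class RefundPolicy {
    boolean isRefundable(int daysSincePurchase) { return daysSincePurchase <= 30; }
}
```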

Gojko: I think the answer, as often, lies in the balance of those forces. To get the most out of TDD as a discipline we have to find a way to do all of those things. Good unit tests drive design, but also help us derisk critical technical issues and communicate what the code is supposed to do.

Ron: TDD uses tests, but it is not about tests. It is a way of developing a system such that we build up, at the same time, a scaffold of tests and the program itself. TDD and the associated practices permit us to build the system incrementally, feature by feature, safely and keeping the code alive and malleable from the beginning of the project to the end. That lets us be more clear about how much is really done, which lets the Product Owner or management make the best possible decisions about what to do next. It greatly limits the amount of bad news we get at the end of conventional projects, whether by discovering that the code is rife with defects, or that its design has deteriorated and we cannot make rapid improvements.

I don't think of these goals as "distinct". Software development done well requires the integration of many ideas and requires that we keep many goals in balance. We do not want to trade these off. Rather, we want to work in such a way that all the goals are served, because this lets us create the best possible product in the fastest possible way.

Steve: I try to emphasise the communication aspect of tests (at all levels) as I find that leads to better results with the other aspects. For example, if I focus on getting a test to read well, it really stands out when I try to jam an inappropriate responsibility into an object.

Many teams that I see have tests suites that have decayed into an unmaintainable swamp that just slows down progress. Test code needs care and attention just like (more than?) production code, especially as new understanding and concepts arise and need folding into the system.

InfoQ: Do you recommend some upfront criteria for unit vs integration vs acceptance testing ratios, taking into consideration automation and maintenance costs?

JB: Teams contact me from time to time when they have spent 1-2 years practising TDD and have seen the cost/benefit of the tests fall out of balance. In every case, this happens when they try to use bigger tests (integrated tests, system tests, end-to-end tests) to check smaller things (the detailed behavior of individual objects). This leads to longer-running test suites, more brittle test suites (one failure shows up as 23 test failures), and generally discourages programmers from maintaining the tests. The tests rot, and hurt more than they help.

In this situation, I generally counsel learning to write microtests to check microbehavior, combining collaboration and contract tests to check a layer by connecting it only to the interfaces in the adjoining layer, and no deeper. This means moving away from integrated and system tests to check what I call "basic correctness" -- that is, given infinite space and time, does this object compute the correct answer? When I say that "integrated tests are a scam", I mean this.
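
A sketch of the collaboration-test half of that advice, here using Mockito (J.B. does not name a tool, and the AuditLog boundary interface is invented): the object is checked only against the interface of the adjoining layer, never against a wired-up system:

```java
import org.junit.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

// The interface at the boundary of the adjoining layer.
interface AuditLog {
    void record(String event);
}

class AccountService {
    private final AuditLog audit;

    AccountService(AuditLog audit) { this.audit = audit; }

    void close(String accountId) {
        // ... domain logic would live here ...
        audit.record("closed:" + accountId);
    }
}

public class AccountServiceCollaborationTest {
    // Collaboration test: do we say the right thing to the interface?
    @Test
    public void reportsAccountClosureToTheAuditLog() {
        AuditLog audit = mock(AuditLog.class);
        new AccountService(audit).close("acc-42");
        verify(audit).record("closed:acc-42");
    }
    // A matching contract test would then check that each real AuditLog
    // implementation honours what this test assumes about record().
}
```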

Although the details differ from project to project and from team to team, I do find myself quite frequently recommending that programmers move away from integrated and system testing towards microtesting.

Dan: I don't think recommendations about ratios are useful. For me it's about risk. If the likelihood of a mistake is high or the impact of a mistake is costly I will apply more diligence. For instance I tend to test-drive code that transforms data because I know how easily I can screw up transforming data, and how hard it can be to detect. Similarly when I'm working on software at the edge of a system that talks to the outside world I'm careful about the data I send and what I accept. If I see a bug in an application I'll sometimes start by writing a test that isolates the bug, and test-drive the fix from that. Other times I'll exercise it in a REPL (a read-eval-print loop: the language's interactive command line) and figure out the bug from there.
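
A small sketch of that bug-isolating move, with an invented CSV-splitting defect: the test is written first to pin the bug down, and the fix is test-driven from it:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CsvSplitterRegressionTest {
    // Written first, against the buggy version, to isolate a reported defect:
    // trailing empty fields were silently dropped when splitting a line.
    @Test
    public void keepsTrailingEmptyFields() {
        assertEquals(3, CsvSplitter.split("a,b,").length);
    }
}

class CsvSplitter {
    static String[] split(String line) {
        // The fix driven from the test above: a limit of -1 keeps trailing empty strings.
        // The buggy version was line.split(","), which drops them.
        return line.split(",", -1);
    }
}
```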

Gojko: I think this is an overly generalised question; I can't give you an answer without considering a particular project.

Ron: Other than "do these things", no. It costs less to build code using TDD and the related practices, and the result is better code. It's true that some people and some teams feel they will go too slowly if they use TDD. There may be a few niches of development where that is actually true, though I have not found any. More likely they are simply not yet very good at TDD and the rest. The result of that is that they think they are going fast, but they are building up an increasing pile of defects that will have to be taken out, and they are letting their design deteriorate, which will increase defects still further, make defects harder to find and fix, and slow down progress when the going gets tough. They will get bad news when they least need it, namely in the final weeks of their project.

That's how you get to a death march. People and products do survive death marches, of course. The pity is that having come together for a mighty effort and survived, teams come to believe that that's the only way they could have accomplished whatever they accomplished. That's just not true. They came near to killing themselves when there was a simpler path leading to a better place, within reach all the time.

Steve: Not unless you've built a nearly identical system before with the same team. The whole point of agile techniques is to respond to situations as they arise, not least because those proportions will change over the life of the project.

InfoQ: Life-critical systems aside, there seems to be general agreement today that 100% test coverage should not be a goal or an ideal in itself when developing software. Do you think code-to-test size and effort ratios can promote more focused and/or effective testing?

JB: On the contrary, when organisations focus on these goals, they create these blasted "maturity models" that I find a scourge on our profession. You know the ones: level 1 means "we write tests", level 2 means "we write tests for all new code", level 3 means "we have 50% coverage of the whole system", and I suppose level 5 means "we dream in tests". I find it so tiresome and pointless. I can do all these things well and still deliver crap products. I see it from time to time, and it makes a mockery of what I teach and practise.

I began to care about these things when I had my "Network moment" -- you know, "I'm as mad as hell, and I'm not going to take this anymore!" I try to push people to have their own Network moments, then give them ideas about how to fix the problem. I believe that this leads much more effectively to disciplined work than any goals about test coverage.

Dan: I think prescribing code-to-test ratios has exactly the opposite effect. It suggests all code has equal value and is equally risky, which is manifestly untrue. Instead I advocate applying the appropriate level of diligence and scrutiny depending on the kind of code. The opportunity cost of trying to achieve a test coverage threshold uniformly across a codebase can be insane, especially with things like user interface testing. That time can be much better applied improving the quality of the code that matters.

Gojko: Aiming at code coverage is ridiculous. 10% coverage of the riskiest stuff can give us a ton more benefit than 99% coverage that ignores risk areas. I think risk coverage is a much better metric than test coverage. I like to use the Attribute Component Capability Matrix to assess risk and then decide what to cover and how. (See the book How Google Tests Software by James Whittaker)

Ron: Test coverage should never be a goal. It should be obvious that if our testing is good there will be very few defects where we test. Where will the defects remain? Where we do not test. So it is not productive to think in terms of some coverage value less than perfection being "good enough". Instead, we need to do two things:

First, we need to continually up our skills at testing, so that less and less of the system remains untested. It's worth noting that if we really do what TDD teaches: "Never write a line of code other than in response to a broken test", we will automatically get complete line coverage, and very good path coverage as well.

Second, I would certainly recommend that the team analyze test coverage and other such information, so that they can better decide where to beef things up. No one can do these things perfectly, so we have to stay vigilant, and when a defect does show up, we need to review what happened, write the missing tests, and up our game to prevent that kind of thing from happening again.

Steve: Again, that's prejudging before there's data, and often the divisions aren't clear. Code coverage can be useful as a hint for areas of code to look at, but can distort the team focus if treated as an externally imposed target.

InfoQ: TDD, BDD, ATDD, test-first, specification by example and so forth mean different things to different people and confuse the rest. Are we missing a ubiquitous language for describing our software development methodologies and growing a shared understanding of context-driven good practices?

JB: No, I don't think so. I think we have good-enough terms for the various practices. I found that I got better results when I stopped worrying about how to define them, and instead, shared how I understand them, and encouraged others to share with me how they understand them. I had one of the most influential discussions of my life arguing the definitions, meaning, and purpose of TDD and BDD in a hotel room at 4:30 am with the likes of Dan North and Chris Matts. It would be a damn shame to push a single set of definitions on the community and discourage that kind of lively, drunken, critical debate.

Dan: This is just an indication that our understanding of this space is evolving. I originally proposed BDD to help teach (my understanding of) TDD. I like Gojko Adzic's phrase "Specification by example" because it is clear and unambiguous. I've struggled for a long time with the vocabulary around test vs. example vs. spec and I still haven't chosen one over the others. The phrase "ubiquitous language" is itself misleading. "Ubiquitous" means universal within a bounded context. In other words we expect to describe the same thing differently depending on context. One person's test is another's example, is yet another's specification. It's about whether you can clearly convey your intent in that context.

Gojko: I tried defining Specification by Example as a clear, context-bounded and narrow thing exactly for that reason: to avoid the whole confusion about whether it is TDD, BDD, ATDD or something else. I think that name makes a lot of sense for the practice of exploring and nailing down what we want to build from a business perspective using examples, and building living documentation to support us. There are certain ideas and practices that are useful for that, separate from the ones that are useful when we drive design with technical tests, so it is useful to look at it as a practice in its own right.

I do not like the name ATDD, or Acceptance TDD, because it creates a wrong mental picture in people's heads and sets teams up for failure by making them focus on the wrong things. I wish people would stop using it, but unfortunately there are recent books that promote that name.

The way I understand BDD is that it includes a lot more than just spec by example and driving technical design with unit tests. For example, I consider Feature Injection, requirements pull, outside-in design, defining models for business value and things like that part of BDD. This is where Liz Keogh is taking it and it is very exciting. There is a lot more to this whole thing than spec by example or unit testing. For example, Effect Mapping is an exciting new planning and roadmap technique that fits perfectly into the whole value system of BDD and takes the test-driven idea even higher, to business objectives, but it has no connection to any kind of automation and no need for it.

Ron: Well, I think that's the way of human communication. There are no words that are so unambiguous that everyone fully agrees upon hearing them. And in a business like ours, where we are learning as we go, differences are inevitable. The most important differences, in my opinion, are in the people who have made little or no effort at all to understand what all this stuff is. Instead, they either denigrate the ideas without understanding, or claim to be doing the practices without really doing them, again without understanding.

This has two serious ill effects. First, many projects, and many individuals, do not do nearly as well as they might. This leads to human suffering, failed projects, and inferior results. Second, misunderstanding, often seeming almost willful, slows down the uptake of these good ideas.

Steve: When we have perfect knowledge of the discipline, we can sort out the terminology :) I think it's still too early to be clear about what fits where. I also think there should be more scope for different "schools" that acknowledge their differences without having to take over the world.

InfoQ: Any last words about these topics?

JB: Nothing new. Practise these techniques because you hope they will guide you to improve your work. Do it because it gives you a "personal win". Do it because it helps you enjoy your work more, and to hell with any other reasons to do it.

Dan: Anything I say here about these topics will almost certainly not be my last words on them!

Ron: Although I would use these approaches on any project, because among all the approaches I've used in a half century, they serve me the best, I would not command everyone to use them.

What I would do, however, is to suggest that everyone who cares about this profession should learn these techniques well enough to do them ... let's say "pretty darn well" ... and only then decide when and where to apply them. It doesn't make much sense to reject a technique that might help before learning enough to assess it in a true light.

So what I like to do is to show people what I do in software development, give them a safe place in which to try it for themselves, and try to leave them with a good enough taste in their mouths that they'll push forward long enough to make a well-informed decision.

For me, the well-informed decision is to use these practices in concert, all the time. I hope that others find the ideas equally valuable and find great benefit in them.

Steve: Often, the main problem I see when talking to teams about TDD is not testing issues, but weakness in basic design skills; the reason people are struggling with a test is because the code has the wrong structure. Similarly, I see code which just isn't expressed clearly enough. I increasingly think that coder interviews should include making sure that the candidate can write a readable paragraph.

About the Panelists

J. B. Rainsberger helps software companies better satisfy their customers and the businesses they support (jbrains.ca). Over the years, he has learned to write valuable software, overcome many of his social deficiencies, and built a life that he loves. He travels the world sharing what he's learned, hoping to help other people get what they want out of work and out of their lives. Even though he's traveled Europe most of the past two years, he lives in Atlantic Canada with his wife, Sarah, and three cats. J.B. blogs at The Code Whisperer.

Dan North writes software and coaches organizations and teams in Agile and Lean methods. He believes in putting people first and writing simple, pragmatic software. He believes that most problems teams face are about communication; that is why he puts so much emphasis on "getting the words right", and why he is so passionate about BDD, communication and how people learn. He has been working in the IT industry since he graduated in 1991, and he occasionally blogs at dannorth.net.

Gojko Adzic is a strategic software delivery consultant who works with ambitious teams to improve the quality of their software products and processes. He specialises in agile and lean quality improvement, in particular agile testing, specification by example and behaviour driven development. Gojko is a frequent speaker at leading software development and testing conferences and runs the UK agile testing user group. Over the last eleven years, he has worked as a developer, architect, technical director and consultant on projects delivering financial and energy trading platforms, mobile positioning and e-commerce applications, online gaming and complex configuration management systems. He is the author of Specification by Example, Bridging the Communication Gap, Test Driven .NET Development with FitNesse and The Secret Ninja Cucumber Scrolls.

Ron Jeffries is an independent consultant in XP and Agile methods (XProgramming.com) and has been developing software longer than most people have been alive. Ron was the on-site coach for the original XP project, authored Extreme Programming Adventures in C# and Extreme Programming Installed, and co-created Object Mentor's popular XP Immersion course.

Steve Freeman, author of 'Growing Object Oriented Software, Guided by Tests' (Addison-Wesley), was a pioneer of Agile software development in the UK. He has developed software for a range of institutions, from small vendors to multinational investment banks. Steve trains and consults for software teams around the world. Previously, he has worked in research labs and software houses, earned a PhD (Cambridge), written shrink-wrap applications for IBM, and taught at University College London. Steve is a presenter and organiser at international industry conferences, and was chair of the first London XpDay.
