Integration Tests Are a Scam

Bio

J. B. (Joe) Rainsberger helps software organizations better satisfy their customers and the businesses they support. Expert at delivering successful software, he writes, teaches and speaks about why delivering better software is important, but not enough. He helps clients improve their bottom line by coaching teams as well as leading change programs.

About the conference

Agile 2009 is an exciting international industry conference that presents the latest techniques, technologies, attitudes and first-hand experience, from both a management and development perspective, for successful Agile software development.

Recorded at:

Sep 10, 2009

Community comments

  • video doesn't work for me

    by Scott White

    video doesn't work for me

  • Re: video doesn't work for me

    by Diana Baciu

    Hello Scott,

    I just tested this in both Firefox and IE and it seems to be working fine. Could you please try again?
    Thanks,

    Diana (InfoQ)

  • Integration tests are needed

    by FirstName LastName

    A Mars lander mission failed because of a lack of integration tests. The parachute subsystem was successfully tested. The subsystem that detaches the parachute after landing was also successfully (but independently) tested.

    On Mars when the actual parachute successfully opened the deceleration "jerked" the lander, then the detachment subsystem interpreted the jerking as a landing and successfully (but prematurely) detached the parachute. Oops.

    Integration tests may be costly but they are necessary. Sometimes there ain't no substitute for end-to-end testing. These testing concepts were developed 50+ years ago in the fields of quality control and industrial engineering. I wish more of the "Agile" community were aware of this existing body of knowledge.

  • Re: Integration tests are needed

    by Amr Elssamadisy

    Sounds like a great subject of an article or 2 or 3 or.... Amr

  • Re: Integration tests are needed

    by Declan Whelan

    I believe that J.B. was focusing on "software" integration tests rather than "system" integration tests.

  • Contract tests

    by Sune Simonsen

    Hi Joe Rainsberger, (I hope you see this message)

    First off, this is one of the best presentations I've seen in a long time - very inspiring.

    I was thinking about what you said about contract tests. In the system you describe, every interaction has a mocked call to an interface, if I understand it correctly. Each of these interactions must be verified to be supported by the interface for the contract to be fulfilled. If we could somehow describe the mock expectation as a data structure that could also be used by the contract test, wouldn't we be home safe? Then we would just set an expectation on the mock object using the data structure and use the same data structure in our contract test to make the assertion. In a way, our expectations on the mock are the contract on the interface.

    Sorry if I'm just totally wrong.

    Kind regards Sune Simonsen (Jayway)
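
Sune's suggestion, that the expectation data used to program a mock could double as the assertion data for the contract test, can be sketched roughly as follows. Since the thread names no particular language, this is an illustrative Python sketch; `Expectation`, `find_all` and `InMemoryRepository` are invented names, not anything from the talk.

```python
from dataclasses import dataclass
from unittest.mock import Mock

# One data structure describing one interaction with the interface.
# The collaboration test uses it to program the mock; the contract test
# uses the very same object to assert against a real implementation.
@dataclass(frozen=True)
class Expectation:
    method: str
    args: tuple
    result: object

FIND_ALL_EMPTY = Expectation(method="find_all", args=(), result=[])

def make_mock(expectation):
    """Build a mock repository from the shared expectation."""
    mock = Mock()
    getattr(mock, expectation.method).return_value = expectation.result
    return mock

def check_contract(real_repository, expectation):
    """Assert that a real implementation honours the same expectation."""
    actual = getattr(real_repository, expectation.method)(*expectation.args)
    assert actual == expectation.result

# Collaboration test side: the client sees an empty repository.
repo = make_mock(FIND_ALL_EMPTY)
assert repo.find_all() == []

# Contract test side: an in-memory implementation honours the contract.
class InMemoryRepository:
    def find_all(self):
        return []

check_contract(InMemoryRepository(), FIND_ALL_EMPTY)
```

The point of the shared structure is that the collaboration test and the contract test can no longer silently drift apart: both read the same Expectation object.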

  • Re: Contract tests

    by FirstName LastName

    Unit-tested modules can still misbehave when they are combined. Integration testing ensures that modules operate correctly when they are combined, and finds unexpected dependencies or interaction among modules. JR's presentation correctly notes that exhaustive combinatorial testing is too expensive for integration testing, but that doesn't mean it can be ignored. Huge numbers of input combinations can be avoided by statistically testing based on usage models.
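
The idea of "statistically testing based on usage models" can be sketched as sampling test scenarios with the frequencies real usage exhibits, rather than enumerating every combination. A minimal Python sketch, with an invented usage model:

```python
import random

# Instead of exhausting all input combinations, draw test scenarios with
# the frequencies real usage exhibits. This usage model is invented for
# illustration; a real one would come from production traffic data.
usage_model = {
    "search":   0.70,   # 70% of real traffic
    "checkout": 0.25,
    "refund":   0.05,
}

def sample_scenarios(n, seed=42):
    """Draw n scenario names, weighted by the usage model."""
    rng = random.Random(seed)          # seeded for reproducible suites
    names = list(usage_model)
    weights = [usage_model[name] for name in names]
    return rng.choices(names, weights=weights, k=n)

scenarios = sample_scenarios(1000)
assert len(scenarios) == 1000
# Common paths dominate the sampled suite, as the model intends.
assert scenarios.count("search") > scenarios.count("refund")
```

With a fixed seed the sampled suite is reproducible, so a failure can be replayed exactly.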

  • Re: Integration tests are needed

    by John Donaldson

    To be fair, JB is only claiming to establish "basic correctness". So, for example, with the method that returns a collection, he proposes "0, 1, many, lots". It may be that you also need to add the returns-something-that-will-cause-a-jerk case. His scheme isn't about how you choose a good set of tests out of all possible tests.
    John D.

  • When an integration test fails, who knows what’s broken?

    by Jonathan Allen

    If a test fails I use this new-fangled invention called a "debugger". It allows me to look at a running program and quickly determine why a test failed.

    I can't understand why the so-called "Agile Community" has such a hard time with this concept. Perhaps you should hire some college kids to teach you about debuggers in your bootcamps.

  • Re: Contract tests

    by Esko Luontola

    I had the following idea after watching that presentation:

    Instead of creating the mocks and setting the expectations in the tests of a client class, move all the mocking code to static helper methods in the contract test class. For example, the contract test class would have a method 'mockAnEmptyRepository()', and whenever another test needs to collaborate with an empty repository, it would create the mock using 'RepositoryContract.mockAnEmptyRepository()'.

    The goal of this approach would be that when the contracts and the mocks are in the same class (or at least very close to each other), it will be easier for the programmer to check that all the mocks obey the contracts.

    Still another idea that I had:

    In the contract tests, wrap the object under test into a proxy which implements the same interface as is being tested. When the contract tests are executed, the proxy will collect data about the method parameters and return values. Then this same data would be used to automatically verify that given the same initial state* and parameters, the mocks will return the same return values.

    * By initial state I mean for example, that in the contract test class there is an abstract method 'createAnEmptyRepository()' which creates the object under test. The same contract test also has a similarly named method 'mockAnEmptyRepository()'. Here the "initial state" is "an empty repository". The analyzer will group those two together, and check that the object returned by 'mockAnEmptyRepository()' behaves the same way as the one returned by 'createAnEmptyRepository()' does.
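
Esko's first idea, keeping the mock factory methods inside the contract test class, might look roughly like this. An illustrative Python sketch: the names follow his 'mockAnEmptyRepository()' example, but everything else is invented.

```python
from unittest.mock import Mock

class RepositoryContract:
    """Contract checks and the matching mocks live side by side, so a
    reader can verify that every mock obeys the contract next to it."""

    # --- contract side -------------------------------------------------
    @staticmethod
    def check_empty_repository(repository):
        # The contract: an empty repository returns no items.
        assert repository.find_all() == []

    # --- mock side -----------------------------------------------------
    @staticmethod
    def mock_an_empty_repository():
        mock = Mock()
        mock.find_all.return_value = []
        return mock

# A client test elsewhere asks the contract class for its mock...
empty = RepositoryContract.mock_an_empty_repository()
assert empty.find_all() == []

# ...and the mock itself passes the contract check, by construction.
RepositoryContract.check_empty_repository(empty)

# Any real implementation must pass the same check.
class InMemoryRepository:
    def find_all(self):
        return []

RepositoryContract.check_empty_repository(InMemoryRepository())
```

Because the mock factory sits a few lines from the contract it must obey, drift between the two is easy to spot in review.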

  • Re: When an integration test fails, who knows what’s broken?

    by Esko Luontola

    Because of using TDD, we need a debugger so seldom, that our debugger usage skills have deteriorated. ;)

  • Re: When an integration test fails, who knows what’s broken?

    by Frank Smith

    TDD (and specifically test first programming) is not particularly effective at testing certain types of software. Some examples are GUIs and multithreaded applications, both of which are quite common. I'd claim that TDD has deteriorated your software development skills in general by causing you to believe the myth that you don't need to know how to use a debugger.

  • Re: When an integration test fails, who knows what’s broken?

    by Esko Luontola

    1. TDD is primarily a design technique. Thinking of it as a testing technique gives a very narrow view of testing.

    2. GUIs are best implemented so that their logic is decoupled from their presentation. The logic can be driven with TDD, after which the presentation will be mostly declarative and simple glue code. The presentation will still need to be tested manually, to make sure that it looks right.

    3. Multithreading is best avoided. Design the application so that the need for multithreaded code is restricted to only a few classes (for example by using message passing).

    4. I did not claim that there is no need to know how to use a debugger. I said that there is very rarely a need for one (maybe once a month), because with TDD you find the cause of a failure much faster - if you changed one line and it caused the tests to fail, the probability is very high that the problem is in that one line of code.

  • Re: When an integration test fails, who knows what’s broken?

    by Frank Smith

    I agree that appropriate software structuring can increase the effectiveness of testing. My point was that even with that higher effectiveness there are common cases where TDD cannot ensure the system is operating correctly. Multithreaded applications are common and cannot simply be avoided in most cases. Even message passing techniques alone do not imply the lack of shared state between multiple threads.

    To slightly rephrase my original statement for clarification purposes, I was referring to the "myth that you don't need to know how to use a debugger [well]" because you are using TDD.

    I like agile (even most Agile) techniques and I've been using TDD for almost 10 years. I seldom, if ever, use a debugger to diagnose a problem in my unit tested code. However, I use a debugger for other purposes and I believe it's important to not allow those skills to deteriorate.

  • Re: Contract tests

    by Frank Lee

    Would it be possible to let the data structure that Sune mentions simply be the interface marked up with 'design by contract' annotations (pre, post, and invariant conditions)? Then, perhaps some unit testing frame work could generate the proxy that Esko mentions (I was thinking it might be an AOP aspect). The proxy would assert that both the consumer and supplier are abiding by the contract.

    I enjoyed JB's presentation.

  • Re: Integration tests are needed

    by Mike Bria

    Indeed, J.B. does NOT claim integration tests are totally evil (as the title incorrectly implies), but rather that much of what is commonly "checked" (aka tested) via integration tests would be better verified with more isolated micro-(aka unit-)tests.

    Cheers
    MB

  • Re: When an integration test fails, who knows what’s broken?

    by Mike Bria

    Frank,

    I think what good TDD allows me to do is, for the great majority of my time, avoid using a debugger to determine anything about the code I'm now developing.

    Do I still need to fire it up (and be effective with it) to diagnose bugs sometimes - absolutely.

    Cheers
    MB

  • Re: When an integration test fails, who knows what’s broken?

    by deepak shetty

    I suppose you've never actually come across a case where the system only fails in the production environment? I'm sure your administrators will let you point this new-fangled invention of yours at that environment.

  • Excellent presentation

    by bruce b

    The presenter really knows his business; he explained clearly and convincingly the practical differences between focused tests and broader integration tests. I would have liked to hear more about the challenges of using test doubles in what is sometimes called white-box testing. The mock code can end up mirroring the real code too closely, causing brittleness in the tests during maintenance. I've found the key to mocking is not over-specifying the expectations.

  • Re: video doesn't work for me

    by Diego Santos Leao

    Well, taking shameless advantage of this off-topic thread, I would like to report that at roughly 32 minutes this talk stops. If I try to play again from the buffer, it also stops when I pull the bar beyond 32 minutes. This happens not only with this video but with many others, for some weeks now. I'm watching from Brazil with Firefox 3.5.30729.

  • Re: video doesn't work for me

    by Kirsten Tay

    Video stops for me at ~49:30. Replay stopped at same spot.

  • Question about Contract Tests

    by Stefan Hendriks

    Hi. Very nice presentation, learned from it and also saw familiar things.

    One question about Contract Tests though: it was said earlier that you should not test a platform. I assume this also means you should not test Webservice client code that you generate (let's say from a WSDL). (Makes sense.)

    Instead, the Contract Tests should test the interface of the service that you define (which will use the webservice eventually).

    Do I understand this correctly?

    I.e., to clarify, let's say I have a Webservice that has an implementation using SOAP. I have my own service (interface) that will answer questions for my code / features using my service. I would end up with:

    MyCode <-> IMyService (with implementation MyConcreteService using IWebservice->SOAPWebservice)

    The Contract Test would test only IMyService (and has a test class for every implementation of that interface) as it is unwise (and especially slow) to test the actual webservice.

    Correct?
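
Under the layering Stefan describes, the collaboration test for MyCode's side of IMyService never touches the real SOAP endpoint. A hypothetical Python sketch, where MyService, answer and the stubbed call are all invented for illustration:

```python
from unittest.mock import Mock

class MyService:
    """Plays the role of MyConcreteService: answers questions for MyCode
    by delegating to a web service client (IWebservice in the example)."""

    def __init__(self, webservice):
        self.webservice = webservice

    def answer(self, question):
        # Normalise whatever the transport returns before handing it on.
        raw = self.webservice.call(question)
        return raw.strip().lower()

# Collaboration test: stub the web service client, so the real SOAP
# endpoint is never involved and the test stays fast.
stub = Mock()
stub.call.return_value = "  YES  "
service = MyService(stub)

assert service.answer("is it up?") == "yes"
stub.call.assert_called_once_with("is it up?")
```

A contract test for IMyService would then run the same assertions against each real implementation, which is exactly the pairing the talk proposes.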

  • Unit Tests vs. Functional/Integration Tests

    by Filipe Esperandio

    I agree that we must mock class dependencies and do real Unit Tests.

    I also think we need to automate some (not all, that's impossible, but some) of the functional tests (customer tests, end-to-end tests) in order to quickly see if some functional behavior breaks when a change is made to our code, even while the unit tests are passing.

    An example: if you want to refactor your code, you will probably lose unit tests, and you need an automated way to test the entire behavior through the interface signatures. What would be a good practice for this scenario?

    Regards,
    Filipe

  • Re: Question about Contract Tests

    by J. B. Rainsberger

    "One question about Contract Tests though; it is said earlier that you should not test a platform. I assume this also means you should not test a Webservice (client code) you generate (lets say from a WSDL) your client code from. (makes sense)"

    I don't generate web services from WSDL often enough to have formed an opinion on that specific case. I guess I would either have to trust the generator or not, then act accordingly. If I consume the WSDL, then I would probably write Learning Tests to document how to use that web service correctly, rather than writing tests to prove its basic correctness.

    Looking at your specific example, I would use Learning Tests to discover or confirm the behavior of SOAPWebService, then use that information to implement MyConcreteService correctly by stubbing/mocking methods on IWebService. The more I trusted SOAPWebService, or the product that generated it, the less I'd worry about writing exhaustive Learning Tests for it.

    I certainly wouldn't test the web service client generator which, in this case, represents the platform.
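
A Learning Test, as J.B. uses the term, records your assumptions about a third-party component so that a failure signals a wrong assumption rather than a bug in your own code. As a stand-in for SOAPWebService, the sketch below writes Learning Tests against Python's stdlib json module; the pattern, not the subject, is the point:

```python
import json

# Learning Tests: each test states something we *believe* about the
# platform (here, the stdlib json module). If one fails, our assumption
# was wrong, not our production code.

def test_json_round_trips_unicode():
    # Assumption: non-ASCII strings survive a dumps/loads round trip.
    assert json.loads(json.dumps("café")) == "café"

def test_json_rejects_malformed_input():
    # Assumption: malformed input raises JSONDecodeError, which is the
    # failure mode our client code will rely on.
    try:
        json.loads("{not json}")
    except json.JSONDecodeError:
        pass
    else:
        raise AssertionError("expected JSONDecodeError")

test_json_round_trips_unicode()
test_json_rejects_malformed_input()
```

Once the platform's behavior is documented this way, mocks of it in collaboration tests can be checked against the same recorded assumptions.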

  • Re: video doesn't work for me

    by You Tian

    Me too.
    Who can fix this?

  • Re: Unit Tests vs. Functional/Integration Tests

    by J. B. Rainsberger

    "I also think we need to automate some (not all, impossible, but some) of the functional tests (customer tests, end-to-end tests)in order to quickly see if some functional behavior breaks when some change is done on our code, even UT is passing."

    You equate functional tests with customer tests with end-to-end tests. I use those terms differently. Everything I've said about integration tests applies to tests programmers write to gain confidence in the correctness of their code.

    I usually automate all the routine customer-level checks in a system. Many of those end up as end-to-end tests, but in a highly modular system we can easily check a lot of business value without resorting to end-to-end tests.

    "An example is if you want to perform a re-factory on your code, probably you will lose UTs and you need an automated way that test the entire behavior from the interfaces signatures. What should be a good practice for this scenario?"

    I use a combination of collaboration tests and contract tests. To read more about those, go to thecodewhisperer.com.

  • Re: Unit Tests vs. Functional/Integration Tests

    by M PD

    Hi, J.B. Great, inspiring presentation. Congratulations!


    There's a thing about contract tests that I think can be misleading. Can you really rely on them to integrate things? When you run an integration test, the method's client can get an outcome that you didn't test in your collaboration tests. Worse, your mock can return an outcome that is possible (contract tests won't complain) but simply never happens in a real scenario.

    thx, congrats again!

  • Re: Unit Tests vs. Functional/Integration Tests

    by J. B. Rainsberger

    We still need to match stubs/mocks in collaboration tests to assertions in contract tests, but at least we have a systematic way to match them, rather than relying on people to do their best.

  • Why Integration isn't a Scam... the missing part of your Induction

    by Chris Desmarais

    First, thanks for the presentation. Learned a heck of a lot about how to explain ideas I've had.

    However... you missed something when you said you could almost prove your contract based testing by induction. In order to prove by induction you need to do two things:
    - prove that if something works for n then it works for n+1
    - prove that it works for one initial value of n

    I can see a proof of how contract/collaboration testing would work for a new case if it worked for the first case. That definitely seems provable. You can't stop there though, you need a proof that it works for a first case.

    Imagine a web service. You can test the client to make sure that it will use the contract properly, and you can test that the service fulfils that contract, without ever testing that the client can actually reach the web service (maybe the service is on the other side of a firewall).

    So while I totally believe in what you've written about using collaboration/contract tests as the bulk of testing, there is one small test (more likely a small set of tests) that you need to make sure the system actually works: integration tests. In my mind integration tests should be a small set of tests that check that the client and service can talk to each other at all using the contract. Probably just one test for each reasonable type of output. Generally I'd expect one test to make sure that the client can get any results from the service, and one test to make sure the client can get exceptions thrown from the service. That's probably enough. You can rely on the contract for the vast bulk of testing, but there is a small set of integration tests that I think are critical (and I wouldn't rely on them being in the user testing).

    Again, love the topic and it helped clarify something I've been trying to explain, so that's probably just a small nit but an important one.

    Thanks!
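
Chris's minimal integration suite, one test proving the client can get any result through the real wiring and one proving failures propagate, might look like this. The sketch wires the pieces in-process for brevity; in practice the transport would cross the network, and every name here is invented:

```python
# Minimal integration tests: not re-checking behaviour already covered
# by contract tests, only that the wired-up pieces can talk at all.

class Server:
    def handle(self, request):
        if request == "boom":
            raise RuntimeError("server error")
        return "ok:" + request

class Client:
    def __init__(self, transport):
        self.transport = transport

    def fetch(self, request):
        return self.transport.handle(request)

# Wire the real pieces together (no mocks here, by design).
client = Client(Server())

# Test 1: the client can get *any* result through the real wiring.
assert client.fetch("ping") == "ok:ping"

# Test 2: a server-side failure actually reaches the client.
try:
    client.fetch("boom")
except RuntimeError:
    pass
else:
    raise AssertionError("expected the server's error to propagate")
```

Two such tests establish the P(0) reachability case; everything combinatorial stays in the collaboration and contract tests.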

  • Re: Why Integration isn't a Scam... the missing part of your Induction

    by J. B. Rainsberger

    I thought I'd discussed the P(0) case, but I guess I didn't. Let me do that now. At the deepest part of the call stack, you have objects that only talk to the platform. Either you trust the platform or you write Learning Tests to verify your assumptions about how the platform behaves. If you trust the platform, then there's no collaboration/contract testing to do, and if you write Learning Tests, then those will be tests to discover and clarify the contract of the platform. Either way, you can test those "leaf" classes thoroughly.

    I know that's not tremendously rigorous, but you get the idea.

    You point out the issue of a web service failing because of, for example, a firewall in the way. Let's assume you wrote no automated programmer test for that; how would you learn about it? I assume that you'd discover it with an end-to-end test somehow, in system testing, or in manual/exploratory testing just before shipping the feature outside the programming team. Someone will say "I didn't think of that". It happens. I ask: which tests did you miss in this case?

    You could use integrated tests, but you don't need them. Instead, when you design the client, you ask yourself, "What could go wrong in trying to use the server?" You list a few things. Now you need to find out how the system will notify you of those things: timeouts, exceptions, messages, callbacks, violent crashing, infinite loop, whatever. When you explore how the system notifies you of things gone wrong, you're discovering the contract, so you now have enough information to write a contract test for the web service server/platform, and then the corresponding collaboration tests for the web service client.

    Now you're going to miss stuff, and so yes, someone will have to put the pieces together eventually and try them out. I don't dispute that, nor do I recommend against it, but I absolutely do recommend against using end-to-end tests as a reason not to write better isolated tests. One doesn't /need/ integrated tests to deal with the problem you've described. On the contrary, I find that when I focus on isolated, collaboration and contract tests, I miss fewer of the kinds of problems you described, as I consider more detailed parts of the contract between client and server.

    Thank you!

  • Contract tests look like a scam

    by Dimitry Polivaev

    If I understand the basic idea of the talk right, for any service a contract test should check that the client's expectations are satisfied not just in one special case but always. That means the contract tests would have to cover all possible configurations of the service itself and all responses from the services it calls. And this would have to be done for every client-side test case where I mock it. Does that sound feasible?

  • Re: Contract tests look like a scam

    by J. B. Rainsberger

    Contracts describe the behavior that all clients can depend on. In fact, they describe the *only* behavior that a client should depend on. Clients should not care about the implementation details of the service; indeed that's the point. The application will care about the choice of implementation, but only because today it wants to (for example) forward interesting events by SMS and next week it wants to forward them by email. The contract for this thing includes deciding which events are interesting and sending them to a PostOffice (an abstraction that sends messages). Clients don't care whether this thing sends the events to an SmsPostOffice, an SmtpPostOffice, a SendGridPostOffice, a TwilioPostOffice... that's for the application to decide, and as long as all the PostOffice implementations pass their contract tests, the application can freely put together any choice of implementations and it will just work.

    In the case of the PostOffice, the contract will probably mostly describe how the abstraction reports errors, in addition to how clients will describe the message and its destination, then each implementation will verify that indeed the message is sent over the appropriate transport to the specified destination.

    Whatever you mock on a method becomes part of the contract for that method. So mock the minimum you need, and when it seems like you have to mock too much, and especially when it seems like you have to mock the same pointless details in many tests, then you're almost certainly missing an abstraction in the middle.

    I hope this helps.
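
The PostOffice example can be sketched as a single contract check run against every implementation the application might wire in. An illustrative Python sketch: the implementations and the error-reporting rule are invented stand-ins for the SmsPostOffice/SmtpPostOffice family named above.

```python
class PostOfficeContract:
    """One contract, run against every implementation the application
    might choose. Clients depend on this behavior and nothing else."""

    @staticmethod
    def check(post_office):
        # Contract: send() accepts a destination and a message...
        post_office.send("alice", "hello")
        # ...and reports failure by raising ValueError on an empty
        # destination (an invented error-reporting rule).
        try:
            post_office.send("", "hello")
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError for empty destination")

class InMemoryPostOffice:
    def __init__(self):
        self.sent = []
    def send(self, destination, message):
        if not destination:
            raise ValueError("destination required")
        self.sent.append((destination, message))

class LoggingPostOffice:
    def __init__(self):
        self.log = []
    def send(self, destination, message):
        if not destination:
            raise ValueError("destination required")
        self.log.append(f"{destination}: {message}")

# Any implementation passing the contract can be wired in freely.
for implementation in (InMemoryPostOffice(), LoggingPostOffice()):
    PostOfficeContract.check(implementation)
```

Because every implementation passes the same check, the application can swap transports without touching any client's collaboration tests.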
