
Testing Techniques for Applications With Zero Tests


Agile techniques recommend having adequate unit and acceptance tests to build a robust test harness around the application. However, in the real world, not all applications are fortunate enough to have a test harness. In an interesting discussion on the Agile Testing group, members suggested ways to test applications which do not have any automated tests.

Asad Safari started the discussion when he mentioned that his application did not have any tests, that the developers on the team were not familiar with unit testing, and that the team was running against a three-week deadline to test the application. He was seeking suggestions for testing under these constraints.

Phlip responded that he had been in this situation several times and recommended the following approach, sketched in code after the list:

  • Add an optional module that drives your application with random or canned input.
  • Add logging to your program which spits out the error messages and assertions into a log file.
  • Write one big fat unit test that calls all of your program, feeds it that input, and scrapes the log.
  • Write one big fat assertion that says the log shall have no errors.
  • Add an exception to that assertion for each error the log does have.
  • Now start burning down the errors. Each time one goes away, remove its exception.
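
A minimal sketch of what such an umbrella test could look like, here in PHP with PHPUnit (the stack discussed in the comments below); the runApplication() driver, the log path, and the known-error allowlist are all hypothetical:

    <?php
    use PHPUnit\Framework\TestCase;

    // One big fat test: drive the whole application with canned input, then
    // assert that the log contains no errors beyond the ones already known.
    class BigFatSmokeTest extends TestCase
    {
        // One entry per known error; remove an entry each time the
        // corresponding error is fixed (the "burn-down").
        private const KNOWN_ERRORS = [
            'ERROR: order total mismatch',    // hypothetical example entries
            'ERROR: missing locale fallback',
        ];

        public function testLogContainsNoUnexpectedErrors(): void
        {
            $logFile = '/tmp/app-test.log';   // hypothetical log location
            @unlink($logFile);

            // Hypothetical driver module feeding canned input to the app
            runApplication('fixtures/canned-input.txt', $logFile);

            $unexpected = [];
            foreach (file($logFile, FILE_IGNORE_NEW_LINES) as $line) {
                if (str_contains($line, 'ERROR')
                        && !in_array($line, self::KNOWN_ERRORS, true)) {
                    $unexpected[] = $line;
                }
            }
            $this->assertSame([], $unexpected, 'New errors appeared in the log');
        }
    }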

Phlip recommended that, under the umbrella of this huge test, the team could start writing smaller, focused tests as and when time permits. He also suggested that, though the team could simply mop up the spills for the next three weeks, the time to start writing and executing small unit tests is now.

Adam Sroka agreed with the suggestion and added,

Yep, most teams will respond to poor quality by slowing down and producing less value, which doesn't actually do anything for quality. We need a more pragmatic solution. … The fallacy is that testing now isn't valuable because we can't do it as completely as if we had done it from the start. We can't, but it is still valuable.

Unconvinced, Brian Spears countered that Agile is not magic and that it might not be possible to come up with a solution in a matter of three weeks. He said,

Agile is not magic. The solution to this kind of emergency situation, when there is one, is a whole lot of long hours, which is clearly not an Agile solution.

Adam countered this argument by suggesting that most teams adopt Agile precisely when they get into a situation like this, which is their best chance to be pragmatic and take the first steps towards making the software better.

Annette suggested that the current situation is ripe for hours and hours of manual testing, as automated testing at this stage would be time-consuming. The recommendation was to start with the high-profile and revenue-dependent features of the application. Annette also recommended the book Agile Testing by Lisa Crispin and Janet Gregory.

Charles Bradley made a similar suggestion, coupled with extracting a promise from management in advance. He suggested,

Your time is limited, so maximize the ROI in whatever way seems best from a business perspective. Manually test the hell out of it, and try to get the decision makers above you to agree that they will NEVER EVER DO THIS TO YOUR TEAM again... and instead, they will PLAN IN time (and money) to automate tests... like AS SOON AS work on the next release begins or maybe even as post-release bug fixing begins.

Thus, the current situation might not be the best time to build an entire test harness, and the team might be better off with manual testing. This, however, does not diminish the importance of building a proper test harness at the first opportunity. As Jonathan Rasmusson put it,

All you can do is fix the bugs, and then manually test as best you can before you go live. That's about all you can do at this point. The bigger and much more important question is what you do the day AFTER you hit your three week deadline.


Community comments

  • Testing legacy web applications

    by Adam Nemeth,


    Recently, a whole module of a legacy web application, written in PHP4 around 10 years ago (and constantly "maintained" since), needed new features.

    We're talking about thousands of lines of code within one function, or even within a single case of a switch statement.

    Most of the time, people don't refactor maintained legacy applications, as somebody told me "the first rule of support development is: don't change anything other than requested, just add your stuff."

    I haven't been able to track the application back to its beginnings, but it's evident to me that if-else branches don't grow to 600 lines by themselves, without human intervention. Somebody had to mess these up, and somebody had to think this way. This is pretty common in enterprise programming:
    - most of the tasks are about legacy applications
    - people fear to clean things up
    - it's not about development, but about adding requested features and fixing bugs.

    Also, PHP is a dynamic language, and therefore formal refactoring tools are usually unavailable. For example, PHP refactoring support in NetBeans is basically non-existent.

    So, what would you do here?

    I decided that a system's answer depends only on its input and its context. This seems pretty straightforward:

    System(input, context) -> output

    OK, what is the input of a web application? Of course, it's the HTTP request! In PHP, it's hard to think of any other input.

    What's the context? The context basically consists of two components: the underlying platform, whatever it is (whether you have a framework or just common libraries, we call these together the platform), and the persistent data layer. So:

    Web app (request, persistent data) -> answer

    What's the answer? First, it's HTML (or XML, JSON, etc.) output. We didn't have to care about that in this particular case. The other output is changes to the persistence layer; it's unusual for web applications to change anything other than their database and cache layers. So:

    Web app(request, persistent data) -> (written-out response, persistent data')

    OK, what to do? We have the old system and we want to refactor it into a new system, and the question is: are they equal in functionality?

    The question is: Web app == Web app'?

    Let's see what I did:
    - Asked a manual tester to go through every possible combination on the user interface
    - Recorded these requests into files (serialize($_REQUEST)), or, even better, (serialize($GLOBALS))
    - Told the DB layer NOT to write anything to the DB (an ugly global-variable hack: when the flag is present, only SELECT queries are executed); this way we ensure that we keep a consistent state
    - Recorded every writing operation (so, instead of executing them, we take note of them)
    - An algorithm (sketched in code below):
    1) load the serialized request,
    2) start recording db writes,
    3) run the original controller,
    4) collect the db recordings,
    5) re-load the request (in case it was modified by the original controller - we can never know),
    6) run the new controller,
    7) collect the db recordings,
    8) see if the two are equal
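
    A compressed sketch of how steps 1-8 might look; DbRecorder, run_original_controller(), and run_new_controller() are hypothetical stand-ins for the pieces described above:

        <?php
        // Replay one recorded request against both controllers and compare
        // the DB writes each of them attempts (steps 1-8 above).
        function replayAndCompare(string $fixtureFile): bool
        {
            // 1) load the serialized request captured during manual testing
            $_REQUEST = unserialize(file_get_contents($fixtureFile));

            // 2) intercept writes: SELECTs still run, INSERT/UPDATE/DELETE are recorded
            DbRecorder::startRecording();   // hypothetical recorder

            // 3) run the original controller (hypothetical entry point)
            run_original_controller();

            // 4) collect what the old code tried to write
            $oldWrites = DbRecorder::collect();

            // 5) re-load the request - the old controller may have modified it
            $_REQUEST = unserialize(file_get_contents($fixtureFile));

            // 6) + 7) run the new controller and collect its writes
            DbRecorder::startRecording();
            run_new_controller();
            $newWrites = DbRecorder::collect();

            // 8) identical write sequences => behaviour preserved for this input
            return $oldWrites === $newWrites;
        }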

    This way I could be sure that, in all the scenarios a manual tester could come up with, both controllers behave the same way.

    After the original recordings, I added a few final steps (also sketched below):
    9) re-load the request again
    10) enable db writing
    11) run the new controller
    12) display result.
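
    Steps 9-12 might then look like this, continuing the same hypothetical sketch:

        <?php
        // Once old and new controllers agree, replay the request once more
        // with real DB writes enabled and show the result (steps 9-12).
        if (replayAndCompare($fixtureFile)) {
            $_REQUEST = unserialize(file_get_contents($fixtureFile)); // 9) re-load the request
            DbRecorder::stopRecording();                              // 10) enable db writing
            run_new_controller();                                     // 11) run the new controller
            // 12) whatever the controller printed is the displayed result
        } else {
            error_log("Behaviour diverged for $fixtureFile; keeping the old controller");
        }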

    This way I could create a slower, but seemingly normally functioning, version of the software, which did everything it did previously, and it was verified that functionality hadn't changed with the new controller.

    I called this a blackbox-harness test.

    What do you think?

  • Re: Testing legacy web applications

    by Eran Harel,


    This is nonsense.

    You can't just throw the blame on the team, and the legacy code, and then look for a magical solution that will give you proper test coverage and find your bugs.

    Instead, I suggest you start becoming more responsible for your monstrous creation - refactor, unit test, and improve your code bit by bit.

    Manual tests and auto-generated tests will lead you nowhere.

  • Re: Testing legacy web applications

    by Assaf Stone,


    There is no blame here - not on the development team (who didn't know any better) and not on the managers (who don't understand any of this to begin with); just a solution to a problem.

    As for your suggestion - you're right, but before you can refactor, you need some kind of safety net; otherwise you don't know whether your refactorings will work or break the code. This is exactly what he did.

    Finally, manual tests are a must; not everything can be automated. And generated tests are much better than none.
