Interview and Book Excerpt: "Model Based Software Testing and Analysis with C#"
InfoQ was given the opportunity to speak with the authors, Jonathan Jacky, Margus Veanes, Colin Campbell and Wolfram Schulte, who chose to respond to the questions collectively. Their book, Model Based Software Testing and Analysis with C#, was recently published by Cambridge University Press, which provided InfoQ with Chapter 1, "Describe, Analyze, Test".
InfoQ: What were your goals in writing this book?
Authors: We wanted to provide up-to-date information on a particular, practical approach to model-based testing that is used successfully inside Microsoft. We also wanted to provide educators with resources so that they can start teaching model-based testing, since we believe that's not done enough.
The idea of using model programs and state space exploration is a natural extension of finite state machine based techniques that fits well with black-box testing of software.
The book makes this idea accessible to a broad audience.
InfoQ: Why Model Based testing vs. Unit testing?
Authors: Unit testing is, as the name implies, for testing one unit at a time. Model-based testing, however, is often used for testing the interaction of components. For example, model-based testing is particularly strong when you test protocols for interoperability. For protocol testing, you don't describe the structure of the implementation, but rather, in black-box style, how a sender and a receiver communicate.
Model-based testing is complementary to unit testing. It helps to expose errors that only emerge when several units are put together.
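The book develops these ideas with model programs in C#. Purely as an illustration, and not code from the book, the following Python sketch shows the core idea: a model program for a hypothetical bounded message channel between a sender and a receiver, described only by its state and enabled actions, from which interaction test sequences are enumerated and can then be replayed against a black-box implementation.

```python
# Illustrative sketch (not from the book): a model program for a
# bounded sender/receiver channel, used to generate interaction tests.

CAPACITY = 2  # assumed channel capacity for this toy example

class ChannelModel:
    """Model program: state plus enabled actions, no implementation detail."""
    def __init__(self):
        self.buffer = []

    def enabled(self):
        """Actions the model allows in the current state."""
        actions = []
        if len(self.buffer) < CAPACITY:
            actions.append(("Send", "msg"))
        if self.buffer:
            actions.append(("Receive", self.buffer[0]))
        return actions

    def apply(self, action):
        """Advance the model state by one action."""
        name, _arg = action
        if name == "Send":
            self.buffer.append("msg")
        else:  # Receive
            self.buffer.pop(0)

def sequences(depth):
    """Enumerate every action sequence of the given length the model allows."""
    def walk(model, trace, d):
        if d == 0:
            yield trace
            return
        for a in model.enabled():
            nxt = ChannelModel()
            nxt.buffer = list(model.buffer)  # copy state, then step
            nxt.apply(a)
            yield from walk(nxt, trace + [a], d - 1)
    yield from walk(ChannelModel(), [], depth)

tests = list(sequences(3))
# Each generated sequence would then be replayed against the real sender
# and receiver, failing the test if the implementation rejects an action
# the model allows.
```

Note that nothing here describes the implementation's structure; the model only says how the two parties may interact, which is what makes this style a fit for black-box protocol testing.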
InfoQ: What is Model Based Analysis?
Authors: Model-based analysis uses a model program --- a kind of executable specification --- to check specifications or designs, including communication protocols for example. Since the model is executable it can be checked before the implementation is available, which can save time, frustration and costly rework.
Model-based analysis includes safety analysis (checking that bad things never happen) and liveness analysis (checking that good things eventually do happen). This is similar to what is known as model checking.
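As a concrete illustration of safety analysis (again a hypothetical Python sketch, not code from the book), one can exhaustively explore a model program's reachable states and check an invariant in every state, much as a model checker would:

```python
# Illustrative sketch (not from the book): safety analysis by exhaustive
# exploration of a model's reachable states.  The model: a bounded
# channel holding n messages; the safety property: no over/underflow.

from collections import deque

CAPACITY = 2  # assumed capacity for this toy model

def enabled(n):
    """Actions allowed when n messages are buffered."""
    acts = []
    if n < CAPACITY:
        acts.append("Send")
    if n > 0:
        acts.append("Receive")
    return acts

def step(n, act):
    """Successor state after one action."""
    return n + 1 if act == "Send" else n - 1

def invariant(n):
    """Safety property: the buffer never over- or underflows."""
    return 0 <= n <= CAPACITY

def explore(initial=0):
    """Breadth-first exploration of all reachable states."""
    seen, frontier, bad = {initial}, deque([initial]), []
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            bad.append(s)
        for a in enabled(s):
            t = step(s, a)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen, bad

states, violations = explore()
# Here the invariant holds in every reachable state, so `violations`
# is empty; a counterexample trace would point at a specification bug
# before any implementation exists.
```

Because the model is executable, this check runs long before an implementation is written, which is exactly where the time and rework savings come from.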
InfoQ: From your experience, what is the most difficult scenario to test for?
Authors: Nondeterministic (distributed) systems, when the same sequence of controllable inputs may lead to many possible valid observable outputs. In this case a particular test run is hard to reproduce and the same test case may expose a bug only occasionally. Moreover, it may be hard to measure coverage and hard to know when to stop testing.
On-the-fly testing with a model provides a way to handle this situation.
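To make the on-the-fly idea concrete (a hedged Python sketch with invented names, not code from the book): instead of precomputing test cases, the tester steps the implementation and the model together, and at each step checks that the observed output is one of the outputs the model allows, so any valid nondeterministic choice passes and any invalid one fails immediately.

```python
# Illustrative sketch (not from the book): on-the-fly conformance
# testing of a nondeterministic implementation against a model.

import random

class NondetServer:
    """Stand-in implementation: the response is nondeterministic."""
    def request(self):
        return random.choice(["cached", "fresh"])

class Model:
    """The model permits either response to a request."""
    def allowed_responses(self):
        return {"cached", "fresh"}

def on_the_fly_test(impl, model, steps=100):
    """Step the implementation, checking each observation against the model."""
    for _ in range(steps):
        observed = impl.request()              # controllable input + observation
        if observed not in model.allowed_responses():
            return False, observed             # conformance failure, with witness
    return True, None

ok, witness = on_the_fly_test(NondetServer(), Model())
```

Since the check happens as each output is observed, there is no need to reproduce a particular nondeterministic run: whichever valid branch the system takes, the model follows it.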
InfoQ: What mistake do developers make most often when testing code?
Authors: Neglecting to test for the borderline/unexpected cases, and neglecting to test deep method calls, are common mistakes. Another common mistake is neglecting to test interaction with other code, where that interaction causes emergent behavior that is not present when the code is tested in isolation. This is related to not having clear rules of responsibility, and is a mistake of a set of developers and testers as a whole rather than an individual developer or tester.
Yet another common mistake is incorporating bias into the test suite, by neglecting to test for cases that the tester did not consider, or did not think were important. Generating tests automatically from a model can overcome some biases.