
Will Machiel van der Bijl make manual Software Testing obsolete?

| by Michael Stal on Jun 11, 2011. Estimated reading time: 1 minute |

Machiel van der Bijl from the University of Twente in the Netherlands recently introduced an approach intended to automate software testing. According to van der Bijl:

Software testing easily accounts for a third to half of total development costs. Our automated method can improve product quality and significantly shorten the testing phase, thereby greatly reducing the cost of software development.

In software engineering, testers are responsible for obtaining information about a system so that architects and developers can assess its quality attributes. Testing thus represents an important safety net for architecture and design activities. Unfortunately, development projects must cope with the tradeoff between quality assurance and the resources required for sufficient testing. One of the main reasons is that many tests have to be created manually, so testing activities are often cut back to save money and time. Or, as a test expert once put it: "products are not released. They escape!"

Van der Bijl claims to have found a way to automate testing and make it less expensive. His approach leverages model-based testing. According to Wikipedia:

Model-based testing is the application of Model based design for designing and executing the necessary artifacts to perform software testing. This is achieved by having a model that describes all aspects of the testing data, mainly the test cases and the test execution environment. Usually, the testing model is derived in whole or in part from a model that describes some (usually functional) aspects of the system under development.
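The idea behind this definition can be made concrete with a minimal sketch (a hypothetical illustration, not van der Bijl's actual tooling): the intended behavior of the system is captured as a finite state machine, and test cases are derived mechanically by walking paths through that model. The turnstile model, state names, and helper functions below are all assumptions made for the example.

```python
# A minimal model-based-testing sketch: the system's intended behaviour is
# modelled as a finite state machine, and test cases are derived by
# enumerating paths through the model.

# Model of a coin-operated turnstile: (state, action) -> next state
MODEL = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(state, actions):
    """Replay a sequence of actions on the model and return the final state."""
    for a in actions:
        state = MODEL[(state, a)]
    return state

def generate_tests(start, depth):
    """Derive every action sequence of the given length as a test case."""
    tests = [[]]
    for _ in range(depth):
        tests = [t + [a]
                 for t in tests
                 for (s, a) in MODEL
                 if run(start, t) == s]
    return tests

for test in generate_tests("locked", 2):
    print(test, "-> model expects:", run("locked", test))
```

Each generated sequence doubles as its own oracle: the model's final state is the expected observation, which is what lets the evaluation step be automated as well.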

It should be mentioned, however, that testing experts such as Jeff Fry have also identified liabilities of model-based testing. The approach must therefore first prove its feasibility and usability in practice.

Recently, van der Bijl has founded a company called Axini that offers support for test automation to customers.


It's not the testing, it's the fixing by Shane Hastie

Testing doesn't take "a third to a half of total development costs" - fixing the defects that testing finds is what takes the time. Quality compromises are not because testing takes too long, they are made because someone decides it will be too expensive to FIX the bad work done pre-testing.
Testing is the thermometer of quality - just testing something doesn't change the quality. Applying a quality process (for example applying TDD in a disciplined way) when building the product is the thermostat of quality - that's how you build quality in.
Model-driven testing MIGHT save some time in the test design activities, and automated test execution is a good idea in the appropriate circumstances, but quality has to be built into the product from the beginning to significantly reduce the "test-rework-retest" cycle that many projects get into.

My short summary by john zabroski

- I cannot find this guy's Ph.D. thesis anywhere
- He is terrible at explaining what he has done to radically improve model-based testing
- He presents zero empirical evidence that his teams have gone from the cited 30-60% testing effort to anything less.
- He doesn't explain what kind of problem domains his team was tackling, and what in particular was difficult to test and an example of how model-based testing makes that difficulty easy
- To explain how model-based testing works, he gives a verbose, confusing summary of Harry Robinson's work on Google Maps route testing, without mentioning Harry's work or Google Maps.


Re: It's not the testing, it's the fixing by Jonathan Allen

On the project I'm currently working on, the vast majority of the problems can be blamed on deferred maintenance. If we had just spent a few weeks on code cleanup and bug fixes six months ago, then we wouldn't have QA screaming at us on a daily basis. But the TLs keep telling us there is no time to do a good job, as if we are somehow going to magically get more time down the road.

MBT for thoroughly testing complex software by Machiel Van der Bijl

John, thanks for adding the link to my thesis.

I understand the hunger for more details. The snippet in the link provided by the PR people of the University of Twente is more of a teaser to whet your appetite :) (I am Machiel, the PhD guy btw).

I find Model Based Testing important because it is a successful test automation technique that makes it economically feasible to test thoroughly and quickly. It automates the three phases of testing: test case creation, test case execution and the evaluation of the outcome of the test execution. No other testing technique is capable of this as far as I know.
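The three phases named above can be sketched end to end with a toy example (again a hypothetical illustration, not Axini's product): a test case is generated from the model, executed against the system under test, and evaluated with the model acting as the oracle. The `Turnstile` class, its deliberate bug, and the `verdict` helper are all assumptions made for this sketch.

```python
# Hypothetical end-to-end sketch of the three automated phases: test case
# creation, execution, and evaluation (the model serves as the test oracle).

MODEL = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

class Turnstile:
    """System under test, seeded with a deliberate bug: push never re-locks."""
    def __init__(self):
        self.state = "locked"

    def step(self, action):
        if action == "coin":
            self.state = "unlocked"
        # Bug: "push" should lock the turnstile again, but does nothing.
        return self.state

def verdict(actions):
    """Execute one test case and compare each observation to the model."""
    model_state, sut = "locked", Turnstile()
    for a in actions:
        model_state = MODEL[(model_state, a)]
        if sut.step(a) != model_state:
            return "fail"
    return "pass"

print(verdict(["coin", "push"]))  # model expects a re-lock; the buggy SUT stays unlocked
print(verdict(["push", "push"]))
```

Because every step is checked against the model rather than against hand-written expectations, adding more generated test cases costs no extra evaluation effort.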

The reason so much time and effort goes into software testing is that thorough testing takes a lot of effort. This is based on our own measurements and experience; furthermore, companies like Gartner claim the same thing. One part is the testing effort itself. We're in the complex-software business (medical systems, train systems, pension systems, etc.), and testing complex software requires many tests. The creation of tests, test designs, etc. alone takes a lot of time. However, a big part of the time and effort is in the test-execution phase. Take a system test. At the time a system test takes place, the development team is at its biggest. They have to wait while the system test is performed. Executing the system test takes quite some time; for complex software we're talking man-weeks or man-months. During the tests bugs are found, and they have to be fixed and re-tested. This cycle of test, bugfix, re-test is also known as hardening. Hardening can take months. The same goes for hardening at the integration level.

This is where MBT shines. Instead of days or weeks you get an answer within hours, shortening the hardening cycle significantly. Furthermore, MBT already finds bugs while the model is being created. Since models can be simulated and analyzed, we provide early feedback on the design, thus preventing errors from occurring.

Sorry to hear that the maps metaphor was confusing. People who do not know anything about model-based testing, or software in general, have a hard time grasping the concept. For this we use the analogy of checking the map of a city. It is by no means related to testing Google Maps; it's just an example, nothing more. In our experience it helps people comprehend what MBT is about: thorough and fast testing.

I hope this explains things a bit, if you have any questions I'm glad to answer them.
