Will Machiel van der Bijl make manual software testing obsolete?
Machiel van der Bijl from the University of Twente in the Netherlands recently introduced an approach that is supposed to automate software testing. According to van der Bijl:
Software testing easily accounts for a third to half of total development costs. Our automated method can improve product quality and significantly shorten the testing phase, thereby greatly reducing the cost of software development.
In software engineering, testers are responsible for obtaining information about a system so that architects and developers can assess quality aspects. Thus, testing represents an important safety net for architecture and design activities. Unfortunately, development projects need to cope with the tradeoff between quality assurance and the amount of resources required for sufficient testing. One of the main reasons is that many tests have to be created manually. Testing activities are therefore often reduced to save money and time. Or, as a test expert once said, “products are not released. They escape!”
Model-based testing is the application of model-based design to designing and executing the artifacts needed to perform software testing. This is achieved by having a model that describes all aspects of the testing data, mainly the test cases and the test execution environment. Usually, the testing model is derived in whole or in part from a model that describes some (usually functional) aspects of the system under development.
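As a rough illustration of the idea (a minimal sketch, not van der Bijl's actual tooling), test cases can be derived mechanically from a model of the system's behavior. Here the model is a hypothetical two-state login component expressed as a finite-state machine, and test cases are all action sequences the model allows up to a given length:

```python
# Hypothetical sketch: deriving test cases from a finite-state model.
# The login model below is illustrative only; it is not taken from
# van der Bijl's work.

# Model: state -> {action: next_state}
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "logged_out"},
    "logged_in":  {"logout": "logged_out"},
}

def generate_tests(model, start, depth):
    """Enumerate every action sequence of length <= depth the model allows."""
    tests = []
    frontier = [([], start)]          # (path so far, current state)
    for _ in range(depth):
        next_frontier = []
        for path, state in frontier:
            for action, next_state in model[state].items():
                tests.append(path + [action])
                next_frontier.append((path + [action], next_state))
        frontier = next_frontier
    return tests

tests = generate_tests(MODEL, "logged_out", 2)
print(tests)  # 5 test sequences, e.g. ['login_ok', 'logout']
```

Each generated sequence would then be executed against the real system, with the model predicting the expected outcome at every step.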
It needs to be mentioned, however, that many testing experts, such as Jeff Fry, have also identified liabilities of model-based testing. Thus, the approach must first prove its feasibility and usability in practice.
Recently, van der Bijl has founded a company called Axini that offers support for test automation to customers.
It's not the testing, it's the fixing
Testing is the thermometer of quality - just testing something doesn't change the quality. Applying a quality process (for example applying TDD in a disciplined way) when building the product is the thermostat of quality - that's how you build quality in.
Model-driven testing MIGHT save some time in the test design activities, and automated test execution is a good idea in the appropriate circumstances, but quality has to be built into the product from the beginning to significantly reduce the "test, rework, retest, re-rework, re-retest" cycle that many projects get into.
My short summary
- He is terrible at explaining what he has done to radically improve model-based testing
- He offers zero empirical evidence that his teams have reduced testing from 30 to 60% of development effort to anything less
- He doesn't explain what kind of problem domains his team was tackling, what in particular was difficult to test, or give an example of how model-based testing makes that difficulty easy
- To explain how model-based testing works, he gives a verbose, confusing summary of Harry Robinson's work on Google Maps route testing, without mentioning Harry's work or Google Maps.
MBT for thoroughly testing complex software
Machiel Van der Bijl
I understand the hunger for more details. The snippet in the link provided by the PR people of the University of Twente is more of a teaser to whet your appetite :) (I am Machiel, the PhD guy, btw).
I find Model Based Testing important because it is a successful test automation technique that makes it economically feasible to test thoroughly and quickly. It automates the three phases of testing: test case creation, test case execution and the evaluation of the outcome of the test execution. No other testing technique is capable of this as far as I know.
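A minimal sketch of those three automated phases, using a hypothetical toggle switch as the system under test (the model, names, and SUT are illustrative assumptions, not Axini's actual approach):

```python
# Hypothetical sketch of the three automated MBT phases:
#   1) generate a test case from a model,
#   2) execute it against the system under test (SUT),
#   3) evaluate by comparing SUT state with the model's prediction.
# The toggle model and SUT are illustrative only.

MODEL = {"off": {"press": "on"}, "on": {"press": "off"}}

class ToggleSUT:
    """A trivial implementation standing in for the real system."""
    def __init__(self):
        self.state = "off"
    def press(self):
        self.state = "on" if self.state == "off" else "off"

def generate(model, start, length):
    """Phase 1: derive one test case (an action sequence) from the model."""
    path, state = [], start
    for _ in range(length):
        action, state = next(iter(model[state].items()))
        path.append(action)
    return path

def run_and_evaluate(model, start, actions):
    """Phases 2 and 3: drive the SUT and check each step against the model."""
    sut, expected = ToggleSUT(), start
    for action in actions:
        getattr(sut, action)()              # phase 2: execute on the SUT
        expected = model[expected][action]  # model's predicted state
        if sut.state != expected:           # phase 3: evaluate the verdict
            return False
    return True

print(run_and_evaluate(MODEL, "off", generate(MODEL, "off", 3)))  # True
```

The point of the sketch is that a human writes the model once; generation, execution, and evaluation then run without manual intervention.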
The reason so much time and effort goes into software testing is that thorough testing takes a lot of effort. This is based on our own measurements and experience; furthermore, companies like Gartner claim the same thing. One part is the testing effort itself. We're in the complex software business (medical systems, train systems, pension systems, etc.), and testing complex software requires many tests. The creation of tests, test designs, etc. alone takes a lot of time. However, a big part of the time and effort is in the test-execution phase. Take a system test. At the time a system test takes place, the development team is at its biggest, and they have to wait while the system test is performed. Executing the system test takes quite some time; for complex software we're talking man-weeks or man-months. During the tests bugs are found, and they have to be fixed and re-tested. This cycle of test, bugfix, re-test is also known as hardening. Hardening can take months. The same thing holds for hardening at the integration level.
This is where MBT shines. Instead of days or weeks you get an answer within hours, shortening the hardening cycle significantly. Furthermore, MBT already finds bugs while creating the model. Since models can be simulated and analyzed, we provide early feedback on the design, thus preventing errors from occurring.
Sorry to hear that the maps metaphor was confusing. People who do not know anything about model-based testing, or software in general, have a hard time grasping the concept. For them we use the analogy of software testing with checking the map of a city. It is by no means related to testing Google Maps; it's just an example, nothing more. In our experience it helps people comprehend what MBT is about: thorough and fast testing.
I hope this explains things a bit. If you have any questions, I'm glad to answer them.