
Q&A on the Book Testing in the Digital Age


Key Takeaways

  • Testing in the digital age brings a new vision on test engineering, using new quality attributes that address intelligent machines and a roadmap split into five hops
  • With everything digital, there are more possibilities for test automation, and piles of (test) data are growing out of control. Smart test dashboards help get this problem under control
  • Working together with robots (cobotics), using artificial intelligence in testing, and eventually predicting the occurrence of defects brings your testing into the digital age
  • Complex systems cannot be tested within a specific timeframe with acceptable coverage. We need to make the test sets smarter, or even make use of AI to generate test cases for us
  • Collecting data is very important in testing with AI; we must distinguish between data needed for learning, for testing and for operating

The book Testing in the Digital Age by Tom van de Ven, Rik Marselis, and Humayun Shaukat explains the impact that developments like robotics, artificial intelligence, the internet of things, and big data are having on testing. It explores the challenges and possibilities that the digital age brings us when it comes to testing software systems.

InfoQ readers can download a sample of the book Testing in the Digital Age.

InfoQ interviewed Van de Ven about the changes and challenges that the digital age has brought us, testing with artificial intelligence (AI) and testing of AI, recent developments in testing for security and privacy, and developing digital test engineering skills.

InfoQ: What made you decide to write this book about testing?

Tom van de Ven: The starting point was the fact that digital is put in front of a lot of common words: digital transformation, digital sports, digital music, digital post, digital twin.

This made me curious as to what makes existing products and services get a digital variant. What makes it digital? For a written letter, it is obvious that the digital version can be an e-mail or WhatsApp message, but for a lot of other things it isn’t immediately clear.

This got me thinking about how test processes need to change with all this digital around us. Is digital testing different from regular testing? Rest assured, a lot of what we know about testing is still valid in the digital age :)

InfoQ: For whom is the book intended?

Van de Ven: Heads of engineering departments, R&D managers, and the engineers doing the work. By engineers we mean not only test engineers, but all engineers involved in product development. They can all use a quality boost and good tips and tricks to keep on improving testing in the digital age.

InfoQ: What changes is the digital age bringing us?

Van de Ven: A factory with robotic arms assembling products must work around the clock and cannot afford a single error; robots must live up to that. The interaction with a chatbot on an insurance website must also be available 24/7. This robot must give the user the experience of talking to a real person, one who has all the information at hand you can possibly think of.

What these digital solutions have in common is the demand for a high level of quality. The customer operating the robots (whether in a factory or on a website) demands an almost immediate response from the supplier when problems are found. A robotic arm that stops or performs the wrong operations halts the assembly line; a chatbot going offline might mean losing potential new customers.

Digital solutions can help us in a great way. They are characterized by:

  1. a high level of quality
  2. the ability to receive updates quickly
  3. non-stop operation

From these three characteristics of digital solutions, we can break down the impact on testing as follows:

  • Complexity
    Digital products and services tend to communicate with each other much more easily. Interfaces are everywhere, and with all combinations possible, the complexity of functions grows rapidly. With new technologies around us, the complexity within a single product is growing as well. Computational power is available at a relatively low price nowadays and can run increasingly complex algorithms at high speeds.
  • Speed
    The market moves increasingly fast. Updates and new product features need to be released on a weekly basis. Terms like continuous delivery and continuous integration are popping up everywhere.
  • Large amounts of data
    Data storage is cheap. Internet of Things makes it possible to measure anywhere; this creates large amounts of data. The data can be used as input for new services, but also for testing.
    On the other hand, in the digital age we can test a lot more automatically. The amount of test products (test cases, test results, etc.) is growing just as fast. We need to cope with this as well.
  • AI
    Artificial Intelligence is available on a variety of digital platforms (think IBM Watson, Microsoft Azure, and also open-source libraries in Python). Testing AI solutions is different from classical testing, where a clear result is compared with a prediction; AI might give different answers every day.

InfoQ: How do these changes challenge the way that software is tested?

Van de Ven: Complex systems cannot be tested within a specific timeframe with an acceptable coverage. We need to make the test sets smarter, or even make use of AI to generate test cases for us.

A good example of generating test cases is the use of an evolutionary algorithm to test automatic parking on a car. With automatic parking, the number of situations the car can be in is nearly infinite: the starting position may vary, surrounding cars may be positioned in many different ways, and other objects that must not be hit may be around the car. The automatic parking function may not hit anything while parking, and the car needs to end up parked correctly.

We can generate a series of starting positions that the automatic parking function needs to tackle. Ideally this is virtual, so we can run a lot of tests quickly; physical tests are possible too, but test execution would take more time. We then define a fitness function that is evaluated with each test execution run. In this case it would be a degree of passing for the parked car: some points for not hitting anything, and points for how well the car is parked in the end.

Now we generate a series of tests and run them. Each outcome is evaluated and assigned a total points value. Based on this value, the evolutionary algorithm decides which test cases go through and which ones are no longer interesting. We break the test cases up (for example, half of the variables per test case get split off) and recombine them into new test cases. We might add a new series of random test cases to this mix, and perhaps apply some random mutation (the mutation rate and other evolutionary variables can be tuned as well). The new set of test cases is run and the outcomes evaluated. Each series is called a generation, and with evolutionary algorithms we try to optimize the fitness function in order to find a situation where the parking function hits an object or cannot park any more.

In this case we use AI to iterate quickly towards possible defects. If no faults occur after a (predefined) number of generations, we conclude the parking function operates accurately enough.
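The generate-evaluate-recombine loop described above can be sketched in a few lines of Python. Everything here is an illustrative stand-in, not the book's implementation: the scenario encoding (two obstacle offsets around the car) and the fitness function (rewarding tighter gaps as "harder" test cases) are invented placeholders for a real parking simulation.

```python
import random

# Hypothetical test scenario: x-offsets (in metres) of two surrounding cars.
def random_scenario():
    return [random.uniform(-2.0, 2.0) for _ in range(2)]

def fitness(scenario):
    # Stand-in for the real evaluation: run the (virtual) parking simulation
    # and score the result. Here a tighter gap between the surrounding cars
    # counts as a "fitter" test case, since it is more likely to expose a defect.
    gap = abs(scenario[0] - scenario[1])
    return 1.0 / (gap + 0.01)  # higher fitness = harder scenario

def evolve(generations=20, pop_size=30, mutation_rate=0.1):
    population = [random_scenario() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the test cases.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Crossover: recombine variables from two surviving scenarios.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [a[0], b[1]]
            # Mutation: occasionally perturb one variable.
            if random.random() < mutation_rate:
                child[random.randrange(2)] += random.uniform(-0.5, 0.5)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

hardest = evolve()
print(hardest)  # the most demanding starting position found
```

In a real setup the fitness function would come from the simulator (collision yes/no, final parking accuracy), and a run would stop early as soon as a scenario makes the parking function fail.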

Collecting data is increasingly important for testing:

  • For the use of AI in testing we need data. A self-learning mechanism needs data fed into it to learn. We distinguish between data needed for learning, for testing, and for operating.
  • Data in testing: we are collecting loads of test-related data like test cases, results, and defects. We have to collect the right test-related data in order to show where problems lie, or where to look when data is spread across a wide variety of sources (the defect management system, test management system, requirements sources, etc. may all live in different tools). We need to delve through test data and create smart test dashboards. By classifying all the different data everywhere, a smart dashboard might learn from changes in the code (check-ins) and find relevant test cases to run and test results to look at.
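As a minimal sketch of that last idea, the snippet below selects test cases based on a hard-coded, hypothetical history linking source files to the tests that previously caught defects in them. A real smart dashboard would learn and continuously update this mapping from the defect and test management systems rather than keep it static.

```python
# Hypothetical historical data: for each source file, the test cases that
# failed when that file changed in the past. All names are invented.
history = {
    "parking/controller.py": {"test_park_tight_gap", "test_park_reverse"},
    "parking/sensors.py": {"test_sensor_noise", "test_park_tight_gap"},
    "chatbot/dialog.py": {"test_dialog_fallback"},
}

def relevant_tests(changed_files):
    """Collect the test cases historically correlated with the changed files."""
    selected = set()
    for path in changed_files:
        selected |= history.get(path, set())
    return sorted(selected)

# A check-in touching the sensor code pulls in the tests that matter for it.
print(relevant_tests(["parking/sensors.py"]))
```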

Finally, we need to create continuous testing for CI/CD with full test automation, even in the physical domain with robots: cobotics! Automated test sets can be put in place with all kinds of test automation frameworks, but testing in the physical domain might require a robot doing actual tasks. For example, the duration test of an airplane cockpit (in a simulated environment) can largely be executed by a robot. Starting up and tearing down the system might still be done by an actual person, but flying for hours on end without many changes going on is a good job for a robot. Define what to look out for, collect the right data when defects occur, and your robot is good to go.

InfoQ: How can we “test” if an AI system is actually learning?

Van de Ven: Let’s clearly state there is a difference between AI in testing and testing AI. The use of AI in testing takes shape in many different ways, like the use of an evolutionary algorithm or machine learning. Fully in-depth knowledge of each algorithm may not be needed to operate it in a test environment, but we do need to know what parameters can be manipulated and to what effect. Thinking about the data needed to teach your AI system in testing is very important here. Keep in mind that AI test systems can find local problems quickly, but may still miss areas! For now, AI plays a supporting role in test engineering; humans are still in charge.

Now, look at testing AI. This is also difficult. An AI solution may not give the exact same answer in the same situation; time may have an impact here, but so may what the system learnt in between. We need to start using windows of correct answers to check whether a test has passed or failed, instead of comparing to exact results. Knowledge of the AI mechanism also helps in finding situations to test. Make sure that you understand the difference between learning data, test data, and actual data!
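A window of correct answers can be as simple as a tolerance band around an expected value. The sketch below assumes a hypothetical price-estimating model; the function name, values, and tolerance are illustrative only.

```python
def within_window(actual, expected, tolerance):
    """Pass if the AI's answer lies within expected +/- tolerance."""
    return abs(actual - expected) <= tolerance

# The model may answer 102 today and 97 tomorrow; both should pass.
assert within_window(102.0, expected=100.0, tolerance=5.0)
assert within_window(97.0, expected=100.0, tolerance=5.0)
# A clearly wrong answer still fails.
assert not within_window(130.0, expected=100.0, tolerance=5.0)
print("window checks passed")
```

For non-numeric outputs (a chatbot reply, a classification) the same idea applies, but the "window" becomes a set of acceptable answers or a similarity threshold instead of a numeric band.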

The most difficult combination, where we start testing AI solutions with AI technology, is something we have to postpone for a while, until we’ve learnt more about both!

InfoQ: How can we use artificial intelligence in testing?

Van de Ven: The first example of AI in testing was already given with automatic parking. We can also look at test case selection: generating the set of test cases that best fits the changes to the code or product. Changes in the code may link easily to test cases, but AI may also find correlations from code to tests that we would not have found ourselves, and that really help prevent defects occurring in the field.

Another example is the use of AI with respect to test environment setups. Collect data from field use and generate two or three test environments that can be built for testing and that cover the largest share of what is out there. With AI (an evolutionary algorithm, for example), this can be sped up and leads to interesting setups. A setup generated in this way may not exist in real life but can really help testing. On the other hand, always think about real environments that you want covered as well.
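One way to sketch this selection step is a greedy set-cover heuristic: repeatedly pick the candidate environment that covers the most field configurations not yet covered. The answer above mentions an evolutionary algorithm; greedy cover is a simpler stand-in used here only to illustrate the goal, and the coverage data is invented.

```python
# Hypothetical data: which field configurations (by id) each candidate
# test environment would cover.
coverage = {
    "env_a": {1, 2, 3},
    "env_b": {3, 4},
    "env_c": {4, 5, 6},
}

def pick_environments(coverage, max_envs):
    """Greedily choose up to max_envs environments maximising total coverage."""
    covered, picked = set(), []
    for _ in range(max_envs):
        # Take the environment that adds the most not-yet-covered configs.
        best = max(coverage, key=lambda e: len(coverage[e] - covered))
        if not coverage[best] - covered:
            break  # nothing new to gain
        picked.append(best)
        covered |= coverage[best]
    return picked, covered

envs, covered = pick_environments(coverage, max_envs=2)
print(envs, covered)
```

With this data, two environments suffice to cover all six field configurations; an evolutionary approach would explore recombined setups rather than only the fixed candidates listed here.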

InfoQ: What are recent developments when it comes to testing for security and privacy? What will the future bring us?

Van de Ven: We see the use of AI in security testing coming in. There are contests at hacker conferences that stimulate the use of AI in this area. A very speedy security test can be created in this way: with test automation and a good AI feedback loop, the weak spots are found very fast through iteration.

Let’s look at the big hacker conference DEFCON. Already in 2016, a contest was held where contestants needed to find security breaches using AI. From a defensive point of view, cyber-security professionals already use a great deal of automation and machine-powered analysis, yet the offensive use of automated capabilities is also on the rise. For example, OpenAI Gym, the open-source toolkit for reinforcement learning algorithms, can be used in an automated tool that learns how to mask a malicious file from antivirus engines by changing just a few bytes of its code in a way that maintains its malicious capacity. This allows it to evade common security measures, which typically rely on file signatures – much like a fingerprint – to detect a malicious file.

InfoQ: What are digital test engineering skills, and what can be done to develop them?

Van de Ven: In the future the role of test engineers will change: we will speak of test engineering instead of test engineers. The single, dedicated test engineer will not be there any more. There will be engineers with a combination of skills like testing, AI, blockchain, mathematics, etc. At different times, a different set of skills is needed within test engineering for a project; those skills are no longer directly coupled to specific persons (test engineers). It is much more about forming cross-functional teams that transform along the way.

The skills needed within test engineering in the digital age correspond to new technologies out there. Knowledge of Artificial Intelligence is needed for one; not directly to program the algorithms but at least to use the algorithms. You need to know what buttons to push in the AI algorithms and what variants to choose in what situation.

Furthermore, I would say to choose the tech you feel happy with and delve a bit deeper into the theory behind it. It can be AI, weather forecasting, robotics, or 3D printing. It may be that your hobby can be put to good work in testing in the digital age. Once your expertise is there, your place in the cross-functional team takes shape.

Education on these topics is nowadays available on MOOC (Massive Open Online Course) platforms like Udemy, Coursera, or Pluralsight. They offer a great way of touching upon a technology (a beginner or introduction course), delving a bit deeper (with workshops), and eventually becoming an expert (finishing a six-month series of online classes with assignments and even a grade or certificate).

About the Author

Tom van de Ven has been active in the field of High Tech testing for 15 years. As a High Tech test expert, he is frequently a sparring partner for Sogeti High Tech customers with regard to test projects. He is the author of the books IoTMap and Testing in the Digital Age. Being a member of SogetiLabs and an ambassador for SPIRIT within the Sogeti group, he is a recognised international authority on IoT and High Tech testing.
