
Testing Machine to Machine Systems

Devices are becoming increasingly interconnected through the internet, communicating directly with each other. Testing such machine to machine (M2M) systems can be difficult due to their complexity and the use of different platforms, as Peter Varhol explained in his talk about testing in the M2M world at the QA&Test 2014 conference.

Machine to machine systems are networks of connected machines that usually involve little or no human interaction while they communicate; each machine acts based on input from other machines. Examples are automotive control systems, home automation, aircraft and aerospace systems, and retail or point of sale (POS) systems. Because these systems can perform safety-critical or mission-critical functions, it is important to ensure that they can be tested accurately.

InfoQ interviewed Peter Varhol about the challenges of testing machine to machine systems, defining a test strategy for testing them and the importance of code coverage in machine to machine testing.

InfoQ: What are the challenges with testing machine to machine systems?

Peter: There are several challenges. One is the complexity of individual subsystems. Major systems in automobiles, aircraft, and process control systems have hundreds of thousands or millions of lines of code running on 32-bit embedded processors.

A second challenge is that these systems often have no user interface, or at best a very minimal user interface. Little if any functional testing can be done in traditional ways. Instead, testers have to figure out other ways of interacting with the system, such as by using a test harness to define and send inputs, and record outputs.
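As a rough illustration of such a harness, the Python sketch below sends predefined input frames to a device over a network socket and records timestamped responses; the TCP transport, address, and frame encoding are assumptions for the example, not details from the talk.

    # Minimal test-harness sketch: drive a device that has no UI by sending
    # input frames over a transport and recording timestamped responses.
    # The TCP transport, address, and frame encoding are illustrative assumptions.
    import socket
    import time

    def run_harness(host, port, inputs, timeout=2.0):
        """Send each input frame and record (timestamp, input, response) tuples."""
        results = []
        with socket.create_connection((host, port), timeout=timeout) as sock:
            for frame in inputs:
                sock.sendall(frame)
                response = sock.recv(1024)   # device's reply, if any
                results.append((time.time(), frame, response))
        return results

    if __name__ == "__main__":
        # Hypothetical device endpoint and input frames.
        recorded = run_harness("192.168.0.10", 5000,
                               [b"\x01\x10", b"\x01\x20", b"\x02\x00"])
        for ts, sent, received in recorded:
            print(f"{ts:.3f}  sent={sent!r}  received={received!r}")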

Last, individual systems are interconnected with other systems, meaning that inputs are coming from multiple places. Testers not only have to worry about individual embedded systems, but also the interactions between interconnected systems. In an automobile, a user may initiate processing, such as by pressing on the gas pedal. That input is sent to a computer managing the engine subsystem, which interprets it based on the user input, but also based on inputs from the brake, steering, and collision avoidance subsystems. In extreme circumstances, pressing the gas pedal may not even result in acceleration, depending on the algorithm used and other inputs.

InfoQ: How do these challenges impact the test strategy for machine to machine testing?

Peter: Probably the biggest challenge from the standpoint of traditional testing is the lack of a user interface. Testers have to be able to understand where inputs to individual systems come from, and be able to simulate them, through a test harness or with instruments. Outputs also rarely come to a screen; in fact, an output may be analog in nature (engine acceleration or temperature changes). You need to find a way of recording and measuring that output. It will vary depending on the system, so you have to work with developers and engineers to determine what data you need and where you can get it from.
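As one hedged example of recording and measuring an analog output, the sketch below samples a value over time and checks that it stays within an agreed tolerance band; the reader callback stands in for whatever instrument or data-acquisition API actually provides the measurement.

    # Sketch: sample an analog output over time and check it against an expected
    # tolerance band. The read_value callback stands in for the instrument or
    # data-acquisition API that would provide the measurement in a real test.
    import time

    def sample_output(read_value, duration_s=5.0, interval_s=0.5):
        samples = []
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            samples.append((time.monotonic(), read_value()))
            time.sleep(interval_s)
        return samples

    def within_band(samples, expected, tolerance):
        return all(abs(value - expected) <= tolerance for _, value in samples)

    # Usage with a fake reader; a real test would wire in the instrument driver.
    fake_reader = lambda: 72.4
    assert within_band(sample_output(fake_reader, duration_s=1.0),
                       expected=72.0, tolerance=1.0)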

Also, it’s a challenge to localize and analyze bugs in systems that communicate with each other. As I pointed out in the presentation, race conditions are especially insidious, because they seemingly occur randomly. Even without race conditions, defects in one subsystem may not be apparent until they are used by another subsystem. That’s why integration testing is so important in M2M. But actually localizing and analyzing a defect can require an intimate understanding of the system and all of the communicating subsystems, to see exactly what they are doing and whether they are producing correct data.
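One blunt but common tactic for flushing out such intermittent failures is to repeat the same concurrent scenario many times and flag any run whose result differs. The sketch below demonstrates the idea on a deliberately unsafe shared counter; it illustrates the testing tactic rather than any specific M2M defect.

    # Repeat a concurrent scenario many times to flush out a race condition.
    # The unsafe counter is a stand-in for two subsystems updating shared state.
    import sys
    import threading

    sys.setswitchinterval(1e-6)   # switch threads very often so the race shows up quickly

    def unsafe_increment(state, n):
        for _ in range(n):
            current = state["count"]        # read-modify-write without a lock: racy
            state["count"] = current + 1

    def run_scenario(threads=4, n=10000):
        state = {"count": 0}
        workers = [threading.Thread(target=unsafe_increment, args=(state, n))
                   for _ in range(threads)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        return state["count"]

    # Any run that falls short of 40000 has lost updates to the race; repeating
    # the scenario makes the intermittent failure visible.
    failures = sum(1 for _ in range(20) if run_scenario() != 4 * 10000)
    print(f"{failures} of 20 runs lost updates")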

InfoQ: Do you have suggestions on how to test communication between machines?

Peter: I’ve done things like shut off electricity or pull the network cable in the middle of processing. Can the system recover, or will you have an indeterminate system state? In other words, will it recover with data still active? Have we rolled back any processing? Were partial results delivered to an unaffected subsystem?
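A minimal sketch of that style of fault injection, assuming a simple staging-and-commit scheme on the receiving side (an assumption for illustration): interrupt a transfer partway through and then assert that the receiver either holds the complete update or has rolled back cleanly.

    # Fault-injection sketch: interrupt processing partway through and verify the
    # receiver either committed the whole update or rolled back cleanly.
    # The staging/commit protocol is a simplifying assumption for illustration.
    class Receiver:
        def __init__(self):
            self.committed = []      # last fully applied update
            self.staging = []        # partial data being received

        def receive_chunk(self, chunk):
            self.staging.append(chunk)

        def commit(self):
            self.committed = list(self.staging)
            self.staging = []

        def recover(self):
            # On restart after a failure, discard any partial update.
            self.staging = []

    def transfer_with_fault(receiver, chunks, fail_after):
        for i, chunk in enumerate(chunks):
            if i == fail_after:
                raise ConnectionError("link lost mid-transfer")   # injected fault
            receiver.receive_chunk(chunk)
        receiver.commit()

    receiver = Receiver()
    try:
        transfer_with_fault(receiver, [b"a", b"b", b"c"], fail_after=2)
    except ConnectionError:
        receiver.recover()

    # The receiver must not be left with a partial result.
    assert receiver.staging == [] and receiver.committed in ([], [b"a", b"b", b"c"])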

You may also want to follow an input and tap into it at various points in processing. For an M2M system, you might do that with a network sniffer, which means you might be examining the contents of individual packets to make sure you have data integrity.
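As a rough illustration of such a packet-level integrity check, the sketch below assumes an invented frame layout (a 2-byte sequence number, a payload, and a CRC32 trailer) and verifies both checksums and sequence continuity on captured frames; a real test would use whatever the protocol actually specifies.

    # Sketch of a packet-integrity check on captured frames. The frame layout
    # (2-byte sequence number + payload + 4-byte CRC32 trailer) is an assumption.
    import struct
    import zlib

    def make_frame(seq, payload):
        body = struct.pack(">H", seq) + payload
        return body + struct.pack(">I", zlib.crc32(body))

    def check_frames(frames):
        expected_seq = None
        for raw in frames:
            seq = struct.unpack(">H", raw[:2])[0]
            crc = struct.unpack(">I", raw[-4:])[0]
            if zlib.crc32(raw[:-4]) != crc:
                return f"bad CRC on frame {seq}"
            if expected_seq is not None and seq != expected_seq:
                return f"gap: expected {expected_seq}, got {seq}"
            expected_seq = seq + 1
        return "ok"

    print(check_frames([make_frame(0, b"temp=72"), make_frame(1, b"temp=73")]))  # ok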

InfoQ: In your presentation you talked about the importance of code coverage in M2M testing. Can you elaborate on that?

Peter: Code coverage has traditionally been a developer’s tool, to exercise code while writing and running unit tests. In the M2M world, testers need to understand and perform code coverage analysis too. You can write functional tests that cover all requirements, yet only exercise perhaps 30 percent of the code. Your functional tests may not be looking at all of the branches or conditions that can exercise entirely new code paths.

By looking at code coverage, testers can better understand how processing occurs under different circumstances, and write additional tests to address those circumstances. You will never get to 100 percent code coverage (some code is there for handling errors that may never occur, or that aren’t worth testing), but in most cases 70-80 percent code coverage is achievable. And you will never understand the behavior of an embedded system under a wide range of circumstances until you exercise as much of the code as possible.
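To make the gap between requirements coverage and code coverage concrete, the hypothetical sketch below shows a decode function whose happy path is fully covered by one requirements-based test while its error-handling branch is never executed; a branch-coverage report (for example from coverage.py) is what reveals that the second test is missing.

    # Hypothetical example: a requirements-based test covers the "happy path" but
    # never executes the error-handling branch, so branch coverage reveals a gap
    # that the requirement alone would not.
    def decode_reading(raw):
        if not raw or raw[0] != 0x01:        # error branch: malformed frame
            raise ValueError("malformed sensor frame")
        return int.from_bytes(raw[1:3], "big") / 10.0

    def test_decodes_valid_reading():        # covers the requirement...
        assert decode_reading(bytes([0x01, 0x02, 0xD0])) == 72.0

    def test_rejects_malformed_frame():      # ...written only after a coverage
        try:                                 # report shows the missed branch
            decode_reading(b"")
        except ValueError:
            pass
        else:
            assert False, "expected ValueError"

    if __name__ == "__main__":
        test_decodes_valid_reading()
        test_rejects_malformed_frame()
        print("both tests pass")

    # Measuring with coverage.py in branch mode:
    #   coverage run --branch -m pytest test_decode.py
    #   coverage report -m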
