Optimize Automated Testing Using Defect Data

By integrating the test framework and the bug tracking system, it becomes possible to deactivate test cases for known bugs and reactivate them when the bug is solved. Aneta Petkova, QA chapter lead at SumUp, presented The Framework That Knows Its Bugs at TestCon Moscow 2019.

Petkova started her talk by explaining what often happens when a test case fails. She said: "Engineers are smart people, so we always go for the simplest solution - we deactivate the test, usually by commenting it out." Every shortcut is paid for in the future, she said; people tend to forget those tests when the storm has passed. "We leave ourselves open for the same issues to occur again, unchecked", argued Petkova.

Her team wanted to automate the test deactivation for known bugs and ensure that tests are reactivated when a bug is solved. They decided to integrate the test framework and the bug tracking system to make it possible to track test cases that are affected by a defect.

Petkova mentioned that when she started working with this idea, her focus was entirely on the test framework and she viewed Jira as a closed container that she could not change in any way. She used tags to assign issue keys to every affected test in the code, but that meant multiple commits and rebuilding the framework for each new defect. After presenting the idea at SeleniumConf in 2017, she got some great feedback that helped her rethink the whole process. "I realized I was not utilizing Jira’s capabilities enough and I came up with using Jira’s custom fields", said Petkova. She decided to add a custom field of type Labels to their Jira project and list in it the identifiers of all affected tests.
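
To illustrate the idea (the names below are made up, not taken from the talk), a bug in Jira would list an identifier such as checkoutAppliesDiscount in that field, pointing at a test like:

    import org.testng.annotations.Test;

    public class CheckoutTests {

        // Hypothetical example: the method name is the identifier that would be listed
        // in the bug's Labels-type custom field in Jira, so linking a new defect to this
        // test means editing the Jira issue only - no code change, no rebuild.
        @Test
        public void checkoutAppliesDiscount() {
            // ... UI end-to-end steps verifying the discount flow ...
        }
    }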

The process itself consists of three steps and requires very little coding, as Petkova explained. The first step can be done in Jira by creating a filter that returns all the defects that contain test identifiers, are currently active, and have a low enough priority to be allowed into your next product build. Second, the test framework can receive this list of defects and the associated tests via Jira’s REST API. Finally, those tests can be disabled by leveraging the capabilities of the test runner of choice, in their case TestNG.
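
A minimal sketch of the first two steps could look as follows. This is not Petkova's actual code: the Jira URL, project key, custom field id, the field name "Affected Tests", and the credentials handling are all assumptions, and the JQL only illustrates the kind of filter she describes.

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.HashSet;
    import java.util.Set;

    public class JiraDefectFetcher {

        // All of these values are hypothetical examples.
        private static final String JIRA_URL = "https://example.atlassian.net";
        private static final String AFFECTED_TESTS_FIELD = "customfield_10123"; // Labels-type field
        private static final String JQL =
            "project = SHOP AND issuetype = Bug AND resolution = Unresolved "
            + "AND priority in (Low, Lowest) AND \"Affected Tests\" is not EMPTY";

        // Steps 1 and 2: run the filter's JQL through Jira's REST search API
        // and collect every test identifier listed in the custom field.
        public static Set<String> fetchDisabledTests() throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(JIRA_URL + "/rest/api/2/search?fields=" + AFFECTED_TESTS_FIELD
                    + "&jql=" + URLEncoder.encode(JQL, StandardCharsets.UTF_8)))
                // base64-encoded "user:api-token", read from the environment (an assumption)
                .header("Authorization", "Basic " + System.getenv("JIRA_CREDENTIALS"))
                .GET()
                .build();
            HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

            Set<String> disabledTests = new HashSet<>();
            JsonNode issues = new ObjectMapper().readTree(response.body()).path("issues");
            for (JsonNode issue : issues) {
                for (JsonNode testId : issue.path("fields").path(AFFECTED_TESTS_FIELD)) {
                    disabledTests.add(testId.asText());
                }
            }
            return disabledTests;
        }
    }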

Petkova clarified that only tests related to low-priority bugs are disabled. "Anything that is critical should result in test failures, so there is no chance we release a build containing serious problems", she stated.
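
The third step can be sketched with TestNG's IAnnotationTransformer, again assuming that the identifiers stored in Jira are simply the test method names. Because the set of identifiers comes from the priority-limited filter above, tests covering critical bugs never get switched off this way.

    import org.testng.IAnnotationTransformer;
    import org.testng.annotations.ITestAnnotation;

    import java.lang.reflect.Constructor;
    import java.lang.reflect.Method;
    import java.util.Set;

    public class KnownBugTransformer implements IAnnotationTransformer {

        private static final Set<String> DISABLED_TESTS = loadDisabledTests();

        private static Set<String> loadDisabledTests() {
            try {
                return JiraDefectFetcher.fetchDisabledTests(); // sketch from the previous step
            } catch (Exception e) {
                return Set.of(); // if Jira is unreachable, run every test
            }
        }

        @Override
        public void transform(ITestAnnotation annotation, Class testClass,
                              Constructor testConstructor, Method testMethod) {
            // Disable tests whose identifier (here: the method name) is attached
            // to an open, low-priority bug returned by the Jira filter.
            if (testMethod != null && DISABLED_TESTS.contains(testMethod.getName())) {
                annotation.setEnabled(false);
            }
        }
    }

An annotation transformer like this has to be registered as a listener in testng.xml (or on the command line); it cannot be added through the @Listeners annotation.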

Petkova concluded that this approach allows them to automatically ignore minor problems, but still feel confident they are tracking them for the long run.

InfoQ spoke with Aneta Petkova after her talk at TestCon Moscow 2019.

InfoQ: You mentioned in your talk that checking results of automated tests can take up quite some time. Can you elaborate?

Aneta Petkova: When tests are high level, especially UI end-to-end, there can be multiple reasons for a failure - from infrastructure and networking, through different application layers, to the test code itself. It can take a real investigation to get to the root cause, not just the manifestation of a problem.

InfoQ: How do you follow up on tests that are failing due to known bugs?

Petkova: I have customized the report generated by TestNG to contain a list of all currently impacted test cases and the respective issues. On top of that, I’ve created a Jira board containing all issues that impact the automated tests, and the team would always pay special attention to this board during planning meetings.
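
A custom reporter along those lines could look roughly like the sketch below. It is not Petkova's implementation; the disabledTestsByIssue helper, an issue-key to test-identifiers map assumed to be collected from Jira during the run, is hypothetical.

    import org.testng.IReporter;
    import org.testng.ISuite;
    import org.testng.xml.XmlSuite;

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class KnownBugReporter implements IReporter {

        @Override
        public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites,
                                   String outputDirectory) {
            // Hypothetical helper: issue key -> identifiers of the tests it currently disables.
            Map<String, Set<String>> byIssue = JiraDefectFetcher.disabledTestsByIssue();

            StringBuilder report = new StringBuilder("Test cases currently impacted by known bugs:\n");
            byIssue.forEach((issueKey, tests) ->
                report.append(issueKey).append(": ").append(String.join(", ", tests)).append('\n'));

            try {
                Files.writeString(Path.of(outputDirectory, "known-bugs.txt"), report);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }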

If a defect remains unaddressed for more than one sprint, we would often raise its priority, because disabling tests creates the possibility of other defects sneaking past our regression suite. However, I’d say following up is a matter of process and culture, and the technical solution is only complementary here.

InfoQ: What are the benefits that you get from tracking and managing test cases related to defects?

Petkova: As our test framework is collecting defect-related data every day, we could use it for statistical analysis and raise an alarm when issues remain unresolved for too long, or when the same test cases get hit by bugs day in and day out, as that is a symptom of seriously suffering functionality.
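
As a sketch of what such an analysis could look like, assuming the framework persists one record per day for every test it disables (the record shape and the threshold are made up):

    import java.time.LocalDate;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Assumed persistence format: one record per day for every test disabled because of a known bug.
    record ImpactRecord(LocalDate date, String testId, String issueKey) {}

    public class DefectStats {

        // Tests impacted by bugs on more than maxDays days are worth a closer look.
        public static Map<String, Long> frequentlyImpactedTests(List<ImpactRecord> history, long maxDays) {
            return history.stream()
                .collect(Collectors.groupingBy(ImpactRecord::testId, Collectors.counting()))
                .entrySet().stream()
                .filter(entry -> entry.getValue() > maxDays)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }
    }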

The integration made us pay more attention to the quality of our test cases - by tracking the "downtime" of every scenario, we found that some of them are simply too broad, covering too many different parts of our software, or too brittle and sensitive to the smallest UI change. As all of those examples show, the integration itself is not going to solve your problems, but it will support you in your efforts to learn, adapt and improve.
