
Q&A on the Book Accelerating Software Quality


Key Takeaways

  • AI and ML systems are growing fast to address key pains throughout the DevOps pipeline from software creation to testing and production monitoring.
  • Testing teams can start exploring test automation, test impact analysis, and test management that have AI abilities to expedite their tasks and reduce the brittleness in their existing artifacts.
  • DevOps engineers should start exploring AIOps abilities so they can accelerate their feedback loop, resolution of production issues, predictive and trending analysis, and more.
  • Developers are in a great position today with the rise of smart code-quality tools that boost unit test creation, code analysis, and fuzzing, all aimed at better productivity.
  • Executives must ensure that when they start exploring AI and ML systems, there are solid metrics, measurements, and criteria for how to embed such tools into the existing tool stack. Only with those measures in place can success be realized and monitored over time.

The book Accelerating Software Quality by Eran Kinsbruner explores how we can combine techniques from artificial intelligence and machine learning with a DevOps approach to increase testing effectiveness and deliver higher quality. It provides examples and recommendations for using AI/ML-based solutions in software development and operations.

InfoQ readers can download a sample of Accelerating Software Quality.

InfoQ interviewed Eran Kinsbruner about the challenges with quality and testing, and the solutions that are provided by AIOps and AI/ML based testing.

InfoQ: What made you decide to write this book?

Eran Kinsbruner: After authoring my first two books on continuous testing and DevOps and getting great feedback on the value they provided developers, testers, and executives, I also learned that the current practices used by these individuals work up until a certain level. After that, the technology falls short.

There is a need for more advanced technology based on AI and ML that can run hand-in-hand with traditional technologies to make DevOps more efficient, more automated, and with higher quality deliverables.

As I started to write the book, I learned that the market is already in a mature stage of developing multiple tools based on AI and ML to solve significant pains throughout the development cycle. I then categorized and classified these tools based on pains, target practitioners, use cases, and more, and compiled them into a complete guide for users to get started with.

InfoQ: For whom is this book intended?

Kinsbruner: The book is intended for almost anyone in DevOps. For test engineers, developers, performance and operations engineers, DevOps support, and executives, this book provides a one-stop shop of practices, tools, and how-to guides to fit their needs.

Operational engineers can learn the benefits of AIOps, developers can learn about fuzzing and automated code reviews backed by AI and ML, test automation engineers can learn about autonomous testers and RPA, and more.

InfoQ: How has the software industry developed over time?

Kinsbruner: DevOps and Agile have significantly matured over the past years, and there is a strong understanding of the value of fast software delivery and incremental value to customers. With that in mind, the industry is still struggling with a few major challenges, including:

  • Low automation percentages across the different phases in the development lifecycle (testing, build acceptance, code reviews, deployment, root cause analysis resolution, and more).
  • Dealing with huge amounts of data generated throughout the lifecycle, including test data, production data and logs, code maintenance, and more.
  • The length of decision-making processes, feedback loops, and code and test impact analysis.

These are a subset of the challenges that traditional technologies, both commercial and open-source, with no AI/ML abilities are falling short on today. More advanced tools based on AI and ML are growing, and when used in parallel with standard ones, they can improve the overall efficiency, performance, and quality of software deliveries.

InfoQ: What are the challenges that software teams are facing these days when it comes to quality and testing?

Kinsbruner: There are multiple challenges that can be divided across test automation creation and maintenance, test reporting and analysis, test management, testing trends, and debugging.

Traditional tools are not efficient enough to provide practitioners with reliable, robust, and maintainable test scripts. Test automation scripts keep breaking upon developers’ code changes to the apps, or upon elements in the app that aren’t properly recognized by the test automation framework. Ongoing maintenance of scripts is also a challenge, causing lots of false negatives and noise that seeps into the CI pipeline.

As test execution scales, large test data accumulates and needs to be sliced and diced to find the most relevant issues. Here, traditional tools are limited in filtering big test data and providing data-driven smart decisions, trends, root cause of failures, and more.

Lastly, the time it takes to create a new script that is code based, and debug it, is way too long to fit into today’s aggressive timelines. Hence, AI and ML are in a great position to close this gap by automatically generating test code and maintaining it through self-healing methods.

InfoQ: How would you define AIOps and what benefits can it bring?

Kinsbruner: AIOps is the application of artificial intelligence (AI) to enhance IT operations. Specifically, AIOps uses big data, analytics, and machine learning capabilities to do the following:

  • Collect and aggregate the huge and ever-increasing volumes of operations data generated by multiple IT infrastructure components, applications, and performance-monitoring tools.
  • Intelligently sift ‘signals’ out of the ‘noise’ to identify significant events and patterns related to system performance and availability issues.
  • Diagnose root causes and report them to IT for rapid response and remediation — or, in some cases, automatically resolve these issues without human intervention.
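The signal-sifting step above can be illustrated with a minimal sketch: a simple z-score filter over a latency series. A real AIOps platform applies far more sophisticated statistical and ML models; the metric values and threshold here are invented for illustration.

```python
from statistics import mean, stdev

def sift_signals(values, threshold=2.5):
    """Flag data points that deviate strongly from the series baseline.

    Returns indices of values whose z-score exceeds the threshold -- a toy
    stand-in for the statistical filtering an AIOps platform applies to
    noisy operational metrics.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# A latency series (ms) with one obvious spike among routine jitter.
latencies = [101, 99, 102, 100, 98, 103, 350, 100, 97, 102]
print(sift_signals(latencies))  # -> [6]: the spike is the "signal"
```

The same pattern scales to any time-series metric (error rates, queue depths), which is what makes statistical filtering the natural first layer of an AIOps pipeline.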

A complete AIOps solution does not only cover smart APM (application performance monitoring), but also leverages ITIM (IT infrastructure monitoring) and ITSM (IT service management) to build a comprehensive layer of production and operational insight analysis that can run on big data and against modern software architectures (microservices, cloud, etc.).

With the power of AI-based operations, teams can better focus on determining the service health of their applications, and gain control and visibility over their production data. With that, DevOps teams can expedite their MTTR (mean time to resolution) using real-time automated incident management.

As an aside, I refer readers to the IBM definition of AIOps.

InfoQ: What solutions do artificial intelligence and machine learning provide for test automation?

Kinsbruner: AI and ML based tools for test automation provide a wide range of abilities.

From a creation point of view, AI and ML tools can generate test scenarios autonomously without writing a line of code. This can be done via NLP (natural language processing) and other methods.
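As a toy illustration of the NLP-driven approach, plain-English steps can be mapped to structured test actions. A real tool uses trained language models rather than regular expressions; the patterns and action names here are invented.

```python
import re

# Hypothetical mapping from natural-language verbs to test actions.
ACTION_PATTERNS = [
    (re.compile(r"open (?:the )?(.+)"), "navigate"),
    (re.compile(r"type (?:the )?(.+)"), "input"),
    (re.compile(r"click (?:the )?(.+)"), "click"),
    (re.compile(r"verify (?:the )?(.+)"), "assert"),
]

def scenario_to_steps(sentence):
    """Translate a comma-separated plain-English scenario into test steps."""
    steps = []
    for phrase in sentence.lower().split(","):
        phrase = phrase.strip()
        for pattern, action in ACTION_PATTERNS:
            match = pattern.match(phrase)
            if match:
                steps.append((action, match.group(1)))
                break
    return steps

steps = scenario_to_steps("Open the login page, type the username, click the submit button")
print(steps)  # -> [('navigate', 'login page'), ('input', 'username'), ('click', 'submit button')]
```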

On other fronts, ML tools for test automation can utilize self-healing abilities to auto-maintain object locators for web and mobile apps in an agnostic way. Another use case for test automation using ML is test impact analysis (TIA), and automated root cause analysis (RCA) classification.

In such cases, ML can run through the test data and build acceptance test results against the code itself, and provide predictive analysis, trends, and guidelines around which regression tests to run for the next build, what coverage gaps exist, and much more. The main benefit here is optimizing the regression test suite to cover the most valuable test cases based on data, history, and predictive analysis.
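A minimal sketch of the TIA idea, assuming a coverage-derived mapping from tests to the source files they exercise (the test and file names are hypothetical; real TIA tools build this mapping from code coverage and change history):

```python
# Hypothetical mapping of tests to the source files they exercise,
# as a real TIA tool would derive from code-coverage data.
TEST_COVERAGE = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_login": {"auth.py"},
    "test_profile": {"auth.py", "profile.py"},
}

def impacted_tests(changed_files):
    """Return only the tests whose covered files overlap the change set."""
    changed = set(changed_files)
    return sorted(t for t, files in TEST_COVERAGE.items() if files & changed)

# A commit touching only auth.py needs just the auth-related regression tests.
print(impacted_tests(["auth.py"]))  # -> ['test_login', 'test_profile']
```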

Lastly, there are ML tools for the automated creation of unit tests that can ease the work for the developers and accelerate their development cycle and their build acceptance testing (BAT).

InfoQ: How can we test AI-based systems?

Kinsbruner: Testing AI-based systems, or AIIAs (AI-infused applications), requires a thorough methodology that involves various elements. The first item to investigate is the accuracy of the data itself. Neural networks (NN) that are based on inaccurate data of below 85-90% reliability will yield an unreliable algorithm and a flaky solution.
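One simple way to start checking data accuracy is to spot-check a labeled training sample against trusted gold labels and gate on the 85-90% floor mentioned above. This sketch uses invented labels purely for illustration:

```python
def label_reliability(labels, gold):
    """Fraction of training labels that agree with a trusted gold set."""
    assert len(labels) == len(gold)
    return sum(a == b for a, b in zip(labels, gold)) / len(labels)

# Hypothetical spot-check of a labeled sample against gold labels.
sample = ["cat", "dog", "cat", "cat", "dog", "dog", "cat", "dog", "cat", "dog"]
gold   = ["cat", "dog", "cat", "dog", "dog", "dog", "cat", "dog", "cat", "dog"]

score = label_reliability(sample, gold)
print(f"label reliability: {score:.0%}")  # -> label reliability: 90%
if score < 0.85:  # the reliability floor suggested above
    raise SystemExit("dataset too noisy to train a reliable model")
```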

The next consideration is around the use of static vs. dynamic NN and AI algorithms. Obviously for dynamic ones, there is a greater need to examine and continuously test the datasets vs. static and known input.

Complex systems use more than a single NN; hence, testing each one and the dependencies between processes will be a critical success factor (e.g. a collision detection system might use an NN to analyse the base images and a relatively simple algorithm to determine whether a collision is possible). As in any other system, security is a key aspect to cover.

For AI-based systems, uncovering all the security-related flaws is essential. Here, the test engineer needs to identify the use cases and the potential limitations of the AI algorithm that could allow users to trick the system through crafted inputs, and to understand the impact of such holes in the system.

While there is more data-related testing to cover, an additional aspect to consider is the fit of such testing types within the overall product, tool stack, pipeline, and tool selection. The book contains a full methodology, with examples, on how to get started with testing such systems.

InfoQ: What possibilities does AI offer to test software apps and systems?

Kinsbruner: The book covers a wide range of possibilities to test chatbot apps, mobile and web apps, and desktop or business apps. Through RPA tools, testers can automate internal business processes and reduce the time and cost of doing them manually.

For test automation creation, as mentioned above, AI can auto-generate and maintain the scripts upon code and environment changes. For test analysis, AI can provide trends and predictions, and reduce MTTR (mean time to resolution) for fixing defects. For visual testing, AI can leverage neural networks to generate baselines and automatically compare visuals across devices, web browsers, and more.
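The visual-comparison idea can be sketched as a naive pixel diff against a stored baseline. Production visual testing tools compare rendered screenshots, often with ML models that ignore anti-aliasing and benign layout shifts; the tiny grayscale "frames" here are invented:

```python
def visual_diff_ratio(baseline, candidate):
    """Fraction of pixels that differ between two equally sized frames.

    Frames are toy grayscale matrices (lists of rows) standing in for
    screenshots captured from two builds of the same screen.
    """
    total = diffs = 0
    for row_a, row_b in zip(baseline, candidate):
        for a, b in zip(row_a, row_b):
            total += 1
            diffs += a != b
    return diffs / total

baseline  = [[0, 0, 0], [255, 255, 255], [0, 0, 0]]
candidate = [[0, 0, 0], [255, 200, 255], [0, 0, 0]]

ratio = visual_diff_ratio(baseline, candidate)
print(f"{ratio:.1%} of pixels changed")  # 1 of 9 pixels differs
```

A threshold on this ratio (or a smarter perceptual score) then decides whether the change is promoted to a reviewer as a visual regression.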

For developers, AI can generate unit testing by exploring the code changes through code coverage and other static code analysis tools. 

InfoQ: How can we measure the effectiveness of our AI/ML testing approach?

Kinsbruner: This is a great question, and indeed the value of utilizing AI and ML must be determined upfront and measured throughout the adoption of such tools to ensure that teams are heading toward the right path.

Each of the AI solutions that are mentioned in the book address a unique pain or challenge. The pain needs to be clear, and metrics for what good will look like following the utilization of the AI systems need to be set.

For example, without RPA, engineers are manually running tests to validate an HR system that enters a new employee into the payroll – it takes XX number of hours, and XX amount of resources to do. With an RPA system, this should be 100% automated and take minutes to complete.

On a different front, for support engineers to classify and resolve a production outage or ticket takes XX number of hours and is mostly done manually. With an AIOps system, the classification and auto-resolution might be fully automatic, or will take XX% less time.

Measuring success requires an individual examination of the problem at hand, and then a comparison of metrics/KPIs before vs. after. Note that in many cases, value realization may take longer and come in phases, as the solution is distributed widely across the company and teams.

InfoQ: How can we do automated code reviews using AI and what benefits can that bring?

Kinsbruner: Automated code reviews with AI can be done in various ways. Such AI systems need a baseline of code to start with and understand the “current” level of quality. From that point on, such systems can run upon each code change through a scheduled trigger and output the recommended changes.
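The baseline-driven idea can be sketched as a simple quality gate: record the current counts of code-quality findings, then flag only metrics that regress on later changes. The metric names and counts here are invented; real AI review tools analyze far richer signals than issue counts.

```python
# Hypothetical baseline of code-quality findings recorded at adoption time.
BASELINE = {"long_function": 4, "unused_variable": 2, "magic_number": 9}

def review_regressions(current):
    """Return metrics that got worse than the recorded baseline.

    Maps each regressed metric to a (baseline_count, current_count) pair.
    """
    return {
        metric: (BASELINE.get(metric, 0), count)
        for metric, count in current.items()
        if count > BASELINE.get(metric, 0)
    }

# After a code change: one metric regressed, one improved, one held steady.
after_change = {"long_function": 4, "unused_variable": 3, "magic_number": 8}
print(review_regressions(after_change))  # -> {'unused_variable': (2, 3)}
```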

Tools like Facebook Infer, Amazon CodeGuru, and others have the ability to automatically understand the intent of the code after being trained on millions of code repositories, and by that, provide developers with solid recommendations on code waste, code performance issues, or standard code quality problems.

The main goal of such tools is to expedite the time it takes developers to open a PR (pull request), review it, approve it, and deploy it to the main repository after it has undergone all the relevant checks. This domain, which runs in parallel with standard static code analysis tools, is evolving, and will become more widely adopted as the market gains confidence in such tools' outputs.

InfoQ: How can AI help to make test maintenance easier and reduce test flakiness?

Kinsbruner: AI systems in test maintenance can help improve the reliability of a test suite in many ways. From the element locators’ self-healing, through the added steps to the suite as the app changes, and through TIA, such tools can serve as a guardian angel for the test engineers. 

In today’s reality, when elements of the app change constantly, AI systems can automatically switch the scripts to “click” on the new element (e.g. a button) in the app without the need for any code changes.
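Self-healing can be sketched as attribute-similarity matching: when the recorded locator goes stale, rebind the step to the live element that best matches its old attributes. This is a toy model with invented element data; real self-healing engines weigh many more signals (DOM position, visual appearance, history).

```python
def heal_locator(stale_locator, current_elements):
    """Pick the live element most similar to a stale locator's attributes.

    Scores each element on the page by how many attributes it shares with
    the locator the script last used, and returns the best match (or None
    if nothing matches at all) instead of failing the test.
    """
    def score(element):
        return sum(element.get(k) == v for k, v in stale_locator.items())
    best = max(current_elements, key=score)
    return best if score(best) > 0 else None

# The "Submit" button's id changed after a release, but its text and
# class survived, so the step can be re-pointed without a code change.
stale = {"id": "btn-submit", "text": "Submit", "class": "primary"}
page = [
    {"id": "btn-cancel", "text": "Cancel", "class": "secondary"},
    {"id": "btn-send", "text": "Submit", "class": "primary"},
]
print(heal_locator(stale, page))  # -> the 'btn-send' element
```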

From an autonomous testing perspective, such tools can examine a mobile or web app at runtime, determine that a new flow or screen was added since the previously taken “baseline,” and generate new steps in the test suite to cover the new functionality. Assuming the new flow or screen is not an introduced bug, it can then be promoted as an anomaly for the decision-maker to accept as a new test flow or reject.

From a TIA perspective, such systems can guide the QA managers on what to test next, what to exclude, which platforms are more flaky, which environments are outdated, and much more. It is important to note that such systems should be led, adopted, and driven by the testing teams and not replace these teams.

InfoQ: What do you expect that the future will bring for AI and ML testing?

Kinsbruner: The future of AI and ML in DevOps depends on the reliability, adoption, and fit of such tools within the standard tool stack. As mentioned in the book, AI and ML are solving specific challenges in the current software development lifecycle, and to really solve those challenges without creating new ones, these tools must be 100% integrable with current CI/CD/CT (continuous testing) tools, building value on top of them rather than forcing changes that aren’t required.

The DevOps future should bring more automation, more autonomous abilities for flaky and error-prone activities, and the capacity to deal with huge amounts of data throughout the cycle.

About the Book Author

Eran Kinsbruner is chief evangelist and product manager at Perfecto by Perforce. He is also the author of the 2016 Amazon bestseller ‘The Digital Quality Handbook’, the BookAuthority award winner (Best New Software Testing Books) ‘Continuous Testing for DevOps Professionals’, and ‘Accelerating Software Quality – ML and AI in the Age of DevOps’. He is a development and testing professional with over 20 years of experience at companies such as Sun Microsystems, Neustar, Texas Instruments, and General Electric. He holds various industry certifications, including ISTQB and CMMI. Kinsbruner is a recognized thought leader on continuous testing, an international speaker, a blogger, and a patent-holding inventor (automated test exclusion mechanisms for mobile J2ME testing). He is active in the community and can be found across social media and on his own blog.

