
Software Testing, Artificial Intelligence and Machine Learning Trends in 2023


Key Takeaways

  •  Significant changes are coming in 2023 and the years ahead that will affect the software testing industry in ways big and small. As a result, you should start investigating how AI and ML can improve your testing processes, leverage AI-based security tools, and adopt risk-based testing methods that can draw on big-data insights.
  • Software Testing Trends - The need to rapidly reinvent business models or add new capabilities for remote working/living during the pandemic put developers in high demand and short supply, resulting in the paradoxical need for more programming expertise to do testing and more competition for those programming skills.
  • Machine Learning Changing Software Testing - Software applications are constantly changing as users want additional features or business processes to be updated; however, these changes often cause automated tests to no longer work correctly. One of the first ways we've seen ML being used in testing is to make the current automated tests more resilient and less brittle. 
  • How AI Is Changing Security Testing - AI is poised to transform the cybersecurity industry in multiple ways, and we are now seeing AI being used for the first time to actively target and probe systems for weaknesses and vulnerabilities.
  • New Roles and Careers - As AI becomes more mainstream, there are likely to be entirely new career fields that have not yet been invented. 

In many ways, 2022 has been a watershed year for software; with the worst ravages of the pandemic behind us, we can see which changes were temporary and which have become structural. As a result, companies that used software to build sustainable long-term businesses that disrupted the pre-pandemic status quo have thrived, while those that were simply techno-fads have been consigned to the dustbin of history.

The software testing industry has similarly been transformed by the changes in working practices and the criticality of software and IT to the world's existence, with the move to quality engineering practices and increased automation. At the same time, we're seeing significant advances in machine learning, artificial intelligence, and the large neural networks that make them possible. These new technologies will change how software is developed and tested like never before. In this article, I discuss trends we're likely to see in the next few years.

Software Testing Trends

Even before the pandemic, software testing was being transformed by increased automation at all levels of the testing process. However, with the need to rapidly reinvent business models or add new capabilities to handle remote working/living during the pandemic, developers were in high demand and in short supply. This resulted in the paradoxical need for more programming expertise to do testing and more competition for those programming skills.

One of the outcomes was the move to 'low-code' or 'no-code' tools, platforms, and frameworks for building and testing applications. On the testing side, this has meant that code-heavy testing frameworks such as Selenium or Cypress have competition from lower-code alternatives that business users can operate. In addition, for ERP and CRM platforms such as Salesforce, Dynamics, Oracle, and SAP, this has meant that the testing tools themselves need more intelligence and a deeper understanding of the applications being tested.

Machine Learning Changing Software Testing

One of the first ways we've seen machine learning (ML) being used in testing is to make current automated tests more resilient and less brittle. One of the Achilles' heels of software testing, particularly when you are testing entire applications and user interfaces rather than discrete modules (unit testing), is maintenance. Software applications are constantly changing as users want additional features or updated business processes; however, these changes often cause automated tests to no longer work correctly.

For example, if a login button changes its position, shape, or appearance, it may break a previously recorded test. Even simple changes like the speed of page loading could cause an automated test to fail. Ironically, humans are much more intuitive and better at this kind of testing than computers, since we can look at an application and immediately see that a button is in the wrong place or that something is not displayed correctly. This is, of course, because most applications are built for humans to use. The parts of software systems built for other computers to use (called APIs) are much easier to test using automation!

To get around these limitations, newer low-code software testing tools are using ML to have the tools scan the applications being tested in multiple ways and over multiple iterations so that they can learn what range of results is "correct" and what range of outcomes is "incorrect." That means when a change to a system deviates slightly from what was initially recorded, it will be able to automatically determine if that deviation was expected (and the test passed) or unexpected (and the test failed). Of course, we are still in the early stages of these tools, and there has been more hype than substance. Still, as we enter 2023, we're seeing actual use cases for ML in software testing, particularly for complex business applications and fast-changing cloud-native applications.
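As a sketch of how such "self-healing" test locators might work, consider the following. The element model, attribute names, and plain string-similarity score here are simplifications invented for illustration; real tools train models over many more signals and iterations.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how alike two attribute values are."""
    return SequenceMatcher(None, a, b).ratio()

def heal_locator(recorded: dict, candidates: list, threshold: float = 0.7):
    """Pick the on-page element that best matches the originally recorded
    attributes; return None if nothing is close enough (a real failure)."""
    def score(el: dict) -> float:
        keys = set(recorded) | set(el)
        return sum(similarity(recorded.get(k, ""), el.get(k, "")) for k in keys) / len(keys)
    best = max(candidates, key=score)
    return best if score(best) >= threshold else None

# The "Log in" button's id changed since the test was recorded:
recorded = {"id": "login-btn", "text": "Log in", "tag": "button"}
page = [
    {"id": "signup-btn", "text": "Sign up", "tag": "button"},
    {"id": "btn-login", "text": "Log in", "tag": "button"},
]
match = heal_locator(recorded, page)
```

The key design point is the threshold: a small deviation (a renamed id) scores high and the test keeps passing, while a missing or substantially changed element falls below it and the test fails as it should.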

Another major application of ML techniques will be on the analytics and reporting side of quality engineering. For example, a longstanding challenge in software testing is knowing where to focus testing resources and effort. The emerging discipline of "risk-based testing" aims to focus software testing activities on the areas of the system that contain the most risk. If you can use testing to reduce the overall aggregate risk exposure, you will have a quantitative way to allocate resources. One of the ways to measure risk is to look at the probability and impact of specific events and then use prior data to understand how significant these values are for each part of the system. Then you can target your testing to these areas. This is a near-perfect use case for ML. The models can analyze previous development, testing, and release activities to learn where defects have been found, code has been changed, and problems have historically occurred.
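The probability-times-impact idea can be shown with a toy scoring function. The module names, history fields, and weights below are entirely invented for illustration; a real ML model would learn such weightings from historical development, testing, and release data rather than hard-coding them.

```python
# Hypothetical per-module history: defect counts, code churn (lines changed),
# and a 1-10 business-impact rating.
history = {
    "checkout": {"defects": 14, "churn": 320, "impact": 9},
    "search":   {"defects": 5,  "churn": 150, "impact": 6},
    "settings": {"defects": 1,  "churn": 20,  "impact": 2},
}

def risk_score(m: dict) -> float:
    # Probability proxy from past defects and recent churn,
    # scaled by the impact of a failure in this module.
    probability = 0.6 * m["defects"] + 0.4 * (m["churn"] / 100)
    return probability * m["impact"]

# Rank modules so testing effort goes to the riskiest areas first.
ranked = sorted(history, key=lambda name: risk_score(history[name]), reverse=True)
```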

How AI Is Changing Security Testing

If ML is changing the software testing industry, then AI is poised to transform the cybersecurity industry in multiple ways. For example, many antivirus and intrusion detection systems are already touted as using AI to look for anomalous patterns and behaviors that could be indicative of a cyber-attack. However, we are now seeing AI being used for the first time to actively target and probe systems for weaknesses and vulnerabilities.
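The defensive side of this, scanning activity for anomalous patterns, can be illustrated with a deliberately simple statistical baseline. This is a sketch only: real AI-based intrusion detection uses trained models over many signals, and the z-score threshold and failed-login counts here are invented for illustration.

```python
from statistics import mean, stdev

def anomalies(counts, z_threshold=2.5):
    """Return indices of observations whose z-score exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > z_threshold]

# Hypothetical hourly failed-login counts; hour 6 hides a brute-force burst.
failed_logins = [3, 5, 4, 6, 2, 5, 250, 4, 3, 5]
suspicious_hours = anomalies(failed_logins)  # flags hour 6
```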

For example, the popular OpenAI ChatGPT chatbot was asked to create software code for accessing a system and to generate fake but realistic phishing text to send to that system's users. Since the most common spear-phishing methods rely on some kind of social engineering and impersonation, this is a new frontier for cybersecurity. The ability of a chatbot to simultaneously create working code and realistic natural language based on responses it receives in real time from the victim gives AI dynamic, real-time offensive capabilities.

If you don't believe that you would be fooled, here's a test – one of the paragraphs in this article has been written by ChatGPT and pasted unaltered into the text. Can you guess which one?

How Do We Test or Check AI or ML Systems?

The other challenge as we deploy AI and ML-based systems and applications is: how do we test them? With traditional software systems, humans write requirements, develop the system, and then have other humans (aided by computers) test them to ensure the results match. With AI/ML-developed systems, there often are no discrete requirements. Instead, there are large data sets, models, and feedback mechanisms.

In many cases, we don't know how the system got to a specific answer, just that the answer matched the evidence in the provided data sets. That lets AI/ML systems create new methods not previously known to humans and find unique correlations and breakthroughs. However, these new insights are unproven and may be only as good as the limited dataset they were based on. The risk is that you start using these models in production systems, and they behave in unexpected and unpredictable ways.

Therefore, testers and system owners must ensure they have a clear grasp of the business requirements, use cases, and boundary conditions (or constraints). For example, defining the limits of the data sets employed and the specific use cases that the model was trained on will ensure that the model is only used to support activities that its original data set was representative of. In addition, having humans independently check the results predicted by the models is critical.
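One simple tactic for enforcing those data-set limits is a guardrail that refuses predictions for inputs outside the ranges observed in training. This is a minimal sketch with invented feature names; real deployments use much richer out-of-distribution detection than per-feature bounds checks.

```python
# Hypothetical guardrail: only serve predictions for inputs that fall
# within the ranges the model actually saw during training.
def training_bounds(rows):
    """Per-feature (min, max) observed in the training set."""
    return {k: (min(r[k] for r in rows), max(r[k] for r in rows))
            for k in rows[0]}

def in_scope(sample, bounds):
    """True if every feature of the sample lies inside the trained range."""
    return all(lo <= sample[k] <= hi for k, (lo, hi) in bounds.items())

train = [{"age": 25, "income": 40_000},
         {"age": 61, "income": 120_000},
         {"age": 43, "income": 75_000}]
bounds = training_bounds(train)

# A 30-year-old is in scope; a 17-year-old is outside the trained range,
# so the system should decline to predict rather than extrapolate.
ok = in_scope({"age": 30, "income": 50_000}, bounds)
```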

How Is AI Changing Computer Hardware?

One of the physical challenges facing AI developers is the limits of the current generation of hardware. Some of the datasets being used are on the scale of petabytes, which is challenging for data centers that simply don't have sufficient RAM capacity to run these models. Instead, they must use over 500 Graphics Processing Units (GPUs), each with hundreds of gigabytes of RAM, to process the entire dataset. On the processing side, the problem is similar: current electrical CPUs and GPUs generate large amounts of heat, consume vast quantities of electricity, and the speed of parallel processing is limited by electrical resistance. One possible solution to these limitations is optical computing.

Optical computing is a type of computing that uses light-based technologies, such as lasers and photodetectors, to perform calculations and process information. While there has been research on using optical computing for artificial intelligence (AI) applications, it has yet to be widely used for this purpose. There are several challenges to using optical computing for AI, including the fact that many AI algorithms require high-precision numerical computations, which are difficult to perform using optical technologies.

That being said, there are some potential advantages to using optical computing for AI. For example, optical computing systems can potentially operate at very high speeds, which could be useful for certain AI applications that require real-time processing of large amounts of data. Some researchers are also exploring the use of photonics, a subfield of optics, for implementing artificial neural networks, which are a key component of many AI systems.

What New Roles and Careers Will We Have?

As AI becomes more mainstream, there are likely to be entirely new career fields that have not yet been invented. For example, if you have ever tried using chatbots like ChatGPT, you will find that they can write large amounts of plausible, if completely inaccurate, information. Beyond simply employing teams of human fact-checkers and human software testers, there is likely to be a new role for ethics in software testing.

Some well-known technologies have learned biases or developed discriminatory algorithms from the datasets fed into them. For example, the COMPAS court-sentencing system recommended longer prison sentences for persons of color, and some facial recognition technology works better on certain races than others. The role of software testers will include understanding the biases in such models and being able to evaluate them before the system is put into production.

Another fascinating career field would be the reverse of this, trying to influence what AI learns. For example, in the field of digital marketing, it is possible that chatbots could partially replace the use of search engines. Why click through pages of links to find the answer when a chatbot can give you the (potentially) correct answer in a single paragraph or read it out to you? In this case, the field of Search Engine Optimization (SEO) might be replaced by a new field of Chat Bot Optimization (CBO). Owners of websites and other information resources would look to make their content more easily digestible by the chatbots, in the same way that web developers try to make websites more indexable by search engines today.

Which paragraph did ChatGPT write?

Did you guess? It was the last paragraph in the section "How Is AI Changing Computer Hardware?"

Summary

In conclusion, significant changes are coming in 2023 and the years ahead that will affect the software testing industry in ways big and small. As a result, you should start investigating how AI and ML can improve your testing processes, leverage AI-based security tools, and adopt risk-based testing methods that can draw on big-data insights.
