What Do AI and Test Automation Have in Common?


Key Takeaways

  • AI and test automation both suffer from misconceptions and controversy fueled by poor marketing and bad media coverage. Both allegedly threaten human jobs: test engineers fear that test automation could make their role unnecessary, while non-testers are fed similar stories by media claiming AI will take over a large share of human jobs. 
  • Whether we like it or not, AI is already here, embedded in our lives more deeply than you might imagine. It already improves our quality of life and helps us perform tasks that would be harder to accomplish in a less technological way. 
  • Test automation matters, and improving it affects development teams in a positive way. 
  • Both AI and test automation have evolved over the years into what they are today. 
  • Testim.io harnessed the power of these complex algorithms to address one of the most painful pitfalls of test automation: test instability and flakiness.

As a test automation engineer, you spend most of your days in your IDE, writing and debugging code, or authoring test scripts with some kind of tool. One of the most challenging parts of the job is maintaining a "green" (successfully passing) test suite alongside your application code.

A bit about test automation

First things first: in software testing, test automation is the use of software, separate from the application being tested, to validate the desired functionality of the application under test.

Confusing? Basically speaking, it provides an additional security blanket that lets stakeholders and developers go faster, developing and refactoring with confidence that code changes haven't broken the product.
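To make that concrete, here is a minimal, hypothetical sketch of what such a safety net looks like in practice. calculate_total() stands in for real application code, and the check is written in Python for pytest:

# calculate_total() is a hypothetical stand-in for real application code.
def calculate_total(price, tax_rate):
    return round(price * (1 + tax_rate), 2)

# If a refactor ever breaks the pricing logic, this check turns the suite "red".
def test_total_includes_tax():
    assert calculate_total(price=100, tax_rate=0.2) == 120.0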

Over the course of history, test automation tools have evolved along with software development trends. Looking back, the initial test automation solutions were bulky tools that required heavy installation and came with gray, uninviting interfaces. The open-source revolution changed the game. Open-source projects introduced to the software testing community, such as Selenium and Appium, quickly grew in popularity and achieved something I can fairly describe as global domination. Thanks to their massive adoption and true cross-browser abilities, they ended up being standardized and served as a foundation for many commercial and free test automation tools. That said, the solution they provided was far from perfect. The need for browser drivers and the constraints of the WebDriver protocol brought their own set of limitations, some of which are addressed in this article.

With the obvious rise in popularity and availability grew the popular misconception that test automation can replace the human manual tester. That is, of course, total nonsense; there is still high demand for test engineers, and there always will be. However, the end of the software tester's job is a frequently discussed topic that tends to draw a lot of readers.

Another popular misconception is that test automation saves you time. That was the initial goal, but what many companies fail to realize is that in most cases, before you can reap the benefits of test automation (increased test coverage, saved time and manpower), you have to invest a huge amount of effort in implementation and, eventually, maintenance.

Back to test automation stability: why is it a challenge?

Test stability is affected by many factors: environmental instability, timing issues, performance issues, etc. But let's focus on the main aspect affected by the fast pace of the software development process. When we design our test automation and its infrastructure layers, we base them on how the application is built at that particular time. The fast pace brings frequent changes to the application's layout, structure, and business flows. Even in a static application, one of the challenges is implementing a good "element location strategy": choosing the right selector for the right job. To elaborate, let's take a closer look at how the most common test automation tools locate elements in our AUT (application under test). In order to perform actions on the AUT, we first need to identify the target web elements. The identification can be done through one of the locating methods exposed by the test automation framework; it could be a single unique attribute value or an expression such as XPath or a CSS selector. Once a unique enough identifier is established, the test relies on it to find the correct element and perform the action on it. XPath and CSS selectors also need to be chosen carefully in order to be "strong" selectors, as the sketch below illustrates.
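As a rough illustration, here is what different location strategies look like with Selenium's Python bindings, against a hypothetical login page; the more an expression leans on page structure, the more fragile it tends to be:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical AUT

# A single unique attribute is usually the strongest selector.
login = driver.find_element(By.ID, "login-button")

# CSS and XPath expressions work too, but they encode assumptions about
# the page structure, so a layout change can silently invalidate them.
login = driver.find_element(By.CSS_SELECTOR, "form#login button.primary")
login = driver.find_element(By.XPATH, "//form[@id='login']//button[text()='Sign in']")

login.click()
driver.quit()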

The challenge is that an application's DOM is far from static. Attributes change, layouts change, and each time the application changes, the selector we rely on can become invalid; as a result, our tests can break. We will get to how we resolved this issue in a moment, but first let's take a look at another technological field that suffers from a bad reputation and common misconceptions.

AI: what's the deal?

For better or worse, AI has also become a big buzzword these days. It draws a lot of attention, but what do we really know about it? While AI rises in popularity, the controversy surrounding it flourishes as well. Its negative reputation has gotten to the point where people who know nothing about AI, and don't come from a technical background, perceive it as something negative.

The average person is bombarded with information through numerous media channels, and what draws better ratings than headlines like "10 jobs that will be replaced by AI" or "Another Self-Driving Car Accident"? What better way to draw a crowd to a conference and create a "buzz" than having a famous speaker describe AI as dangerous, unpredictable, a passing trend, or untestable technology?

Another reason for AI's bad reputation is probably that people fear the new and the unknown. Testers are skeptics and critics by nature, so they imagine horror scenarios in which AI is difficult to contain, control, or test, and dwell on the consequences that "bugs" in such technologies could lead to. On the other hand, people who don't come from the tech industry simply don't grasp the concept of AI and the benefits we can get from it, and end up "feeding" on what they hear in the news. To top it all off, there are opinion articles and bad marketing initiatives that give AI a cheap commercial look and feel.

Whether we like it or not...

Whether we like it or not, AI is already here, embedded in our lives more deeply than you might imagine. If you have ever interacted with "Alexa" or "Siri", or received a recommendation for the next "Netflix" movie to watch, chances are you've encountered AI in one form or another. Did you recently search for anything via the world's most popular search engine? Then you should know that you will receive different results for the word "Java" depending on whether you are a programmer or a coffee maker, because that algorithm "studies" who you are and the things you "volunteer" about yourself to Google, and makes your search more productive.

Did you recently upload a photo featuring your friends to social media? Then you must have noticed the feature that automatically recognizes their faces and offers to "tag" them in the uploaded image. Did you ever stop to think how many clicks that saves you for a photo with, let's say, 10 people? And speaking of photos, how about the age-guessing feature? That one I hate; it always says I'm older than my actual age.

How about the transportation field? We have autonomous vehicles and automatic parking. Hate parking? Don't enjoy long drives? You can actually go out and purchase a car that will do it for you. Highly popular car manufacturers such as Tesla and Audi are part of an industry doing fantastic, revolutionary work in developing and perfecting this technology. Are they perfect? Of course not, but perfect software does not exist, and if it ever did, testers like me would be out of a job.

Fear vs facts

Did you read about the latest autonomous vehicle accident? Well, I hope you're sitting down, because I've got devastating news for you. Ready? Here it comes: 99.9999% of all fatal accidents that ever occurred on this planet were caused by human drivers. And by the way, one of the purposes of developing driver-assistance technologies and autonomous vehicles in the first place was to reduce the "human factor", as they call it, which gets hundreds of people killed each year.

All of the above are examples of how technology progresses over time to meet our needs, raise our quality of life, entertain us, and help us perform tasks that we struggled to do in a less technologically assisted way.

What did we do differently?

The big question was how to improve test stability. We knew that by utilizing AI and machine-learning algorithms we could make tests more stable.

One of the main decisions we made was not to use Selenium WebDriver as the base for our solution, so that we could implement our own way of capturing and executing actions on the AUT. The first product we developed was a codeless test automation tool. It features a browser extension and a test editor, so that each UI action performed by the user generates a unique test step inside that editor. The test step contains detailed information about the target element the user acted on and its properties. That approach allowed us to implement something we call "Smart Locators", which evaluate hundreds of attributes and score them, so that if the DOM changes, your tests don't break.
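As a loose illustration (my own simplification, not Testim's actual data model), a recorded test step might capture something like the following:

# A hypothetical snapshot of one recorded step: the action, the target
# element's own properties, and the chain of ancestors leading to it.
step = {
    "action": "click",
    "target": {
        "tag": "a",
        "attributes": {"href": "/checkout", "class": "btn btn-primary"},
        "text": "Checkout",
    },
    "ancestors": [
        {"tag": "div", "attributes": {"id": "cart-summary"}, "index": 0},
        {"tag": "section", "attributes": {"class": "sidebar"}, "index": 2},
    ],
}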

How does it work?

In machine learning, there is a concept called "confidence level": an estimate of how certain a trained model is about its prediction for data it has never seen before.

Let's discuss it in a nutshell using a fairly primitive example. Say we are developing an algorithm that can identify images of vehicles. We take 100 images of different car models, "tag" them as vehicles, and "feed" them to our algorithm. Once it learns to identify vehicles, I can give the algorithm an image of a car model it has never seen before, and, quite surprisingly, it will recognize it as a vehicle with a high level of confidence. Why? Because it has four wheels, doors, a steering wheel, and a general shape that resembles a car. (Are you familiar with the old saying, "If it walks like a duck and sounds like a duck, it must be a duck"?) The toy sketch below shows the idea of a confidence score.
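Here is that idea in Python with scikit-learn. The hand-made features (wheels, doors, steering wheel) are an invented stand-in for what a real image model would learn on its own; the point is only that predict_proba() expresses the model's confidence about an example it never saw:

from sklearn.linear_model import LogisticRegression

# Made-up feature vectors: [wheel_count, door_count, has_steering_wheel]
X_train = [
    [4, 4, 1],  # sedan        -> vehicle
    [4, 2, 1],  # coupe        -> vehicle
    [6, 2, 1],  # truck        -> vehicle
    [2, 0, 1],  # motorcycle   -> vehicle
    [0, 1, 0],  # refrigerator -> not a vehicle
    [0, 0, 0],  # rock         -> not a vehicle
]
y_train = [1, 1, 1, 1, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# A car model the classifier has never seen: the features still resemble
# the training vehicles, so the reported confidence should be high.
confidence = model.predict_proba([[4, 5, 1]])[0][1]
print(f"vehicle confidence: {confidence:.2f}")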

So, once we understand the concept of using data to establish confidence levels, let's see how we can apply it to test automation. During our quest for the perfect algorithm, we found that we could apply similar logic to web elements. Once I start capturing my test steps, the algorithm "studies" everything it can "learn" about the target element and any elements leading to it. Say my target element (which I would like to click) is a link. Our algorithm captures that it's an <a> tag, along with all its relevant attributes and values, and then proceeds up the DOM to the surrounding <div>, its attributes, index, and values, and so on until the last piece of relevant information is evaluated. The properties we capture are fed into our algorithm and compared at runtime to the actual application DOM. The algorithm then provides a score and visual confirmation of any changes found. Using this approach, we achieved a totally new way to enforce stability: since we pinpoint changes in the app and score accordingly, we can set a threshold so that even if one of the unique attributes breaks, we can still find the target element and perform the desired action on it. The simplified sketch below illustrates the scoring idea.
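A heavily simplified sketch (my own illustration, not Testim's actual algorithm): compare the properties captured at recording time against each candidate in the live DOM, and accept the best match above a threshold:

def score(recorded, candidate):
    """Fraction of recorded properties the live candidate still matches."""
    return sum(candidate.get(k) == v for k, v in recorded.items()) / len(recorded)

def locate(recorded, dom_candidates, threshold=0.7):
    best = max(dom_candidates, key=lambda c: score(recorded, c))
    return best if score(recorded, best) >= threshold else None

recorded = {"tag": "a", "href": "/checkout", "class": "btn", "text": "Checkout"}

# In the new build the class attribute changed, but enough other properties
# still match (3 of 4 = 0.75), so the element is found and the test survives.
live_dom = [
    {"tag": "a", "href": "/checkout", "class": "btn-v2", "text": "Checkout"},
    {"tag": "a", "href": "/home", "class": "nav", "text": "Home"},
]
print(locate(recorded, live_dom))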

One thing I'm always asked about is the "false positives" this way of working can trigger. It just requires a smarter approach to designing your tests, such as putting validations where they matter. All the algorithm does is locate the element, so if, for example, I'm required to locate an element and perform a text validation, the algorithm will indeed find the desired element, but if the text has changed, the validation will fail the step/test. That said, if all I want to do is click a button, and I don't care where that button is located, what its placement in the DOM is, or how it looks visually, then why should the designer's decision to change the page layout, or the developer's change of some attributes, fail my test for no justified reason? The smart locator strategy worked so well that we decided to implement it in both our coded and codeless automation solutions.
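Reusing the locate() sketch above, the division of labor looks like this: the locator is deliberately forgiving, while the explicit assertion stays strict:

# The locator tolerates attribute churn...
element = locate(recorded, live_dom)
assert element is not None, "target element not found"

# ...but an explicit validation still fails the step if the text changed.
assert element["text"] == "Checkout"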

Conclusion

Before I finish up, I would like to show you a short video. It is actually from 1997, and it features an autonomous vehicle from the "Navlab" project, powered by artificial intelligence software called "Alvin". Can you imagine? 1997. Think about the computers we had that year.

I think the most important part of this video is actually the first few seconds, when the narrator says, "Years ago, who would have thought cellular phones would be as common as they are". If I could go back in time and show that lady my cell phone today, her hair would turn gray on the spot; and yet think about how this small piece of technology changed our lives.

What I'm trying to say is that tools and technology are meant to help us, and they tend to evolve and progress along with our needs and way of life. What worked perfectly fine years ago is not sufficient for the super-fast pace at which companies work nowadays. In our case, technological progress gave us the ability to positively influence many teams: the ability to move fast without worrying about constantly stabilizing and refactoring tests. It gives them the power to release faster, push more code, deploy more frequently, and focus on what really matters: caring about product quality instead of constant maintenance.

About the Author

Daniel Gold is the Head of QA & Automation at Testim.io, a blogger, instructor, and international speaker. Daniel has worked in various roles, including founding QA and test automation departments from scratch. In his current role, Daniel and his team are shaping how the next generation of AI-assisted QA automation tools will help us approach test automation in the future.
