Test Automation in the World of AI & ML

Key Takeaways

  • There are many criteria to consider before building a framework / selecting tools for Functional Test Automation
  • It is very important to prioritise the framework / tool capabilities needed for the software-under-test
  • A good, scalable Test Automation Framework that provides fast and reliable feedback to the team enables collaboration and CI/CD
  • Debugging / RCA (root cause analysis) and support for the libraries / tools used are an afterthought in most cases. Do not fall into that trap.
  • There are some promising commercial tools that fit seamlessly in the Agile way of working. Depending on the complete context, these tools may be a good choice over building your own framework for Functional Automation.

Artificial Intelligence and Machine Learning, fondly known as AI & ML respectively, are the hottest buzzwords in the Software Industry today. The Testing community, service organisations, and Testing Product / Tools companies have also jumped on this bandwagon.

While some interesting work is happening in the Software Testing space, there does seem to be a lot of hype as well. It is unfortunately not easy to separate the genuinely interesting work / research / solutions from the fluff around them. See my blog post - “ODSC - Data Science, AI, ML - Hype, or Reality?” - as a reference.

One of the popular themes currently is “Codeless Functional Test Automation” - where we let the machines figure out how to automate the software product-under-test. A quick search for “codeless test automation” or “ai in test automation” will show a few of the many tools available in this space.

I was very keen on understanding what is really happening here. Is AI really playing a role in Functional Test Automation, or are these marketing gimmicks to lure unsuspecting folks into using the tools?

Before I proceed further, I want to highlight some criteria / requirements I consider crucial from a Functional Test Automation design / tooling / framework perspective, especially in the Agile world.

Criteria and Requirements of Functional Test Automation in the Agile World

Often it is thought that Functional Test Automation should be done only once the feature / product is stable. IMHO, this is a waste of automation, especially now that everyone sees the value of Agile-based delivery practices and incremental software delivery.

With this approach, it is extremely important to automate as much as we can, while the product is being built, using the guidelines of the Test Automation Pyramid. Once the team knows what needs to be automated at the top / UI layer, we should automate those tests.

Given that the product is evolving, the tests will definitely keep failing as the product evolves. This is NOT a problem with the tests, but a sign that the tests have not evolved along with the evolving product.

Now to make the once-passing test pass again, the Functional Test Automation Tool / Framework should make updating / evolving the existing test as easy as possible. The changes may be required in the locators, or in the flow - it does not matter. 

If this process is easy, then team members will get huge value from the tool / framework, and from the tests automated and executed with it.

Clear and visible intent of the automated test

This, to me, is the most important aspect of Test Automation - understanding what has been automated, and whether the test conveys its intent and value, as opposed to being a mere series of UI actions.
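
As an illustration, here is a minimal sketch (in Java, with TestNG) of what intent-revealing automation can look like. LoginPage, HomePage and OrderConfirmation are hypothetical page objects introduced only for this example, not part of any specific framework:

    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class CheckoutTest {
        // LoginPage / HomePage / OrderConfirmation are hypothetical page objects.
        // The business intent of the test is readable at a glance, instead of
        // being buried in a series of click and sendKeys calls.
        @Test
        public void registeredUserCanCheckOutSavedCart() {
            HomePage home = LoginPage.open().loginAs("registered-user");
            OrderConfirmation confirmation = home.goToCart().checkout();
            Assert.assertTrue(confirmation.isOrderPlaced());
        }
    }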

Deterministic and Robust Tests - locators & maintenance

The results of an automated test should always be the same, provided the test execution environment (i.e. product-under-test, test data associated with that test, etc.) is the same. This aspect can also be considered as Test Stability. 

If the test is failing for some reason (defect in the product, or outdated test), the failure reason of the tests should be the same in each repeated execution of that test.

One of the ways to ensure tests are deterministic and robust is to ensure the locators can be identified and updated reliably - thus making maintenance easier. In some cases the tool-set used may have (artificial) intelligence to figure out the next best way to identify the same element, preventing a test failure because an element was not found after its locator changed. This is especially true in cases where unique locators may not be available, or the locators change based on the state of the product.

There can also be different ways of identifying an element uniquely. The tool / framework should allow identification of these multiple locators, and the test author should be able to specify how to use them.

Usually tests will fail here or become flaky for the following reasons:

  • The locators are dynamic - they change on each launch / use of the product 
  • The locators depend on the context of the product-under-test 
    • example: based on the dataset available when running the test

The above factors make it quite annoying and frustrating to implement deterministic and robust automated tests. 

The aspect I am very happy to see in the (relatively) new toolset is the ability to identify an element using various locator-identification strategies. As you run the test more times, the tool learns what the test expectation is, and will try to find the element in the most reliable way possible. This way, the robustness of the test increases - without compromising the quality of the test, or making the test “unintentionally pass”.
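
A minimal hand-rolled sketch of this multiple-locator idea, using plain Selenium WebDriver in Java (this is the fallback concept only, not the learning behaviour of any specific commercial tool):

    import java.util.List;
    import org.openqa.selenium.By;
    import org.openqa.selenium.NoSuchElementException;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class ResilientLocator {
        // Try each locator strategy in order; return the first element found.
        public static WebElement find(WebDriver driver, List<By> locators) {
            for (By locator : locators) {
                List<WebElement> matches = driver.findElements(locator);
                if (!matches.isEmpty()) {
                    return matches.get(0);
                }
            }
            throw new NoSuchElementException("None of the locators matched: " + locators);
        }
    }

A test could then call find(driver, List.of(By.id("login"), By.cssSelector("[data-test='login']"), By.xpath("//button[text()='Login']"))), so a changed id alone no longer fails the test.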

Reusable snippets & ease of authoring / updating / customizing tests

It should be easy to create snippets of automated functionality, and reuse them in different tests, on demand, and potentially with different data values (if required). These snippets may be encompassing simple logic, conditional logic, and may also have aspects of repeatability in them.

Ex: a Login snippet - recorded / implemented once, and used in all tests needing login to be done, with specific data in each case.

Many times, we need to update existing scripts. Reasons for this could be the evolution of the test (as the product-under-test has evolved), making the test robust (ex: to handle varying / dynamic test data), or handling specific cases in order to run the test against some environments, etc.

If the scripts are implemented using open-source tools, like Selenium WebDriver, etc., then we are directly dealing with code, and the task is relatively easier. With good programming practices, the refactoring and evolution can be done.

However, if the scripts are implemented using a non-programming or non-coding based tool (free / commercial), then the task may get tricky. We need to ensure the tool allows specific customizations, and does not require re-implementing the whole test just because of a small(ish) change.
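
In code-based frameworks, such a snippet can be as simple as a parameterized method; a minimal sketch (the locators are illustrative):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class LoginSnippet {
        // Implemented once, reused by every test that needs a logged-in state,
        // each time with its own data values.
        public static void loginAs(WebDriver driver, String username, String password) {
            driver.findElement(By.id("username")).sendKeys(username);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login-button")).click();
        }
    }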

Test Data

Depending on the domain, and type of test, Test data can be simple, or extremely complex. There are various ways to specify test data, like:

  • In test implementation (ex: username / password hardcoded in your Login.java page file)
  • In test specification / intent (ex: in your tests using say, @Test annotations)
  • In code, but separate data structures / classes / etc.
  • External files / data stores
    • CSV
    • JSON
    • YAML
    • Property
    • XML
    • INI
    • Excel
    • Database

The Test Automation tool / framework should - 

  • Support multiple ways to specify / query test data, 
  • Support the optimization of the specification / query of the same
  • Provide the ability to specify different sets of data for different types / suites of tests
  • Provide the ability to specify data for each environment we want to run the tests in (a minimal sketch follows this list)
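
A minimal sketch of environment-specific data using plain property files; the file-naming convention and the -Denv run-time parameter are assumptions for illustration:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class TestData {
        // Load testdata-<env>.properties based on a -Denv=... run-time parameter.
        public static Properties forCurrentEnvironment() throws IOException {
            String env = System.getProperty("env", "dev");
            Properties data = new Properties();
            try (InputStream in =
                    TestData.class.getResourceAsStream("/testdata-" + env + ".properties")) {
                if (in == null) {
                    throw new IOException("No test data file for environment: " + env);
                }
                data.load(in);
            }
            return data;
        }
    }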


Support for API interaction

API testing serves a different purpose and is very valuable. It is the layer of tests below the UI tests in the Test Automation Pyramid.

However, as part of functional test automation, wherever feasible and as supported by the product-under-test, we should leverage the API testing techniques in areas such as:

  • Test data setup / creation
  • Test state setup
    • Ex: Login using APIs, instead of having each test go through the UI flow for login, which is time consuming, and potentially brittle during execution as well
    • We could also use APIs during the test execution to do certain activities which are not necessary to be performed from the UI all the time

The Test Automation Framework / Tool should have the capability to leverage APIs for test data setup / creation. This means:

  • Creation of appropriate API requests using the required headers and parameters
  • Parsing the API response, making sense of it, and, if required, performing assertions on the response (see the sketch below)
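
A sketch of API-based test state setup using only the JDK's HttpClient; the endpoint and the JSON shapes are assumptions for illustration:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ApiLogin {
        // Log in once via the API and reuse the session / token in UI tests,
        // instead of driving the login screen in every single test.
        public static String fetchSession(String baseUrl, String user, String password)
                throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/api/login"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "{\"username\":\"" + user + "\",\"password\":\"" + password + "\"}"))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new IllegalStateException("Login API failed: " + response.statusCode());
            }
            return response.body(); // extract the token / session id from the JSON as needed
        }
    }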

Parallel execution

Functional Test Automation is slow and takes time to execute. As the number of automated tests increases, the time to get this feedback increases, and as a result, the value of these tests decreases. A way to counter this (other than having fewer tests automated at the Functional layer) is to execute tests in parallel.

This also means that we need to ensure that the tests are independent (can be run in any order), and neither share nor rely on the state of the product-under-test created by another test.
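
With independent tests in place, parallel execution is mostly a configuration concern. A sketch with TestNG (the same can be declared in a testng.xml file; the suite name and thread count here are illustrative):

    import java.util.List;
    import org.testng.TestNG;
    import org.testng.xml.XmlSuite;
    import org.testng.xml.XmlTest;

    public class ParallelRunner {
        public static void main(String[] args) {
            XmlSuite suite = new XmlSuite();
            suite.setName("functional-tests");
            suite.setParallel(XmlSuite.ParallelMode.METHODS); // run test methods concurrently
            suite.setThreadCount(4);

            XmlTest test = new XmlTest(suite);
            test.setName("ui-tests");
            // add the test classes to run via test.setXmlClasses(...)

            TestNG testng = new TestNG();
            testng.setXmlSuites(List.of(suite));
            testng.run();
        }
    }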

Ability to run tests against local changes

This is an aspect often ignored / neglected. The implementer of the tests should be able to run the test against local code changes during implementation, or for investigation / RCA of specific issues in the product-under-test. 

Note that this is not about running tests on a particular local machine, but about the ability to run tests against local changes of the product code. Ex: I have fixed a defect and need to run tests against that fix. So I will deploy the code on my machine, and run the (subset of) tests (either locally, or via the cloud) by pointing them at my local (and temporary) environment to get feedback. If all is well with the changes, then I will push my code to the version control system.

This should be a straightforward feature of the automation solution.

Environments

We should be able to run the test against any environment of choice. 

If the code deployed is the same in multiple environments (say, dev, qa, staging), then the test execution results should be the same for each environment the tests were run in.

This change of environment should be a simple configuration change.

The important aspect here is that it should also be possible to segregate / execute tests based on specific environments. There should be a way to specify / tag each test with the set of environments it can be executed on. When running the tests, based on the choice of execution-environment, only the applicable / relevant tests should run automatically.
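
One way to sketch such tagging is with TestNG groups used as environment tags; the group names are assumptions, and most runners let you include / exclude groups from the command line or the suite file:

    import org.testng.annotations.Test;

    public class SearchTests {
        // Tagged to run on every environment.
        @Test(groups = {"dev", "qa", "staging"})
        public void searchReturnsResults() {
            // ...
        }

        // Tagged to run only where a production-like data set exists.
        @Test(groups = {"qa", "staging"})
        public void searchAcrossFullCatalog() {
            // ...
        }
    }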

Multi-browser / Mobile (Native Apps) support

This is another essential aspect. Rarely is any software built in a way that it will be supported on only a specific OS-browser combination, or only on a specific device. Based on the context of the product, it may need multi-browser support, or, if it is a native app, the ability to work on a variety of devices. 

Accordingly, the implemented Functional Tests should be able to run on the various OS-browser combinations, or devices, required by the product-under-test.

The switch to such a different execution environment should be a simple configuration change.
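
A minimal sketch of such a configuration-driven switch with Selenium WebDriver; extending the same pattern to a RemoteWebDriver grid or Appium devices follows naturally:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class DriverFactory {
        // The OS-browser combination is a run-time choice (-Dbrowser=firefox),
        // not a code change.
        public static WebDriver create() {
            String browser = System.getProperty("browser", "chrome");
            switch (browser) {
                case "firefox":
                    return new FirefoxDriver();
                case "chrome":
                default:
                    return new ChromeDriver();
            }
        }
    }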

Debugging & Root-Cause-Analysis (RCA)

Tests are going to fail. In fact, if your tests never fail, it is good practice to check that there is no problem with the tests: change something in the execution ecosystem to ensure they fail, and verify that you see the right type and reason for the failure(s).

The value of your automated tests is to ensure whenever tests fail, the following happens - 

  • The tests failed for some valid reason - i.e. not related to test instability
  • You are able to easily identify the reason for the failure - i.e. RCA is easy
  • In many cases, the result of the test is not sufficient to know why it really failed. The Test Automation Framework / Tool should allow rerunning the test in debugging mode, step-by-step, to allow understanding and finding out the root-cause of the failure, or, better yet, direct you to the specific test element that failed and mention the specific cause.

Based on the RCA, if the test needs to be updated, the Test Automation Framework / Tool should make it easier to fix the problem.
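
Collecting evidence automatically makes RCA much faster. As a sketch, here is a TestNG listener that saves a screenshot on failure; DriverHolder is a hypothetical helper that exposes the test's current WebDriver instance:

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.testng.ITestResult;
    import org.testng.TestListenerAdapter;

    public class ScreenshotOnFailure extends TestListenerAdapter {
        @Override
        public void onTestFailure(ITestResult result) {
            try {
                // DriverHolder is a hypothetical accessor for the current WebDriver.
                File shot = ((TakesScreenshot) DriverHolder.get())
                        .getScreenshotAs(OutputType.FILE);
                Files.createDirectories(Path.of("failures"));
                Files.copy(shot.toPath(), Path.of("failures", result.getName() + ".png"));
            } catch (Exception e) {
                // evidence collection must never break the test run itself
            }
        }
    }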

Version Control

All tests & test code should be in a version control system. This will allow reviewing history / changes as required.

Integration with CI (Continuous Integration) Tools

The core value of any form of automation is the ability and freedom to run the tests as frequently as possible.

I prefer setting up a good CI-pipeline (ref - see “Introduction to pipelines and jobs”) - where for each build triggered, each type of test is automatically and progressively run on each commit. This gives the team early feedback on what has failed as a result of recent commits, and they can debug and fix the issue(s) ASAP.

In order to integrate Functional Test Automation in the CI process, all the capabilities listed in this article, along with setup (install of software / libraries / configurations / etc.) required for the test execution - should be automated - i.e. - done via running relevant scripts, with appropriate parameters through a command line.

Rich Test Execution Reports, with Trend Analysis

Good test execution reports are essential to understanding the state of the product-under-test, especially if the number of tests is large. The reports include metrics and information that help understand the overall quality of the product, and help take meaningful steps to improve the quality of the product. 

The ability to see the test results as a whole, or in parts / subsets, and in different visual ways can provide a lot of meaningful information. 

The reports should have a lot of information about the executed tests, and the state of the product during the execution - for example: 

  • screenshots, 
  • video recording, 
  • server logs, 
  • device logs (if the test ran on a real device),
  • meta-data related to the test execution (ex: CI build-number, product-under-test version, browser / device, OS, OS version, etc.)

Additionally, there would be different types of reports needed for different stakeholders. 

Ex: 

  • Managers may want to see more aggregated reports, trends, flaky tests, etc.
  • Team members would be more interested in the details of a test run, reasons for failures, etc. - i.e. information that would help them do quick root-cause-analysis and take meaningful steps to improve the quality of the product / tests in subsequent runs.

Integrations with other tools / libraries

There are a lot of interesting tools / libraries that do certain things very well.

Example: 

  • If you want to do logging, you can use log4j (a minimal sketch follows this list).
  • If you need to integrate with CI, just provide a command-line interface to all the configuration and execution options of your tests. This way, your tests can be integrated with any CI tool.
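
For instance, a minimal usage sketch of log4j (here, the Log4j 2 API) inside a hypothetical test helper class:

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class CheckoutFlow {
        private static final Logger log = LogManager.getLogger(CheckoutFlow.class);

        public void checkout() {
            log.info("Starting checkout for the current cart");
            // ... UI / API steps of the flow ...
            log.debug("Checkout steps done; asserting order confirmation next");
        }
    }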

To think that all the capabilities needed in your Test Automation have to be built from scratch, or that one tool should provide them all, is not only silly, but will also make the tool very bulky and non-optimal.

The Test Automation Framework / Tool you use should allow easy integration with different tools. This integration will also allow you to get value from your Test Automation sooner.

Integrate with cloud solutions for execution

Implementing automated tests is one aspect. We also need to set up the OS-browser combination infrastructure, or have good device coverage (based on the context of the product-under-test), where the tests will execute.

In some cases, it may be feasible to set up this infrastructure in-house, with the help of virtual machines or simulators. In many cases though, the setup, management and maintenance may become overwhelming. As a result, the focus may move from testing the product to managing and maintaining the infrastructure.

In such cases, there are a lot of interesting cloud-based, or on-premises private-cloud, solutions that allow you to build / implement tests locally, and execute them in the cloud.

This takes the burden / cost of setting up a lab (web / mobile) and managing the infrastructure away from the team, who can instead focus on the core aspects of testing the product.

Some noteworthy cloud-based tools for execution are Sauce Labs, BrowserStack, pCloudy, AWS Device Farm, Google’s Firebase Test Lab, etc.

Visual testing

In some cases, it is not sufficient to just have functional validation. We also need to ensure, with a certain level of tolerance, that the product-under-test appears exactly as designed and expected, over a period of time.

There are many great tools / utilities, both open-source and commercial, that can integrate with the automated tests and do the added visual regression as well. This helps reduce and avoid error-prone manual validation from a visual perspective.

Some noteworthy examples for automating visual testing are Nakal, Galen, Applitools, etc.
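
As an illustration, a visual check with Applitools typically wraps an existing WebDriver test. This sketch follows the shape of its Java SDK; treat the exact calls as an assumption to verify against the current SDK documentation:

    import com.applitools.eyes.selenium.Eyes;
    import org.openqa.selenium.WebDriver;

    public class VisualCheck {
        public static void checkHomePage(WebDriver driver) {
            Eyes eyes = new Eyes();
            eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
            try {
                eyes.open(driver, "MyApp", "Home page renders as designed");
                eyes.checkWindow("home"); // compare against the stored baseline
                eyes.close();             // fails on visual differences beyond tolerance
            } finally {
                eyes.abortIfNotClosed();  // clean up if the test aborted mid-check
            }
        }
    }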

Commercial / Open-source

In the 20 years of my career, I have mostly used open-source tools, but also some commercial tools, for Functional Test Automation.

Over recent years, I have heard of and seen far too many cases where open-source tools are being used because they are “free-of-cost”. This is a big misconception. You may not be paying to use the tool itself, but there is a cost associated with it.

I had my own reasons for preferring NOT to use commercial tools back then - 

  • The tools were too expensive to use - some having cost-per-hour, or cost-per-user model
  • Needed extensive training on the tool for the “chosen” people - as the readily available documentation was not sufficient, and hence training became important. This added cost - for the tool, the person going to use the tool (salary), and the training.
  • Because of the heavy cost of license + training, the tools were made accessible only to the “chosen” few to keep the costs under control
  • The tools needed special, dedicated hardware to run
  • The tools were mostly record-and-playback. This meant when the product evolved, the scripts pretty-much had to be re-recorded
  • The commercial tools were not very easy to use and had a significant ramp up / learning curve. Hence using such commercial tools without support could be a huge roadblock to solving problems and teams would end up needing tool-support to help or guide in proceeding with automation implementation / execution.

Likewise, my reasons for preferring to use open-source tools were - 

  • It gave me flexibility to do what I needed to do - which is a very important feature of any automation framework / tool. This allows the automation to also evolve along with the product-under-test.
  • I could look into the tool source-code and find workarounds / solutions as required, without having to wait for any “support-person” to help solve my problem
  • I didn’t need to pay a ton of money for the tools. If I have, or can learn, the technical skills required for the tool, I can get started directly. (This thought process did not account for cost / salary for the programmer / automation engineer, or if some specific libraries were not available free).
  • I love programming and test design as well - and most of the open-source tools available (back in the days) required programming. 

After reading about and using some of the newer commercial tools though, my thought process has changed. Here are some reasons why -

  • Tools are built with the mindset of working and supporting the “agile-teams”
  • Very easy and fast to automate the tests, with a very low learning curve
  • Easy to update / reuse / customize scripts for a product evolving on a daily basis
  • Great documentation and support
  • Lightweight tools - no heavy / complex installers
  • Good integrations with other tools / libraries
  • The cost of Functional Test Automation may be lower, and the value higher, when using commercial tools
    • For open-source tools, you have to factor in the cost / salary of the developers / SDETs (Software Developers in Test) implementing the automation. Also, implementation, maintenance, refactoring and evolving the automation code does take some time and effort as well.
    • The speed of implementation, configurability, and integration with CI and cloud solutions may be much higher for commercial tools than implementing all this manually. Hence the net-value to the team, in terms of getting tests running and giving feedback, may be higher when using commercial tools.

Support

Implementing Functional Test Automation is all about using the right tools and libraries, and implementing tests using those tools. Because each implementation is different, and the skills and capabilities of the test implementers are different, some form of support is required to help answer questions, fix issues in the tools / libraries, or find / provide workarounds to allow the team to move forward.

This support can be in the form of:

  • User forums / community support
  • Documentation 
  • 24x7 support
  • Interacting / raising issues / giving feedback to the creators of the libraries / tools

In many cases, the available support mechanism is the deciding factor when it comes to selecting tools / libraries for implementing Functional Test Automation.

Interesting Products / Tools

Based on the criteria mentioned above, I embarked on a quest to compare the new and interesting tools in the market from a Functional Test Automation perspective.

A quick look at any tool made me realise how much easier it has become to get started with Test Automation. Also, while evaluating the new generation of tools, I had a déjà vu moment - an idea I had proposed in my talk on “Future of Test Automation Tools & Infrastructure” at the 1st vodQA at ThoughtWorks, Pune in June 2009, and later published in various places like ThoughtWorks blogs, Silicon India, etc.

My notion of record-and-playback tools from the past - big, monolithic tools in which tests are more fragile than a feather in the wind - is now being challenged. These tools are built with the mindset of automating an evolving product, as opposed to the traditional record-and-playback tools that only automated the “stable functionality”.

In my next article, I plan to review some tools, like testim.io, testcraft.io, etc., based on the above criteria.

Keep watching this space for that!


About the Author

Anand Bagmar is a Software Quality Evangelist with 20+ years in the software testing field. He is passionate about shipping a quality product, and specializes in Product Quality strategy & execution, and also building automated testing tools, infrastructure and frameworks. Anand writes testing related blogs and has built open-source tools related to Software Testing – WAAT (Web Analytics Automation Testing Framework), TaaS (for automating the integration testing in disparate systems) and TTA (Test Trend Analyzer). You can follow him on Twitter @BagmarAnand, connect with him on LinkedIn or visit essenceoftesting.com.
