Hit or Miss: Reusing Selenium Scripts in Random Testing

Key Takeaways

  • Test automation is a well-documented and clearly defined approach that enables the same test scripts to run over and over again. However, it is possible to leverage those test automation scripts in a more creative way at the same time.
  • Though it’s complicated to automate analytical thinking, we can definitely introduce randomness into our scripts.
  • The amount of ‘randomness’ in tests can vary: from random inputs and arguments to fully random test cases.
  • While it is hard to match random steps with appropriate verifications, we can use different verification strategies to ensure our application is working as expected.
  • Random testing can’t substitute subjective or traditional testing techniques, but it brings an additional level of confidence in application quality during regression testing.

As defined by Cem Kaner in one of his tutorials, exploratory testing is a style of software testing that emphasizes the personal freedom and responsibility of an individual tester to continually optimize the quality of their work by treating test-related learning, test design, test execution and test result interpretation as mutually supportive activities that run in parallel throughout the project.

In simple terms, the famous software quality and consumer advocate gives the tester both the freedom and the responsibility to test whatever they find applicable within the project. Documenting step-by-step specifications is seen as no longer needed, for the simple reason that it’s barely possible to document creativity, right? In his talk on contextual decision-making in testing at the TestBash 3 conference, Mark Tomlinson supports the idea of a subjective understanding of the system. With that subjective understanding at the core of exploratory, risk-based and session-based testing techniques (let’s call them subjective techniques), the tester is in a position to subjectively identify the places in an application that are most likely to cause failure.

Think about this kinetic optical illusion of a spinning dancer: at specific moments our mind dictates a particular vision of the spinning direction, either left or right. The same applies to testing: we might consider different flows to achieve the same result, or the same flow leading to a different but expected result, or well… any other result.

The entire test execution process using a subjective technique is guided by a great deal of solid analytical thinking and a good portion of “randomizm”. With the latter being a key ingredient, this article deals with automating that yet-unveiled “randomizm”.

To make things clear, test automation is not creativity; it’s a well-documented and clearly defined approach that enables the same test scripts to run over and over again. The question is, how can we leverage those test automation scripts and be more creative at the same time?

Product Quality over Time

A product quality model with documented test scenarios can be outlined as a state machine with external attributes. And that’s what test automation loves. Test automation is all about writing test scripts based on a very specific set of test requirements.

This works well for functional regression testing of a system that is clean, polished, freshly released, and created by guru-developers. Let’s call it Shiny.

But what might the system look like after a really long and tiring development timeline (with several releases, years of support, hundreds of bug fixes, feature requests, etc.)?

Indeed, from a user interface perspective, it might look pretty similar to the good old system, but what is behind the scenes is frequently referred to as a “big ball of mud”.

Within such a system, even with automated scripts, what part of the actual functionality is still tested to the same level as right after the initial production release? The answer probably ranges between 30% and 80%. But what about the rest of the functionality? There is no answer.

For sure, the easiest option is to review all the existing quality documentation, refine old scenarios, introduce new scenarios (just in time), and so on. But in our industry experience, test documentation for legacy systems is, as a rule, outdated; even though keeping it up to date is crucial, it is not always a realistic task.

Well-Designed Architecture of Test Automation Solution

A representative example of a neat test automation solution has three layers (analogous to the UI, business logic, and database layers of a business application): UI/API maps, business logic, and test scripts.

  • UI/API maps represent the technical side of the solution: the UI automation tool level, tightly bound to the UI of the system under test. Methods on this layer might look like focus(), type_text(), click_button().
  • Business logic is a library of keywords that form business actions. A business action is a step that can be performed within the application (e.g., login(), create_user(), validate_user_created()).
  • Test scripts chain business steps together into complete scenarios (see the sketch below).
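
To make the layering concrete, here is a minimal Java/Selenium sketch; the class and method names (LoginPageMap, LoginActions, and so on) are illustrative, not taken from a real project:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Layer 1: UI/API map - thin wrappers around the automation tool, bound to the UI.
class LoginPageMap {
    private final WebDriver driver;

    LoginPageMap(WebDriver driver) {
        this.driver = driver;
    }

    void type_text(By locator, String text) {
        driver.findElement(locator).sendKeys(text);
    }

    void click_button(By locator) {
        driver.findElement(locator).click();
    }
}

// Layer 2: business logic - keywords (business actions) built from UI-map calls.
class LoginActions {
    private final LoginPageMap map;

    LoginActions(LoginPageMap map) {
        this.map = map;
    }

    void login(String user, String password) {
        map.type_text(By.id("username"), user);
        map.type_text(By.id("password"), password);
        map.click_button(By.id("signIn"));
    }
}

// Layer 3: test script - a chain of business steps followed by verifications.
class LoginScenario {
    void run(WebDriver driver) {
        LoginActions actions = new LoginActions(new LoginPageMap(driver));
        actions.login("user", "secret");
        // verification steps such as validate_user_logged_in() would follow here
    }
}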

Putting a Separate Test under the Microscope

Let’s consider a simple documented test case: do this – verify this, do that – verify that, do bla – verify bla. A qualified automation developer will create a set of methods resembling the following:

do_that(), verify_that(), do_this(), verify_this(), do_bla().

The test script calls these exact methods in a very specific sequence:

mySpecifiedCase_1(){
    do_that();
    verify_that();
    do_this();
    verify_this();
    do_bla();
    verify_that();
    verify_this();
}

Since this script no longer finds any bugs, our task at this stage is to make it look for potential system issues.

Randomizm Approach #1 – Nude Random

From a business perspective, any step in an automation solution is valid. In its turn, exploratory testing gives us the freedom to execute any step at any point in time. Mixing the steps is pretty simple: we take the already implemented steps and generate “random” test cases from them, limiting the number of tests and the number of steps per test.

Input: all the business methods in the solution, the number of test scripts to generate, and the number of steps to generate per test script.

Output: scripts similar to the following:

myRandomCase_1(){
    do_that();
    do_bla();
    verify_this();
}

It is pretty obvious that even if some test cases happen to run successfully, the majority would still fail, as a huge number of them would try to perform invalid actions: verify_this() will fail if do_this() was not previously executed.
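
A minimal sketch of such a “nude random” generator, assuming business methods are referred to simply by name (how the names are later bound to real method calls is out of scope here, and the generator class itself is hypothetical):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// "Nude random": pick any business method at any point, with no ordering rules at all.
class NudeRandomGenerator {
    private final Random random = new Random();

    // Returns a randomly ordered test case, e.g. [do_that, do_bla, verify_this].
    List<String> generateTestCase(List<String> businessMethods, int stepsPerTest) {
        List<String> testCase = new ArrayList<>();
        for (int i = 0; i < stepsPerTest; i++) {
            testCase.add(businessMethods.get(random.nextInt(businessMethods.size())));
        }
        return testCase;
    }
}

Calling generateTestCase with the five methods listed earlier and a length of three could produce exactly the myRandomCase_1 sequence above.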

Randomizm Approach #2 – Random Method with Prerequisite

The idea behind this approach is to add a step into a workflow only if a prerequisite step is already included in the workflow; the codebase should be enriched in a way that lets the test case generator understand and ensure the right sequence. This can be implemented via attributes or annotations put on top of methods:

@Requires(do_this)
verify_this()
{…}

Now, we get:

myRandomCase_2(){
    do_bla();
    do_this();
    verify_this(); // can be added, because the prerequisite step is already in the test
}
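
A possible shape of this mechanism in Java is sketched below; the annotation and filter classes are assumptions (Java annotation values cannot reference methods directly, so the prerequisite is passed as a method-name string):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.List;

// Hypothetical prerequisite annotation, mirroring @Requires above.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Requires {
    String value(); // name of the business method that must already be in the test
}

class PrerequisiteFilter {
    // A step may be added only if it has no prerequisite,
    // or its prerequisite is already present in the generated sequence.
    boolean canAdd(Method step, List<String> stepsSoFar) {
        Requires requires = step.getAnnotation(Requires.class);
        return requires == null || stepsSoFar.contains(requires.value());
    }
}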

That’s a much more predictable approach. But what if do_this() and verify_this() must be executed on the same page (page1), while do_bla() navigates to page2?

In this case we face a new issue: verify_this() would definitely fail because it cannot find the controls or the context it needs to be executed in.

Randomizm Approach #3 – Context Awareness

The test generator should be aware of the context (in the case of web development, a page) in which each step can be executed. Again, attributes/annotations may be used to tell the generator about the active context.

@RequiresContext(pageThis)
verify_this()
{…}

@RequiresContext(pageThis)
do_this()
{…}

@RequiresContext(pageThis)
@MovesContextTo(pageThat)
do_bla()
{…}

In this case, do_this() and verify_this() will not be placed after a method that changes the context to pageThat or that requires the pageThat context.

This lets us generate a test script like the following:

myRandomCase_3(){
    do_this();
    do_bla();
    do_that();
}
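
Here is a sketch of how a generator could respect such context annotations; the annotation definitions and the ContextTracker class are assumptions built on the snippets above:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical context annotations, mirroring @RequiresContext / @MovesContextTo above.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RequiresContext {
    String value();
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MovesContextTo {
    String value();
}

class ContextTracker {
    private String currentContext = "pageThis"; // starting page, illustrative

    // A step fits only if it has no context requirement or requires the current context.
    boolean fits(Method step) {
        RequiresContext required = step.getAnnotation(RequiresContext.class);
        return required == null || required.value().equals(currentContext);
    }

    // After a step is added, update the tracked context if the step navigates elsewhere.
    void apply(Method step) {
        MovesContextTo moves = step.getAnnotation(MovesContextTo.class);
        if (moves != null) {
            currentContext = moves.value();
        }
    }
}

The generator would call fits() before adding a step and apply() right after, so do_this() and verify_this() are never scheduled while the tracked context is pageThat.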

Alternatively, this can be implemented through method chaining. Assuming that business methods return page objects, the test case generator keeps track of the page displayed in the browser before and after each “step” execution and is able to figure out the correct page object on which to call verification or “step” methods. This approach still needs an additional check to verify that the flow is correct, but it can be achieved without any annotations (see the sketch below).
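
Here is a rough sketch of the chaining idea, assuming page-object classes follow the “Page” naming convention described later in this article; the generator class itself is hypothetical:

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Context tracking without annotations: if business methods return page objects,
// the next step is drawn from the methods of the page returned by the previous step.
class ChainingGenerator {
    private final Random random = new Random();

    List<Method> generate(Class<?> startPage, int steps) {
        List<Method> testCase = new ArrayList<>();
        Class<?> currentPage = startPage;
        for (int i = 0; i < steps; i++) {
            // A real generator would filter these down to public business methods only.
            Method[] candidates = currentPage.getDeclaredMethods();
            if (candidates.length == 0) {
                break;
            }
            Method next = candidates[random.nextInt(candidates.length)];
            testCase.add(next);
            // If the method returns another page object, continue from that page.
            if (next.getReturnType().getSimpleName().contains("Page")) {
                currentPage = next.getReturnType();
            }
        }
        return testCase;
    }
}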

Filtering the Right Cases

So far, the described approaches allow generating a relatively large number of test cases to run.

The main issue is the rather considerable amount of time needed to determine whether a failed test scenario reveals an actual bug in the application rather than a flaw in your automation script logic.

Now, it is possible to implement an Oracle class that tries to predict whether the produced output is satisfactory or indicates an error, with subsequent analysis if needed. However, in our case we went a slightly different way.

Here is a set of rules signifying that the application has failed as a result of a bug:

  • Error 500 or a similar page
  • JavaScript error
  • “Unknown error” or a similar popup in case of a misuse
  • Messages about exceptions and/or error situations in the application log
  • Any additional product-specific error identifications

In this case, the application state may be validated after each step has been executed. That would make our autogenerated scripts look like:

myRandomCase_3(){
    do_this();
    validate_standard_rules();
    do_bla();
    validate_standard_rules();
    do_that();
    validate_standard_rules();
}

Here the validate_standard_rules() method checks for the issues described earlier.

Note: combined with OOP, this approach is rather powerful and capable of detecting actual bugs. General checks implemented in a Page Object superclass look for “general issues” such as JavaScript errors, logged application errors, etc.; for page-specific sanity checks, override this method in the corresponding page object and add the extra checks there.
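
A minimal sketch of such a superclass, assuming the WebDriver instance was started with browser log capturing enabled; the page class and locator here are illustrative, not taken from the project described below:

import java.util.List;
import java.util.logging.Level;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.logging.LogEntry;
import org.openqa.selenium.logging.LogType;
import org.testng.Assert;

// General checks live in the page-object superclass; subclasses add page-specific checks.
abstract class BasePage {
    protected final WebDriver driver;

    protected BasePage(WebDriver driver) {
        this.driver = driver;
    }

    public void sanityCheck() {
        // "General issue": severe JavaScript errors reported by the browser console.
        List<LogEntry> jsErrors = driver.manage().logs().get(LogType.BROWSER).filter(Level.SEVERE);
        Assert.assertTrue(jsErrors.isEmpty(), "JavaScript errors found: " + jsErrors);
        // Error-page detection, application-log scanning, etc. would be added here as well.
    }
}

class InboxPage extends BasePage {
    InboxPage(WebDriver driver) {
        super(driver);
    }

    @Override
    public void sanityCheck() {
        super.sanityCheck(); // general checks first
        // Page-specific check: the inbox container should be present (illustrative locator).
        Assert.assertFalse(driver.findElements(By.cssSelector(".inbox")).isEmpty(),
                "Inbox page is not displayed");
    }
}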

The Experiment

To carry out an experiment, we decided to use a publicly available mail system. Considering the popularity of Gmail and Yahoo, the chances are high that all the bugs in those systems have already been found, so we opted for ProtonMail.

Taking Over Random

Having assumed that the automation solution is already in place, we are “adopting” automation tests from the Shiny system: as a starting point, we had a generic Java/Selenium test project with a few smoke tests implemented using the Page Object pattern. Following the known best practices, each of our business methods returns either a new Page Object (for the page displayed in the browser at the end of the business method) or the current Page Object if the page has not changed.

In order to do automated exploratory testing, we added classes in the explr.core package, with TestCaseGenerator and TesCaseExecutor being of the most interest.

TestCaseGenerator

To generate a new “random” test case, you can call one of the two ‘generateTestCase’ methods of the TestCaseGenerator class. Both of them take as an argument an integer representing the number of step-verification pairs needed in the generated test case. The second method takes an additional argument representing the “verification strategy” to be used (the first one uses the default strategy, USE_PAGE_SANITY_VERIFICATIONS in our case).

Verification strategies represent the approach used when adding “check” steps to the test case. Currently we have two options:

  • USE_RANDOM_VERIFICATIONS: the first and the most obvious option. The idea behind it is to use the existing verification methods from the page objects. The drawback of this approach is that it is highly context-bound. For example, say we have randomly chosen a method that verifies whether a message with a specific subject exists. Firstly, we have to know what subject to look for. To handle this, we have introduced the @Default annotation and the DefaultTestData class: DefaultTestData contains generic test data to be used in random testing, and the @Default annotation binds this data to a specific method argument. Then we have to make sure that a message with this subject exists before the verification (created during the execution of this test or any of the previous tests). This can be achieved with the @Depends annotation, which tells TestCaseGenerator to check for a specific method call and add it if it wasn’t found prior to the current step (see the sketch after this list). In addition, we also have to make sure that our message was not deleted before the verification. We have found that dependencies greatly reduce the level of randomness in the generated test cases and that they are not as stable as needed.
  • USE_PAGE_SANITY_VERIFICATIONS: this strategy checks for obvious application failures, such as a wrong page being displayed, error messages, JavaScript errors, errors in the application log, etc. It provides more flexibility in terms of dependencies and allows implementing page-specific checks if needed, making it flexible enough to find actual bugs. Currently it is used as the default verification strategy.
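
To illustrate the first strategy, here is how an annotated verification might look; the annotation signatures and the MailboxPage class are assumptions, only the names @Default, @Depends and DefaultTestData come from the description above:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical shapes of the annotations described above.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Depends {
    String value(); // business method that must run earlier in the test case
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
@interface Default {
    String value(); // key into DefaultTestData holding generic test data
}

class MailboxPage {
    // The generator inserts sendNewMessageToMe() earlier in the test case if it is missing,
    // and fills the subject argument from DefaultTestData.
    @Depends("sendNewMessageToMe")
    public void verifyMessageExists(@Default("MESSAGE_SUBJECT") String subject) {
        // look the message up by subject and fail the verification if it is not found
    }

    public MailboxPage sendNewMessageToMe() {
        // sends a message with the default subject to the current account
        return this;
    }
}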

The TestCaseGenerator class searches for page objects by class name: every class that contains the string “Page” in its name is considered a page object. All the public methods of a page object are considered business methods. Business methods that contain the string “verify” in their names are treated as verifications, and all other methods are treated as test steps. The @IgnoreInRandomTesting annotation is used to exclude utility methods or whole page objects from the list (a sketch of this discovery step follows).
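
A reflection-based sketch of that discovery step; how the candidate classes themselves are collected (classpath scanning, a hand-maintained list) is left out, and the scanner class is hypothetical:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Hypothetical shape of the exclusion annotation named above.
@Retention(RetentionPolicy.RUNTIME)
@interface IgnoreInRandomTesting {
}

// Splits page-object methods into test steps and verifications by naming convention.
class BusinessMethodScanner {
    final List<Method> steps = new ArrayList<>();
    final List<Method> verifications = new ArrayList<>();

    void scan(List<Class<?>> candidateClasses) {
        for (Class<?> candidate : candidateClasses) {
            // Only classes with "Page" in the name are page objects; excluded classes are skipped.
            if (!candidate.getSimpleName().contains("Page")
                    || candidate.isAnnotationPresent(IgnoreInRandomTesting.class)) {
                continue;
            }
            for (Method method : candidate.getDeclaredMethods()) {
                // Only public, non-excluded methods count as business methods.
                if (!Modifier.isPublic(method.getModifiers())
                        || method.isAnnotationPresent(IgnoreInRandomTesting.class)) {
                    continue;
                }
                if (method.getName().contains("verify")) {
                    verifications.add(method);
                } else {
                    steps.add(method);
                }
            }
        }
    }
}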

The next step is to generate the test case by randomly selecting methods from two lists: the list that contains test steps and another list with verifications (if a verification step is required by the selected verification strategy). After the first method is selected, its return value is checked: if it is another page object, the next step is selected from that page’s methods (see the note above). There is a 10% chance of jumping to a completely random page, which helps avoid cycling between two pages. If the method has any dependencies marked with the @Depends annotation, those are also resolved and added if needed.

To avoid the case where a test method is called on an object other than the currently displayed page, the generated test case passes through an additional validation that adds any missing navigation calls.

TesCaseExecutor

After generation, the test case is basically a list of class-method pairs. It has to be executed or saved in a specific way. Even though runtime execution is possible, writing it to a file is a better approach in terms of debugging and further analysis.

Since the generated test case can be executed in several different ways, TesCaseExecutor is an interface; its only implementation, SaveToFileExecutor, simply creates a .java file that represents a generated test case. Surprisingly, this rather simple solution fits our needs best: it was quick to implement, and it allows in-depth analysis of a test run as well as insight into how the test was generated. The only drawback is that you have to manually compile and run the generated test cases, which is not crucial in this experimental scenario, though.

SaveToFileExecutor generates the code of the test case, which is turned into a compilable file using a template. Here is an example of a test generated this way:

@Test(dataProvider = "WebDriverProvider")
public void test(WebDriver driver){
    login(driver);
//****<Generated>****
    ContactsPage contactspage = new ContactsPage(driver, true);
    InboxMailPage inboxmailpage = contactspage.inbox();
    inboxmailpage.sanityCheck();
    ComposeMailPage composemailpage = inboxmailpage.compose();
    composemailpage.sanityCheck();
    composemailpage.setTo("me@myself.com");
    composemailpage.send();
    inboxmailpage.sanityCheck();
    List list = inboxmailpage.findBySubject("Seen that?");
    inboxmailpage.sanityCheck();
    inboxmailpage.inbox();
    inboxmailpage.sanityCheck();
    DraftsMailPage draftsmailpage = inboxmailpage.drafts();
    draftsmailpage.sanityCheck();
    inboxmailpage.inbox();
    inboxmailpage.sanityCheck();
    inboxmailpage.sendNewMessageToMe();
    inboxmailpage.setMessagesStarred(true, "autotest", "Seen that?");
    inboxmailpage.sanityCheck();
    TrashMailPage trashmailpage = inboxmailpage.trash();
    trashmailpage.sanityCheck();
//****</Generated>****
}

SaveToFileExecutor generates the code between the <Generated> comments; the template adds the rest of the code.

Our generated test cases are not very diverse in terms of actions performed, but this can be easily addressed by adding more page objects that will contain more test steps.

After running a thousand “random” tests, we found that ProtonMail had no major errors (such as error pages), but some JavaScript errors were reported by the browser, which can be crucial for a system that relies on JavaScript for mail decoding/encoding. Obviously, we had no access to the server logs to form a full picture, but for our experiment it was enough to show how such an approach can improve the quality of the system under test.

Yet again, random testing can’t substitute subjective or traditional testing techniques, but it brings an additional level of confidence in application quality during regression testing.

The scripts mentioned in the article can be found here.

About the Authors

Oleksandr Reminnyi is a Test Automation Expert at SoftServe. With 12 years of experience in software development, he uses knowledge of different IT areas to come up with the best possible solutions for any given problem. Oleksandr is a speaker at numerous Ukrainian and international conferences such as Atlassian Summit, ITWeekend, HotCode, TC World, Information Energy Netherlands, and SQA Days. In his articles, Oleksandr outlines different aspects of development, test automation, common mistakes and problem-solving patterns.

Pavlo Vedilin is a Test Automation Expert at SoftServe. He has 8 years of experience in IT, having worked as a QA and a JEE developer, and has focused on test automation and UI scripting for six years. Pavlo has been working on projects related to engineering software, web services and security software, involving a variety of test automation tools for all kinds of testing and UI scripting. Pavlo believes that automation is fun and easy, and that it should be used to make computers cope with routine and boring tasks.
