Functional GUI Testing Automation Patterns
The process of developing an automated functional test solution for a specific program is not much different from the process of creating the program itself. Automated testing is a relatively young field that is still undergoing rapid advancement, improvement and standardization, and new tools keep appearing to interact with the system under test (SUT).
Currently there is a wide selection of methodologies and approaches to software development: object-oriented programming, functional programming, Domain Driven Design, Test Driven Development, Behavior Driven Development, and so on. These approaches come with well-defined concepts and principles that simplify the definition of the initial system architecture, the understanding of the system, and the exchange of knowledge between developers.
I will mainly target test automation of GUI (Graphical User Interface) applications, where the system under test (SUT) is a black box for the automation developer. (A SUT is simply the system being tested for correct operation: for desktop software it is the application itself; for browser-based systems it is the website or web project.) This situation is common for a high percentage of corporate legacy systems, as well as for fresh systems developed without the testability quality attribute in mind.
Preparation and definition of best practices are critical parts of automated test development. The picture below demonstrates the traditional interaction between a system under test and a tester:
Interaction between a tester and a SUT
At the center of such a system is a person in the role of a tester. The tester replicates the scenarios described in test cases through manual interaction and visual analysis of the application, using specific access tools for the non-visual interfaces of the SUT. In case of a failure or unexpected behavior of the system, the tester enters information about the incorrect behavior into a defect tracking system.
The main objective of automated testing is the elimination (or at least minimization) of human interaction with the SUT; this matters especially in continuous delivery product development cycles. A review of the literature shows that there are plenty of automated testing systems. Commercial products usually declare a set of specific requirements and recommendations that work only with their own products, but it is hard to find a set of tool-agnostic practices applicable to any automation tool.
Automation tool vendors also often resort to marketing tricks and describe the advantages of their systems based on a small number of functional tests. But as the number of automated tests grows, maintenance of the existing tests becomes the most costly part of working with the system.
Automated testing frameworks are intended to help solve these problems. They define basic reusable components for the system, declare best practices and unify automation approaches. To develop an automated testing framework correctly, you need to be guided by tool-independent best practices.
Patterns of Automated Functional Testing
As an example of applying an automation solution, let us review the following web-application automation problem (Figure 1). The application contains a login page, and each test has to go through the login page to reach further functionality.
Figure 1: An example of a simple web-application with a minimal set of pages and functions.
The classification scheme (Figure 2) gives a generalized view of all the functional testing patterns described later in this article.
Figure 2: Classification of automated functional testing patterns.
Patterns of test implementation
Recorded implementation (record and playback)
Test implementation is performed by an automated testing tool that records a manual tester's actions and plays them back. Recorded tests are to some extent considered a bad practice because of their expensive maintenance.
Programmed implementation
Test implementation is performed by a programmer using the API of the automation tools (the Selenium WebDriver API, for example).
Implementation of a basic template (Test Template)
Implementation of a basic test template class. Test variations are created by inheritance and expansion of the template class functionality.
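A minimal sketch of the Test Template pattern, shown here in Python (the class and step names are illustrative, not taken from any specific framework):

```python
# Test Template: a base class fixes the test skeleton; subclasses
# override individual steps to produce test variations.

class LoginTestTemplate:
    """Base template: the run() skeleton is shared by all variations."""

    def open_login_page(self):
        return ["open login page"]

    def enter_credentials(self):
        return ["enter valid credentials"]

    def verify(self):
        return ["expect home page"]

    def run(self):
        # The fixed skeleton: every variation executes the same stages.
        steps = []
        steps += self.open_login_page()
        steps += self.enter_credentials()
        steps += self.verify()
        return steps


class InvalidPasswordTest(LoginTestTemplate):
    """A variation created by overriding only the steps that differ."""

    def enter_credentials(self):
        return ["enter invalid password"]

    def verify(self):
        return ["expect error message"]
```

A variation such as `InvalidPasswordTest` reuses the template's skeleton and changes only the steps that differ.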
Data driven implementation
Implementation of a basic test is defined by a test case. Test variations are created by a set of various input data combinations.
This approach is implemented in most unit testing frameworks. For instance, MSTest exposes a TestContext property inside the test that gives access to the current data row (a key-value collection), and the same test method body is run many times, each time with different data in that row.
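A stdlib Python analogue of the same idea, using `unittest.subTest` to run one test body over many data rows (the login function is a stand-in for the real SUT call):

```python
# Data-driven testing with the standard library: one test body,
# many input rows (a stdlib analogue of a data-driven runner).
import unittest

LOGIN_ROWS = [
    # (username, password, expected_success) -- illustrative data
    ("admin", "correct-password", True),
    ("admin", "wrong-password", False),
    ("", "", False),
]

def try_login(username, password):
    """Stand-in for the real SUT call; assumed behavior for this sketch."""
    return username == "admin" and password == "correct-password"

class LoginDataDrivenTest(unittest.TestCase):
    def test_login(self):
        for username, password, expected in LOGIN_ROWS:
            # subTest reports each row separately, like a data-driven runner.
            with self.subTest(username=username, password=password):
                self.assertEqual(try_login(username, password), expected)
```

Adding a test variation means adding a row to `LOGIN_ROWS`; the test body never changes.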
Keyword driven implementation
Test implementation with the help of keywords (Click, Enter, etc.). Tests are implemented via special IDEs that can hook into the application's UI.
Currently there are several software tools that allow implementing tests with keywords. Test steps are presented as a combination of a keyword, the name of a control on the screen, and input parameters. Good examples of such IDEs are HP QTP and MonkeyTalk.
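The core of such a tool can be sketched as a tiny interpreter: a test is a data table of (keyword, control, value) steps, and an engine dispatches each keyword to a handler. This Python sketch only logs actions; a real tool would drive the actual UI.

```python
# Keyword-driven testing: a test is a table of steps, each step a
# (keyword, control name, input value) triple interpreted by an engine.

class KeywordEngine:
    """Illustrative interpreter; a real tool would drive the real UI."""

    def __init__(self):
        self.screen = {}          # control name -> current value
        self.log = []

    def execute(self, steps):
        for keyword, target, value in steps:
            handler = getattr(self, "do_" + keyword.lower())
            handler(target, value)

    def do_enter(self, target, value):
        self.screen[target] = value
        self.log.append(f"Enter '{value}' into {target}")

    def do_click(self, target, _value):
        self.log.append(f"Click {target}")

test_steps = [
    ("Enter", "UserName", "admin"),
    ("Enter", "Password", "secret"),
    ("Click", "LoginButton", None),
]

engine = KeywordEngine()
engine.execute(test_steps)
```

The test itself (`test_steps`) is plain data, which is what lets non-programmers author tests in such IDEs.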
Model driven implementation
At any given moment of time, with specific input data, an application can exist in only one specific state. By this definition we can picture the program as a finite state machine (finite automaton). Given this fact and the availability of state and transition models (Figure 1 as an example), we can define sets of transitions (workflows) between pages that cover most of the program's functionality.
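A minimal model of this kind can be a transition table plus a function that replays a workflow over it. The page and action names below are assumed from the Figure 1 example, not taken from any real tool:

```python
# Model-driven testing: the SUT modeled as a finite state machine;
# workflows are walks over the transition table.

TRANSITIONS = {
    ("Login", "login"): "Home",
    ("Home", "create_user"): "CreateUser",
    ("CreateUser", "save"): "Home",
    ("Home", "logout"): "Login",
}

def walk(start, actions):
    """Replay a workflow; raise if the model forbids a transition."""
    state = start
    path = [state]
    for action in actions:
        key = (state, action)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal transition {action!r} from {state!r}")
        state = TRANSITIONS[key]
        path.append(state)
    return path
```

Generating walks that cover every edge of `TRANSITIONS` yields a test suite that exercises most of the modeled functionality.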
Multi Layered Test Solution
It splits logic of the testing system into separate logical layers.
It is a widespread practice to split a software system architecturally into separate layers: the first layer encapsulates presentation logic, the second is the business logic layer, and the third is responsible for data storage. This paradigm decreases the cost of application maintenance, since the components inside each layer can be changed with minimal impact on the other layers. The same approach can be applied to the testing system.
The test code can be split into three layers: the layer of UI automation tool interfaces for accessing the system under test (SUT), the layer of functional logic, and the test case layer. Each layer has its own responsibility, with the common goal of decreasing test maintenance expenses and facilitating the creation of new tests.
Figure 3: Architectural archetype – multilayered architecture of the test system
Meta Framework
The pattern defines a set of basic independent utility classes that are generic for any automation tool and can be reused between different automation projects.
Such a solution may be needed when different projects are tested inside one organization and corporate standards require a unified interface for results. The Meta Framework also improves code reuse metrics between projects, as it may include useful utility methods. Basic classes for both functional and test objects simplify knowledge transfer between projects. The Meta Framework is displayed on the right side of Figure 3.
Functional composition patterns
The pattern abstracts the application-specific business function from its implementation on UI, API or another level.
Many tools for automated testing allow the creation of so-called "recorded scenarios": a test developer performs certain actions in the application, and a test script is created automatically. The script can later be replayed to check whether the program still behaves correctly after changes.
Example: changing the appearance of the login page would require changes in all recorded scenarios. If we instead extract the login operation into a method such as Application.Login(username, password) and use this method in all tests, then any change on the login page requires modifying only this one functional method, and the fix automatically propagates to all tests that use it.
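The extraction can be sketched as follows, here in Python with a hypothetical FakeDriver standing in for a real WebDriver (locators and names are illustrative):

```python
# Extracting the business function from the UI details: tests call
# Application.login(); only this one method knows the page structure.

class FakeDriver:
    """Minimal stand-in for a UI automation driver."""
    def __init__(self):
        self.typed = {}
        self.clicked = []

    def type_into(self, locator, text):
        self.typed[locator] = text

    def click(self, locator):
        self.clicked.append(locator)

class Application:
    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        # If the login page changes, only these locators are updated;
        # every test that calls login() picks the change up for free.
        self.driver.type_into("#user", username)
        self.driver.type_into("#pass", password)
        self.driver.click("#login-btn")

driver = FakeDriver()
Application(driver).login("admin", "secret")
```

The tests never see the `#user` / `#pass` locators; they speak only in terms of the business function `login`.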
Figure 4: Interaction of the test script with the user interface without the transitional layer of functional methods (a) and with the layer of functional methods (b). The filled objects are changed when the application is changed.
Page Object
The pattern groups the functional methods of a certain page.
The functional methods for the application pictured in Figure 1 could be moved into a single class, as there are only a few of them. But to improve code maintainability, the pattern suggests grouping the methods according to the pages they represent: PageLogin with the method Login(); PageHome with the methods Logout() and CreateUser().
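A Python sketch of this grouping, again with an illustrative FakeDriver; the convention of returning the next page object from navigation methods is a common (assumed) companion practice:

```python
# Page Object: functional methods grouped by the page they operate on.

class FakeDriver:
    def __init__(self):
        self.actions = []
    def do(self, action):
        self.actions.append(action)

class PageLogin:
    def __init__(self, driver):
        self.driver = driver
    def login(self, username, password):
        self.driver.do(f"login as {username}")
        return PageHome(self.driver)   # navigation yields the next page object

class PageHome:
    def __init__(self, driver):
        self.driver = driver
    def create_user(self, name):
        self.driver.do(f"create user {name}")
        return self
    def logout(self):
        self.driver.do("logout")
        return PageLogin(self.driver)

driver = FakeDriver()
home = PageLogin(driver).login("admin", "secret")
home.create_user("alice").logout()
```

Returning page objects lets tests chain calls in the order a user would navigate.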
It groups functional objects and/or functional methods of a specific application into one module suitable for reuse.
Startup and teardown object of the SUT (SUT Runner)
It performs the initial launch of the system under test and its initialization; after the test run, the same object releases the resources associated with the system.
Among the functional methods we can distinguish a set that is not related to functionality testing: for example, launching a web browser and navigating to the login page of the SUT. After the test run the web browser should be closed. The SUT Runner is responsible for such general activities.
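In Python, a context manager is a natural shape for such a runner; the browser and URL below are illustrative stand-ins, not a real WebDriver:

```python
# SUT Runner: one object owns startup (launch browser, open login page)
# and teardown (close browser), keeping tests free of housekeeping.

class FakeBrowser:
    def __init__(self):
        self.open = False
        self.url = None
    def start(self):
        self.open = True
    def navigate(self, url):
        self.url = url
    def quit(self):
        self.open = False

class SutRunner:
    LOGIN_URL = "https://sut.example/login"   # illustrative URL

    def __init__(self, browser):
        self.browser = browser

    def __enter__(self):
        self.browser.start()
        self.browser.navigate(self.LOGIN_URL)
        return self.browser

    def __exit__(self, exc_type, exc, tb):
        self.browser.quit()       # runs even if the test failed
        return False

browser = FakeBrowser()
with SutRunner(browser) as b:
    assert b.url == SutRunner.LOGIN_URL   # the test starts from a known page
```

Because teardown lives in `__exit__`, the browser is closed even when a test inside the `with` block fails.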
Object Source (Object Mother, Object Genie, Object Factory)
It creates objects in initialized and required form for test execution.
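A small Python sketch of the Object Mother idea; the `User` shape and the named creation methods are illustrative assumptions:

```python
# Object Mother: one place hands out fully initialized test objects,
# so tests do not repeat construction details.
import dataclasses

@dataclasses.dataclass
class User:
    name: str
    password: str
    role: str
    active: bool = True

class UserMother:
    """Centralized creation of users in the states tests need."""

    @staticmethod
    def admin():
        return User(name="admin", password="secret", role="admin")

    @staticmethod
    def regular():
        return User(name="user", password="secret", role="member")

    @staticmethod
    def locked_out():
        # Derived state built on top of another mother method.
        user = UserMother.regular()
        user.active = False
        return user
```

If the `User` constructor changes, only the mother methods are touched; every test keeps asking for `UserMother.admin()`.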
Transporter
It centralizes navigation control in the tested system according to the test requirements.
This object encapsulates the whole logic associated with navigation within the tested system, so that business logic is not intermixed with navigation concerns.
For the case described in Figure 1 we would have a Transporter class with methods such as NavigateToLogin(), NavigateToHomePage(), NavigateToCreateUser(), etc. Alternatively, each separate page object may have its own transport methods and thereby act as a transporter on its own.
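A Python sketch of such a Transporter; the base URL and routes are illustrative, and FakeDriver again stands in for a real driver:

```python
# Transporter: navigation logic centralized in one class, so tests and
# business methods never encode "how to get there".

class FakeDriver:
    def __init__(self):
        self.url = None
    def go(self, url):
        self.url = url

class Transporter:
    BASE = "https://sut.example"    # illustrative base URL

    def __init__(self, driver):
        self.driver = driver

    def navigate_to_login(self):
        self.driver.go(self.BASE + "/login")

    def navigate_to_home_page(self):
        self.driver.go(self.BASE + "/home")

    def navigate_to_create_user(self):
        self.driver.go(self.BASE + "/users/new")

driver = FakeDriver()
transporter = Transporter(driver)
transporter.navigate_to_create_user()
```

If a route changes, only the corresponding transporter method is edited; no test contains a URL.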
Composite Page Object
It aggregates reused page objects in one external object.
This pattern allows structuring page objects in a more "object-oriented" way by separating sub-objects that can be reused on different pages and including them in the parent objects.
Figure X: Using the Navigation page object through aggregation in the Home and Create User page objects.
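The aggregation can be sketched as follows; the `NavigationBar` sub-object and its controls are illustrative:

```python
# Composite Page Object: a navigation bar appears on several pages,
# so it becomes its own page object aggregated by the parent pages.

class NavigationBar:
    """Reused sub-object; owns only the shared navigation controls."""
    def __init__(self):
        self.clicks = []
    def click_logout(self):
        self.clicks.append("logout")

class PageHome:
    def __init__(self):
        self.navigation = NavigationBar()   # aggregation, not inheritance

class PageCreateUser:
    def __init__(self):
        self.navigation = NavigationBar()   # the same sub-object, reused

home = PageHome()
home.navigation.click_logout()
```

A change to the navigation bar's markup is fixed once, inside `NavigationBar`, for every page that aggregates it.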
Extended Page Object
It extends basic page object through inheritance and makes an alternative to the composite page object.
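The inheritance-based alternative, sketched in Python with illustrative names:

```python
# Extended Page Object: shared behavior lives in a base page object;
# concrete pages extend it through inheritance.

class BasePage:
    def __init__(self):
        self.clicks = []
    # Shared controls, available on every page that extends BasePage.
    def click_logout(self):
        self.clicks.append("logout")

class PageHome(BasePage):
    def create_user(self):
        self.clicks.append("create user")

home = PageHome()
home.create_user()
home.click_logout()       # inherited, not re-implemented
```

Inheritance keeps call sites flat (`home.click_logout()` rather than `home.navigation.click_logout()`), at the cost of coupling every page to one base class.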
Three stage test
It splits the test execution process into three stages:
- given (defines the preconditions);
- when (performs the operations under test);
- then (checks the results).
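The three stages made explicit in a single test, using stdlib `unittest`; the SUT is reduced to a toy login function for the sketch:

```python
# A three stage (given / when / then) test, stages marked by comments.
import unittest

def try_login(username, password):
    """Stand-in for the real SUT operation; assumed behavior."""
    return username == "admin" and password == "secret"

class ThreeStageLoginTest(unittest.TestCase):
    def test_valid_login(self):
        # given: a known registered user
        username, password = "admin", "secret"
        # when: the login operation is performed
        logged_in = try_login(username, password)
        # then: the result is verified
        self.assertTrue(logged_in)
```

Keeping the stages visually separated makes it easy to see what a failing test was actually checking.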
Four stage test
It splits the test execution process into four stages:
- Defining the preconditions;
- Calling business functions;
- Checking results;
- System teardown.
It allows performing both business operations and checks within one test. They can alternate to achieve the final goals of the test.
The pattern defines a mechanism that allows continuing the test run after a non-critical fault.
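One common shape for such a mechanism is a "soft assert" accumulator: non-critical checks record failures instead of aborting, and the test fails at the end with all faults listed. The class below is an illustrative sketch, not a specific library's API:

```python
# Continuing past non-critical faults: collect soft failures and
# report them together at the end of the test.

class SoftVerify:
    def __init__(self):
        self.failures = []

    def check(self, condition, message):
        """Record a failure instead of aborting the test run."""
        if not condition:
            self.failures.append(message)

    def assert_all(self):
        # Only now does the test actually fail, with every fault listed.
        if self.failures:
            raise AssertionError("; ".join(self.failures))

soft = SoftVerify()
soft.check(1 + 1 == 2, "arithmetic is broken")
soft.check("Welcome" in "Welcome, admin", "greeting missing")
soft.assert_all()   # passes: no failures were recorded
```

A single run thus reports every non-critical fault at once, instead of stopping at the first one.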
Test dependency patterns
It returns the tested system to the same state it was in before the test.
The preliminary test sets up the state of SUT that will be needed for the tests to follow.
Test grouping patterns
Test Method per Test Class
A separate test method is placed in a separate test class.
Grouped Test Methods in Test Class
Multiple test methods are placed in a separate test class.
The driving force behind the design of a testing solution is the selection of a specific test implementation pattern. It serves as the starting point for all further test solution development and influences readability, maintainability and many other qualities. Establishing these practices once also helps reuse resources between projects and decreases the time needed to start automation on a new project.
This article has provided an idea of how to build your test solution with reference to design patterns.
References
1. Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides, “Design Patterns: Elements of Reusable Object-Oriented Software”, Addison-Wesley Professional, 1994;
2. Gerard Meszaros, “xUnit Test Patterns: Refactoring Test Code”, Addison-Wesley, 2007;
3. Misha Rybalov, “Design Patterns for Customer Testing”, Quality Centered Developer;
4. Ryan Gerard and Amit Mathur, “Meta-Framework: A New Pattern for Test Automation”, Symantec, Security 2.0.
About the Author
Oleksandr Reminnyi works as a software architect at SoftServe Inc., a leading global provider of software development, testing and technology consulting services. Oleksandr is responsible for establishing automation projects and processes for new and existing customers. He believes that automation success and failure are completely dependent on the established process and setting the right goals. Oleksandr is currently working on his PhD research dedicated to automation. He can be contacted at firstname.lastname@example.org.