Proven Solutions to Five Test Automation Issues

Key Takeaways

  • API and service simulators can eliminate five common issues that block test automation.
  • You can simulate APIs or services that are not yet available.
  • Simulators can stand in for slow or manual processes in backend or third-party systems.
  • Use simulators to create test data you control and get around common test data issues.
  • Use simulators to set up hypothetical error scenarios and increase your test coverage.
  • Test rate limiting and throttling by simulating them in your test suite.
     

This article discusses five typical issues that stop teams from automating their testing. Teams can solve those problems using API and service simulation.

Over the last 15 years of working with different software teams, I have noticed that using over-the-wire test doubles, such as API and service simulators, is standard practice on teams that already know the technique. Those are typically agile or Extreme Programming (XP) teams with TDD and BDD experience.

Other teams have never used mocks or simulators, mainly because they have never heard of them. This article is for teams starting their journey with test automation and shows how several everyday problems can be solved using simulators.

What is an API or service simulator?

Using a simulator instead of a real microservice, third-party service, mainframe, or other software system is called API or service simulation. You will also come across names such as API mocking, service virtualization, and over-the-wire test doubles, as well as tools for stubbing and mocking HTTP(S) and other protocols.

The names of the techniques are not that important. What is essential is that they enable testing a component (the system under test) in isolation.

You can simulate APIs, services, or both. I have chosen to use both names in this article because, in my experience with clients, different terms are familiar to different teams depending on which continent or country they are in.

I have also noticed that, for example, “API” is a popular term among developers and testers working with HTTP, while “service” tends to be more prevalent if you are a head of development or QA at a company working with third-party service providers.

Let’s work with the example microservice architecture shown in Figure 1. It shows a typical setup: a website backed by multiple microservices that connect to a database, a third-party system, and a legacy mainframe.

Figure 1: Sample microservice architecture in production

Figure 2 shows the development and testing infrastructure, which has multiple simulators. 

Figure 2: Development and testing infrastructure with API and service simulators

Automated testing coverage

As Martin Fowler observed more than a decade ago, measuring test coverage is great for finding untested code but bad as a quality target.

I mention this at the start because I use code coverage as a metric in parts of this article, and I want to clarify what I mean by “20% automated test coverage of the code.” It means, “We know we have 20% of the code covered by our different types of automated tests, but 80% of the code is not covered by tests.” It does not mean “code coverage is a good metric to assess the quality of our deliverables.” There are better metrics for measuring the quality of your software delivery process, such as the four key metrics. A BDD approach can also help with test quality.

Having said this, knowing that tests cover the code is a good start. Code that is not covered by tests can hide issues such as undetected bugs or unintended changes in behavior. In particular, automated test coverage provides a repeatable regression suite (a test harness) that helps ensure the behavior of a system stays consistent between releases.

Test automation issues solved by using simulators

During the development and testing of a product, developers and testers may experience several common issues:

  1. APIs or services are not yet available.
  2. Slow or manual processes in backend or third-party systems.
  3. Test data issues (test data setup required; test data changes break existing automated tests; test data refresh required).
  4. No way to set up hypothetical scenarios for error scenario testing.
  5. Third-party API and service restrictions.

These problems can be solved by using simulators. I will review each situation in detail below. 

Figure 3 highlights how each of these problems becomes harder to deal with the later you encounter it in your test automation journey.

Figure 3: Create automated tests faster with API and service simulation

Issue 1: APIs or services not yet available

If an API or service you rely on is unavailable, you can use an API simulator to parallelize teamwork and deliver your product faster.

But the benefits do not stop there, as API simulators will help you with your test automation.

Let us say you are working on the Purchasing microservice shown in Figure 1. Following TDD, you want to write tests for a new feature that relies on a new API in the Payments microservice. Unfortunately, the payments team is still developing the new Payments microservice API. When you run your tests for the Purchasing microservice, they have no API to connect to. If the API is unavailable, your automated tests will fail unless you use a test double, for example, an API simulator.

Using an API simulator, you can set up programmatic test data responses for the new, non-existent API and then run your test. Figure 4 shows this workflow. You don’t have to wait for the payments team and can continue developing and testing your Purchasing microservice.

Figure 4: Automated tests set up the API simulator and test the microservice
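As a concrete illustration of the workflow in Figure 4, here is a minimal sketch of such a test using the open-source WireMock simulator with JUnit 5. The Payments endpoint, port, and response body are hypothetical placeholders, and any API simulation tool could play the same role.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.*;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

class PurchasingServiceTest {

    // Stands in for the not-yet-available Payments microservice
    static WireMockServer paymentsSimulator = new WireMockServer(options().port(9090));

    @BeforeAll
    static void startSimulator() { paymentsSimulator.start(); }

    @AfterAll
    static void stopSimulator() { paymentsSimulator.stop(); }

    @Test
    void completesPurchaseWhenPaymentIsAuthorised() {
        // Define the response the new Payments API should eventually return
        paymentsSimulator.stubFor(post(urlEqualTo("/payments"))
                .willReturn(aResponse()
                        .withStatus(201)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"paymentId\":\"p-123\",\"status\":\"AUTHORISED\"}")));

        // Point the Purchasing microservice at http://localhost:9090
        // and exercise it as usual, asserting on its behaviour here.
    }
}
```

The only requirement on the Purchasing microservice is that its Payments base URL is configurable, so the test can point it at the simulator instead of the real service.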

Case study—APIs not yet available

In the article “Using API-First Development and API Mocking to Break Critical Path Dependencies,” I describe a use case in which four teams working on a new platform worked in parallel on different microservices. It allowed them to cut time to market significantly.

The teams used an API simulator tool to create simulators for the APIs that did not yet exist. The simulators allowed them to run automated and manual exploratory tests without waiting for other teams to finish their work.

They followed this workflow:

  1. The teams start by collaboratively designing the API in OpenAPI format.
  2. Producer and consumer teams then work in parallel on their microservices.
  3. The consumer team uses simulators to simulate the backend producer service, which allows it to write automated tests that do not depend on the real backend microservices. It also allows manual exploratory tests that do not rely on the backend microservices.
  4. Communicating feedback about the API specification during the development phase is essential so the API can evolve and take unforeseen changes into account.
  5. Once the microservices are ready, the teams test them together without simulators and release them to production.

Issue 2: Slow or manual processes in backend or third-party systems

Another typical issue preventing teams from creating automated tests is a slow or manual process that is part of the test cases.

This is most typical when communicating with dependent systems using asynchronous technologies such as IBM MQ, RabbitMQ, JMS, ActiveMQ, or AMQP.

For example, as shown in Figure 5, your microservices will send a request message to a request queue for the backend services to consume. Backend data processing could take minutes or hours before responding with a response message to the response queue.

Figure 5: Request and response message communication with the backend

In this case, your automated tests could wait minutes or hours for the response message from the backend, which blocks the build pipeline for that long per test case. For that reason, you need an alternative approach to testing your microservice. Simulators replace the slow dependency and respond within milliseconds instead of minutes or hours, allowing your tests to keep running. Figure 6 shows the introduction of a simulator for the backend component.

Figure 6: Using simulators in a request and response message communication
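To make this concrete, below is a minimal sketch of such a messaging simulator written against the standard JMS API with an embedded ActiveMQ broker. The queue names, broker URL, and response payload are assumptions for illustration; a dedicated simulation tool would give you the same behaviour without custom code.

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BackendQueueSimulator {
    public static void main(String[] args) throws JMSException, InterruptedException {
        // Embedded broker standing in for the real (slow) backend messaging system
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        MessageConsumer requests = session.createConsumer(session.createQueue("BACKEND.REQUEST"));
        MessageProducer responses = session.createProducer(session.createQueue("BACKEND.RESPONSE"));

        // Reply in milliseconds instead of the minutes or hours the real backend takes
        requests.setMessageListener(request -> {
            try {
                TextMessage reply = session.createTextMessage("{\"status\":\"PROCESSED\"}");
                reply.setJMSCorrelationID(request.getJMSMessageID());
                responses.send(reply);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        });

        // Keep the simulator running until the test run shuts it down
        Thread.currentThread().join();
    }
}
```

In a test, you would start this simulator, let the microservice under test send its request message, and assert on the behaviour triggered by the near-instant reply.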

You will need a similar approach when there is manual processing of request documents in the backend systems, for example, a human-in-the-loop scenario where completing a user journey (and hence a test case) requires a person to interact with a backend or third-party system. A manual process like that always takes more time than is acceptable for automated testing, often minutes, hours, or days.

Case study—manual processes in backend or third-party systems

As part of my usual engagements, I consulted for a software house working on a government project that wanted to automate their manual regression tests. Their challenge was that almost all user journeys involved sending IBM MQ messages to a government service for processing. The processing was done manually, even in the government test environments, and took anywhere from 30 minutes to 2 days. The team used Traffic Parrot to simulate the government's IBM MQ systems (disclaimer: I represent Traffic Parrot). Simulators that responded in milliseconds instead of hours or days unblocked the person responsible for automating the manual regression tests.

Issue 3: Test data issues

When you run your automated tests, you need the dependent systems to support your test scenarios. That includes setting up the API and service responses to match what is needed for your test cases. 

Setting up test data in backends might be problematic, as they might not be within your team’s control. Relying on another team to set up the test data for you means you may end up with incorrect or missing test data and, therefore, cannot continue working on or running your automated tests.

Another issue is that even if you have the test data, running your automated tests frequently in the build pipeline might use it all up (test data burning). Then you need a test data refresh, which might take even longer than the partial test data setup, and you are blocked again.

Even if you have all the test data you need, when you (or another team) run automated or manual tests against the same services, the test data might change (for example, an account balance or the list of items a user has purchased). The tests then break again because of test data issues rather than actual issues with the product.

You want your automated tests to fail when there is an issue with the code, not when there is an issue managing test data.

One of the solutions to the problems discussed above is, again, using simulators. You can set up the required service and API responses in the simulator from individual tests, so they are relevant only to the given test. That allows you to test your microservice in isolation without relying on help from other teams or on the dependent systems. This is illustrated in Figure 7.

Figure 7: Setting up simulators from automated tests
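Here is a minimal sketch of this pattern, again using WireMock with JUnit 5; the accounts endpoint and balance value are made up for illustration. The key point is that each test creates exactly the data it needs and the simulator is reset between tests, so shared test data can neither burn out nor drift.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.*;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

class AccountStatementTest {

    // Stands in for the backend accounts service that owns the test data
    static WireMockServer accountsSimulator = new WireMockServer(options().port(9091));

    @BeforeAll
    static void startSimulator() { accountsSimulator.start(); }

    @AfterAll
    static void stopSimulator() { accountsSimulator.stop(); }

    @BeforeEach
    void resetTestData() {
        // Every test starts from a clean slate; no test data burning or drift
        accountsSimulator.resetAll();
    }

    @Test
    void warnsAboutNegativeBalance() {
        // Test data relevant only to this test, owned entirely by this test
        accountsSimulator.stubFor(get(urlEqualTo("/accounts/42/balance"))
                .willReturn(okJson("{\"accountId\":42,\"balance\":-150.00}")));

        // Call the microservice under test (pointed at localhost:9091)
        // and assert that it shows the overdraft warning.
    }
}
```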

Case study—setting up test data in a third-party system

One of my media clients was integrating with a third-party system where test data could be configured only manually, via a specific interface in the third-party system. The automated tests used up (burnt) that test data almost every day for a week. In 2 hours, the team created a simple API simulator using one of the popular tools, which meant they no longer had to refresh the test data in the third-party system. Over the following three months, they spent another 20 hours adding more responses to the simulator, covering all the happy-path scenarios they wanted. In addition, they could add error scenarios to the simulator that were impossible to configure in the real third-party test environment, leading to further increases in automated test coverage.

Case study—simulating a whole hardware environment

The same client had a microservice integrated with a hardware platform using HTTP REST APIs. Unfortunately, setting up a test environment with all the hardware devices needed to support all the automated test cases would have taken 6+ months of procuring and installing devices. Instead, the team decided to spend two weeks of 2 developers' time creating a software device simulator. Building a simulator with enough features for their needs allowed them to create the tests without waiting for the new devices in the test environment.

Issue 4: Set up hypothetical scenarios for error scenario testing

On top of the test data issues mentioned above, setting up hypothetical situations or error cases is sometimes simply not possible in the test environments of certain systems. So if a production bug you would like to cover in your automated tests relies on a specific configuration of backend systems that is impossible to replicate in a test environment, you cannot create that automated test.

When multiple upstream systems are involved, it is even more complicated. For example, if the Payments microservice connects to a few third-party APIs and mainframe systems, a specific combination of responses from those systems might be needed to reproduce a bug found in production.

Simulators again come to the rescue. Since a simulator is entirely under your control, you can use it instead of all those third-party and mainframe systems and set up the hypothetical situation in detail.

There are also different types of errors you will encounter in production environments:

  • HTTP and other protocol error responses (such as 503 service unavailable, 401 unauthorized)
  • Slow responses
  • Timeouts
  • Dribbling responses
  • Dropped connections

Selected API and service simulation tools allow you to simulate those types of errors, letting you increase your black-box test coverage to high levels, above 80%.
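As one concrete example, several of the error types listed above can be expressed in a few lines with the open-source WireMock simulator; the endpoints below are hypothetical, and other simulation tools offer equivalent fault-injection features.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class ErrorScenarioStubs {
    public static void main(String[] args) {
        WireMockServer paymentsSimulator = new WireMockServer(options().port(9090));
        paymentsSimulator.start();

        // HTTP error response: 503 Service Unavailable from the payments backend
        paymentsSimulator.stubFor(post(urlEqualTo("/payments"))
                .willReturn(aResponse().withStatus(503)));

        // Slow response: arrives after 30 seconds, exercising client-side timeouts
        paymentsSimulator.stubFor(get(urlEqualTo("/payments/p-123"))
                .willReturn(aResponse().withStatus(200).withFixedDelay(30_000)));

        // Dropped connection: the TCP connection is reset mid-request
        paymentsSimulator.stubFor(get(urlEqualTo("/payments/p-666"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
    }
}
```

Running the system under test against stubs like these turns previously untestable failure modes into ordinary automated test cases.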

Case study—setting up error scenarios in a third-party system

I consulted for a client whose third-party service had 23 expected error scenarios that had to be accounted for because they were likely to happen during normal product usage. However, the third-party environment did not allow for configuring those error cases easily. To test those scenarios automatically, the client used an API simulator that could return the specified error responses, enabling automated black-box testing of their software in all 23 error cases.

Issue 5: Third-party API and service restrictions

Many third-party APIs or services have restrictions on usage in production environments. That includes:

  • Throttling or rate limiting—maximum number of requests per minute or hour
  • Burst thresholds—maximum number of requests per second for a few seconds
  • Maximum number of parallel connections

With simulators, you can simulate third-party throttling, rate limits, or the maximum number of parallel connections, which allows you to test the limits of the whole system under test in Black Friday scenarios.

Some companies choose to rate-limit access to their test and sandbox environments to avoid incurring the cost of supporting excess testing infrastructure for all of their clients. Using simulators, you can work around the rate limits in third-party test environments and run as many automated functional and performance tests on your side as you need.

Test environments for backend systems can also be slow compared to production, preventing you from running automated performance tests in a production-like setup. You can use simulators to reproduce production response times in your test environment.

Case study—rate limited third-party API

One of my clients was integrating with a third-party API that enforced a burst threshold of 3 hits per second sustained over a span of 5 seconds. After hitting the limit, the API returns error 429, “Too many requests.” The client wanted an automated test proving that their software throttled requests to stay within the allowed burst threshold. They created a simulator that mimicked the third-party behavior and returned error 429, “Too many requests,” when there were more than 3 hits per second sustained over a span of 5 seconds. In their automated tests, they validated that their production code could throttle a high number of requests across a span of one minute.
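To illustrate the idea (this is a hand-rolled sketch, not the client's actual simulator), the rate-limiting behaviour can be approximated with the JDK's built-in HTTP server; the port, threshold, and payloads are assumptions.

```java
import com.sun.net.httpserver.HttpServer;

import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class RateLimitedApiSimulator {
    public static void main(String[] args) throws Exception {
        AtomicLong windowStart = new AtomicLong(System.currentTimeMillis());
        AtomicInteger hitsInWindow = new AtomicInteger(0);

        HttpServer server = HttpServer.create(new InetSocketAddress(9092), 0);
        server.createContext("/", exchange -> {
            long now = System.currentTimeMillis();
            if (now - windowStart.get() >= 1000) {
                // Start a new one-second window
                windowStart.set(now);
                hitsInWindow.set(0);
            }
            // Burst threshold: more than 3 hits in the current one-second window triggers a 429
            boolean overLimit = hitsInWindow.incrementAndGet() > 3;
            byte[] body = (overLimit
                    ? "{\"error\":\"Too many requests\"}"
                    : "{\"status\":\"OK\"}").getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(overLimit ? 429 : 200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}
```

The automated test then drives the system under test against this simulator and asserts that the throttling logic keeps every request under the limit, i.e., that no 429 responses leak through to end users.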

Next steps

Learning new things is great, but the new skills do not yield returns if you don’t use them! 

If you are a head of QA with a KPI to move from manual to automated testing, review with your team whether you are experiencing any of the problems mentioned above.

If you are an architect, a developer, or an automation tester experiencing one or more of the problems listed above, talk to your team and management about spending some time investigating whether simulation can solve your issues.

Wikipedia has a helpful comparison of API simulation tools, where you can find one that fits your needs.

If you have any project-specific questions, feel free to reach out to me on LinkedIn, Twitter, or via email wojtek [at] trafficparrot.com
