
Testing Microservices: Examining the Tradeoffs of Twelve Techniques - Part 2


Key Takeaways

  • A successful microservice testing strategy must effectively manage the interdependent components involved. This may involve isolation, mocking, virtualization, or other techniques.
  • Organizational characteristics, such as the maturity of the team and the required pace of change (e.g., brownfield versus greenfield), influence which testing techniques to choose.
  • We believe that, from a business perspective, there are three primary consequences of a testing approach: time to market, costs, and risks.
  • Each testing technique has advantages and disadvantages; which approach you should use depends on your application and its context.

This is the second article in the Testing Microservices series. Please see also Part 1: An Overview of 12 Useful Techniques and Part 3: Six Case Studies with a Combination of Testing Techniques.

The first part of this series, "Testing Microservices Part 1: An Overview of 12 Useful Techniques", explored techniques for managing microservice-dependent components when testing microservices. This article compares the techniques based on the maturity of the team, pace of change, time to market, costs, and risks.

This comparison is based on our experience across 14+ projects, but we might have missed something, or our experience might not reflect yours. Please help us improve this summary so that we can help more people together as a community: comment below the article, post on LinkedIn, or tweet with the hashtag #TestingMicroservices.

The following comparison examines the techniques for testing microservices from a manager’s point of view. A plus sign (+) indicates an advantage, a minus sign (-) a negative impact, and a tilde (~) little or neutral effect.

1. Testing your microservice with a test instance of another microservice.
  • Maturity of the team: Low impact.
  • Pace of change: Low impact.
  • Time to market: + Quick to start. - Slows projects as complexity grows.
  • Costs: + Low cost when complexity is low. - Can get costly as complexity grows.
  • Risks: + Reduces the chances of introducing issues in test doubles. - Risk of not following the testing pyramid.

2. Testing your microservice with a production instance of another microservice.
  • Maturity of the team: Moderate impact.
  • Pace of change: Low impact.
  • Time to market: + Quick to start. - Slows projects as complexity grows.
  • Costs: + Low cost when complexity is low. - Can get costly as complexity grows.
  • Risks: + Reduces the chances of introducing issues in test doubles. - Risk of not following the testing pyramid. - Can change the state of production systems. ~ Hard to simulate hypothetical scenarios.

3. Testing a microservice with third-party dependencies.
  • Maturity of the team: Moderate impact.
  • Pace of change: Low impact.
  • Time to market: + Quick to start. - Slows projects as complexity grows.
  • Costs: + Low cost when complexity is low. - Can get costly as complexity grows. ~ Calls to third-party APIs can generate costs.
  • Risks: + Reduces the chances of introducing issues in test doubles. - Risk of not following the testing pyramid. - Can change the state of production systems. ~ Hard to simulate hypothetical scenarios.

4. Testing a microservice with legacy non-microservice internal dependencies.
  • Maturity of the team: Moderate impact.
  • Pace of change: Low impact.
  • Time to market: + Quick to start. - Slows projects as complexity grows.
  • Costs: + Low cost when complexity is low. - Can get costly as complexity grows.
  • Risks: + Reduces the chances of introducing issues in test doubles. - Risk of not following the testing pyramid. - Can change the state of production systems. ~ Hard to simulate hypothetical scenarios.

5. Testing a microservice with non-software (hardware) dependencies.
  • Maturity of the team: Moderate impact.
  • Pace of change: Low impact.
  • Time to market: + Quick to start. - Slows projects as complexity grows.
  • Costs: ~ Test-only hardware can be costly.
  • Risks: + Fast feedback loop.

6. Mocks (in-process or over the wire/remote).
  • Maturity of the team: Moderate impact.
  • Pace of change: Moderate impact.
  • Time to market: ~ A moderate amount of time to start. + Reduces complexity.
  • Costs: ~ Might need in-house development efforts.
  • Risks: + Increases test coverage. - Can become obsolete.

7. Stubs (in-process or over the wire/remote).
  • Maturity of the team: Moderate impact.
  • Pace of change: Moderate impact.
  • Time to market: ~ A moderate amount of time to start. + Reduces complexity.
  • Costs: ~ In-house development can be moderately costly.
  • Risks: + Increases test coverage. - Can become obsolete.

8. Simulators (in-process or over the wire/remote).
  • Maturity of the team: Moderate impact.
  • Pace of change: Low impact.
  • Time to market: + Quick to start with off-the-shelf simulations. - In-house efforts can take a lot of time.
  • Costs: + Off-the-shelf simulations can be cost effective. - In-house efforts can be costly.
  • Risks: + Hypothetical scenarios can increase your test coverage. - In-house efforts can introduce discrepancies.

9. Service virtualization (over the wire/remote), also called API simulation or API mocks.
  • Maturity of the team: Moderate impact.
  • Pace of change: Low impact.
  • Time to market: + Off-the-shelf products help you get to market faster.
  • Costs: + Off-the-shelf products can be cost effective. ~ Commercial off-the-shelf products can get expensive.
  • Risks: + Reduces the risk of making common mistakes. + Allows simulation of network issues. ~ Open-source products come without a support contract. ~ Virtual services can become obsolete.

10. In-memory database.
  • Maturity of the team: Moderate impact.
  • Pace of change: Low impact.
  • Time to market: + Reduces time to market where provisioning new databases is problematic.
  • Costs: + Reduces the cost of licensing commercial databases.
  • Risks: ~ In-memory databases can behave differently than the real ones.

11. Test container.
  • Maturity of the team: Moderate impact.
  • Pace of change: Low impact.
  • Time to market: + Allows teams to move at their own pace. + Reduces time to market where provisioning new environments is problematic.
  • Costs: + Can reduce licensing costs. + Can reduce infrastructure costs. ~ Can have licensing-cost implications.
  • Risks: ~ Test containers can have a different configuration than the real production dependency.

12. Legacy in a box.
  • Maturity of the team: Moderate to high impact.
  • Pace of change: Low impact.
  • Time to market: + Quick to start. - Slows projects as complexity grows. + Provisioning containers is an order of magnitude faster than provisioning hardware environments. ~ Time spent up front to configure containers. - Potential time for refactoring.
  • Costs: + Quick to start. - Slows projects as complexity grows. + Provisioning containers is an order of magnitude faster than provisioning hardware environments. ~ Up-front cost to configure containers. - Potential time for refactoring.
  • Risks: + Reduces the chances of introducing issues in test doubles. - Risk of not following the testing pyramid.

Beyond the tradeoffs summarized in the comparison above, the characteristics of your organization influence the choice of testing approach. The task-relevant maturity of the team will affect your choice, as will the pace of change of the project requirements of your microservice or its dependencies. For example, greenfield projects in competitive markets will value tradeoffs differently than projects in maintenance mode.

The consequences of using a given testing approach show up in your time to market, costs, and risks, along with other side effects.

Here is a high-level overview of each of the twelve techniques.

1. Testing your microservice together with a test instance of another microservice.

Team maturity has little impact because the team does not have to know anything about the types of test doubles or how to use them. It is comparatively easy to test this way if you are new to software development and testing.

This technique suits any pace of change. If the pace is high, the team gets fast feedback on compatibility issues between APIs. When the pace is slow, it does not matter.

The time to market slows for most projects as complexity grows. It’s a typical pitfall for software teams and a source of tech debt for many large enterprises. This technique is easy to start since it requires little additional infrastructure or test-doubles knowledge.
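
For illustration, a test of this kind is often nothing more than an ordinary integration test pointed at a shared test environment. The sketch below assumes JUnit 5, Java's built-in HTTP client, and a hypothetical pricing service exposed at a test-environment URL; the details will differ in your stack.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class PricingClientIntegrationTest {

        // Hypothetical URL of another team's test instance; in practice this
        // comes from environment-specific configuration.
        private static final String PRICING_TEST_INSTANCE = "https://pricing.test.internal.example.com";

        @Test
        void returnsPriceFromTestInstanceOfPricingService() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(PRICING_TEST_INSTANCE + "/prices/SKU-123"))
                    .GET()
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            // The test only passes if the shared test instance is up and seeded with SKU-123.
            assertEquals(200, response.statusCode());
        }
    }

Note that the test depends on the other team's instance being available and containing the expected data, which is exactly the coupling that grows painful as complexity increases.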

Many companies stay with this approach after the initial testing, which results in the rapid accumulation of technical debt, eventually slowing development teams as the complexity of the system under test grows exponentially with the number of its components.

Most projects need a combination of testing techniques, including test doubles, to reach sufficient test coverage and stable test suites; the testing pyramid describes how to balance them. You are faster to market with test doubles in place because you spend less time testing against real instances than you otherwise would.

Costs can grow with complexity. Because you do not need much additional infrastructure or test-doubles knowledge, this technique doesn’t cost much to start with. Costs can grow, however — for example, as you require more test infrastructure to host groups of related microservices that you must test together.

Testing against a test instance of a dependency reduces the chance of introducing issues in test doubles. Follow the test pyramid to produce a sound development and testing strategy, or you risk ending up with big end-to-end (E2E) test suites that are costly to maintain and slow to run.

Use this technique with caution only after careful consideration of the test pyramid and do not fall into the trap of the inverted testing pyramid.

2. Testing your microservice together with a production instance of another microservice.

The team needs to take extra care when testing with a production instance and must live up to a higher degree of trust than teams working with test doubles.

Costs and time to market do not differ from the first technique, but your team is tied to the production release cycle, which may delay testing.

The team needs to take extra care because the risks for this technique are much higher than for tests that use test doubles. Testing by connecting to production systems can change the state of those production systems. Use this method only for stateless APIs or with carefully selected test data that the team can use in production.

Testing with a production instance often means it is hard to simulate hypothetical failure scenarios and error responses. Keep this in mind when designing your development and testing strategy.

Performance-testing a production instance of a dependency can put unnecessary strain on production systems.

This technique is typically applicable only to simple, stable, non-critical APIs, which is a rare combination. Avoid it unless you have identified a specific, good reason to use it.

3. Testing a microservice with third-party dependencies.

The team needs to know how to set up test data in the third-party dependency but otherwise need not be especially experienced.

Your team is tied to the third party’s release cycle, which may slow them down.

The organization may have to pay to test with a third-party API as third parties typically charge per transaction. This is especially relevant when testing performance.

4. Testing a microservice with legacy non-microservice internal dependencies.

This technique offers a fast feedback loop on issues with the contract between the new world of microservices and old legacy systems, reducing risk.

In addition to all of the shortcomings of technique 1, keep in mind that legacy systems often have issues with test-environment availability and test-data setup.

5. Testing a microservice with non-software (hardware) dependencies.

The team needs to know how to set up test data in the hardware dependency.

Hardware generally has a slow pace of change, but it can be pricey. It can cost a lot to acquire even a single instance of hardware to be used only for testing purposes, and when different teams or build pipelines need more hardware to run tests in parallel, your costs can increase significantly.

Similar to the previous technique, there’s a fast feedback loop between the microservices and the hardware that reduces risk — but the hardware may have issues with test-environment availability and test-data setup.

6. Mocks (in-process or over the wire/remote).

The team must know how to use in-process mocking.

Mocks help reduce the complexity of test cases and decrease the time to investigate and reproduce issues but introduce a high risk of discrepancies between APIs. They reduce complexity by making assumptions about the behavior of other systems but can make incorrect or obsolete assumptions about how an API works. The higher the pace of change of project requirements, the more important it is to keep the mocks up to date. See contract testing for strategies to reduce this risk.

It takes a moderate amount of time to start using mocks and define a mitigation strategy like consumer-driven contracts, integrated tests, or integration tests to make sure mocks stay current. Mocks can get out of date, and a successful run of an obsolete test suite will provide you with false confidence in quality.

There might be no free mocking solutions available for your technology stack, so you could have to develop one in-house or buy a commercial product.

Mocks let you set up fine-grained failure and hypothetical scenarios, increasing your test coverage.
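
As a concrete, minimal sketch: the test below uses Mockito and JUnit 5 with a made-up PaymentGateway collaborator to simulate an outage that would be hard to trigger against a real dependency, and then verifies how the service used it.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    class CheckoutServiceTest {

        // Hypothetical collaborator and service under test, for illustration only.
        interface PaymentGateway {
            boolean charge(String accountId, long amountInCents);
        }

        static class CheckoutService {
            private final PaymentGateway gateway;
            CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
            boolean checkout(String accountId, long amountInCents) {
                try {
                    return gateway.charge(accountId, amountInCents);
                } catch (RuntimeException e) {
                    return false; // degrade gracefully when the dependency fails
                }
            }
        }

        @Test
        void checkoutSurvivesPaymentGatewayOutage() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            // Hypothetical failure scenario that is hard to trigger against a real instance.
            when(gateway.charge("acc-1", 999L)).thenThrow(new RuntimeException("gateway down"));

            CheckoutService service = new CheckoutService(gateway);
            assertFalse(service.checkout("acc-1", 999L));

            // The mock also verifies how the dependency was used.
            verify(gateway).charge("acc-1", 999L);
        }
    }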

Using mocks is a good idea in most use cases and can be a part of most healthy testing pyramids or testing honeycombs. Mocks are often an essential technique for testing complex systems.

7. Stubs (in-process or over the wire/remote).

While a mock replaces an object the microservice depends on with a test-specific object that verifies that the microservice is using it correctly, a stub replaces it with a test-specific object that provides test data to the microservice. The tradeoffs are similar.
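
To make the distinction concrete, here is a minimal over-the-wire stub sketched with WireMock as one open-source option; the endpoint and payload are invented for illustration. It only supplies canned data and, unlike the mock above, makes no assertions about how it is called.

    import com.github.tomakehurst.wiremock.WireMockServer;
    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
    import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

    public class ProductCatalogStub {

        public static void main(String[] args) {
            WireMockServer stubServer = new WireMockServer(options().port(8089));
            stubServer.start();

            // A stub returning canned test data; it makes no assertions about callers.
            stubServer.stubFor(get(urlEqualTo("/products/42"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\":42,\"name\":\"Test product\",\"price\":999}")));

            // Point the microservice under test at http://localhost:8089 instead of the real catalog.
        }
    }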

In-house development of stubs for complex dependencies can be time-consuming and costly. Prefer off-the-shelf mocking, simulator, or service-virtualization tools to building your own stubs.

8. Simulators (in-process or over the wire/remote).

The team must know how to use the simulator you choose; each simulator generally has its own way for developers and testers to interact with it.

Organizations typically create simulators for widely used or stable APIs and you can quickly start using these off-the-shelf simulators. Fast, early testing shortens your time to market and the use of off-the-shelf, stable simulators can be cost effective. They can provide a wide range of predefined error responses and hypothetical scenarios to increase your test coverage.
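
For example, if your microservice talks to Amazon S3, LocalStack is one off-the-shelf simulator you could run locally. The sketch below assumes LocalStack is already running on its default port 4566 and a recent AWS SDK for Java v2; the bucket and key names are made up.

    import java.net.URI;
    import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
    import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.CreateBucketRequest;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;

    public class S3SimulatorExample {

        public static void main(String[] args) {
            // LocalStack simulates the S3 API locally; no real AWS account or per-call costs involved.
            S3Client s3 = S3Client.builder()
                    .endpointOverride(URI.create("http://localhost:4566")) // LocalStack default edge port
                    .region(Region.US_EAST_1)
                    .credentialsProvider(StaticCredentialsProvider.create(
                            AwsBasicCredentials.create("test", "test")))
                    .forcePathStyle(true) // avoid virtual-host-style bucket URLs against localhost
                    .build();

            s3.createBucket(CreateBucketRequest.builder().bucket("invoices-test").build());
            s3.putObject(PutObjectRequest.builder().bucket("invoices-test").key("2024/inv-1.json").build(),
                    RequestBody.fromString("{\"total\": 99.99}"));

            // The microservice under test uses the same client configuration, so it never
            // touches the real AWS endpoints during these tests.
        }
    }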

Developing your own in-house simulator for complex dependencies can be time-consuming and costly. Developing your own simulator risks introducing discrepancies between the simulator and the real dependency and creating false confidence in the test suite.

It's easy, however, to replace a complex dependency with a sophisticated simulator and forget that you must modernize it as the dependency evolves. Take ongoing maintenance costs into account.

This technique lets you simulate network issues, which is critical for testing microservice architectures that rely on networks.

Choose off-the-shelf simulators whenever possible. Only build in-house simulators when your team has a vast amount of experience with the real dependency and simulation.

9. Service virtualization (over the wire/remote), also called API simulation or API mocks.

Your team has to know how to do service virtualization and it’s essential to choose a tool that comes with tutorials and other materials.

Service-virtualization tools help to keep virtual services up to date. The faster the pace of change of project requirements, the more critical it is to keep the virtual services from becoming obsolete. These tools provide techniques and components such as "record and replay", which lets you rapidly recreate virtual services for APIs that change often. See "contract testing" for other strategies to reduce this risk.
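
The sketch below shows the record-and-replay idea using WireMock's Java API as one open-source example; commercial tools expose similar workflows through their own interfaces, and the real dependency's URL here is hypothetical. It also shows how a virtual service can simulate network issues.

    import com.github.tomakehurst.wiremock.WireMockServer;
    import com.github.tomakehurst.wiremock.http.Fault;
    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
    import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

    public class VirtualPricingService {

        public static void main(String[] args) {
            WireMockServer virtualService = new WireMockServer(options().port(9090));
            virtualService.start();

            // Record and replay: proxy traffic to the real API once and persist the
            // exchanges as stub mappings that can be replayed in later test runs.
            virtualService.startRecording("https://pricing.internal.example.com"); // hypothetical real dependency
            // ... run the test suite against http://localhost:9090 while recording ...
            virtualService.stopRecording();

            // Virtual services can also simulate network problems that are hard to
            // reproduce with a real instance.
            virtualService.stubFor(get(urlEqualTo("/prices/slow"))
                    .willReturn(aResponse().withStatus(200).withFixedDelay(5_000)));
            virtualService.stubFor(get(urlEqualTo("/prices/broken"))
                    .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
        }
    }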

Use of off-the-shelf service-virtualization offerings helps you get to market faster if you adopt their built-in, well-tested patterns. Consultants familiar with the off-the-shelf tools can work closely with developers and testers and help you choose a microservice testing approach based on experience across many projects.

Use of off-the-shelf tools can be cost effective when the team is new to microservices because the vendor can help you avoid common mistakes. Over time, however, your use of a commercial offering can become expensive. Choose a vendor based on ROI projections.

Using open-source tools without a support contract might result in your developers and testers spending time to fix bugs and to maintain documentation and training materials.

Service virtualization helps reduce the complexity of the system under test. The tools help you manage your assumptions. A majority of service-virtualization tools have been on the market for many years and have been designed with mainframes and monolithic systems in mind. Choose the open-source or commercial tool that best fits your microservices architecture — but any virtual service can get out of date so look at your vendor’s recommendations for mitigation strategies.

10. In-memory database.

The team has to understand the technical risks of switching to testing with an in-memory database.

Because this technique uses a database, the pace of change has little impact here. It significantly reduces time to market for projects when provisioning new databases for development or testing purposes is problematic.

Many in-memory databases are open source and free, which can help reduce licensing costs. But an in-memory database can behave differently from the real one in edge cases. Perform at least some tests against a real database as part of your testing strategy so that you can observe differences between the real and in-memory databases.
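
As a minimal sketch, assuming the H2 in-memory database and plain JDBC: the MODE=PostgreSQL setting asks H2 to approximate PostgreSQL behaviour, which narrows, but does not remove, the differences mentioned above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InMemoryDatabaseExample {

        public static void main(String[] args) throws Exception {
            // H2 in-memory database; DB_CLOSE_DELAY=-1 keeps it alive for the whole JVM,
            // otherwise it disappears when the last connection closes.
            try (Connection connection = DriverManager.getConnection(
                         "jdbc:h2:mem:orders;MODE=PostgreSQL;DB_CLOSE_DELAY=-1", "sa", "");
                 Statement statement = connection.createStatement()) {

                statement.execute("CREATE TABLE orders (id INT PRIMARY KEY, total DECIMAL(10,2))");
                statement.execute("INSERT INTO orders VALUES (1, 99.99)");
            }
            // Edge cases (locking, SQL dialect, collation) can still differ from the real
            // database, so keep at least some tests running against the production engine.
        }
    }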

11. Test container.

Your team has to know how to run test containers.

Since the test operates on a real dependency, in a container, the pace of change has little impact on this solution.

Running third-party service or service-virtualization test containers can reduce inter-team dependencies and allow each team to move at its own pace. These containers can also significantly reduce your test infrastructure costs.

Running a test container instead of relying on a shared instance reduces the likelihood that environmental issues will affect test results.

Running a development edition of a commercial database as a test container for development and testing purposes can reduce your licensing costs. Running production-configuration commercial databases as a container can be expensive.

The test container can be configured differently than the real dependency, leading to false confidence in your test suite. Make sure you configure the container database the same as the production database (for example, use the same level of transaction isolation).
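
A minimal sketch using the Testcontainers library, assuming Docker is available on the machine running the tests; the postgres:15.4 tag is a stand-in for whatever version production runs.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.testcontainers.containers.PostgreSQLContainer;

    public class PostgresTestContainerExample {

        public static void main(String[] args) throws Exception {
            // Pin the same major version you run in production to avoid configuration drift.
            try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15.4")) {
                postgres.start();

                try (Connection connection = DriverManager.getConnection(
                        postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
                    // Match production settings, for example the transaction isolation level.
                    connection.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
                    // ... run schema migrations and tests against this throwaway database ...
                }
            }
        }
    }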

12. Legacy in a box.

The maturity of your team has only moderate impact when the legacy system can be ported to containers without much effort, but it matters more if your team needs to refactor parts of the configuration and code of the old legacy system to make it work in a container. The amount of work depends on the project, so first research and assess the size of the job.

Legacy infrastructure takes time to provision. After an initial, potentially substantial investment in setting up the legacy system in a container, the time and money spent to start and run new environments (containers) are orders of magnitude less than with typical hardware setups.

This technique reduces the chance of introducing discrepancies between test doubles and real systems. Make sure you configure your container legacy the same as the production system so that there are no significant discrepancies between your production and test environments.
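
As a sketch of the idea, assuming the legacy system has already been packaged into a container image (the image name, port, and configuration file below are hypothetical), the Testcontainers library can start it on demand for a test run:

    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.containers.wait.strategy.Wait;

    public class LegacyInABoxExample {

        public static void main(String[] args) {
            // "legacy-billing:2024.1" is a hypothetical image built from the old system;
            // a production-like configuration file is mounted so the container behaves like production.
            try (GenericContainer<?> legacyBilling =
                         new GenericContainer<>("legacy-billing:2024.1")
                                 .withExposedPorts(8080)
                                 .withFileSystemBind("config/billing-production-like.properties",
                                         "/opt/billing/conf/billing.properties")
                                 .waitingFor(Wait.forHttp("/health").forPort(8080))) {
                legacyBilling.start();

                String baseUrl = "http://" + legacyBilling.getHost() + ":" + legacyBilling.getMappedPort(8080);
                // Point the microservice under test at baseUrl instead of the shared legacy environment.
                System.out.println("Legacy system available at " + baseUrl);
            }
        }
    }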

Summary

We have explored techniques for managing microservice dependencies when testing microservices from a manager’s point of view and compared them based on team maturity, pace of change, time to market, costs, and risks.

The information should fill in a few gaps and help you define your development and test strategy (including the testing pyramid) to cut time to market, reduce costs, and increase quality in your organization.

Each method has its advantages and disadvantages, and which choice makes sense for your application depends on your environment. Part 3 of this series presents case studies that highlight how our clients applied this knowledge to reach their decisions.

If you find anything not clear in the article or if you have any project-specific concerns or questions, please contact the authors: Wojciech Bulaty at wojtek@trafficparrot.com and Liam Williams at liam@trafficparrot.com.

About the Authors

Wojciech Bulaty specializes in agile software development and testing architecture. He brings more than a decade of hands-on coding and leadership experience to his writing on agile, automation, XP, TDD, BDD, pair programming, and clean coding. His most recent offering is Traffic Parrot, where he helps teams working with microservices to accelerate delivery, improve quality, and reduce time to market by providing a tool for API mocking and service virtualization.

Liam Williams is an automation expert with a focus on improving error-prone manual processes with high-quality software solutions. He is an open-source contributor and author of a number of small libraries. Most recently, he has joined the team at Traffic Parrot, where he has turned his attention to the problem of improving the testing experience when moving to a modern microservice architecture.
