
Testing Microservices: Six Case Studies with a Combination of Testing Techniques - Part 3

Key Takeaways

  • An important consideration when testing microservices is how to manage the associated dependencies. Various techniques can be used to decouple dependencies, but they all have tradeoffs that development/QA teams must be aware of.
  • Architects, developers, and QA teams must work together to understand and specify the testing goals, context, and current constraints.
  • Providing a holistic testing approach often requires the implementation of several testing strategies. These must be chosen carefully to avoid duplication of effort or the addition of accidental complexity to the test suite.
  • Contract testing is a valuable approach to verifying interactions between services. Contract testing can be combined with the judicious use of a limited number of end-to-end tests to verify complete business flows.
  • Mocks and service virtualization help with decoupling components during testing. Care should be taken to ensure that these test doubles remain correct and up to date with the current associated implementation.

This is the third article in the Testing Microservices series. Please see also Part 1: An Overview of 12 Useful Techniques and Part 2: Examining the Tradeoffs of Twelve Techniques.

At Traffic Parrot, we have recently worked with six companies that represented a broad spectrum of industry domains and of maturity in adopting microservices. These companies have used a combination of the testing techniques that we described in Part 1 and assessed in Part 2 of this article series, techniques that allow you to manage dependent components while testing microservices. In Part 3, we will present case studies that demonstrate how six different companies used these techniques.

We begin with three case studies in which a combination of testing techniques was applied as a holistic solution. The last three case studies describe the application of a single technique to solve a specific problem.

Combination of techniques: US insurance startup

Architecture: Greenfield microservices replacing a recently built monolith.

Tech stack: Go, Python, NodeJS, gRPC, protocol buffers, Docker, Kubernetes.

Priority: Deliver fast; will refactor the release process after the first production release.

Testing techniques used:

  • Technique #1 — Testing your microservice with a test instance of another microservice (E2E tests)
  • Technique #6 — Mocks (in-process)
  • Technique #9 — Service virtualization (over the wire/remote)
  • Technique #11 — Test container

Contract management:

  • Teams use API mocks to communicate the syntax and semantics of contracts to each other.
  • API mocks define contract snapshots that are tested automatically to make sure they are up to date with the latest protocol specifications.

Key takeaways:

  • Used gRPC protocol API mocks to allow teams to work in parallel.
  • Used automated tests to make sure API mocks did not get out of date.

The startup had two teams working on a set of three new microservices that had to be delivered in two months. The microservices were replacing part of a deprecated monolith. The teams decided to test the Go microservices' internal components with unit tests using in-process mocks implemented with the GoMock framework. They tested interactions with the database with component-level integration tests, which used a test container database to avoid having a dependency on a shared database instance.

To manage contracts between teams and allow teams to work in parallel, they decided to use API mocks that the API producers created and shared with the API consumers. They created the gRPC API service mocks using an API mocking tool.

They also used a handful of manual E2E tests in a pre-production environment to make sure that the microservices would work together in the absence of sufficient automated testing. They would fill the gaps in automated testing after the first deadline.

Since this was a greenfield project that had to go to production on time, the company decided not to version the microservice APIs. Instead, they released everything together to production in each release (often called snapshot releases). They allowed for downtime of services and corresponding graceful degradation of the customer-facing components, which was communicated to the customers beforehand. They started to semantically version the APIs after releasing the first product, which improved API change management, enabled backwards compatibility, and increased uptime for customers.

Another interesting issue they came across early on was that the API mocks were becoming obsolete on a daily basis, because the APIs genuinely were changing daily. These changes were often not backwards compatible, as the developers were refactoring the protocol files to reflect the rapidly evolving domain model, which they had not yet clearly defined. This is a common characteristic of greenfield projects delivered by teams that use an iterative approach to delivering value to customers. To hit their deadlines, the company decided to test the API mocks by firing the same request at both a mock and the real microservice. They compared both responses to a contract definition of expected request/response pairs, written in a company-specific custom format. This way, both the API mocks and the real service were verified against the latest definition of the expected behavior in the contract file.
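
A minimal, hypothetical sketch of this kind of conformance check is shown below, simplified to plain HTTP and JSON for brevity (the team used gRPC and a company-specific contract format); the ports, paths, and expected bodies are illustrative only.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// Hypothetical sketch: fire each request from a contract file at both the API mock
// and the real service, then compare both responses to the expected response.
// Simplified to plain HTTP; the team used gRPC and a custom contract format.
public class ContractConformanceCheck {

    record ContractEntry(String path, String expectedBody) {}

    static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // In reality these would be parsed from the contract definition file.
        List<ContractEntry> contract = List.of(
                new ContractEntry("/quotes/123", "{\"premium\":42.0}"));

        for (ContractEntry entry : contract) {
            String mockBody = get("http://localhost:8081" + entry.path());   // API mock
            String realBody = get("http://localhost:8080" + entry.path());   // real microservice

            // Both the mock and the real service must match the contract.
            if (!entry.expectedBody().equals(mockBody)) {
                throw new AssertionError("API mock drifted from contract: " + entry.path());
            }
            if (!entry.expectedBody().equals(realBody)) {
                throw new AssertionError("Real service drifted from contract: " + entry.path());
            }
        }
    }

    static String get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}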

Combination of techniques: Spanish e-commerce company

Architecture: Moving from a decade-old monolith to microservices.

Tech stack: Java, HTTP REST, gRPC, protocol buffers, JMS, IBM MQ, Docker, OpenShift.

Priority: Move to API-first approach to allow parallel work and decouple teams. Scale adoption of microservices across 3,000 company developers.

Testing techniques used:

  • Technique #1 — Testing your microservice with a test instance of another microservice (E2E tests)
  • Technique #6 — Mocks (in-process)
  • Technique #9 — Service virtualization (over the wire/remote)
  • Technique #11 — Test container

Contract management:

  • Teams use API mocks to communicate the syntax and semantics of contracts to one another.
  • Behavior-driven-development (BDD) API tests, which also verify API mock interactions.
  • The APIs are designed to be always backwards compatible.

Key takeaways:

  • Allowed teams to work in parallel by implementing an API-first approach with API mocks.
  • Developers created mocks based on OpenAPI and protocol specifications.

The company decided to move away from a monolithic architecture to more autonomous teams and microservices. As part of that transition, they decided to embed recommended good practices rather than force use of specific technologies and solutions onto teams.

The architects were responsible for gathering techniques, guidelines, and tools to be used by the developers. They were also responsible for creating an architecture that would minimize waste by reuse of proven techniques, tools, and components.

The developers wrote JUnit and TestNG integration tests and used an API mocking tool to mock dependent components. They also wrote Cucumber/Gherkin BDD acceptance API tests to capture the business requirements (they called these "contract tests"), which use a Docker image of the microservice and a Docker image of an API mocking tool called Traffic Parrot. The BDD tests assert on the microservice's API requests and responses, and verify all communication with dependent components by checking the interactions recorded on the API mocks.
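
The verification side of this idea can be sketched roughly as follows. The company used Traffic Parrot for the mocks; WireMock's Java API (which also appears in the UK media company case study below) is used here purely to illustrate the stub-then-verify pattern, and the step names, URLs, and headers are hypothetical.

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

// Illustrative sketch of BDD steps verifying interactions on an API mock.
// The company used Traffic Parrot; WireMock's Java API is used here only to
// show the stub-then-verify pattern. Names and URLs are hypothetical.
public class OrderSteps {

    private final WireMockServer paymentServiceMock = new WireMockServer(8089);

    public void givenThePaymentServiceAcceptsPayments() {
        paymentServiceMock.start();
        paymentServiceMock.stubFor(post(urlEqualTo("/payments"))
                .willReturn(aResponse().withStatus(201)));
    }

    public void thenTheOrderServiceChargedTheCustomer() {
        // Verifies not only the microservice's own API response (asserted elsewhere)
        // but also that it actually called its dependency as the contract expects.
        paymentServiceMock.verify(postRequestedFor(urlEqualTo("/payments"))
                .withHeader("Content-Type", equalTo("application/json")));
    }
}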

The company used JMeter to create performance tests. The JMeter tests exercise individual microservices and replace the dependent components, such as other microservices and the old legacy monolith, with API mocks. One of the techniques used is to configure response times on the API mocks and observe the impact of increased latency on the calling service.
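
A sketch of the latency-injection side of this technique is shown below, again using WireMock's Java API for illustration rather than the tool the team actually used; the endpoint and delay are made up.

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

// Illustrative latency injection on an API mock (WireMock is shown for brevity;
// the team configured delays in their API mocking tool and drove load with JMeter).
public class SlowDependencySetup {
    public static void main(String[] args) {
        WireMockServer inventoryMock = new WireMockServer(8090);
        inventoryMock.start();

        // A 2-second delay on the dependency lets a JMeter run observe how the
        // calling microservice behaves under increased downstream latency.
        inventoryMock.stubFor(get(urlEqualTo("/inventory/items"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("[]")
                        .withFixedDelay(2000))); // milliseconds
    }
}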

All the unit, acceptance, and performance tests ran in a Bamboo continuous-delivery pipeline.

It’s interesting how the company decided to create the API mocks. They do that in two ways.

If the API that a developer wants to consume already exists, they create the API mocks by recording requests and responses. A developer starts by creating a new test on their computer. They then run the test and create API mocks by recording them. They commit the tests and mocks to the microservice project in Git. In a QA pipeline (a pipeline that is run per commit to check the quality of the product), they start a Docker container that runs the API mocking tool and mount the mock definitions from the microservice project.
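
As a rough illustration of this record-and-commit flow, the sketch below uses WireMock's record/playback API; the team used their own API mocking tool, and the target URL and port are placeholders.

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.client.WireMock;

// Rough sketch of creating API mocks by recording. WireMock's record/playback API is
// used as an example; the team used their own API mocking tool, and the target URL
// and port are placeholders.
public class RecordExistingApi {
    public static void main(String[] args) {
        WireMockServer recorder = new WireMockServer(8091);
        recorder.start();
        WireMock.configureFor("localhost", 8091);

        // Proxy traffic to the real API and capture the request/response pairs.
        WireMock.startRecording("https://real-api.example.com"); // placeholder target

        // ... run the test (or a manual session) against http://localhost:8091 here ...

        // The captured pairs become stub mappings that can be persisted and committed
        // to the microservice's Git repository alongside the tests.
        WireMock.stopRecording();
        recorder.stop();
    }
}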

If the API the microservice will consume does not exist yet, a developer will create the API mocks from OpenAPI specifications for HTTP REST APIs or create the API mocks from protocol files for gRPC APIs.

Whenever a developer needs a Cassandra database in the test suite, they run a Cassandra database test container. The benefit is not having to rely on a centralized copy of the database. They built their own Docker image with their custom Cassandra configuration.
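
A minimal sketch of such a test-container setup with the Testcontainers Cassandra module might look like the following; the stock cassandra:3.11 image stands in for the company's custom image.

import org.testcontainers.containers.CassandraContainer;

// Minimal sketch of a throwaway Cassandra test container. The team built their own
// Docker image with custom configuration; the stock cassandra:3.11 image is used
// here for illustration.
public class CassandraTestContainerExample {
    public static void main(String[] args) {
        try (CassandraContainer cassandra = new CassandraContainer("cassandra:3.11")) {
            cassandra.start();
            // Each test run gets its own database instead of relying on a shared instance.
            System.out.println("Cassandra contact point: "
                    + cassandra.getHost() + ":" + cassandra.getMappedPort(9042));
        }
    }
}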

They also develop and run automated E2E smoke tests. This is one of the techniques for testing contracts between microservices, and it makes sure that groups of microservices work well together. The presence of the E2E test suite is justified because it tests not only the producer side of contracts, which is covered by the BDD tests, but also the consumer side, and so provides more confidence. The architects monitor the number of E2E tests and keep the complexity of the E2E test suite at a level that does not cripple the release process or daily development activities.

Combination of techniques: UK media company

Architecture: Already 100+ microservices in production running on an environment with manually provisioned hardware.

Tech stack: Java, HTTP REST, Docker, Kubernetes.

Priority: Move to the cloud (internal Kubernetes cluster). Reduce infrastructure costs and time to market by moving away from hardware managed by the operations team to autonomous feature teams who release to a Kubernetes cluster. Improve uptime from 99.5% to 99.95%, mainly by introducing zero-downtime releases.

Testing techniques used:

  • Technique #1 — Testing your microservice with a test instance of another microservice (using other microservices for manual exploratory testing early in the cycle)
  • Technique #3 — Testing a microservice with third-party dependencies (third-party UK media-infrastructure test APIs)
  • Technique #5 — Testing a microservice with non-software (hardware) dependencies (network hardware)
  • Technique #6 — Mocks (in-process)
  • Technique #9 — Service virtualization (over the wire/remote API mocks of third-party services and other microservices)
  • Technique #11 — Test container (Oracle database test containers, API-mock test containers, and dependent-microservice test containers)

Contract management:

  • Consumer-driven contracts and contract testing with the other microservices in the cluster.
  • Third-party-API per-contract narrow integration testing.
  • No regression E2E tests.
  • BDD API tests also verify API-mock interactions.
  • The APIs are backwards and forwards compatible (version compatibility n±1).

Key takeaways:

  • Used consumer-driven contracts and consumer-driven contract testing, BDD API testing, and API version management instead of E2E tests.

The company had an existing stack of 100+ microservices that was primarily tested with automated BDD E2E tests, but these were costly to maintain because writing and debugging the suite took significant developer time. Developers were often frustrated because the tests were flaky due to the complexity of the system under test, which led to many non-deterministic failure points. The tests also prevented them from releasing new features on demand, as they took a few hours to run. The company realized that the complex suite of tests would take too much time and cost too much to abandon. For more details about E2E test issues, please have a look at "End-to-End Testing Considered Harmful" by Steve Smith.

With this experience, the company decided to avoid E2E testing for the new product they were working on. This product would run on a new internal Kubernetes cluster and use different contract-management techniques with minimal E2E testing.

The main way to grow confidence in the contracts between the new microservices, and in the behavior of the product as a whole, was to design contracts in a consumer-driven way. The company chose consumer-driven contract testing with Pact-JVM to test those contracts. Most of the teams were entirely new to consumer-driven contracts, but they picked up the approach rapidly.
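
A minimal Pact-JVM (JUnit 5) consumer test is sketched below to show the shape of this approach; the provider and consumer names, state, path, and payload are invented for illustration.

import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

// Minimal sketch of a Pact-JVM (JUnit 5) consumer-driven contract test.
// Service names, paths, and fields are hypothetical.
@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "programme-service")
class ProgrammeClientPactTest {

    @Pact(consumer = "schedule-service")
    RequestResponsePact programmeExists(PactDslWithProvider builder) {
        // The consumer states exactly what it needs from the provider...
        return builder
                .given("programme 42 exists")
                .uponReceiving("a request for programme 42")
                .path("/programmes/42")
                .method("GET")
                .willRespondWith()
                .status(200)
                .body("{\"id\":42,\"title\":\"Evening News\"}")
                .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "programmeExists")
    void fetchesProgramme(MockServer mockServer) {
        // ...and exercises its own client code against the Pact mock server.
        String baseUrl = mockServer.getUrl();
        // new ProgrammeClient(baseUrl).getProgramme(42) would be called and asserted here.
    }
}

The pact file generated by such a consumer test is then verified against the real provider in the provider's own build; that verification step, rather than an E2E suite, is what gives confidence that the contract holds.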

Another technique they used to improve their microservices was to have a manual tester on every feature team. The tester would perform manual exploratory testing of every new user story. For the tests, the tester would run the microservice using a Docker test container on their computer, sometimes along with other microservices.

The developers decided to implement some of the microservices with clean architecture in mind, which required the use of in-process mocking, in this case with Mockito.
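
For illustration, a unit test in that style might look like the sketch below, with the use case and gateway interface entirely hypothetical; the point is that the outer layers are replaced by in-process Mockito mocks.

import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Illustrative unit test for a use case behind a gateway interface, as clean
// architecture encourages; the interface and classes are hypothetical.
class PublishArticleUseCaseTest {

    interface ArticleGateway {
        boolean save(String articleId);
    }

    static class PublishArticleUseCase {
        private final ArticleGateway gateway;
        PublishArticleUseCase(ArticleGateway gateway) { this.gateway = gateway; }
        boolean publish(String articleId) { return gateway.save(articleId); }
    }

    @Test
    void publishesViaTheGateway() {
        // The outer layer (database, HTTP client, etc.) is replaced by an in-process mock.
        ArticleGateway gateway = mock(ArticleGateway.class);
        when(gateway.save("article-1")).thenReturn(true);

        assertTrue(new PublishArticleUseCase(gateway).publish("article-1"));
        verify(gateway).save("article-1");
    }
}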

API mocks running as Docker test containers were used extensively for mocking third-party network hardware and other microservices within the stack.

The BDD acceptance tests used WireMock for API mocking while running inside the TeamCity builds. The manual testers used an API-mocking tool that had a web user interface for their exploratory testing. That made it easier to set up test scenarios and test the microservices.

Specific problem solved: US railway company

Problem: Moving to a new CI/CD pipeline infrastructure for autonomous teams required a service virtualization tool that could be run inside the pipelines instead of using a shared environment.

Technique used to solve the problem: Technique #11 — Test containers (run API mocks as Docker test containers)

Key takeaway: Used on-demand API-mock Docker test containers to avoid relying on shared service-virtualization infrastructure.

This company decided to move from a monolithic to a microservice architecture. One of the major changes they pushed for as part of the restructuring was a new CI/CD pipeline infrastructure.

The architect responsible for designing the pipeline architecture wanted to use service virtualization deployed in a shared environment, because the company already had such a solution in place: virtual services were shared among teams and CI/CD builds in the existing monolithic architecture.

After careful consideration, the architect realized that use of a service-virtualization environment shared by multiple pipelines, developers, and testers in the new world of microservices would be counterproductive. One of the reasons the company was moving to microservices was to let feature teams work independently of each other. Relying on a shared service-virtualization environment managed by a centralized team of administrators would not help them achieve that goal. Also, any shared environment is a single point of failure, which they wanted to avoid.

The architect decided that instead of a shared service-virtualization environment, they would use API mocks deployed as part of the build jobs. He designed a fully distributed deployment approach, where the API mocks are deployed as needed: a Jenkins build starts the API-mock Docker test containers in OpenShift before running the test suite, and the containers are torn down after the build finishes.
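
Expressed from inside a test suite, the same per-build pattern might look like the sketch below; Testcontainers and the official WireMock image are assumptions used for illustration, since the article does not name the mocking tool running in the containers.

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;

// Sketch of starting an on-demand API-mock container for the duration of a test run,
// instead of pointing tests at shared service-virtualization infrastructure.
// The team did this from Jenkins jobs on OpenShift; Testcontainers and the official
// WireMock image are used here purely for illustration.
class ApiMockContainerTest {

    @Test
    void runsAgainstThrowawayApiMock() {
        try (GenericContainer<?> apiMock =
                     new GenericContainer<>("wiremock/wiremock:3.3.1") // illustrative image/tag
                             .withExposedPorts(8080)) {
            apiMock.start();
            String mockBaseUrl = "http://" + apiMock.getHost() + ":" + apiMock.getMappedPort(8080);
            // ... run the microservice's tests against mockBaseUrl ...
        } // the container is torn down when the test (build job) finishes
    }
}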

Specific problem solved: Israeli e-commerce startup

Problem: Introduce automated API and third-party integration testing to an environment where developers are not yet doing any.

Technique used to solve the problem: Technique #9 — Service virtualization (over the wire/remote, with third-party API virtual services created using an off-the-shelf service-virtualization tool and backed by a database)

Key takeaway: Used a third-party API service-virtualization tool to create virtual services backed by a database to speed up onboarding of developers new to automated integration testing.

The startup was launching a new product to the market. The developers were adding new features but were doing no automated API or third-party integration testing. They had to launch new features fast, so there was not much room to change the existing development process.

The QA automation lead developed an API and integration testing framework in Jest. With an off-the-shelf commercial service-virtualization tool he had used on many projects before, he created virtual services that replaced the third-party APIs.

The tool was deployed in a shared environment, as he believed this would give him more control over the adoption of the new testing approach. Each of the individual virtual services was backed by a database table that contained HTTP-request-to-response mapping data. He decided to allow the virtual services to be set up via a database because the developers were already used to setting up test data in a database, and he chose MongoDB, which the developers were already familiar with.
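
The lookup behind such a database-backed virtual service might look roughly like the following sketch; the collection name, document shape, and endpoint are hypothetical, as the article only states that each virtual service was backed by request-to-response mapping data in MongoDB.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import static com.mongodb.client.model.Filters.and;
import static com.mongodb.client.model.Filters.eq;

// Illustrative sketch of a database-backed virtual service lookup: an incoming
// HTTP request is matched against request-to-response mappings stored in MongoDB.
// Collection name and document shape are hypothetical.
public class MappingLookup {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> mappings =
                    client.getDatabase("virtual_services").getCollection("payment_api_mappings");

            // Developers seed mappings the same way they already seed other test data:
            mappings.insertOne(new Document("method", "GET")
                    .append("path", "/v1/payments/123")
                    .append("responseStatus", 200)
                    .append("responseBody", "{\"status\":\"SETTLED\"}"));

            // The virtual service would run this lookup for each incoming request.
            Document match = mappings.find(
                    and(eq("method", "GET"), eq("path", "/v1/payments/123"))).first();
            System.out.println(match.getInteger("responseStatus") + " " + match.getString("responseBody"));
        }
    }
}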

This was the first step in introducing any kind of automated API and integration testing to developers in that startup. The lead believed that the developers would easily grasp the database-driven virtual services. A few hours of onboarding per person were enough to allow the developers to start writing their first automated integration tests.

Specific problem solved: US goods reseller

Problem: Introducing Amazon Simple Queue Service (SQS) queues to the tech stack made it difficult for manual testers to verify that the correct messages were sent to the right queues. It also blocked automated testing efforts.

Technique used to solve the problem: Technique #9 — Service virtualization (over the wire/remote, with an Amazon SQS simulator)

Key takeaway: Used the Amazon SQS simulator to allow testing without access to a real SQS instance.

The company introduced Amazon AWS components to its architecture. Unfortunately, the development and testing teams were disconnected in this organization, so the testing team worked manually, with minimal help from the developers. This meant that the testers did not have access to tools that would help them test the product. They were always behind the development team, pressed for time and working weekends. On top of all that, they were asked to introduce automated testing in order to slowly move away from manual regression testing.

One of the technical challenges was to test integrations with an Amazon SQS queue, both manually and automatically. To do that, they used an Amazon SQS simulator with a user interface, which allowed them to continue manual testing by simulating stateful SQS queues. They used the simulator's user interface to inspect messages on the queue and manually verify request messages. The simulator's APIs also allowed them to start introducing automated tests that integrate with SQS.
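
An automated test against such a simulator might be wired up as in the sketch below, using the AWS SDK with its endpoint overridden to point at the simulator; the article does not name the simulator, so the endpoint, queue URL, and credentials are placeholders.

import java.net.URI;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

// Illustrative automated test setup against an SQS simulator. The endpoint, queue URL,
// and credentials below are placeholders for whatever SQS-compatible endpoint the
// simulator exposes.
public class SqsSimulatorExample {
    public static void main(String[] args) {
        SqsClient sqs = SqsClient.builder()
                .endpointOverride(URI.create("http://localhost:9324")) // simulator, not real AWS
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test", "test")))
                .build();

        String queueUrl = "http://localhost:9324/queue/orders"; // placeholder queue URL

        // The code under test sends to the queue exactly as it would in production...
        sqs.sendMessage(SendMessageRequest.builder()
                .queueUrl(queueUrl)
                .messageBody("{\"orderId\":\"123\"}")
                .build());

        // ...and the test (or a manual tester via the simulator's UI) inspects what arrived.
        sqs.receiveMessage(ReceiveMessageRequest.builder().queueUrl(queueUrl).build())
                .messages()
                .forEach(m -> System.out.println("Received: " + m.body()));
    }
}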

Next steps

The case studies described here are only partial accounts of what happened at those organizations. We have focused on how the teams managed dependent components while testing and specifically chose these six case studies as they represent very different approaches. They show how teams use different techniques depending on context and a team’s task-relevant maturity.

We would be keen to get feedback and stories from readers. What uses of the techniques mentioned in Part 1 and Part 2 have you seen? It would also be great to hear if you agree or disagree with how these companies in Part 3 approached their testing and chose the techniques they used.

Please leave comments below the article or contact us via LinkedIn or Twitter.

If you have any project-specific concerns or questions, please contact the authors: CEO Wojciech Bulaty at wojtek@trafficparrot.com or @WojciechBulaty and technical lead Liam Williams at liam@trafficparrot.com or @theangrydev_.

About the Authors

Wojciech Bulaty specializes in agile software development and testing architecture. He brings more than a decade of hands-on coding and leadership experience to his writing on agile, automation, XP, TDD, BDD, pair programming, and clean coding. His most recent offering is Traffic Parrot, where he helps teams working with microservices to accelerate delivery, improve quality, and reduce time to market by providing a tool for API mocking and service virtualization.

Liam Williams is an automation expert with a focus on improving error-prone manual processes with high-quality software solutions. He is an open-source contributor and author of a number of small libraries. Most recently, he has joined the team at Traffic Parrot, where he has turned his attention to the problem of improving the testing experience when moving to a modern microservice architecture.

 
