Microservices Integration Done Right Using Contract-Driven Development

Key Takeaways

  • Integration testing has become the largest hurdle to the independent development and deployment of microservices, hurting quality, time-to-market, predictability of delivery, and ultimately business agility
  • We need an alternative approach that identifies compatibility issues between microservices early in the development cycle and reduces the dependence on integration testing
  • Adopting API specifications such as OpenAPI or AsyncAPI is a step in the right direction to clearly communicate the API signature and avoid communication gaps. But why stop there when you can get a lot more value from them?
  • Contract-Driven Development leverages API specifications as executable contracts, using Specmatic to shift the identification of compatibility issues left and thereby reduce or even eliminate the need for integration tests
  • Specmatic has a #NOCODE approach that holds both consumer and provider teams accountable to a commonly agreed API specification by emulating the provider for the consumers through "Smart Mocks" and emulating the consumer for the provider through "Contract as Test"

The ability to develop and deploy a single microservice independently is the most critical indicator of a successful microservices adoption strategy. However, most teams must undergo an extensive integration testing phase before deploying their services. This is because integration tests have become necessary to identify compatibility issues between microservices: unit and component/API tests do not cover the interactions across service boundaries.

This reliance creates several problems. First, integration tests are a late feedback mechanism for finding compatibility issues, and the cost of fixing such issues increases severalfold the later they are discovered.

Second, late discoveries can cause extensive rework for both consumer and provider teams, which severely impacts the predictability of feature delivery because teams must juggle regular feature development along with fixing integration bugs.

Third, integration environments can be extremely brittle. Even a single broken interaction, caused by a compatibility issue between two components/services, can compromise the entire environment, which means even unrelated features and microservices cannot be tested.

This blocks the path to production, even for critical fixes, and can bring the entire delivery to a grinding halt. We call this "Integration Hell."

Integration Testing - understanding the beast

Before we kill integration testing, let us understand what it actually is. The term is often a misnomer.
 
Testing an application is not just about testing the logic within each function, class, or component. Features and capabilities are the result of these individual snippets of logic interacting with their counterparts. If a service boundary/API between two pieces of software is not properly implemented, it leads to what is popularly known as an integration issue. Example: if functionA calls functionB with only one parameter while functionB expects two mandatory parameters, there is an integration/compatibility issue between the two functions. Within a single codebase, the compiler flags this immediately. Such quick feedback helps us course correct and fix the problem right away.
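
As a minimal illustration in Java (functionA and functionB are the placeholder names from the example above), the snippet below intentionally fails to compile, showing the kind of early feedback a compiler gives for a broken in-process service boundary:

public class Signatures {
    // functionB requires two mandatory parameters.
    static int functionB(int quantity, int unitPrice) {
        return quantity * unitPrice;
    }

    static int functionA() {
        // Compile-time error: functionB(int, int) cannot be applied to (int).
        // The broken boundary is flagged before the code ever runs.
        return functionB(5);
    }
}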

However, when we look at such compatibility issues at the level of microservices, where the service boundaries are at the HTTP, messaging, or event level, any deviation or violation of the service boundary is not immediately identified during unit and component/API testing. The microservices must be tested with all their real counterparts to verify whether there are broken interactions. This is what is broadly (and in a way wrongly) classified as integration testing.

Integration testing is used as a term to cover a broad spectrum of checks:

  1. Compatibility between two or more components
  2. Workflow testing – an entire feature that involves an orchestration of interactions
  3. Interaction with other dependencies such as storage, messaging infrastructure, etc.
  4. And more, just short of end-to-end tests with production infrastructure

To be clear, when we are talking about killing "integration testing," we are talking about removing the dependency on "integration tests" as the only way to identify compatibility issues between microservices. Other aspects, such as workflow testing, may still be necessary.

Identifying the inflection point - knowing where to strike

When all the code is part of a monolith, the API specification for a service boundary may just be a method signature, and method signatures can be enforced through mechanisms such as compile-time checks, thereby giving early feedback to developers.

However, when a service boundary is lifted to an interface such as an HTTP REST API by splitting the components into microservices, this early feedback is lost. The API specification, which was earlier documented as an unambiguous method signature, now needs to be documented explicitly to convey the right way of invoking it. If that documentation is not machine-parsable, it can lead to a lot of confusion and communication gaps between teams.

Without a well-documented service boundary,

  1. The consumer/client can only be built against an approximate emulation of the provider. Hand-rolled mocks and other stubbing techniques often lead to a problem called stale stubs, where the mock is no longer truly representative of the provider (a sketch of this follows this list).
  2. Likewise, the provider has no emulation of the consumer to develop against.
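
To make the stale-stub problem concrete, here is a hypothetical hand-rolled stub in Java (all names and fields are invented for illustration). Suppose the provider has since added a mandatory currency field to its response; nothing forces the stub to follow, so consumer tests keep passing against a provider that no longer exists:

// Hypothetical client interface the consumer codes against.
interface ProductClient {
    String fetchProductJson(int id);
}

// Hand-rolled stub written when the API returned only name and price.
// Suppose the real provider now also returns a mandatory "currency" field:
// this stub will never tell us that. It silently drifts out of date,
// becoming a "stale stub".
class FakeProductClient implements ProductClient {
    @Override
    public String fetchProductJson(int id) {
        return "{\"name\": \"widget\", \"price\": 10}";
    }
}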

This means that we must resort to a slow and sequential style of development where we wait for one of the components to be built before we start with the other. This is not a productive approach if we need to ship features quickly.

We have now lost two critical aspects by moving to microservices:

  1. The ability to clearly communicate the API specification of a service boundary to the components interacting across it
  2. The ability to enforce that API specification

We need a way to compensate for these deficiencies.

API Specifications

Adopting an API specification standard such as OpenAPI or AsyncAPI is critical to bringing back the ability to communicate API signatures in an unambiguous and machine-readable manner. While this adds to developers' workload to create and maintain these specs, the benefits outweigh the effort.

That said, API specifications, as the name suggests, only help in describing the API signatures. What about the aspect of enforcing them in the development process for early feedback? That part is still missing.

Code/Document Generation - Ineffective and unsustainable

We could argue that API specifications can be enforced through code generation techniques. On the surface, this seems to make sense: if the code was generated from the specification, how could it deviate from the specification?

However, here are the difficulties:

  1. Ongoing development - Most code gen tools/techniques generate scaffolds for the server/provider and client/consumer code and require us to fill in our business logic within that scaffold/template. The problem arises when the specification changes: we usually need to regenerate the scaffold, extract our business logic from the older version of the code, and paste it back in, which leaves room for human error.
  2. Data type mismatches - Code gen capabilities have to be built separately for each programming language. In a polyglot environment, the generated scaffolds may not be consistent in terms of data types, etc., across the various programming languages. This is further exacerbated if we leverage document generation (generating API specifications from the provider/service code) in one programming language and then use that generated specification to generate scaffolding for client code in another.

Overall, code generation and document generation may work only in limited scenarios, with several caveats. While they may initially provide a quick way for teams to build applications by giving them free code, the ongoing cost of such techniques makes them impractical.

So, we need another way to enforce API specifications.

Contract-Driven Development - API Specifications as Executable Contracts

A method signature can be enforced by a compiler to give early feedback to a developer when they are deviating from the method signature. Can something similar be done for APIs?

Contract testing is an attempt to achieve this goal. According to Pact.io’s documentation:

Contract testing is a technique for testing an integration point by checking each application in isolation to ensure the messages it sends or receives conform to a shared understanding that is documented in a "contract."

However, it is important to note that there are several approaches to contract testing itself, such as consumer-driven contract testing (Pact.io), provider-driven contract testing (Spring Cloud Contract, in its producer contract testing approach), bi-directional contract testing (Pactflow.io), and so on. In a large majority of these approaches, the API contract is a separate document from the API specification. For example, in Pact.io, pact JSON files are the API contracts; Spring Cloud Contract likewise has its own DSL to define the contract. Instead of maintaining two different artifacts that could potentially go out of sync, what if we could leverage the API specification itself as the API contract, giving developers early feedback when their implementation deviates from it in ways that would break the consumer/API client?
 
That is exactly what Specmatic can achieve. Specmatic is an open-source tool that embodies Contract-Driven Development (CDD). It enables us to split the interactions between the consumer and provider into independently verifiable units. Consider the following interaction between two microservices, which are currently being verified only in higher environments.

ServiceA <-> ServiceB

CDD helps us split this interaction into its constituent components:
 

ServiceA <-> Contract as Stub {API spec of ServiceB}
             Contract as Test {API spec of ServiceB} <-> ServiceB

Let us now examine the above in detail.

  1. Left-hand side - ServiceA => Contract as Stub
    1. Here, we are emulating the provider (ServiceB) for the consumer (ServiceA) so that consumer application development can progress independently of the provider.
    2. Since the Contract as Stub (Smart Mock) is based on the mutually agreed API specification, it is a truly representative emulation of the provider (ServiceB) and throws an error when the consumer (ServiceA) deviates from the API specification while invoking it (see the consumer-side sketch after this list).
  2. Right-hand side - Contract as Test => ServiceB
    1. Here, we emulate the consumer (ServiceA) for the provider (ServiceB) by invoking it and verifying that the response conforms to the mutually agreed API specification.
    2. Contract as Test provides immediate feedback to the provider (ServiceB) application developer as soon as they deviate from the spec (a provider-side sketch follows a little further below).
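
As a rough sketch of the consumer side: the agreed specification can be served as a smart mock using Specmatic's documented stub mode (invoked along the lines of specmatic stub api_spec.yaml), and ServiceA simply points its HTTP client at it. The file name, port, and endpoint below are assumptions for illustration, not Specmatic defaults to rely on.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Consumer-side sketch: the provider (ServiceB) is emulated by running
// Specmatic in stub mode against the agreed spec. The port and the
// /products/1 endpoint are illustrative assumptions.
public class ConsumerAgainstSmartMock {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9000/products/1"))
                .GET()
                .build();

        // If ServiceA's request deviates from the spec (wrong path, missing
        // mandatory field, bad data type), the smart mock rejects it instead
        // of silently returning a canned response.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}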

Now that we can adhere to the API specification at a component level for both the consumer (ServiceA) and provider (ServiceB) applications while building them independently of each other, there is no need to test their interactions by deploying them together. The dependency on integration tests for identifying compatibility issues is gone.

This is how Specmatic is able to leverage API specifications as executable contracts.
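
On the provider side, Specmatic can run every operation in the specification as a JUnit test against a locally running ServiceB. The sketch below is illustrative only: the exact package name and configuration properties vary across Specmatic versions, so treat them as assumptions and consult the Specmatic documentation.

import io.specmatic.test.SpecmaticJUnitSupport;
import org.junit.jupiter.api.BeforeAll;

// Provider-side sketch: Specmatic generates one test per operation in the
// spec and fires real HTTP requests at the running provider (ServiceB).
// The package name and the host/port properties are assumptions; they
// differ between Specmatic versions.
public class ProviderContractTest extends SpecmaticJUnitSupport {
    @BeforeAll
    static void setUp() {
        // Point the generated contract tests at the locally running provider.
        System.setProperty("host", "localhost");
        System.setProperty("port", "8080");
    }
}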

Contract as Code

The linchpin here is the API specification itself, which allows the API providers and consumers to decouple and drive the development and deployment of their respective components independently while keeping all of them aligned.

For Contract-Driven Development (CDD) to be successful, we need to take an API-first approach, where API providers and consumers collaboratively design and document the API specification first. This means using one of the modern visual editors, such as Swagger, Postman, or Stoplight, to author the API specification, focusing on the API design and making sure all the stakeholders are in sync before they go off and start building their pieces independently.

Teams that are accustomed to generating the API specification from their code might feel uncomfortable with this reversed flow of writing the API specification first. CDD requires a mindset shift similar to test-driven development, where we hand-write the test first to guide/drive our code design. Similarly, in CDD, we hand-author the API specification first and then use tools like Specmatic to turn it into executable contract tests.

With approaches that rely on generating the API specification from code, I have observed that API design takes a backseat, becoming an afterthought or being biased toward either the consumer or the provider. Moreover, under time-to-market pressure, starting with the API specification allows consumer and provider components to progress independently, in parallel. This is not possible when we depend on generating the API specification from code, since consumers have to wait until providers have written the code and generated the specs.

Once teams have agreed on a common API specification, it is absolutely important that there is a single source of truth for it. Multiple copies of these specs floating around will lead consumer and provider teams to diverge in their implementations.

CDD stands on the strength of three fundamental pillars. While "Contract as Stub" and "Contract as Test" keep the consumer and provider teams in line, the glue holding everything together is the third pillar: the central contract repo.

API specifications are machine-parsable code, and what better place to store them than a version control system? By storing them in a version control system such as Git, we can also add rigor to the process of authoring them through a Pull/Merge request process. The Pull/Merge request should ideally involve the following steps:

  1. Syntax check + linting to ensure consistency
  2. Backward compatibility checks to identify if there are any breaking changes
  3. A final review and merge step

It is highly recommended that specs be stored in a central location; this suits most cases, even large enterprises. Storing specifications across multiple repositories is not recommended unless there is an absolute necessity for that practice.

Once the specifications are in the central repo, they can be:

  1. Leveraged by consumer and provider teams to make independent progress
  2. Published to API gateways where appropriate

The death of integration tests

Now that we have eliminated the need for integration tests to identify compatibility issues between applications, what about system and workflow testing?

CDD paves the way for more stable higher environments since all compatibility issues have been identified much earlier in the development cycle (in environments such as local and CI), where the cost of fixing such issues is significantly lower. This allows us to run system and workflow tests to verify complex orchestrations in the now-stable higher environments. And since we have removed the need for running integration tests to identify compatibility issues, this reduces the overall run time of test suites in higher environments.
