
Using API-First Development and API Mocking to Break Critical Path Dependencies

Key Takeaways

  • Many organizations are moving towards API-first development in order to decrease the coupling between teams and increase development velocity. Using API mocks can support the goals of this initiative.
  • We present a case study on API-first development that used API mocks to enable teams to work in parallel and get to market faster.
  • There is a simple way, based on the cost of delay, to estimate the value that parallelizing teamwork with API mocks will bring to your organization.
  • The spreadsheet model provided in this article can be used to calculate potential savings.
  • Adoption of API mocking requires just one team -- there is no need to migrate the whole organization at once.

Teams are using API mocking to break critical path dependencies and turn what were serial execution sequences into parallel paths.

This article looks at where mocks should be used for the greatest impact and provides a model to estimate the effect of implementing API mocking and an API-first approach.

Moving toward API-first development and the case for API mocking

The enterprise software industry is moving away from primarily monolithic systems to more distributed microservice architectures deployed on a private or public cloud.

This architectural change also drives API-first development, where teams define upfront business contracts between each other using APIs.

Completing the contract before development of the features coupled to the API begins enables the teams to work in parallel on the respective producers and consumers of the API.

For the API-first approach to be implemented effectively, in the majority of cases, it will require API mocks.

See Figure 1 below for an overview of how API mocks fit into the testing infrastructure.

For details on how to use mocks and other test double techniques when testing microservices, see "Testing Microservices: An Overview of 12 Useful Techniques - Part 1".

Figure 1: Architectural overview of API mocks in microservice architectures

API-mocking case study

An InsurTech startup has been developing microservices in Golang and Python and deploying them as Docker containers on Kubernetes. They used gRPC-based APIs to communicate between microservices. Teams had to wait for one another to finish the gRPC APIs before they could start their own work. This led to blocked timelines between teams, which meant the startup couldn’t deliver at the fast pace required by their customers. It was clear to the VP of engineering that they needed a solution that would allow the teams to work independently.

Using a gRPC API-mocking solution, they unblocked the timelines between the teams. Unlike open-source alternatives, it provided support for complex message schemas and the latest protocol features.

The teams used API mocks to develop both sides of their gRPC APIs in parallel, without having to wait for the server code to be written before a client could be tested. They ran automated test suites on their CI build agents, with the API mocks running in a Docker container on the agent.
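The startup's gRPC setup isn't reproducible in a few lines, but the underlying pattern -- a consumer test exercised against a mock server that returns canned, contract-agreed responses -- can be sketched in Python. This illustration uses plain HTTP instead of gRPC for brevity, and every endpoint, field, and value is hypothetical:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical canned response agreed in the API contract.
CANNED_QUOTE = {"policy_id": "p-123", "premium": 42.0}

class MockQuoteAPI(BaseHTTPRequestHandler):
    """Stands in for the producer team's unfinished service."""
    def do_GET(self):
        body = json.dumps(CANNED_QUOTE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def fetch_premium(base_url):
    """Consumer-side code under test; talks to the real or mocked API."""
    with urllib.request.urlopen(f"{base_url}/quotes/p-123") as resp:
        return json.loads(resp.read())["premium"]

# Start the mock on a free port, exercise the consumer code, shut down.
server = HTTPServer(("127.0.0.1", 0), MockQuoteAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
premium = fetch_premium(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
print(premium)  # 42.0
```

In CI, the same idea applies with the mock running as a Docker container alongside the build agent and the consumer's configuration pointing at the mock's address instead of the real service.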

Model available

We have taken the example from above and created a spreadsheet that you can use to calculate high-level expected returns on investment when deciding to implement API mocking.

Feel free to download the spreadsheet here.

Figure 2: Before and after using API mocks to parallelize teamwork -- two-team example

Figure 3: Spreadsheet to calculate the cost of delay of not using API mocking

As seen in Figure 3, user inputs are in blue and calculations are in yellow.

The spreadsheet inputs, as seen in Figure 3, include:

  • How much work does the team have to complete the feature?
  • How long is the team blocked waiting for APIs in days?
  • How long to define APIs? (typical time is 2 days)
  • How long does it take to get up to speed with API-mocking tooling? (typical time is 4 days)
  • How long does it take to create the API mocks to be unblocked? (typical time is 2 days)
  • How much time is needed to re-sync the mocks if the APIs change?
  • How long to integrate with real APIs when they are ready?
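The inputs above can be combined into a back-of-the-envelope model. This is a sketch only: the function name, parameter names, and default values are our illustration based on the typical times quoted above, and the linear formula ignores the overlaps shown in the Gantt charts, so it approximates rather than reproduces the figures:

```python
def days_saved_with_mocks(work_days, blocked_days,
                          define_api_days=2, ramp_up_days=4,
                          create_mock_days=2, resync_days=1,
                          integration_days=2):
    """Rough estimate of calendar days saved by working against mocks."""
    # Without mocks: wait out the full block, then work, then integrate.
    serial_total = blocked_days + work_days + integration_days
    # With mocks: define the APIs, ramp up on tooling, and create mocks
    # up front, then work in parallel with the producer team; re-sync the
    # mocks and budget an extra integration day for mock inaccuracies.
    parallel_total = (define_api_days + ramp_up_days + create_mock_days
                      + work_days + resync_days + (integration_days + 1))
    return serial_total - parallel_total

# Roughly the two-team example: 15 days of work blocked for 20 days.
print(days_saved_with_mocks(work_days=15, blocked_days=20))  # 10
```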

In the spreadsheet, we are modeling with several key assumptions:

  • Teams can’t work in parallel without the dependent APIs -- they need the real APIs or API mocks to be able to work
  • Both API producer and consumer teams define the APIs (Swagger, Proto, WSDL, etc.)
  • Teams are working with a Kanban-style approach (no Sprints/Iterations)
  • Developers are writing the automated tests
  • API consumer teams (not API producer teams) create the mocks
  • API definitions are stable -- the APIs don’t change significantly and the minor changes are communicated between teams when needed as soon as they are discovered
  • Teams can use the real APIs for integration testing as soon as they are ready
  • Teams can choose to implement more mocks based on other sheets in this spreadsheet but it’s not part of this model for simplicity (e.g. to speed up builds, handle more failure scenarios, etc.)

If these assumptions match your development lifecycle, our team would be happy to discuss your constraints and prepare a model for them.

Places where mocking should be used -- critical path

We have seen mocking be useful and save time in simple two-team situations and also on projects involving multiple teams working on a new product or a new feature.

Parallelizing work by using API mocks -- simple two team example

In this case, team A depends on team B’s work to be completed before the feature is ready to be released to production.

In Figure 2, we share a Gantt chart describing the situation. Team B is developing new functionality and the APIs they expose will be available on day 20. Then team A picks up developing their feature and takes until day 35 to be ready. Then both teams A and B do their integration testing. Finally, the feature is ready to be deployed to production on day 37.

Suppose, however, the teams decide to follow the API-first approach and start by defining the business contract between them. If they define the APIs between their systems and use API mocks, the feature can be deployed to production as early as day 26. In that case, while part of team B has already started working on the feature, a couple of team members from both teams can take a couple of days to define the APIs between their systems. Typically, team A can train on using API mocks and create the mocks in four days or less. Once the API mocks are ready, team A can implement their feature against them. After that is done, they can proceed to integration testing with team B, which might take a day longer than testing with real APIs alone, as mocks are never a fully accurate representation of the real system -- there might be minor changes required for everything to run smoothly with the real, not mocked, APIs.

There is also an option to speed things up even more, by outsourcing the mock creation to a third-party vendor. If the API mocks were outsourced, the feature would be completed on day 20.

So, by using API mocks and defining APIs first, we were able to accelerate the feature release to production and release on day 26 (or day 20 with outsourced mocks) instead of day 37.

Parallelizing work by using API mocks -- multiple teams example

Let’s expand on the example above and introduce four teams instead of two. In this example, we have teams A, B, C, and D delivering a new complex piece of functionality for the company.

Team A is working on mobile number porting functionality. But they need the mobile customer provisioning APIs to be able to do that, which are delivered by Team B. Team B needs to be able to do mobile number lookups to complete their mobile customer provisioning feature, which is team C’s responsibility. Team C, though, is blocked by Team D, which is doing the mobile number third-party services integration.

Team A is on the hook for delivering their part on day 70. But they feel the pressure -- they cannot start working on their part until day 61 because team B doesn’t finish their work until then. See Figure 4.

Figure 4: Four teams working in sequence on delivering a big feature

Team A, after seeing the project plan, decides to derisk their position by starting work early. They approach team B and work closely with them to define the APIs. They create the API mocks and start working on day 43 instead of day 61. See Figure 5.

Figure 5: Teams A and B work in parallel

This means that Team A has a bit more breathing space because they finish on day 66 and have 4 days until day 70. So, if something unexpected happens with Team B’s, C’s, or D’s work, there is a bit of wiggle room for schedule error corrections.

Team B, after reviewing the updated project plan, realizes that they are now the ones closest to the deadline. They used to have 10 days of slack until day 70 after finishing their work -- now they have only 6 days until day 66. They decide to work in parallel with Team C as well. Team C, after talking to Team B, realizes they can also work in parallel with Team D. This means the feature should now be delivered 40 days ahead of schedule, on day 30 instead of day 70. See Figure 6.

Figure 6: All teams work in parallel

It’s also worth mentioning that the order in which teams adopt API mocking in this example could be shuffled around if there is a need. For example, if Team C adopted API mocking first and delivered several days early, this would derisk team B’s and A’s schedules as well by providing a buffer against risks such as late changes to API definitions.

The longer the schedule, the higher the risk of slipping milestones due to unforeseen complexities -- by starting the work with mocks early, you can shift the milestones left and identify key risks sooner.

Although this is a model that makes a few assumptions covered above, it hopefully conveys the point of how you can parallelize teamwork using API mocks when doing API-first development.

ROI calculation for parallelizing work

As seen in the Gantt chart above, the feature can now be delivered 40 days ahead of schedule; this is reflected in the "RESULTS" section of the spreadsheet.

This can be important to executives working for startups where promises have been made to shareholders or investors that a feature will be ready by a given date, as it allows them to derisk that schedule and meet those promises.

The spreadsheet calculates the cost of delay of not implementing a solution for 12 months.

Figure 7: The RESULTS section in the cost of delay spreadsheet

This also allows estimating the dollar value of delaying the adoption of API mocking in your organization. In the example in Figure 7, we assume that the company would be making an additional $5000 per day if the feature were available to its customers. Forty days times $5000 of potential new revenue per day means it is costing the company $200,000 not to onboard API mocking for this feature. Assuming the company decides to purchase a commercial offering that costs $10,000 per year, the cost of delay drops only slightly, to $190,000. In this example, we assume that this isn’t the only feature the company will develop using a similar methodology. Suppose the company has not one but three features that will be similarly constrained in the next 12 months. In that case, it is potentially losing $590,000 by not implementing API mocking and an API-first approach to deliver those features.
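The arithmetic above can be captured in a few lines of Python (a sketch; the function and parameter names are ours, not the spreadsheet's):

```python
def cost_of_delay(days_saved, revenue_per_day, tooling_cost_per_year,
                  features=1):
    """Potential 12-month loss from NOT adopting API mocking, net of the
    annual tooling cost."""
    return days_saved * revenue_per_day * features - tooling_cost_per_year

# One feature: 40 days x $5000/day, minus a $10,000/year tool.
print(cost_of_delay(40, 5000, 10_000))              # 190000
# Three similarly constrained features in the same 12 months.
print(cost_of_delay(40, 5000, 10_000, features=3))  # 590000
```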

How to get started with API mocking

Starting with an API-first approach and API mocking can be done by just one team. There are benefits to onboarding the whole organization, but that shouldn’t stop you from picking the low-hanging fruit today and beginning with a single selected team.

One way of selecting the first team to implement API-first development and API mocking is to identify the business-critical features, list the teams involved on a Gantt chart, and choose the team that, when working in parallel, will make the most difference to the project end dates.

Alternatively, if you are a team lead of a team that is on the hook to deliver on a certain date, like "Team A" in the examples above, you can take the initiative and onboard your team to API-mocking and API-first development and reduce the schedule pressure your team is under.

A useful resource for the team containing a list of API-mocking tools to evaluate can be found on Wikipedia.

About the Author

 Wojciech Bulaty specializes in enterprise software development and testing architecture. He brings more than a decade of hands-on coding and leadership experience to his writing. He is now part of the Traffic Parrot team, where he helps teams working with microservices to accelerate delivery, improve quality, and reduce time to market by providing a tool for API mocking and service virtualization. You can follow Wojciech on Twitter or reach out to him on LinkedIn.
