
Four Case Studies for Implementing Real-Time APIs

Key Takeaways

  • API calls now make up 83% of all web traffic. Competitive advantage is no longer won by simply having APIs; the key to gaining ground lies in the performance and reliability of those APIs. 
  • Modern systems can consist of multiple internal API calls among microservices; slow-performing API infrastructure can degrade service levels throughout an organisation’s systems.
  • Processing internal API calls within the corporate firewall instead of routing them through a cloud outside the corporate network can improve performance significantly.
  • Performance trumps feature richness in API management solutions when revenue is correlated with speed. Teams should recognize this and design system applications and infrastructure accordingly.
  • Consolidating traffic management solutions within a system can improve the capability to react to changing requirements, rapidly update configuration, and benchmark overall performance.

API calls now make up 83% of all web traffic. The age of the API has arrived, and companies should be well past the point of just having enthusiasm for developing APIs — they need them to survive in digital business. But in the digital era, it’s easy for your customers and partners to switch services. Don’t like your bank? Opening a new account in another bank is as simple as downloading an app. APIs have made it easy for us to consume services across all industries. 

That means competitive advantage is no longer won by simply having APIs; the key to gaining ground lies in the performance and reliability of those APIs. According to our research at NGINX, you need to be able to process an API call in 30ms or less in order to deliver real-time experiences. 

Here are four case studies of companies that created an API structure capable of delivering the real-time speed needed to win business in this highly competitive landscape. 

Automate API management at scale for microservices: Green-field online bank

Objective: Deliver real-time performance and reliability at scale to customers accessing applications from non-smart mobile devices.

Problem: Microservices-based applications were delivering poor performance and struggling with added latency when processing internal API calls for transactions. 

Key takeaway: External services can consist of multiple, internal API calls among microservices; slow-performing API infrastructure can degrade service levels.

Background and challenge

This green-field bank was launched in 2016 with the goal of providing banking services to poor and rural areas in South Africa. In order to compete with the incumbent banks, they decided to build a modern digital banking application from the ground up using containers and microservices to provide digital banking services to unbanked or underbanked households. 

They built a distributed architecture based on this microservices reference architecture. This provided the bank with the flexibility needed to deliver new services quickly enough to match the expectations of today’s digital consumers, while limiting downtime. The bank chose this reference architecture as it was prescriptive and gave them a roadmap to start small – initially handling ingress traffic to a modest Kubernetes cluster – and grow to hundreds or even thousands of microservices connected via a service mesh.

However, pivoting to a microservices architecture introduced significant complexity, both around API scalability and around the increased inter-service communication (east-west traffic) required for microservices to communicate with each other. 

This posed a significant challenge to the bank, as most of their customers were accessing their services not from smart devices but from non-smart mobile phones. As a result, the bank could not rely on customer computing power. Instead, a simple mobile phone would communicate with the bank via SMS, which would kick off a series of internal API calls to process the transaction and deliver the result back to the customer via SMS.

How real-time API management helped

Unreliable or slow performance can directly impact or even prevent the adoption of new digital services, making it difficult for a business to maximize the potential of new products and expand its offerings. Thus, it is not only crucial that an API processes calls at acceptable speeds, but it is equally important to have an API infrastructure in place that is able to route traffic to resources correctly, authenticate users, secure APIs, prioritize calls, provide proper bandwidth, and cache API responses. 

Most traditional APIM solutions were made to handle traffic between servers in the data center and the client applications accessing those APIs externally (north-south traffic). They also need constant connectivity between the control plane and data plane, which requires using third-party modules, scripts, and local databases. Processing a single request creates significant overhead — and it only gets more complex when dealing with the east-west traffic associated with a distributed application. 

Considering that a single transaction or request could require multiple internal API calls, the bank found it extremely difficult to deliver good user experiences to their customers. For example, the bank’s existing API management solution was adding anywhere from 100 to 500ms of transaction latency to each call.
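A back-of-the-envelope calculation shows why per-call gateway overhead compounds so badly when one external transaction fans out into several internal calls. The call count and service time below are illustrative assumptions, not the bank's actual numbers; only the 100ms overhead figure comes from the case study:

```python
# Hypothetical numbers illustrating how per-call gateway overhead compounds
# when one external transaction fans out into several internal API calls.
INTERNAL_CALLS = 5          # internal hops per transaction (assumed)
GATEWAY_OVERHEAD_MS = 100   # low end of the observed 100-500 ms range
SERVICE_TIME_MS = 10        # actual work per microservice (assumed)

total_ms = INTERNAL_CALLS * (GATEWAY_OVERHEAD_MS + SERVICE_TIME_MS)
print(f"end-to-end latency: {total_ms} ms")
```

Even at the low end of the overhead range, the transaction takes over half a second end to end, far beyond the 30ms real-time threshold.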

The bank had also established CI/CD tooling and frameworks to automate microservices development and deployment. They had a Site Reliability (SRE) team tasked with overseeing the entire system and needed a solution that could integrate easily with their CI/CD pipeline infrastructure.

Deploying an API management solution that decouples the data and control plane introduced less latency and offered high-performance API traffic mediation for both external service and inter-service communication. Runtime connectivity to the control plane was no longer needed for the data plane to process and route calls, minimizing complexity. As a result, the bank was able to process API calls up to three times faster. In addition, the new API management solution integrated directly with the bank's existing microservices deployment and CI/CD pipeline, as well as offering monitoring for SRE teams to help eliminate human error, downtime, and reduce operational complexity.

Connect microservices API traffic - Leading Asia-Pacific telecommunications provider 

Objective: Process 600 million API calls per month across at least 800 different internal APIs. 

Problem: The existing API infrastructure was too expensive and slow to use for internal API traffic, resulting in performance degradation.

Key takeaway: Processing internal API calls within the corporate firewall instead of routing them through a cloud outside the corporate network improves performance significantly.

Background and challenge

A leading telecom organization from the Asia-Pacific region embraced an API-first development philosophy as part of their digital transformation initiative. This drove an explosion of internal APIs: the company now needed to process 600 million API calls per month across 800 different internal APIs. The existing infrastructure was not well-equipped to handle this new traffic, degrading performance and forcing developer and DevOps teams to invest in non-standard workarounds. 

The telecom company was already using a solution for API management. However, it was better suited for handling north-south traffic from external APIs rather than east-west traffic between internal services. The company decided to seek a new solution that would allow them to reduce latency, while also providing capabilities to empower their DevOps teams with self-service API management.

How real-time API management helped

The telecom organization’s existing API management solution relied on deployment models where the API management and gateways were hosted in the public cloud, meaning that traffic must be looped out to the cloud first for every interaction. In this case, sending traffic out to the public cloud was not only costly, it was much slower — adding several seconds of latency. 

The telecommunications organization opted to segment external and internal API management, implementing a higher-performing API management solution for internal APIs that logically sat “behind” the existing deployment on the enterprise perimeter. This allowed them to process internal API calls within the corporate network rather than routing them out to the public cloud, resulting in a 70% reduction in latency, with API calls processed in 20ms or less.
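As a sketch, an internal gateway of this kind might be configured along the following lines in NGINX; the upstream name, addresses, and paths here are hypothetical, not the organization's actual configuration:

```nginx
# Hypothetical internal gateway: keeps east-west traffic inside the
# corporate network instead of looping it out to a public cloud.
upstream billing_service {
    server 10.0.1.10:8080;   # internal service instances (addresses assumed)
    server 10.0.1.11:8080;
    keepalive 32;            # reuse upstream connections to cut per-call latency
}

server {
    listen 8000;             # reachable only inside the firewall

    location /api/billing/ {
        proxy_pass http://billing_service;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for keepalive upstreams
    }
}
```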

DevOps teams were also able to easily create, publish, and monitor APIs, helping to increase application and microservices release velocity by integrating API management tasks directly into their CI/CD pipeline. By automating routine tasks using APIs, such as API definition and gateway configuration, the organization was able to achieve significant savings in time and effort. 
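Automating API definition and gateway configuration from a CI/CD pipeline might look like the following Python sketch. The payload schema and management endpoint are assumptions for illustration, not any vendor's actual API:

```python
import json
import urllib.request

def build_api_definition(name: str, base_path: str, upstream: str) -> dict:
    """Build a gateway API definition payload (schema is hypothetical)."""
    return {
        "name": name,
        "basePath": base_path,
        "upstream": upstream,
        "policies": {"rateLimit": {"requestsPerSecond": 100}},
    }

def publish(definition: dict, endpoint: str) -> urllib.request.Request:
    """Prepare the request a CI/CD job would send to the management plane."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(definition).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

definition = build_api_definition(
    "billing", "/api/billing", "http://10.0.1.10:8080"
)
request = publish(definition, "http://gateway-mgmt.internal/apis/billing")
```

A pipeline step like this makes gateway changes reviewable and repeatable, since the API definition lives in version control alongside the service code.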

Process large volumes of credit card transactions in real-time - Large US-based credit card company 

Objective: Process billions of API calls in real-time — sub 70ms latency. 

Problem: The existing API management solution added 500ms of latency for every API call, resulting in direct revenue loss for the company. 

Key takeaway: Performance trumps feature richness in API management solutions when revenue is impacted.  

Background and challenge

A leading credit card company was struggling with a transaction latency problem. When paying with a credit card, most point-of-sale (POS) systems will time out after a set limit expires; the transaction then automatically fails to avoid duplicate charges to the card, and the card must be run through again. 

At the same time, the organization was moving to Open Banking standards, which provide API specifications that enable sharing of customer-permissioned data and analytics with third-party developers and firms to build applications and services—increasing the volume of API calls into the hundreds of billions. 

As a result, the company started looking for a solution that would be able to manage and scale to handle billions of API calls as fast as possible — the goal was set to sub 70ms latency per API call. This was deemed as the threshold before customer experience would be impacted for POS transactions.

How real-time API management helped

The company’s existing API solution was adding 500ms of latency to every API call, causing a small percentage of transactions to fail. But even a small fraction of billions of transactions is a significant number, especially when they result in a loss of revenue. 

It’s common for customers to try paying again with another credit card when a transaction fails. If the second card is not issued by the same company, this can mean millions of dollars in lost revenue, all due to timed-out API calls. 

The company pioneered a real-time API reference architecture to ensure API calls were processed in as close to real-time as possible. For example, they deployed clusters of two or more high availability API gateways to improve the reliability and resiliency of their APIs. The company also chose to enable dynamic authentication by pre-provisioning authentication information (using API keys and JSON Web Tokens, or JWT), which made authentication almost instantaneous. In addition, they chose to delegate authorization to the business-logic layer of their backend so that their API gateways were only responsible for handling authentication, resulting in faster response times to calls. 
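Validating a pre-provisioned JWT at the gateway is fast because it requires only a local signature check, with no network round trip to an identity provider. Below is a minimal sketch of HS256 signing and verification using only the Python standard library; a production gateway would additionally check claims such as `exp` and `iss`:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Issue an HS256-signed token (done once, at provisioning time)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes):
    """Return the claims if the signature checks out, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{payload}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        return None
    return json.loads(_b64url_decode(payload))
```

The verification path is a single HMAC computation, which is why gateway-side authentication can be nearly instantaneous while authorization decisions are delegated to the backend.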

By following these best practices, the company was able to achieve response times that were consistently less than 10ms, beating the 70ms performance target by roughly 85%. This delivered tangible, direct savings—helping to not only recover lost transactions but enabling them to process even more transactions than before. The company concluded that these performance gains were more critical to their business outcomes than the richer developer portal, API design tools, and API transformation features of their previous solution.

The improved transaction latency and reliability had the added benefit of helping to create and solidify new revenue streams. As part of their open banking efforts, the company is now able to expose their core transactional engine to ISVs and developers and win more business as a result of the speed and reliability advantages they can demonstrate over their competitors. 

Securely process billions of transactions using a single API management solution – US-based financial services company 

Objective: A lightweight, cost-effective solution that can process and route API calls for REST APIs, SOAP APIs, and externally accessed services that meets strict federal financial compliance requirements.  

Problem: The company had three existing solutions, one for each type of traffic, which required configuring each gateway separately for every change and made it difficult to scale quickly to serve internal and external consumers. 

Key takeaway: High API transaction throughput (calls per second) is essential for rapid adoption.

Background and challenge

Recognizing that the future of financial services would rely heavily on software development, the financial services company began to focus on transforming into a technology company in 2014 by investing in RESTful APIs. This involved placing heavy emphasis on engineering and development, empowering developers to create software that was easy to consume in order to deliver great applications to end users. 

In addition to their own offerings that served millions of customers, the company also created an internal development exchange platform, which was eventually made available externally to allow them to integrate with business partners and third parties through APIs. There was extreme pressure to be able to deliver performant APIs and an infrastructure that could support a high volume of transactions every day. 

Over the years, the company had also acquired a lot of technical debt, including a legacy service bus and appliances to handle SOAP/XML APIs. As a result, the company was using at least three different solutions to manage API traffic, which made it hard to adapt and respond quickly to changing environments and meet market demands. Teams were forced to constantly configure specific gateways — making updates three times rather than once every time a change was needed. 

The company wanted to consolidate, but they were also looking for a solution that would allow them to scale and handle billions of API calls while protecting the high-speed developer exchange platform that was the main core of their business. To meet their scale, flexibility, and performance requirements, the company decided it couldn’t rely solely on a packaged solution or service. They would need a combination of API infrastructure software and custom-developed API tooling.

How real-time API management helped

The goal for the company, like any modern technology-focused business, was resiliency, high speed, and low overhead for internal customers, without impacting functionality. However, handling hundreds of thousands of concurrent API calls without performance degradation can be tricky with traditional application design, which relies on a process-per-connection model to handle requests. 

Using open source software (OSS) as the main foundation to support their API gateway infrastructure, the company was able to reduce context switching and the load on resources. This allowed them to leverage an asynchronous, event-driven approach that allows multiple requests to be handled by a single worker process — in other words, they were able to scale to support hundreds of thousands of concurrent connections with very little additional overhead.
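The event-driven model described above can be illustrated with a small Python sketch: a single process awaits many simulated upstream calls concurrently, instead of dedicating a process or thread to each connection. The request count and delay are illustrative assumptions:

```python
import asyncio
import time

async def handle(request_id: int) -> str:
    # Simulate waiting on a slow upstream without blocking the worker.
    await asyncio.sleep(0.01)
    return f"response-{request_id}"

async def serve(n: int) -> list:
    # A single event loop (one worker process) services all n connections
    # concurrently; each await yields control back to the loop.
    return list(await asyncio.gather(*(handle(i) for i in range(n))))

start = time.perf_counter()
results = asyncio.run(serve(1000))
elapsed = time.perf_counter() - start
# Sequentially this would take ~10 s (1000 x 10 ms); concurrently it is
# dominated by a single 10 ms wait plus scheduling overhead.
```

This is the same principle behind event-driven gateways: while one request waits on I/O, the worker is free to make progress on thousands of others.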

The company had also struggled to manage and configure multiple disparate solutions, which not only consumed resources but also caused significant downtime, since each had to be restarted during upgrades. The developer and DevOps teams heavily customized the open source software with Lua-based modules and scripts, enabling them to standardize on this OSS gateway for all types of traffic and eliminate sprawl and complexity. In addition, the company was able to upgrade without downtime or service interruption: when a new configuration is detected, a new set of worker processes starts up while live traffic continues to be routed to the old processes running the previous configuration. Once the new configuration is tested and ready, the new workers immediately start accepting connections and processing traffic based on the new settings. 

The company is now able to handle around 360 billion API calls per month — or about 12 billion calls in a single day with peak traffic of 2 million API calls per second.

Real-time APIs require a real-time API solution 

The case studies above serve to demonstrate some of the most common ways we are helping organizations develop their API programs. We would love to hear from you about your real-time API needs and experiences with delivering APIs in real-time. What API management challenges are you facing? How are you dealing with scaling your API infrastructure to meet your modernization needs?

Please leave comments below or, better yet, use our open source API assessment tool to measure your API performance. Learn more on GitHub.

About the Author

Karthik Krishnaswamy, Director, Product Marketing at F5 Networks, drives marketing initiatives for NGINX API Management and F5 Cloud Services. He is an experienced product marketer with a proven track record of developing and promoting IT solutions. Prior to F5, Karthik held similar positions at Fluke Networks, Cisco Systems and Nimble Storage, a Hewlett Packard Enterprise company.
