
API Gateways and Service Meshes: Opening the Door to Application Modernisation

Key Takeaways

  • Modernising applications by decoupling them from the underlying infrastructure on which they are running can enable innovation (shifting workloads to the cloud or external ML-based services), reduce costs, and improve security.
  • Container technology and orchestration frameworks like Kubernetes decouple applications from the compute fabric, but they also need to be decoupled at the networking infrastructure.
  • An API Gateway can decouple applications from external consumers, by dynamically routing requests from end-users and third-parties to various internal applications, regardless of where they are deployed within the data center.
  • A service mesh decouples applications from internal consumers by providing location transparency, by dynamically routing internal service-to-service requests, regardless of where they are exposed on the network.
  • The Ambassador API gateway and Consul service mesh, both powered by the Envoy Proxy, can be used to route from end user to services deployed on bare metal, VMs and Kubernetes. This combination of technologies enables an incremental approach to migrating applications, and can provide end-to-end encrypted traffic (TLS), and increased observability.
     

One of the core goals when modernising software systems is to decouple applications from the underlying infrastructure on which they are running. This can provide many benefits, including: promoting innovation, for example, moving workloads to take advantage of cloud ML and “big data” services; reducing costs through the more efficient allocation of resources or colocation of applications; and improving security through the use of more appropriate abstractions or the more efficient upgrading and patching of components. The use of containers and orchestration frameworks like Kubernetes can decouple the deployment and execution of applications from the underlying hardware. However, not every application can be migrated to this type of platform, and even if they can, organisations will typically want this to be an incremental process. Therefore, a further layer of abstraction is required that decouples traffic routing from the networking infrastructure: both at the edge, via an ingress or API gateway, and within a datacenter, via a service mesh.

At Datawire, we have worked closely with the HashiCorp team over the past few months, and have recently released an integration between the Ambassador API gateway and the Consul service mesh. The combination of these two technologies allows traffic routing for applications to be fully decoupled from the underlying deployment and network infrastructure. Using the power of the Envoy Proxy, both technologies provide dynamic routing from the edge and service-to-service across bare metal, VMs and Kubernetes. The integration also enables end-to-end TLS, and adds support for other cross-functional requirements.

Application Modernisation: Decoupling Infrastructure and Applications

Many organisations are undertaking “application modernisation” programs as part of a larger digital transformation initiative. The goals are manifold, but typically focus on increasing the ability to innovate via modularisation of functionality and integration with cloud ML and big data services, improving security, reducing costs, and implementing additional observability and resilience features at the infrastructure level. An application modernisation effort is often accompanied by a move towards loosely coupled, high-cardinality architecture patterns, such as microservices and function-as-a-service (FaaS), and by the adoption of a “DevOps” or shared-responsibility approach to working.

One of the key technical objectives with application modernisation is decoupling applications, services, and functions from the underlying infrastructure. Several approaches to this are being promoted by cloud vendors.

  • AWS Outposts bring AWS services and operating models to a user's existing data center, via the installation of custom hardware that is fully managed by AWS.
  • Azure Stack is an extension of Azure services that enables users to consistently build and run hybrid applications across the Azure cloud and their own on-premises hardware.
  • Google Anthos extends core GCP services onto a user's infrastructure or another cloud via a software-based layer of abstraction and associated control plane.

Other projects like Ambassador, Consul, Istio and Linkerd aim to build on the existing cloud-agnostic, container-based abstractions for deployment, and provide a further layer of abstraction at the network level to enable the decoupling of applications and infrastructure. Docker popularised the use of containers as a deployment unit, and Google recognised that the majority of applications were deployed as a collection of containers, which it named a “pod” within Kubernetes. Here containers share a network and filesystem namespace, and utility containers that provide logging or metric collection can be composed with applications. The business functionality deployed within pods is exposed via a “service” abstraction, which provides a name and network endpoint. This abstraction allows deployment and release to be separated: multiple versions of a service can be deployed at any given time, and functionality can be tested or released by selectively routing traffic to backend pods (for example, “shadowing” traffic, or “canary releasing”). This dynamic routing is typically achieved via proxies, both at the edge -- an “edge proxy”, or “API gateway” -- and between services -- the inter-service proxies, which are collectively referred to as a “service mesh”.
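
As a minimal sketch of how the “service” abstraction separates deployment from release, consider a hypothetical foo service: the Kubernetes Service below selects pods only on the app label, so pods from both a foo-v1 and a foo-v2 Deployment receive traffic, and the split can be shifted by adjusting replica counts (a crude canary). All names and images here are illustrative:

```yaml
# The Service matches on app=foo only, so it load balances across
# every deployed version of the service.
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  selector:
    app: foo            # matches pods from both foo-v1 and foo-v2
  ports:
  - port: 80
    targetPort: 8080
---
# The canary Deployment; an analogous foo-v1 Deployment (not shown)
# runs the current version with a larger replica count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-v2
spec:
  replicas: 1           # small share of traffic relative to foo-v1
  selector:
    matchLabels:
      app: foo
      version: v2
  template:
    metadata:
      labels:
        app: foo
        version: v2
    spec:
      containers:
      - name: foo
        image: example/foo:2.0.0
        ports:
        - containerPort: 8080
```

A proxy-based service mesh or API gateway refines this further by splitting traffic by percentage rather than by replica count.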

One of the biggest challenges for many organisations is implementing this application and infrastructure decoupling without disrupting end-users and internal development and operations teams. Due to the diversity of infrastructure and applications within a typical enterprise IT estate -- think mainframe, bare metal, VMs, containers, COTS, third-party applications, SaaS, in-house microservices, etc. -- a key goal is to establish a clear path that allows incremental modernisation and migration of legacy applications to newer infrastructure like Kubernetes and cloud services.

Modernisation and Migration, Without Disruption: The Role of an API Gateway and Service Mesh

The open source Envoy Proxy has taken the modern infrastructure world by storm, and with good reason: this proxy was born in the “cloud native” and Layer-7 (application) protocol-focused era, and it therefore handles the characteristics of modern infrastructure and the associated developer/operator use cases very effectively and efficiently. End-user organisations like Lyft, eBay, Pinterest, Yelp, and Groupon, in combination with all of the major cloud vendors, are using Envoy at the edge and service-to-service to implement service discovery, routing, and observability. Crucially, they are often using Envoy to bridge communication between the old world of mainframes and VM-based applications and the more modern container-based services.

Although the data plane (the network proxy implementation itself) is extremely powerful, the control plane, from which the proxy is configured and observed, does have a steep learning curve. Accordingly, additional open source projects have emerged to simplify the developer-experience of using Envoy. Datawire’s Ambassador API gateway is an Envoy-powered edge proxy that provides a simplified control plane for configuring Envoy when used for managing ingress, or “north-south” traffic. HashiCorp’s Consul service mesh is a control plane for service-to-service communication or “east-west” traffic, and this supports Envoy within its range of pluggable proxy configurations.

The key promise of using these two technologies is that they enable applications to run anywhere while remaining available and connected to both external and internal users:

  • An API Gateway decouples application composition and location from external consumers. An API gateway dynamically routes external requests from end-users, mobile apps, and third-parties to various internal applications, regardless of where they are deployed.
  • A service mesh decouples applications from internal consumers by providing location transparency. A service mesh dynamically routes internal service-to-service requests to various applications, regardless of where they are deployed.

Ambassador and Consul: Route to VMs, Containers, and More

A typical deployment of Consul consists of multiple Consul servers (providing high-availability), and a Consul agent on each node. Consul acts as the configuration “source of truth” for the entire data center, tracking available services and configuration, endpoints, and storing secrets for TLS encryption. Using Consul for service discovery, Ambassador is able to route from a user-facing endpoint or REST-like API to any Consul service in the data center, whether this is running on bare metal, VMs, or Kubernetes. Consul can also transparently route service-to-service traffic via Envoy proxies (using the service “sidecar” pattern), which ensures end-to-end traffic is fully secured with TLS.
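
As a sketch of how a legacy workload becomes routable, the HCL fragment below registers a hypothetical foo service with the Consul agent running on a VM (for example, from a file in the agent's configuration directory, such as /etc/consul.d/foo.hcl); the service name, port, and health endpoint are all assumptions for illustration:

```hcl
# Register the local "foo" service with the node's Consul agent.
service {
  name = "foo"
  port = 8080

  # Health check so Consul only routes to healthy instances.
  check {
    http     = "http://localhost:8080/health"
    interval = "10s"
  }
}
```

Once registered, Ambassador can discover and route to foo via Consul, and a Connect sidecar proxy can later be added to the same registration to bring the service into the mesh.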

Ambassador serves as a common point of ingress to applications and services, providing cross-cutting functionality for north-south traffic, such as user authentication, rate limiting, API management, and TLS termination. Consul acts as the service mesh: it enables service names to be defined to provide location transparency, and allows policy-as-code to be declaratively specified to define cross-cutting security concerns, such as “segmenting” the network. Securing service-to-service communication with firewall rules or IP tables does not scale in dynamic settings; service segmentation is therefore a new approach that secures services via their identity, rather than relying on network-specific properties. Complicated host-based security groups and network access control lists are eschewed in favor of defining access policies using service names.
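
As a sketch of this policy-as-code approach, Consul “intentions” express which services may talk to which, keyed on service identity rather than IP addresses; the service names below are hypothetical:

```
# Deny all service-to-service communication by default, then
# explicitly allow the edge proxy to reach the foo service's sidecar.
consul intention create -deny '*' '*'
consul intention create -allow ambassador foo-proxy
```

Because intentions are enforced by the Envoy sidecars at connection time, the policy follows each service wherever it is deployed, with no firewall rules to update when a workload moves.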

Getting Started

Ambassador uses a declarative configuration format built on Kubernetes annotations. To use Consul for service discovery, you first register Consul as a resolver, via an annotation placed within a Kubernetes service:

apiVersion: ambassador/v1
kind: ConsulResolver
name: consul
address: consul-server
datacenter: dc3

Ambassador can now be configured to route to any service using the standard annotation-based configuration format. All Ambassador features such as gRPC, timeouts, and configurable load balancing are fully supported. The example below demonstrates a mapping between /foo/ and the proxy of the foo service (“foo-proxy”) registered in Consul:

apiVersion: ambassador/v1
kind: Mapping
prefix: /foo/
service: foo-proxy
timeout_ms: 5000
resolver: consul
tls: consul-tls-cert
load_balancer:
  policy: round_robin 

Although optional, the tls property defines the TLS context that Ambassador uses to communicate with the Consul service’s proxy. Ambassador synchronizes TLS certificates via the Consul API automatically. To guarantee that all traffic is secure, the service itself should be configured to only receive traffic from the Consul service proxy; for example, by configuring the service within a Kubernetes pod to bind to the local network address, or by configuring the underlying VM to only accept inbound communication via the port on which the proxy is listening.
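
One way to sketch this “sidecar-only” ingress on a VM is in the Consul service registration itself. Assuming a hypothetical foo service whose process binds only to 127.0.0.1:8080, the fragment below asks Consul to manage an Envoy sidecar that terminates TLS and forwards to that loopback address, so the sidecar is the only externally reachable endpoint:

```hcl
service {
  name = "foo"
  port = 8080

  connect {
    sidecar_service {
      proxy {
        # Where the sidecar forwards traffic after terminating TLS.
        # This defaults to localhost; it is made explicit here because
        # the application must bind only to the loopback interface.
        local_service_address = "127.0.0.1"
        local_service_port    = 8080
      }
    }
  }
}
```

With this in place, plaintext traffic never leaves the host: external callers must come through the Envoy sidecar, which requires mutual TLS.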

There are several additional benefits from rolling out Ambassador and Consul across your datacenter. The use of a Layer-7-aware proxy such as Envoy at the edge means that modern protocols such as HTTP/2 and gRPC will be load balanced correctly (for a discussion of why Layer-4 load balancers are not appropriate for this use case, see the post “We rolled out Envoy at Geckoboard”). Consul also provides additional primitives that are useful when building distributed systems, such as a key-value store (with the ability to watch entries), distributed locks, and health checks, and it supports multiple data centers out-of-the-box.

Related Technologies

IBM, Google and several other organisations and individuals founded the Istio project to provide a simplified control plane for Envoy, focused on inter-service communication. The project later added the concept of a “gateway” for managing ingress, which is still evolving. Currently Istio only supports deployment on Kubernetes, but additional platform support is on the roadmap. Buoyant have created the Linkerd service mesh, which, although primarily focused on managing east-west traffic, also provides integrations with popular north-south proxies. The Kong API gateway team also have an early-stage service mesh solution that is powered by NGINX.

Beware of the “Big Bang” Service Mesh Rollout

Through my work at Datawire, I’ve talked with several organisations that have attempted an organisation-wide rollout of a service mesh. Networking operations is arguably one of the last bastions within software delivery that has remained relatively centralised -- even with the adoption of cloud and software-defined networking (SDN) -- and sometimes this leads to the thinking that any networking technology must be centrally managed. In general, this type of approach when deploying a service mesh has not gone well. It appears unreasonable to orchestrate all engineers within a large enterprise to move all applications en masse to a mesh. Instead, an incremental approach to adoption appears more practical, and I believe this should start at the edge. Once an organisation can decentralize the configuration and release of applications and products that are exposed externally, and also learn how to take advantage of the functionality offered by modern proxies, this is an ideal starting point to continue rolling out a service mesh incrementally for internal services.

The first iteration of rolling out a service mesh is typically focused on routing. As I have demonstrated in the Ambassador and Consul configurations above, once you have a modern edge proxy in place, you can selectively migrate traffic routing to your existing Consul-registered services, regardless of where or how they are deployed. Once the routing is complete you can then incrementally add a Consul proxy alongside each of your services, and route traffic securely (using TLS) from the edge to each service endpoint. It is completely acceptable to have a mix of services implementing TLS and some plaintext within a data center. The goal is typically to secure all traffic, and using the combination of Ambassador and Consul, it is possible to roll out the end-to-end encryption of traffic, from end-user to service, incrementally.
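
A sketch of one such incremental step, using Ambassador’s weighted Mappings: the first Mapping keeps most /foo/ traffic flowing to the Consul-registered legacy service, while the second sends a small percentage to a newly migrated in-cluster Kubernetes service. The service names are hypothetical, and the weight can be raised gradually as confidence grows:

```yaml
---
apiVersion: ambassador/v1
kind: Mapping
name: foo_legacy_mapping
prefix: /foo/
service: foo-proxy        # legacy service, discovered via Consul
resolver: consul
tls: consul-tls-cert
---
apiVersion: ambassador/v1
kind: Mapping
name: foo_k8s_mapping
prefix: /foo/
service: foo-kubernetes   # new in-cluster Kubernetes service
weight: 10                # ~10% of traffic; the rest falls through
```

If the new service misbehaves, removing the second Mapping (or setting its weight to zero) rolls all traffic back to the legacy deployment without touching the service itself.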

Summary

In this article, I have discussed the motivations for decoupling applications from infrastructure as part of an application modernization program. I have explored how deploying an integrated API gateway and service mesh can provide an incremental path to routing traffic from end users to both new and existing services, regardless of where these applications are deployed. If and when applications are migrated to newer platforms, their identity that is used for routing traffic from the edge to the service, or service-to-service, remains the same. In addition, there are several other benefits from implementing this gateway-to-mesh solution, including end-to-end traffic encryption and improved observability of application-level networking metrics, both globally and service-to-service.

Further details on the Ambassador and Consul integration can be found in the Ambassador docs, and a tutorial can be found within the Consul docs.

About the Author

Daniel Bryant works as an Independent Technical Consultant and Product Architect at Datawire. His technical expertise focuses on ‘DevOps’ tooling, cloud/container platforms, and microservice implementations. Daniel is a Java Champion, and contributes to several open source projects. He also writes for InfoQ, O’Reilly, and TheNewStack, and regularly presents at international conferences such as OSCON, QCon and JavaOne. In his copious amounts of free time he enjoys running, reading and traveling.
