
HashiCorp Releases Consul 1.6 with Layer 7 Dynamic Traffic Management and Cross-Network Connectivity


HashiCorp released version 1.6 of Consul, its service mesh and key-value store. This release builds on the features added in version 1.5 by introducing layer 7 routing and traffic management. It additionally delivers a new feature, mesh gateways, which can be used to route service traffic across regions, platforms, and clouds.

With this beta release, there are additional configuration types to support advanced traffic management patterns for service-to-service requests when using Consul Connect. These configurations allow operators to divide L7 traffic between service instances using Consul. This provides support for patterns such as canary testing, A/B testing, blue/green deploys, and multi-tenancy. There are three specific stages where L7 traffic can be managed: routing, splitting, and resolution.
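
L7 routing and splitting only apply to services that speak an HTTP-aware protocol, which is not Consul's default. A minimal sketch, assuming the web service used in the router example below, sets this with a service-defaults configuration entry (entries can be written with the consul config write command):

Kind = "service-defaults"
Name = "web"
# routing and splitting require an HTTP-aware protocol such as "http", "http2", or "grpc"
Protocol = "http"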

The three stages where L7 traffic can be managed using Consul Connect (credit: HashiCorp)

Service routing allows traffic to be intercepted using L7 criteria, such as path prefixes and HTTP headers, and redirected to a different service or even a subset of a service. Within the service-router configuration entry, only service-splitter or service-resolver entries can be referenced. The example below shows how requests to a service named web can be routed to a different service based on the HTTP path:

Kind = "service-router"
Name = "web"
Routes = [
  {
    Match {
       HTTP {
         PathPrefix = "/admin"
         PrefixRewrite = "/"
       }
    }
    Destination {
      Service = "admin"
    },
  }
  ...
]
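
Entries like this one are applied centrally, for example with consul config write or the /v1/config HTTP endpoint, and Consul then distributes the resulting configuration to the Envoy proxies.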

The next stage is the service-splitter configuration. This config allows incoming requests to be split across different subsets of a single service or across different services. This facilitates canary rollouts or managing traffic during a full rewrite of a service. service-splitter configs can only reference other service-splitter or service-resolver entries. If one splitter references another splitter, the end result is effectively flattened into one splitter entry that reflects the multiplicative union of the two configurations:

splitter[A]:           A_v1=50%, A_v2=50%
splitter[B]:           A=50%,    B=50%
---------------------
splitter[effective_B]: A_v1=25%, A_v2=25%, B=50%
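
Expressed as configuration entries, a rough sketch of the two splitters above (the service names A and B and the v1/v2 subsets are illustrative) could look like the following, written as two separate entries:

# splitter[A]: splits traffic for service A between its v1 and v2 subsets
Kind = "service-splitter"
Name = "A"
Splits = [
  {
    Weight = 50
    ServiceSubset = "v1"
  },
  {
    Weight = 50
    ServiceSubset = "v2"
  },
]

# splitter[B]: sends half of B's traffic to service A, which splitter[A] then divides again
Kind = "service-splitter"
Name = "B"
Splits = [
  {
    Weight = 50
    Service = "A"
  },
  {
    Weight = 50
    Service = "B"
  },
]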

This example shows how a service-splitter can send a percentage of traffic to different subsets of a single service; the v1 and v2 subsets it references must be defined by a service-resolver entry, described next:

Kind = "service-splitter"
Name = "billing-api"
Splits = [
  { 
    Weight = 10
    ServiceSubset = "v2"
  },
  {
    Weight = 90
    ServiceSubset = "v1"
  },
]

The final stage is the service-resolver configuration, which defines which instances of a service should satisfy discovery requests for the provided name. Possible uses for this include:

  • Controlling where to send traffic if all instances are unhealthy (see the failover sketch after the example below)
  • Configuring service subsets based on Service.Meta.version values
  • Sending all traffic that does not specify a service subset to a particular subset
  • Sending all traffic for a service in all data centers to one particular data center

If there is no resolver config defined, then it is assumed that all traffic should flow to the healthy instances of the service with the requested name in the current data center. A service-resolver configuration can only reference other service-resolver entries. This example illustrates using the Consul API filter language to query Consul service metadata and select a specific version of the service:

Kind = "service-resolver"
Name = "admin"
DefaultSubset = "v1"
Subsets = {
  "v1" = {
    Filter = "Service.Meta.version == 1"
  },
  "v2" = {
    Filter = "Service.Meta.version == 2"
  },
}
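
The resolver is also where failover can be configured, covering the unhealthy-instances use case in the list above. A rough sketch (the data center names dc2 and dc3 are illustrative) that sends traffic to other data centers when no healthy local instances remain:

Kind = "service-resolver"
Name = "admin"
# "*" applies the failover policy to all subsets; try dc2 first, then dc3
Failover = {
  "*" = {
    Datacenters = ["dc2", "dc3"]
  }
}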

Also included within this release is a new mesh gateway feature. Mesh gateways enable routing of Connect traffic between multiple Consul-enabled data centers. These data centers can be in different clouds or even runtime environments. The gateway operates by sniffing the SNI (Server Name Indication) out of the Connect session and then routing based on the server name requested.

Illustration detailing architecture of mesh gateway (credit: HashiCorp)

Mesh gateways are Envoy proxies that are deployed at the edge of the network within a data center. They enable services in separate networking environments to communicate with each other. For service A to connect with service B in a remote location, the traffic is proxied through the mesh gateways, which route traffic to the destination based on the SNI sent as part of the TLS handshake.
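
Connect proxies also need to be told to send cross-data-center traffic through the gateways. A rough sketch using a proxy-defaults entry, which applies the setting to all proxies; "local" routes outgoing traffic through a gateway in the source data center, while "remote" dials a gateway in the destination data center:

Kind = "proxy-defaults"
Name = "global"
# Mode can be "local", "remote", or "none"
MeshGateway {
  Mode = "local"
}

The gateway itself is an Envoy instance that can be started and registered through Consul's Envoy integration, for example with consul connect envoy -mesh-gateway -register.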

While the mesh gateway leverages the SNI header as part of the TLS handshake, it has no direct access to the data within the payload. This keeps the data safe even if a gateway itself is compromised. However, HashiCorp indicates that mesh gateways are not suitable for general-purpose ingress of non-mesh traffic.

For more details and additional improvements included in this release, please review the official announcement on the HashiCorp blog. The CHANGELOG provides the full list of changes in the release. Consul is available to download on the HashiCorp site.
