HashiCorp Improves Consul Service Mesh Integration with Kubernetes


HashiCorp has released new features to better integrate Consul, a service mesh and key/value store, with Kubernetes. These features include support for installing Consul on Kubernetes using an official Helm Chart, autosyncing of Kubernetes services with Consul, auto-join for external Consul agents to join a Kubernetes cluster, support for Envoy, and injectors so pods can be secured with Connect. These new features can be used to facilitate cross-cluster communication between Kubernetes clusters or within heterogeneous workloads and environments.

As part of this integration, HashiCorp is focused on enhancing rather than replacing Kubernetes. They are building on top of core Kubernetes primitives such as Services, ConfigMaps, and Secrets, which allows for features such as Consul catalog sync to convert external services into first-class Kubernetes services.

Consul now has an official Helm Chart for installation in Kubernetes, allowing for a full Consul setup within minutes. This setup can be either a server cluster, client agents, or both.

After cloning the repository and checking out the latest tagged release, you can install the chart with:

$ helm install .

This will install a three-node, fully bootstrapped server cluster with a Consul agent running on each Kubernetes node. You can then view the Consul UI by forwarding port 8500 from a server pod:

$ kubectl port-forward consul-server-0 8500:8500

The Helm Chart is fully configurable and allows you to disable components, change the image, expose the UI service, and configure resources and storage classes. For more details, review the full list of configurable options. By adjusting the configuration, you can deploy a client-only cluster, which is useful if your Consul servers run outside of a Kubernetes cluster, as shown in the example configuration below:

global:
  enabled: false

client:
  enabled: true
  join:
    - "provider=aws tag_key=... tag_value=..."

The join configuration will use the cloud auto-join to discover and join pre-existing Consul server agents.

The auto-join feature has been enhanced to use the Kubernetes API, rather than static IP addresses or DNS, to discover pods running Consul agents.

Consul authenticates with Kubernetes using the standard kubeconfig file used for authenticating with kubectl, searching the standard locations for this file. Once connected, you can instruct Consul to auto-join based on pod labels:

$ consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server"'

This command asks the agent to query Kubernetes for pods carrying the labels app=consul and component=server. Consul will retry the query periodically if no pods are found. This can be used by agents running outside of the Kubernetes cluster.
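
The same join behavior can also be expressed in the agent's configuration file rather than on the command line. A minimal sketch, assuming a JSON agent configuration using Consul's standard retry_join key:

{
  "retry_join": [
    "provider=k8s label_selector=\"app=consul,component=server\""
  ]
}

The agent would then pick this up via its normal -config-file or -config-dir options, with no join flags needed on the command line.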

Kubernetes services now have automatic syncing to the Consul catalog. Consul users can discover these services via Consul DNS or the HTTP APIs. This uses standard Kubernetes tooling to register the services, meaning if they are moved to a new pod or scaled, the Consul catalog will remain up to date. This also allows non-Kubernetes workloads to connect to Kubernetes services through Consul.

Currently this syncing process only includes NodePort services, LoadBalancer services, and services with an external IP set. An example of a standard LoadBalancer service configuration:

apiVersion: v1
kind: Service
metadata:
  name: consul-ui
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8500
  selector:
    app: consul
    component: server
    release: consul
  type: LoadBalancer

Once this is run, you can query the service from outside of Kubernetes:

# From outside of Kubernetes
$ dig consul-ui.service.consul.

;; QUESTION SECTION:
;consul-ui.service.consul. IN A

;; ANSWER SECTION:
consul-ui.service.consul. 0 IN A 35.225.88.75

;; ADDITIONAL SECTION:
consul-ui.service.consul. 0 IN TXT "external-source=kubernetes"

To help identify them in the Consul UI, externally registered services are marked with a Kubernetes icon.
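
Individual services can also opt out of syncing. A minimal sketch, assuming the consul.hashicorp.com/service-sync annotation supported by the sync process:

apiVersion: v1
kind: Service
metadata:
  name: internal-api
  annotations:
    # exclude this service from the Consul catalog sync
    "consul.hashicorp.com/service-sync": "false"
spec:
  type: LoadBalancer
  ports:
  - port: 80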

Finally, Consul services can be registered as first-class Kubernetes services, allowing Kubernetes users to connect to them via Kubernetes APIs. This does, however, require that Consul DNS be configured with Kubernetes. This syncing process can be run via the Helm Chart:

syncCatalog:
  enabled: true

Additional options are available for deeper configuration. This functionality is also open source and part of the consul-k8s project.

There are now native Kubernetes injectors that allow you to secure your pods with Connect. Connect enables secure service-to-service communication with automatic TLS encryption and identity-based authentication. With this, a Connect sidecar running Envoy (a proxy for Connect) can be automatically injected into pods, making the Kubernetes configuration automatic. If the pod is properly annotated, Envoy is auto-configured, started, and able to both accept and establish connections via Connect. Here is an example Redis server with Connect configured for inbound traffic:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: redis
          image: "redis:4.0-alpine3.8"
          args: [ "--port", "6379" ]
          ports:
            - containerPort: 6379
              name: tcp

We can then configure a Redis UI to talk to Redis over Connect. The UI will be configured to talk to a localhost port to connect through the proxy. The injector will also inject any necessary environment variables into the container.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-ui
  template:
    metadata:
      labels:
        app: redis-ui
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service-upstreams": "redis:1234"
    spec:
      containers:
      - name: redis-ui
        image: rediscommander/redis-commander
        env:
        - name: REDIS_HOSTS
          value: "local:127.0.0.1:1234"
        - name: K8S_SIGTERM
          value: "1"
        ports:
        - name: http
          containerPort: 8081

The Kubernetes Connect injector can be run via the Helm Chart. Every client agent will also need gRPC enabled for the Envoy proxies:

connectInject:
  enabled: true

client:
  grpc: true

These new features simplify the process of running Consul on Kubernetes, and HashiCorp believes the auto-join and syncing features also simplify running in environments that integrate with non-Kubernetes workloads. In addition to these new features, work is being done to integrate Terraform and Vault with Kubernetes, with releases planned in the next few months.
