Getting Started with Istio Service Mesh Routing

Key Takeaways

  • This tutorial demonstrates how to install and use the Istio service mesh in a Kubernetes cluster, and discusses how to best leverage Istio’s routing capabilities.
  • Explore the difference between Layer 4 and Layer 7 network proxies, and understand how best to leverage L7 proxy benefits.
  • Due to Istio’s extensibility, and capabilities of service meshes in general, users can implement routing scenarios that would otherwise require a lot more time and resources.
     

In the following tutorial, we will use Istio to demonstrate one of the most powerful features of service meshes: “per request routing.” This feature routes arbitrary requests marked with selected HTTP headers to specific targets, which is possible only with an OSI layer 7 (L7) proxy; no layer 4 load balancer or proxy can achieve this functionality.
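
For illustration, the sketch below shows what such a header-based rule could look like in the “http” section of an Istio VirtualService (the resource types used here are introduced later in the tutorial, and the header name is hypothetical):

  http:
  - match:
    - headers:
        x-beta-tester:          # hypothetical header used only for this illustration
          exact: "true"
    route:
    - destination:
        host: website
        subset: version-2
  - route:
    - destination:
        host: website
        subset: version-1

Requests carrying the header are sent to one subset of pods, while all other requests fall through to the default route.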

If you would like to follow along with this article, you will need a running Kubernetes cluster. A small cluster with one master node and two worker nodes is enough for this tutorial.

What is Istio?

Istio is a service mesh composed of a control plane and a data plane. For the data plane it uses the Envoy proxy, an L7 proxy and communication bus designed for modern microservices-based architectures. Additional information about Envoy can be found in its extensive documentation, and a good explanation of Istio is available in the official Istio documentation.

Figure 1: Using Istio Pilot to inject routing config to the Envoy proxy running as a sidecar to services

Per Request Routing

Istio provides advanced traffic management capabilities. The per-request routing feature allows us to define sophisticated rules against incoming requests and decide how each request is handled. Possible use cases include:

  1. Canary testing -- redirect a small percentage of user traffic to a new service version.
  2. Serve different versions to different users -- users on different plans or from different regions may be served by separate environments (for example, this may be useful as part of implementing GDPR requirements).
  3. A/B testing.
  4. Gradual rollouts.

In this tutorial, we will show how to do a gradual rollout.

Tutorial stage 0: Install a Kubernetes cluster

To create a cluster, you can use any Kubernetes solution. For this tutorial, we deployed a cluster using the free Kublr demo.

Tutorial stage 1: Install the Istio control plane

One option is to follow the official Istio quick-start tutorial to install the control plane in your Kubernetes cluster. The installation steps depend on your local machine type (Mac, Linux, Windows), so we will not replicate here the standard instructions for setting up istioctl and kubectl, the two CLI tools used to manage Kubernetes and Istio.

For readers already familiar with Kubernetes, the condensed instructions are as follows, with a consolidated command example after the list (if this doesn’t work, we recommend following the official instructions step by step):

  1. Set up a Kubernetes cluster (using any method listed above, or use your existing testing/development cluster).
  2. Install kubectl locally (you will use it to manage the Kubernetes cluster).
  3. Install istioctl from the GitHub releases page (it is used to inject the Envoy proxy into pods and to set routing rules and policies). The installation is simple:
    1. For Mac or Linux, run curl -L https://git.io/getLatestIstio | sh -
    2. On Windows, extract the zip and copy the binary to your PATH (you can simply copy it into c:\windows\system32\), or run all istioctl.exe commands from the /bin/ directory.
    3. Navigate to the folder with the extracted files and install the control plane with kubectl apply -f install/kubernetes/istio-demo.yaml
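
For reference, on Mac or Linux the whole sequence boils down to roughly the following (the name of the extracted folder depends on the Istio version the script downloads):

curl -L https://git.io/getLatestIstio | sh -
cd istio-*                          # the folder created by the download script
export PATH=$PWD/bin:$PATH          # makes istioctl available on the PATH
kubectl apply -f install/kubernetes/istio-demo.yaml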

You’ll need a Kubernetes client config file and access to the cluster dashboard. How you get them may vary depending on the method used to create the cluster. Since our example cluster was deployed with Kublr, use the following links in the Kublr dashboard, download the config file to ~/.kube/config (%USERPROFILE%/.kube/config on Windows), and then navigate to the Kubernetes dashboard:

Use the credentials from the config file (locate “username: admin” and use this user and its listed password to log in to the dashboard). You should see the dashboard, and clicking “Namespaces” in the sidebar will reveal the following three default namespaces:
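
If you prefer the command line, the same can be verified with kubectl once the config file is in place:

kubectl get nodes          # the master and worker nodes should be listed as Ready
kubectl get namespaces     # shows the default namespaces mentioned above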

Istio components will be installed into their own namespace. Navigate to the folder where you downloaded the Istio release archive, extract it, and run: kubectl apply -f install/kubernetes/istio-demo.yaml

You will see a lot of components being created, each of which is described in the official Istio documentation; you can also open the yaml file to have a look at the comments, as every resource is documented in that file. Then we can browse the namespaces and check that everything was created successfully:

Click the istio-system namespace and make sure there were no errors or issues during component creation. It should look similar to this:

There are about 50 events; you can scroll through to see the “successful” statuses, and you will notice if there is an error somewhere. In case of errors, you can post a bug report on the Istio GitHub issues page and point the developers to the issue.
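
The same check can be done from the command line; all Istio pods should eventually reach the Running or Completed state:

kubectl get pods -n istio-system
kubectl get events -n istio-system    # look for warnings or failures here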

We need to find the entry point of the “istio-ingressgateway” service to know where to send traffic. Navigate to the “istio-system” namespace in the sidebar. If it is not visible among the other namespaces right after creation, simply refresh the browser page, then select that namespace, click “Services,” and find the external endpoint as shown in the following screenshot:

In our case, it is an AWS elastic load balancer, but you might see an IP address, depending on the cluster setup. We will access our demo web service using this endpoint address.
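
You can also fetch the external endpoint from the command line (on AWS it is exposed as a load balancer hostname; on other setups you may need .ip instead of .hostname in the jsonpath expression):

kubectl get service istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'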

Tutorial stage 2: Deploy a demo web service with an Envoy proxy sidecar

Now we are finally at the fun part of the tutorial. Let’s check the routing capabilities of this service mesh. First, we will deploy a demo web service in three versions, similar to the “blue” and “green” services we used in one of our previous tutorials.

Copy the following into a yaml file named my-websites.yaml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web-v1
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: website
        version: website-version-1
    spec:
      containers:
      - name: website-version-1
        image: kublr/kublr-tutorial-images:v1
        resources:
          requests:
            cpu: 0.1
            memory: 200Mi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web-v2
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: website
        version: website-version-2
    spec:
      containers:
      - name: website-version-2
        image: kublr/kublr-tutorial-images:v2
        resources:
          requests:
            cpu: 0.1
            memory: 200Mi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web-v3
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: website
        version: website-version-3
    spec:
      containers:
      - name: website-version-3
        image: kublr/kublr-tutorial-images:v3
        resources:
          requests:
            cpu: 0.1
            memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: website

Note that when you want to use the Envoy sidecar with your pods, the “app” label should be present (it is used in the request tracing feature), and “spec.ports.name” in the service definition must be named properly (http, http2, grpc, redis, or mongo); otherwise Envoy will treat that service’s traffic as plain TCP, and you will not be able to use the layer 7 features with those services!

In addition, the pods must be targeted by only a single “service” in the cluster. As you can see above, the definition file contains three simple deployments, each using a different version of the web service (v1/v2/v3), and a single service that selects all of their pods by the “app: website” label.

Now we will add the needed Envoy proxy configuration to the pod definitions in this file, using the “istioctl kube-inject” command. It produces a new yaml file with the additional Envoy sidecar components, ready to be deployed by kubectl. Run: istioctl kube-inject -f my-websites.yaml -o my-websites-with-proxy.yaml

The output file will contain extra configuration; you can inspect the “my-websites-with-proxy.yaml” file. This command used the pre-defined ConfigMap “istio-sidecar-injector” (installed earlier as part of the Istio installation) and added the needed sidecar configuration and arguments to our deployment definitions. When we deploy the new file “my-websites-with-proxy.yaml”, each pod will have two containers: one running our demo application and one running the Envoy proxy. Run the creation command on the new file: kubectl apply -f my-websites-with-proxy.yaml
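
Roughly, each deployment in “my-websites-with-proxy.yaml” will now contain something like the excerpt below; the exact images, versions, and arguments depend on the Istio release, so treat this only as an approximation of what to expect:

    spec:
      initContainers:
      - name: istio-init          # sets up iptables rules so pod traffic is redirected through the sidecar
        ...
      containers:
      - name: website-version-1   # our demo application container
        image: kublr/kublr-tutorial-images:v1
        ...
      - name: istio-proxy         # the injected Envoy sidecar
        image: docker.io/istio/proxyv2:<version>
        ...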

You will see this output if it worked as expected:

deployment "web-v1" created

deployment "web-v2" created

deployment "web-v3" created

service "website" created

Let’s inspect the pods to see that the Envoy sidecar is present:  kubectl get pods

We can see that each pod has two containers: one is the website container and the other is the proxy sidecar:
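
To list the container names explicitly, you can use a jsonpath query (the label selector matches the “app: website” label we set in the deployments):

kubectl get pods -l app=website \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'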

Also, we can inspect the logs of the Envoy proxy by running: kubectl logs <your pod name> -c istio-proxy

You will see a lot of output, with the last lines similar to this:

add/update cluster outbound|80|version-1|website.default.svc.cluster.local starting warming

add/update cluster outbound|80|version-2|website.default.svc.cluster.local starting warming

add/update cluster outbound|80|version-3|website.default.svc.cluster.local starting warming

warming cluster outbound|80|version-3|website.default.svc.cluster.local complete

warming cluster outbound|80|version-2|website.default.svc.cluster.local complete

warming cluster outbound|80|version-1|website.default.svc.cluster.local complete

This means that the proxy sidecar is healthy and running in that pod.

Now we need to deploy the minimal Istio configuration resources needed to route traffic to our service and pods. Save the following manifests into a file named “website-routing.yaml”:

---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: website-gateway
spec:
  selector:
    # Which pods we want to expose as Istio router
    # This label points to the default one installed from file istio-demo.yaml
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    # Here we specify which Kubernetes service names
    # we want to serve through this Gateway
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: website-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - website-gateway
  http:
  - route:
    - destination:
        host: website
        subset: version-1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: website
spec:
  host: website
  subsets:
  - name: version-1
    labels:
      version: website-version-1
  - name: version-2
    labels:
      version: website-version-2
  - name: version-3
    labels:
      version: website-version-3

These are the Gateway, VirtualService, and DestinationRule resources. They are custom Istio resources that manage and configure the ingress behavior of the istio-ingressgateway pod. We will describe them in more depth in the next tutorial, which gets into the technical details of Istio configuration. For now, deploy these resources so you can access our example website: kubectl create -f website-routing.yaml
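
You can confirm that the resources were created with kubectl, because the Istio installation registered them as custom resource definitions:

kubectl get gateways,virtualservices,destinationrules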

The next step is to visit our demo website. We deployed three “versions,” each showing different page text and color, but at the moment we can reach only version 1 through the Istio ingress. Let’s visit our endpoint just to be sure the web service is deployed.

Find your external endpoint by running: kubectl get services istio-ingressgateway -n istio-system

Or find it by browsing to the istio-ingressgateway service as shown below (we also saw it at the beginning of the tutorial):

Visit the external endpoint by clicking it. You may see several links, because one link points to the HTTPS port and another to the HTTP port of the load balancer.
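
If you prefer the command line, the same check can be done with curl; here we assume the external endpoint found above is stored in the INGRESS_HOST variable:

export INGRESS_HOST=<your external endpoint>   # placeholder for the hostname or IP found above
curl -s http://$INGRESS_HOST/ | head -n 10     # should return the version 1 page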

The exact configuration that makes our “website” Kubernetes service point to only a single deployment is the Istio VirtualService we created for the website. It tells the Envoy proxy to route requests for the “website” service only to pods with the label “version: website-version-1”. (You probably noticed that the manifest of the “website” service selects pods only by the “app: website” label and says nothing about which “version” label to pick, so without the Envoy logic the Kubernetes service itself would round robin across all pods with the “app: website” label: versions one, two, and three.)

You can change the version of the website that we see by changing the following section of the VirtualService manifest and re-deploying it:

  http:
  - route:
    - destination:
        host: website
        subset: version-1

The “subset” field is where we choose the correct section of the DestinationRule to route to; we will learn about these resources in depth in the next tutorial.
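
After editing the subset (for example, to version-2), re-apply the manifest and refresh the page to see the new version:

kubectl apply -f website-routing.yaml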

Tutorial stage 3: Rolling out gradually

Usually, when a new version of an application needs to be tested with a small amount of traffic (a canary deployment), the vanilla Kubernetes approach is to create a second deployment that uses a new Docker image but the same pod label, so that the “service” sending traffic to this pod label also balances across the newly added pods from the second deployment. However, you cannot easily point 10% of the traffic to the new deployment (to reach a precise 10% you would need to keep the pod replica ratio between the two deployments in line with the desired percentage, for example 9 “v1 pods” and 1 “v2 pod”, or 18 “v1 pods” and 2 “v2 pods”), and you cannot use an HTTP header, for example, to route requests to a particular version.

Istio solves this limitation through its flexible VirtualService configuration. For instance, if you want to split traffic using a 90/10 rule, you can easily do it like this:

---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: website-gateway
spec:
  selector:
    # Which pods we want to expose as Istio router
    # This label points to the default one installed from file istio-demo.yaml
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    # Here we specify which Kubernetes service names
    # we want to serve through this Gateway
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: website-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - website-gateway
  http:
  - route:
    - destination:
        host: website
        subset: version-1
      weight: 90
    - destination:
        host: website
        subset: version-2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: website
spec:
  host: website
  subsets:
  - name: version-1
    labels:
      version: website-version-1
  - name: version-2
    labels:
      version: website-version-2
  - name: version-3
    labels:
      version: website-version-3
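
To try it out, re-apply the updated manifest and send a batch of requests; roughly 90% of the responses should come from version 1 and about 10% from version 2 (this reuses the INGRESS_HOST variable assumed earlier, and the exact page content depends on the demo images):

kubectl apply -f website-routing.yaml
for i in $(seq 1 20); do curl -s http://$INGRESS_HOST/ | head -n 1; done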

The source code for the article is available on GitHub.

Wrapping Up

We hope this tutorial provided you with a good high-level overview of Istio, how it works, and how to leverage it for more sophisticated network routing. Istio streamlines implementation of scenarios that would otherwise require a lot more time and resources. It is a powerful technology anyone looking into service meshes should consider.

About the Authors

Oleg Chunikhin, CTO, Kublr. With 20 years of software architecture and development experience, Kublr CTO Oleg Chunikhin is responsible for defining Kublr’s technology strategy and standards. He has championed the standardization of DevOps in all Kublr does and is committed to driving adoption of container technologies and automation. Oleg holds a Bachelor of Mathematics and a Master of Applied Mathematics and Computer Science from Novosibirsk State University, and is an AWS Certified Software Architect.

Oleg Atamanenko is a Senior Software Architect at Kublr with vast experience working with many different technology platforms. Working with Kubernetes since 2016, he is a certified Kubernetes administrator and the author of cluster autoscaler support for Azure (based on VMSS). He has worked as a software architect for more than 13 years and lives and breathes all things Docker, Kubernetes, Amazon Web Services (AWS), and agile methodologies (Scrum, Kanban), and he is versed in DevOps languages: Go, Java/Scala, bash, and JavaScript/TypeScript. Atamanenko has worked extensively on cloud-native environments and has broad experience developing distributed systems, containerizing legacy systems, and implementing several serverless projects on AWS.
