Cloud Native Continuous Delivery on Kubernetes with Tekton

Summary

Jerop Kipruto introduces the building blocks of Tekton and shows how they fit with Kubernetes. Then she demonstrates how Tekton works and how to use it in an end-to-end continuous delivery process.

Bio

Jerop Kipruto is a Software Engineer at Google working on Cloud CI/CD, specifically Tekton. She works on projects that enable software developers to build and ship cloud native applications. Jerop has a BSc from the Massachusetts Institute of Technology. Jerop is also a speaker, with technical talks featured at the Continuous Delivery Conference, Open Source Summit, and North American DevOps Group.

About the conference

InfoQ Live is a virtual event designed for you, the modern software practitioner. Take part in facilitated sessions with world-class practitioners. Hear from software leaders at our optional InfoQ Roundtables.

Transcript

Kipruto: My name is Jerop Kipruto. I'm a software engineer at Google, working on cloud native continuous delivery, specifically on the Tekton project. I'm delighted to speak to you about cloud native continuous delivery on Kubernetes with Tekton. I'm going to start by briefly describing exactly what that means. Cloud native is a software development approach in which applications are broken down into microservices that are packaged into containers and dynamically orchestrated in the cloud to optimize resource utilization. For most people, cloud native means running containers in Kubernetes. Continuous delivery is a software development practice in which teams release software changes to users safely, quickly, and sustainably. Kubernetes is an open source platform for managing containerized workloads and services. Tekton is a Kubernetes native open source framework for creating continuous delivery systems. It provides Kubernetes custom resources for declaring continuous delivery pipelines.

Tekton

Tekton is a Kubernetes native platform that inherently integrates with Kubernetes facilities such as scheduling, typing, decoupling, extensibility, and security, among others. It's highly optimized for building, testing, and deploying cloud native applications by abstracting away the implementation details. It defines resources that align well with Kubernetes concepts. Some of the key benefits of Tekton include, one, standardization: Tekton standardizes continuous delivery tooling and processes across many different vendors, languages, and deployment environments. It works well with Jenkins, Jenkins X, Skaffold, Knative, and many other popular continuous delivery tools. Two, built-in best practices: Tekton lets us create continuous delivery systems quickly, giving us scalable serverless cloud native execution out of the box. Tekton is optimized for simplicity and reusability. Three, maximum flexibility: Tekton abstracts away the underlying implementation details so we can choose the build, test, and deploy workflow based on the team's requirements.

Tekton - Kubernetes

Next, let's look at the building blocks of Tekton and how they fit well with Kubernetes. First, a step is a reference to a container image that executes a specific tool on a specific input and produces a specific output. Tekton steps map to Kubernetes containers. We name a step to identify what it is doing, in our case, deploy-app; specify the container image that we want to pull, in our case, foo/base-image:2.7; and define the environment that is accessible to the container. A script is invoked as if it were stored inside the container image. Second is a task, which executes a pod on the Kubernetes cluster. A task is a sequence of steps running as a sequence of containers. All steps in a task access a shared workspace, which is mounted to the pod as an implicit volume. A Tekton task maps to a Kubernetes pod. A task is a Kubernetes custom resource of kind: Task. It has to have a name so that it can be referenced and reused. The specification includes a list of parameters with defaults and descriptions, and then a list of steps. The task is executed by a TaskRun, which is a custom resource of kind: TaskRun, and is also named so it can be referenced. The TaskRun provides the parameters and other resources needed by the task, and references the task being executed by name.
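
As a concrete illustration of those mappings, here is a minimal sketch of a Task with one step and a TaskRun that executes it. The resource names, the app-name parameter, and the TaskRun values are hypothetical; only the deploy-app step name and the foo/base-image:2.7 image come from the example above.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-app                  # hypothetical Task name
spec:
  params:
    - name: app-name                # hypothetical parameter
      type: string
      description: Name of the application to deploy
      default: my-app
  steps:
    - name: deploy-app              # the step runs as a container in the pod
      image: foo/base-image:2.7
      env:
        - name: APP_NAME
          value: $(params.app-name)
      script: |
        echo "deploying ${APP_NAME}"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: deploy-app-run              # hypothetical TaskRun name
spec:
  taskRef:
    name: deploy-app                # references the Task above by name
  params:
    - name: app-name
      value: hello-world
```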

Third is a pipeline, which is a collection of tasks running as a set of pods. A pipeline is a graph, which provides flexibility to organize the workflow based on the requirements. A pipeline combines tasks through parameters, results, and workspaces. If the data to be shared among the tasks is large, it can be written into the workspace, which is a shared persistent volume, so that the tasks can access that data. We can also provide task or step level isolation of workspaces, if needed. A Tekton pipeline maps to a set of Kubernetes pods. A pipeline is a Kubernetes resource of kind: Pipeline, just like a task, and we also name it so we can reference and reuse it. The pipeline specification includes a list of parameters, such as api-url and cloud-region, and then tasks. For example, ours includes git-clone, followed by build, and in the end, we deploy the application. This pipeline is executed using a PipelineRun, which is also a Kubernetes resource of kind: PipelineRun, and is also named for reference and reuse later. The PipelineRun provides the parameters and other resources needed by the pipeline, and references the pipeline by name.
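
The following is a rough sketch of how such a Pipeline and PipelineRun might look. The resource names, task references, and parameter values are hypothetical; only the api-url and cloud-region parameters and the clone, build, deploy sequence come from the description above.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy            # hypothetical Pipeline name
spec:
  params:
    - name: api-url
      type: string
    - name: cloud-region
      type: string
  tasks:
    - name: git-clone
      taskRef:
        name: git-clone
    - name: build
      runAfter: ["git-clone"]
      taskRef:
        name: build                 # hypothetical build task
    - name: deploy
      runAfter: ["build"]
      taskRef:
        name: deploy                # hypothetical deploy task
      params:
        - name: api-url
          value: $(params.api-url)
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-deploy-run        # hypothetical PipelineRun name
spec:
  pipelineRef:
    name: build-and-deploy          # references the Pipeline above by name
  params:
    - name: api-url
      value: https://api.example.com
    - name: cloud-region
      value: us-central1
```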

Our pipeline is specified as a directed acyclic graph, where each node represents a task and the edges are dependencies. These dependencies are based on resources that are passed between tasks through parameters and results. The dependencies can also be ordering constraints, specified when there are no resource dependencies between tasks. Specifying the pipeline as a directed acyclic graph provides the capabilities to solve sophisticated continuous delivery use cases using Tekton. Some of these capabilities include conditional execution of tasks using when expressions, exit handling and cleanup of pipelines using finally tasks, and graceful termination of pipelines upon failure.
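
To make the graph structure concrete, here is a minimal sketch of a pipeline where two tasks fan out from a clone task, one via a runAfter ordering constraint and one via a result dependency, and a deploy task fans back in. Apart from git-clone and its commit result, all task names are hypothetical.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: dag-example                  # hypothetical name
spec:
  tasks:
    - name: clone
      taskRef:
        name: git-clone
    - name: build
      runAfter: ["clone"]            # ordering constraint, no data passed
      taskRef:
        name: build                  # hypothetical task
    - name: scan
      taskRef:
        name: scan                   # hypothetical task
      params:
        - name: commit
          value: $(tasks.clone.results.commit)   # resource dependency via a result
    - name: deploy
      runAfter: ["build", "scan"]    # fans back in: waits for both branches
      taskRef:
        name: deploy                 # hypothetical task
```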

Demo

Next, we'll look at a demo of how all of these concepts come together. The use case that we'll be looking at is this: we have parameters coming into a pipeline that has five tasks, for cloning, linting, testing, building, and then running. All of these tasks share a persistent volume claim. In this demo, we'll look at the linting, testing, building, and running of a Hello World Go application using Tekton on Kubernetes. First off, let's look at how the application is set up. We have the hello function, which prints hello to a particular name. In our application, we are saying hello to the world. Then we have a test, which tests that function. In this example, we're testing hello to Earth, so Hello Earth is what we want. We can execute this locally to verify that everything works, and it's all passing.

We want to be able to clone this repo, lint it, test it, and then build and run the application. We've written a Tekton pipeline for this. This is a Tekton pipeline of kind: Pipeline. It takes a parameter, package; in this case, it defaults to the Tekton demo repo. Then it has a workspace, workarea, which is where the repo will be cloned. This pipeline has a result, commit-sha, which is propagated from the clone task, which has a result named commit. The tasks listed in this pipeline include the following. The first one is clone, which has a taskRef, git-clone, and it uses the workspace, workarea. Then it takes a parameter, url, which is the repo that we're using here.

The second is lint, which runs after clone with the taskRef golangci-lint, which we'll get from the Tekton catalog, and has parameters package and flags. It also uses the same workspace, workarea. The third is test, which runs after lint; it has a taskRef, golang-test, and a parameter, package, and uses the workspace workarea, similar to the other tasks above. The fourth is build, which runs after test; it has a taskRef, golang-build, the parameters package and packages, and uses the same workspace, workarea. Lastly, we have run, which runs after build; it has a taskRef, golang-run, a parameter, package, and uses the same workspace as the other tasks.
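
Putting those pieces together, the demo pipeline might look roughly like the sketch below. The pipeline name, repo URL, lint flags, and the workspace names expected by the catalog tasks are assumptions; the parameter, workspace, result, task ordering, and taskRefs follow the description above.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: golang-pipeline                       # hypothetical name
spec:
  params:
    - name: package
      type: string
  workspaces:
    - name: workarea
  results:
    - name: commit-sha
      description: SHA of the cloned commit
      value: $(tasks.clone.results.commit)    # propagated from the clone task
  tasks:
    - name: clone
      taskRef:
        name: git-clone
      params:
        - name: url
          value: https://github.com/example/tekton-demo   # hypothetical repo URL
      workspaces:
        - name: output                        # the git-clone task's workspace name
          workspace: workarea
    - name: lint
      runAfter: ["clone"]
      taskRef:
        name: golangci-lint
      params:
        - name: package
          value: $(params.package)
        - name: flags
          value: --verbose                    # hypothetical flags
      workspaces:
        - name: source
          workspace: workarea
    - name: test
      runAfter: ["lint"]
      taskRef:
        name: golang-test
      params:
        - name: package
          value: $(params.package)
      workspaces:
        - name: source
          workspace: workarea
    - name: build
      runAfter: ["test"]
      taskRef:
        name: golang-build
      params:
        - name: package
          value: $(params.package)
      workspaces:
        - name: source
          workspace: workarea
    - name: run
      runAfter: ["build"]
      taskRef:
        name: golang-run                      # the project-local task described below
      params:
        - name: package
          value: $(params.package)
      workspaces:
        - name: source
          workspace: workarea
```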

All the previous tasks were from the Tekton catalog. However, since golang-run is not available in the catalog, we specify it within our project. Here, we have a Tekton task we've named golang-run. This task is used to run Go projects. It has a set of parameters: the first one is package, which has to be provided, while other parameters, such as version and context, have default values. It uses a workspace which contains the code base that it executes. We've connected all these tasks together using a workspace, so let's look at the specification of that workspace. This workspace is a persistent volume claim, named pvc. All the tasks will write to and read from it.
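
Here is a hedged sketch of what the project-local golang-run task and the backing persistent volume claim could look like. The Go image, the step script, the parameter defaults, and the storage size are assumptions; only the task name, the package, version, and context parameters, the workspace, and the claim name pvc come from the talk.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: golang-run
spec:
  params:
    - name: package                 # must be provided by the caller
      type: string
    - name: version                 # assumed default
      type: string
      default: latest
    - name: context                 # assumed default
      type: string
      default: .
  workspaces:
    - name: source                  # holds the cloned code base
  steps:
    - name: run
      image: golang:$(params.version)              # assumed Go image
      workingDir: $(workspaces.source.path)/$(params.context)
      script: |
        go run $(params.package)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                  # assumed size
```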

Let's see all these pieces in action. First off, we'll verify that we have the Tekton Pipelines controller and webhook running. They're running as expected. The next step is to install tasks from the Tekton catalog. First, we'll install git-clone, version 4. Then we'll install golang-test, version 2, which is the latest. Next, we'll install golangci-lint, version 2, which is the latest. Then, lastly, we'll install golang-build, version 2 as well. We've finished installing Tekton tasks from the catalog. Next, we'll install the golang-run task, which is found within our project. I'll copy this over, and we can see that we've created the golang-run Tekton task. The next step is to install the Tekton pipeline. We've installed the Tekton pipeline within our cluster. Next, we'll create the persistent volume claim that we'll be using across all of these tasks in the cluster. Then, lastly, I want to execute the pipeline by creating a PipelineRun, where we pass in the needed parameter and the workspace. We see that we've created a PipelineRun with the name pipeline-run-4kj84. We're waiting for the logs to be available here. In this case, we're expecting that we'll start with the first task, which was cloning. Right now it's running, and it's populating the workarea workspace with the code base that we provided. That's happening here. It's checking out and writing the repo. The RESULT_SHA result is written, which is the latest commit that we're using for this project.
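
The PipelineRun that kicks this off might look like the sketch below, passing the package parameter and binding the workarea workspace to the pre-created claim. The generateName prefix and the parameter value are assumptions; the run in the talk was named pipeline-run-4kj84 by the cluster.

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipeline-run-        # the cluster appends a suffix, e.g. -4kj84
spec:
  pipelineRef:
    name: golang-pipeline
  params:
    - name: package
      value: github.com/example/tekton-demo   # hypothetical value
  workspaces:
    - name: workarea
      persistentVolumeClaim:
        claimName: pvc               # the claim created above
```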

The next task that we're expecting to execute is golangci-lint. It tries to find any errors or issues in our code base that we can fix. We can see here that linting executed and everything is ok; we don't have any issues. The linters took 2 seconds, with stages and all the other details that may be needed. If it had caught any errors, we'd get the information logged here, with anything that we need to fix in our project.

The next task that we're expecting to execute is the unit test, and we can see the logs here. The unit test runs, everything passes, and we can see coverage of 50% of statements; everything is ok. Next, we expect the build to run; everything builds, and everything looks good for that one. Lastly, the run executed, and we can see here that the run is ok and it's printing Hello World. We can drill down into the PipelineRun and see its status; we can see that it succeeded. Everything is ok on that front. Let's describe it, where we can see other details. When we look at the description of that PipelineRun, we can see the labels. The status is succeeded. You can see all the TaskRuns that were created as part of this PipelineRun: clone, lint, test, build, and run. For each of these TaskRuns, we can describe them and get further details as necessary. For example, let me start with the first one and describe the clone TaskRun. When I describe it, I can see that it succeeded. You can see all the labels and the timeout. It started 2 minutes ago, and it lasted for 28 seconds. The parameter that was passed to it is the Tekton demo repo, and it produced the commit SHA and the URL as results.

The next step is digging into a particular TaskRun and getting the relevant pod that was created for it. In our example, we were looking at this particular TaskRun, the clone TaskRun. These are all the TaskRuns. You can say, kubectl describe tr, with the one that we want, which is the clone run. When we describe it, we can see all the steps being scheduled, initialized, and ready, and then all the steps completing. What I'm looking for here is the pod name, which is what I'm going to use to get further details from the pod. To drill down into that, I'll run kubectl get pod -o yaml. We can see that all the labels have been propagated to the pod that was created. I can go down to the ownerReferences, the mountPath, and all the volumes that have been added, see the containers being initialized, and read through all of these.

The containers become ready, with all the status that's been added. This is the important part that I wanted to show: the ownerReference. This pod is owned by the TaskRun with this particular name. You can also see the container that's being created here, all the environment variables that were specified for this pod being propagated from the TaskRun, all the volumes and mounts that are needed, and the containers being initialized. Tekton abstracts away the implementation details of Kubernetes, making it very simple to create a continuous delivery system to test, run, and deploy our applications on Kubernetes.
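
For illustration, a trimmed view of such a pod might look like the sketch below. The pod, TaskRun, image, and volume names are placeholders, while the tekton.dev labels and the TaskRun ownerReference are the details called out above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pipeline-run-4kj84-clone-pod              # placeholder pod name
  labels:
    tekton.dev/pipeline: golang-pipeline
    tekton.dev/pipelineRun: pipeline-run-4kj84
    tekton.dev/task: git-clone
    tekton.dev/taskRun: pipeline-run-4kj84-clone  # placeholder TaskRun name
  ownerReferences:
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun                               # the pod is owned by the TaskRun
      name: pipeline-run-4kj84-clone
      controller: true
spec:
  containers:
    - name: step-clone                            # each step becomes a step-* container
      image: example.com/git-init:placeholder     # placeholder; set by the git-clone task
      volumeMounts:
        - name: ws-workarea                       # placeholder volume name
          mountPath: /workspace/output            # the task's workspace mount
```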

Tekton Custom Tasks - Extensibility

Next, we'll look at advanced capabilities in Tekton that you can use to solve your bespoke continuous delivery use cases on Kubernetes. Users can specify custom tasks, which are implemented by Kubernetes controllers that run on the cluster, to provide functionality that's not available directly in the Tekton Pipelines API. They can then specify a Run to execute the custom task, and the controller will watch and update the Runs that reference its type.

Let's take an example where you have a task that runs tests based on a parameter, test-type. It's an ordinary task: it takes the parameter test-type and executes the tests for that type. If you want to run this task for several test types, you can use a custom task with kind: TaskLoop. You pass the task to the TaskLoop, which has an iteration parameter covering all the test types that you need. Here, the iteration parameter is test-type.

Then you can specify a Run, which creates three TaskRuns, one for each test type. Here, we have the parameter test-type with three values: analysis, unit test, and end-to-end test. The Run references that TaskLoop. Lastly, you can specify that custom task alongside other tasks in a pipeline. In our example, we have a previous task that dynamically determines which tests to run, and produces those tests as a list result that can be used in subsequent tasks. Our custom task consumes that result as a parameter, loops through it, and creates the test TaskRuns that are needed for that particular pipeline.
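
A rough sketch of that pairing might look like the following. The TaskLoop API group, the iterateParam field, and all of the resource names here are assumptions based on the experimental TaskLoop controller rather than the core Tekton Pipelines API, and may have changed since.

```yaml
apiVersion: custom.tekton.dev/v1alpha1
kind: TaskLoop
metadata:
  name: testloop
spec:
  taskRef:
    name: run-tests                 # the ordinary task that takes a test-type param
  iterateParam: test-type           # the parameter to fan out over
---
apiVersion: tekton.dev/v1alpha1
kind: Run
metadata:
  name: run-all-tests
spec:
  ref:
    apiVersion: custom.tekton.dev/v1alpha1
    kind: TaskLoop
    name: testloop
  params:
    - name: test-type               # one TaskRun is created per value
      value:
        - analysis
        - unit-test
        - end-to-end
```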

Tekton Triggers - Automation

I can manually run my Tekton pipeline, but how do I automatically invoke the pipeline, such as when I push a code commit or create a pull request? For that, we have triggers. The TriggerBinding extracts the relevant information from the event payload. Then the TriggerTemplate provides a blueprint for creating a PipelineRun. The EventListener connects the TriggerBinding to the TriggerTemplate. A TriggerBinding is a Kubernetes custom resource that specifies the fields in the event payload from which you want to extract data, as well as the fields in your corresponding TriggerTemplate to populate with the extracted values. Here, among the parameters, we want the repo-url.

Second is the TriggerTemplate, which is also a Kubernetes custom resource. It specifies a blueprint for the resource, such as a TaskRun or PipelineRun, that you want to create when your EventListener detects an event. It exposes parameters that you can use anywhere within your resource template. Here, we're creating a PipelineRun. Third is the EventListener, which is a Kubernetes object that listens for events at a specified port on your Kubernetes cluster. It exposes a sink that receives incoming events and specifies one or more triggers. The sink is a Kubernetes service that runs inside a dedicated pod. Each trigger, in turn, allows you to specify one or more TriggerBindings, which extract fields and values from the event payload, and one or more TriggerTemplates that receive those values from the TriggerBindings and allow Tekton to create resources such as TaskRuns and PipelineRuns with data from the event.
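
Wired together for a hypothetical Git push event, the three resources might look like the sketch below. The payload field path, resource names, service account, and the pipeline being triggered are assumptions; only the repo-url parameter comes from the description above.

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: repo-binding
spec:
  params:
    - name: repo-url
      value: $(body.repository.clone_url)   # assumed path into the push payload
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: pipeline-template
spec:
  params:
    - name: repo-url
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: triggered-run-
      spec:
        pipelineRef:
          name: golang-pipeline            # hypothetical pipeline to run
        params:
          - name: package
            value: $(tt.params.repo-url)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: push-listener
spec:
  serviceAccountName: tekton-triggers-sa   # hypothetical service account
  triggers:
    - name: on-push
      bindings:
        - ref: repo-binding
      template:
        ref: pipeline-template
```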

Tekton Catalog - Reusability

Instead of everyone creating their own tasks and pipelines, can we share reusable resources such as tasks and pipelines across the organization, or even more broadly with the community? We have the Tekton catalog, which can be shared across entire organizations and with the Tekton community. It already has a ton of resources shared by the community. These resources empower you to create continuous delivery systems quickly, giving you scalable serverless cloud native execution out of the box.

Tekton Chains

Lastly, I'd like to introduce Tekton Chains, which is a custom resource controller that allows you to manage your supply chain security in Tekton. Tekton Chains works by observing all TaskRun executions in your cluster. When a TaskRun completes, Chains takes a snapshot of it, converts it into one or more standard payload formats, signs the payloads, and stores them somewhere. This enables you to secure your software supply chain.
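
Chains is typically configured through its chains-config ConfigMap; a hedged sketch is below. The key names follow the Chains documentation, but the exact set of supported payload formats and storage backends depends on the release you run.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: chains-config
  namespace: tekton-chains
data:
  artifacts.taskrun.format: in-toto   # payload format for TaskRun attestations
  artifacts.taskrun.storage: oci      # where the signed payloads are stored
```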

Questions and Answers

Losio: You mentioned security a couple of times. I wanted to understand how authentication works for both TaskRuns and PipelineRuns, whether it is part of Kubernetes or not. What's going on behind the scenes?

Kipruto: Tekton is Kubernetes native, so we rely on the features that are inherently available in Kubernetes. For authentication, we use Kubernetes Secrets; the secrets are provided via service accounts that are specified in the task definition or the pipeline definition. For example, you can use it to set up Git authentication using tokens or SSH for cloning private repos. In our case, we're cloning a public one. You can also use it to authenticate access to private registries.
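
As one concrete illustration of that pattern, a token-based Git credential can be attached to a service account like the sketch below; Tekton's credential initializer picks up secrets annotated with tekton.dev/git-0 from the service account that the TaskRun or PipelineRun uses. The secret name, service account name, and credential values are placeholders.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials                    # placeholder name
  annotations:
    tekton.dev/git-0: https://github.com   # the Git host this credential applies to
type: kubernetes.io/basic-auth
stringData:
  username: my-username                    # placeholder
  password: my-token                       # placeholder personal access token
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot                          # placeholder service account
secrets:
  - name: git-credentials
```

A TaskRun or PipelineRun would then set serviceAccountName: build-bot to pick up the credential.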

The security I was talking about was software supply chain security, which is something that we are looking into seriously in Tekton. One thing that we have there is Tekton Chains, which signs and stores the TaskRun payloads. We also provide hermetic execution, where builds are self-contained and don't have access to the network or anything outside the build environment, which is one of the best practices for release engineering. The steps within the TaskRun are run without network access, which makes builds reproducible and more secure. We have a lot of efforts, both in authentication and especially in software supply chain security, that we're making progress on.

Losio: Going back to one aspect that you mentioned, extensibility: what you don't have, you can build yourself and add that feature. What part of the product do you see developing mainly in the next few years? What do you think is still lacking or could be extended in the next few years?

Kipruto: Tekton is about three years old now. It's a very new project. We're still in beta, actively developing and making progress towards v1. There are many exciting features to come. Some of the things that we're working on include pipelines in pipelines. Right now you can write steps, tasks, and pipelines. We have use cases for people to be able to embed pipelines within pipelines. That's one thing that I'm taking a look at. We have an experimental project implementing this with custom tasks, using a separate controller, but I'm making progress towards having it as a top level feature. Other things include pipeline in a pod, that is, executing an entire pipeline in a single pod; there are also use cases for that. Another one is looping. We just looked at the custom task for looping, but we're also looking to bring that in as a top level feature that we support.

Another thing is supporting the Common Expression Language (CEL) within Tekton Pipelines. We're also looking at workflows, a top level concept that connects all our pieces between triggers and pipelines, because right now, as I showed you, they are separate components. We are coming up with top level concepts that bring all of these together in a cohesive story, similar in spirit to pipeline in a pod. For now, if you have any functionality that's missing for your specific use cases, you can easily extend Tekton using custom tasks and write your own controllers. If you do so, you're welcome to share them with the community in our experimental project repo, and to reach out for guidance on how to write the custom controllers yourself. To read more about our long term plans, you can visit the Tekton community repo, where we have our roadmap; we've put everything out in the public. If you have interest in a particular project, such as Tekton Chains or Tekton Pipelines, you can go to that project's repo and see the roadmap that we've written there. We're making progress towards v1, hopefully next year, and hopefully some of the things that I've talked about will be available with that release.

Losio: One of the main advantages of Tekton is portability, the concept of not being locked to a vendor. What are the advantages of using Tekton over GitLab or GitHub Actions, or any other specific provider? If I'm a developer who's already familiar with another tool for CI/CD, what should really bring me towards making that step?

Kipruto: A couple of things. The first one is that Tekton is Kubernetes native. If you use a Kubernetes cluster, you're familiar with that already. Tekton inherently has those capabilities and easily works together with Kubernetes. In the example, I showed that we're using Kubernetes Secrets, and there are other components that you're already familiar with; it just blends in seamlessly. That's one reason. Another big part is that Tekton focuses on standardizing across the industry. We have so many CI/CD tools, and many of them are vendor specific; GitHub Actions, for example, is specific to GitHub. We're trying to standardize and have one common tool that you can use across different languages, vendors, and environments. For example, Tekton is used as the execution engine right now for Jenkins X. You can write your execution logic with Tekton and then execute it from GitHub Actions. IBM Cloud and OpenShift also use Tekton. We're trying to have a common baseline and standardize across all of these providers.

Many times, it's not us versus them, or Tekton versus the other tools. It's very specific to your use cases. Considering your use cases, the control you need to have, security, and other considerations, it's about identifying which mix and match of tools makes sense for your case. You may find that for you, GitLab makes sense, or GitHub Actions makes sense. For others, the control that's available in Tekton is what makes sense for them. I'm not trying to prescribe anything; it's very specific to your considerations.

Losio: I would like to come back to this topic from a very different angle. One case is that I'm using GitHub Actions or whatever else. The other case is that I'm an old-school developer who, until yesterday morning, was running a virtual machine with Jenkins in my own data center or on the cloud. Finally, I'm moving to Kubernetes, or I have my first Kubernetes cluster running. I still have my Jenkins because that's my background; that's what I have. What should I do? How should I move? What should be my first step? Would it make sense to move? Should I wait? Should I do nothing?

Kipruto: In that case, it makes sense to have a migration strategy. That's also very specific to what you have at hand, what your use cases are, and what skills you have. First, note that the building blocks of Jenkins and Tekton are not equivalent. I would recommend starting by understanding the building blocks of Tekton. Broadly, I can briefly say that Jenkins steps map to Tekton steps, Jenkins stages map to Tekton tasks, and Jenkins pipelines map to Tekton pipelines, but these are not exact equivalents.

Second, as you're migrating from Jenkins pipelines to Tekton pipelines, consider reusing the Tekton tasks, pipelines, and other resources in the hub or the catalog, so that you can get up to speed and get running quickly. Lastly, as I've already mentioned with custom tasks, if there's any functionality in whichever tool you're migrating from that may be missing, we have an easy plugin mechanism using custom task controllers that you can use to implement any logic that you may need. For guidance, you can also reach out to us through our Slack or the other communication channels listed in our community repo.

Losio: Can you suggest any projects or companies that are using Tekton today?

Kipruto: Some of the big projects using Tekton are OpenShift, which uses Tekton under the hood, and Jenkins X, which uses Tekton as its execution engine. We also have Kubeflow with Tekton; beyond continuous delivery, some people have found use for Tekton in machine learning applications, and Kubeflow is using it there. Beyond specific projects, some of the companies that use Tekton have listed themselves in the Tekton Friends repo. If you go to Tekton Friends, you should be able to find a list of others working with Tekton.

Losio: In the beginning, you mentioned graphs and the conditional execution that you can do in Tekton. I was wondering, for instance, about the case of rolling back on a specific error. Can you suggest or explain some real life use cases where conditional execution using graphs could really be powerful?

Kipruto: When you have more bespoke or complex continuous delivery use cases, you may encounter cases where you need guarded execution of tasks or pipelines. I'll give you three examples. First, you want to execute a deployment task only if the branch on which the commit happened is main. Here, you guard based on a parameter, checking that the branch is main. Second, you want to execute a manual approval task guarded on specific files having been changed. Here, you guard based on results from a previous task: the previous task checks whether that file was changed, and then you guard based on that. The last one is exit handling. Say that at the end we have finally tasks, which run regardless of what happened previously, and you want to send a notification, such as a Slack notification, based on the status of a particular task; then you can guard based on the execution status of a previous task. You can guard based on results, parameters, execution status, or anything else that makes sense for you.
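
Expressed as when expressions, those three guards might look roughly like this fragment of a pipeline spec; the task, parameter, and result names are hypothetical.

```yaml
tasks:
  - name: deploy
    when:
      - input: $(params.branch)
        operator: in
        values: ["main"]                           # parameter guard: only deploy from main
    taskRef:
      name: deploy-app
  - name: manual-approval
    when:
      - input: $(tasks.check-files.results.changed)
        operator: in
        values: ["true"]                           # result guard from a previous task
    taskRef:
      name: approval
finally:
  - name: notify
    when:
      - input: $(tasks.deploy.status)
        operator: in
        values: ["Failed"]                         # execution status guard
    taskRef:
      name: send-slack-notification
```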

Losio: When we talk about CI/CD, we always think about the successful case. I'm old school; I always like to have that red button. If something goes terribly wrong, I want to roll back. What's the best way? What's the standard approach to doing that with Tekton?

Kipruto: Tekton gives you the building blocks to create your own continuous delivery pipeline. You won't have that button, but you have the tools to create a rollback task. Here, you'd write a Tekton task that performs the rollback, which you can include in your pipeline. You can have this rollback task triggered as part of your finally tasks, as part of your handling if something goes wrong, so that it cleans up or rolls back as you need. You can also trigger it manually if needed when something happens. Basically, Tekton provides the building blocks that enable you to write that rollback task and plug it into a pipeline, to be triggered manually or automatically. Beyond that, any other functionality that you need, you can compose yourself. You can also look in the hub to see if it's already available for you to use.
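
As a sketch of that idea, a hypothetical rollback-deployment task could be guarded in a pipeline's finally section and also executed on demand with a standalone TaskRun; every name and parameter here is an assumption.

```yaml
finally:
  - name: rollback
    when:
      - input: $(tasks.deploy.status)
        operator: in
        values: ["Failed"]                 # roll back only if the deploy task failed
    taskRef:
      name: rollback-deployment            # hypothetical rollback task
    params:
      - name: release
        value: $(params.release)
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: manual-rollback-            # a one-off run of the same task, the "red button"
spec:
  taskRef:
    name: rollback-deployment
  params:
    - name: release
      value: my-app-1.2.3                   # placeholder release identifier
```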

Recorded at:

Feb 03, 2022
