
James Ward and Ray Tsang on Knative Serverless Platform

At the QCon San Francisco 2019 conference, James Ward and Ryan Knight hosted a workshop on serverless technologies using the Knative framework. Ward and Ray Tsang facilitated the same workshop at QCon New York 2019, and InfoQ sat down with them to learn more about Knative.

Kubernetes has become a popular choice for managing and orchestrating container-based applications. Service mesh technologies like Istio can be used to manage service-to-service communication and monitoring. With the introduction of Knative, a platform built on top of Kubernetes and Istio, development teams can now build, deploy, and manage workloads using a serverless architecture.

InfoQ spoke with Ward and Tsang in order to discuss the role of serverless in developing cloud native applications, and also understand how Knative may help with these goals.

InfoQ: Can you tell our readers what the Knative framework is and how it fits in with the Kubernetes container management platform?

James Ward and Ray Tsang: Knative is a layer on top of Kubernetes which provides the building blocks for a serverless platform, including components for building an app from source, serving an app via HTTP, and routing events with publishers and subscribers.

Knative extends Kubernetes with features like scale-to-zero, crash-restarts, and load balancing, and lets you run serverless workloads anywhere you choose: fully managed on Google Cloud, on Google Kubernetes Engine (GKE), or on your own Kubernetes cluster. Knative makes it easy to start with platforms like Google Cloud Run and later move to Cloud Run on GKE, or start in your own Kubernetes cluster and migrate to Cloud Run in the future.
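As a concrete sketch of what "serving an app via HTTP" looks like, here is a minimal Knative Service manifest (the service name and image are illustrative, not from the workshop). Applying it gives you an HTTP-serving workload that Knative autoscales, including down to zero when idle:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                 # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello:latest   # hypothetical container image
          ports:
            - containerPort: 8080
```

Deploying it with `kubectl apply -f service.yaml` is enough for Knative to create a route, serve HTTP traffic, and manage scaling of the underlying pods.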

Originally built at Google, Knative is open source and you can find details on the website.

InfoQ: In terms of use cases, when should and shouldn't we use the serverless-based solutions, compared to other architectures like microservices or monolith apps?

Ward and Tsang: Serverless is an operational model that scales up and down based on demand. In cloud environments, this allows billing to be entirely based on actual usage. In self-managed environments this enables the underlying server resources to be returned to a shared pool for use elsewhere.

Serverless can fit with both microservice and monolithic architectures; however, most monoliths don't do very well in a serverless world, where apps should start quickly and avoid global state.

InfoQ: You discussed the CNCF Buildpacks in your workshop. Can you talk about how these Buildpacks help with serverless apps in general, and Knative based apps in particular?

Ward and Tsang: The open source Tekton project (GitHub repo) is a complement to Knative for transforming source into something that can run on Knative. Tekton runs on Kubernetes and provides a serverless experience, as resources are allocated on demand to run builds and CI/CD pipelines.

Cloud Native Buildpacks is a CNCF standard for detecting a project's type and running its build to create a container image. There are buildpacks available for tools including Maven and Gradle (for Java projects), Python, Ruby, Go, Node, and others. In the Tekton extension catalog you will find Buildpacks support that makes it very easy to go from source to a container image that can run in Knative.
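For local experimentation, the `pack` CLI from the Cloud Native Buildpacks project offers the same source-to-image flow outside of Tekton; a hedged sketch (the app and builder names are illustrative, and any pack-compatible builder works):

```shell
# Detect the project type and build an OCI image with Cloud Native Buildpacks;
# no Dockerfile is required.
pack build my-app --builder gcr.io/buildpacks/builder

# The resulting image can then be pushed to a registry and referenced
# from a Knative Service.
docker push my-app
```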

InfoQ: Can you use microservices and Knative functions in the same application or use case?

Ward and Tsang: The deployment unit for all invokables in Knative is a container image, which ultimately runs on a Kubernetes pod. Invokables include apps served via HTTP and event handlers, which receive Cloud Events via HTTP. So you definitely could have a single container image that handles web HTTP requests, REST calls, and Cloud Events.
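To make that concrete, here is a minimal sketch (not from the workshop; the event types and framing are illustrative) of a single request handler that distinguishes a binary-mode Cloud Event from an ordinary web request by checking the `ce-*` HTTP headers the CloudEvents spec defines:

```python
import json


def handle(headers: dict, body: bytes) -> str:
    """Route one HTTP request: binary-mode Cloud Events carry ce-* headers."""
    # HTTP header names are case-insensitive, so normalize them first.
    h = {k.lower(): v for k, v in headers.items()}
    if "ce-type" in h and "ce-source" in h:
        # Binary-mode Cloud Event: attributes in headers, data in the body.
        data = json.loads(body) if body else None
        return f"event {h['ce-type']} from {h['ce-source']}: {data}"
    # Ordinary web/REST request.
    return "plain HTTP request"
```

One container image exposing such a handler can back both a Knative Serving route and an Eventing subscription, since both deliver over HTTP.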

InfoQ: Service mesh technologies are getting a lot of attention lately. Can you talk about how Knative applications can work in a service mesh-based architecture?

Ward and Tsang: Knative is currently built on top of Istio and Kubernetes, so Knative applications automatically run in an Istio service mesh environment. That means you automatically get the benefits of Istio, like distributed tracing and monitoring metrics. When you configure traffic splitting in Knative, it automatically generates the corresponding Istio configuration to implement the traffic split.
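As an illustration of that traffic splitting (revision names, image, and percentages are hypothetical), a Knative Service can declare weights across revisions directly, and Knative renders them into the underlying mesh routing rules:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-v2                           # hypothetical new revision
    spec:
      containers:
        - image: gcr.io/my-project/hello:v2    # hypothetical image
  traffic:
    - revisionName: hello-v1   # existing revision keeps most traffic
      percent: 90
    - revisionName: hello-v2   # canary gets a small share
      percent: 10
```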

InfoQ: You also talked about Cloud Events in your workshop. Can you discuss how Cloud Events can be leveraged in serverless applications?

Ward and Tsang: CloudEvents is a recent Cloud Native Computing Foundation (CNCF) specification for packaging event payloads delivered over protocols such as HTTP. Microservice and Function-as-a-Service frameworks are beginning to support Cloud Events to make it easier to parse incoming messages and provide routing metadata.

Knative supports Cloud Events in the Eventing component by sending Cloud Events to backing services. If the response from a service handling a Cloud Event is another Cloud Event, it can then be handled by the Eventing system for further routing. Event consumers are services like those that handle HTTP requests in the Serving component of Knative. This means they can scale up and down (even to zero), just like other services.
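A hedged sketch of what such an event looks like on the wire in structured (JSON) mode, with the field values being illustrative; the `specversion`, `id`, `source`, and `type` context attributes are the ones the CloudEvents spec requires:

```python
import json
import uuid
from datetime import datetime, timezone


def make_cloud_event(event_type: str, source: str, data: dict) -> str:
    """Build a structured-mode CloudEvent (JSON format) with required attributes."""
    event = {
        "specversion": "1.0",           # required: CloudEvents spec version
        "id": str(uuid.uuid4()),        # required: unique per source
        "source": source,               # required: URI of the event producer
        "type": event_type,             # required: type used for routing
        "time": datetime.now(timezone.utc).isoformat(),  # optional timestamp
        "datacontenttype": "application/json",
        "data": data,
    }
    return json.dumps(event)


# Hypothetical event an order service might emit into Knative Eventing:
event_json = make_cloud_event("com.example.order.created", "/orders", {"order": 42})
```

A consumer service receives this as an HTTP POST body, and if its response is itself a Cloud Event, the Eventing system can route it onward as described above.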

InfoQ: What are some tools developers can use when working with Knative-based applications?

Ward and Tsang: Knative can run any Docker container image on Kubernetes, but these images can alternatively be run directly in Docker for local development and testing. So you may not need to actually use Knative at development time. Also, since Knative is an extension of Kubernetes, the tools that work in Kubernetes also work with Knative. So you can use tools like Prometheus for monitoring, Fluentd for logging, Kibana for log search, and many others.
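For example (the image name is illustrative), the same image a Knative Service references can be exercised locally before any cluster is involved; Knative injects a PORT environment variable, so the local run emulates that:

```shell
# Run the service image locally, emulating Knative's PORT injection.
docker run --rm -p 8080:8080 -e PORT=8080 gcr.io/my-project/hello:latest

# Exercise it the same way Knative's route would.
curl http://localhost:8080/
```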

InfoQ: Can you recommend any best practices for our readers who want to learn more about serverless and Knative technology, what to consider and what to avoid?

Ward and Tsang: Learn the technology, its use cases, when to use it, and when not to use it. Even though Knative operates at a higher level, it's layered on top of technologies like Istio and Kubernetes. To run Knative, it's important to understand those underlying layers, so that when there is a platform issue, you know where to look and how to troubleshoot.

For more information on Knative, check out their website and the documentation on how to install and use it.
