
The Evolution of Distributed Systems on Kubernetes


Summary

Bilgin Ibryam takes us on a journey exploring Kubernetes primitives, design patterns and new workload types.

Bio

Bilgin Ibryam is a product manager and a former architect at Red Hat. In his day-to-day job, he works with customers of all sizes and locations, helping them succeed with the adoption of emerging technologies through proven and repeatable patterns and practices. His current interests include enterprise blockchains, cloud-native data, and serverless.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Ibryam: I'm Bilgin. Today, I'm going to share with you how I see distributed systems evolving on Kubernetes. Why me? I work for Red Hat, where I have been a consultant and architect working on distributed systems using Apache Camel. Apache Camel is a very popular framework in the Java ecosystem for doing integrations. I'm a committer, and I have a book about Apache Camel. In recent years, I have been working with Kubernetes, and I have a book about that too. I've been at the intersection of distributed systems and Kubernetes.

First, I want to start with a question: what comes after microservices? I'm sure you all have an answer to that, and I have mine too. You'll find out at the end what I think it will be. To get there, I suggest we look at what the needs of distributed systems are and how those needs have been evolving over the years, starting with monolithic applications, moving to Kubernetes, and on to more recent projects such as Dapr, Istio, and Knative, and how they are changing the way we do distributed systems. We will also try to make some predictions about the future.

Modern Distributed Applications

To set a little more context: in this talk, when I say distributed systems, what I have in mind is systems composed of multiple components, hundreds of them. These components can be stateful, stateless, or serverless; components created in different languages, running on different environments; components created using open-source technologies and open standards, interoperable. I'm sure you can create such systems using closed-source software, and you can create them on AWS and other places. For this talk specifically, I'm focusing on the Kubernetes ecosystem and how you can create such systems on it.

Let's start with the needs of distributed systems. What I have in mind is: we want to create an application or service, and we want to write some business logic. What else do we need from the platform and the runtime to create distributed systems? At the foundation, we want some lifecycle capabilities. What I have in mind is, when we write an application in any language, we want the ability to package it, to deploy it reliably, and to do rollbacks and health checks. We want to be able to place the application on different nodes, and to do resource isolation, scaling, configuration management, all of these things. These are the very first things you need to create a distributed application.

The second pillar is around networking. Once we have an application, we want it to reliably connect to other services, whether they are within the cluster or in the outside world. We want abilities such as service discovery and load balancing. We want to be able to do traffic shifting, whether for different release strategies or for other reasons. Then we want the ability to do resilient communication with other systems, whether through retries, timeouts, or circuit breakers. We want to have security in place, and to get good monitoring, tracing, and observability.

Once we have networking, the next thing is we want the ability to talk to different APIs and endpoints, so resource binding: being able to talk to different protocols and different data formats, maybe even to transform from one data format to another. I would also include here things such as light filtering: when we subscribe to a topic, maybe we are interested only in certain events.

What do you think is the last category? It is state. When I say state and stateful abstractions, I'm not talking about the actual management of state, such as what a database or a file system does. I'm talking more about developer abstractions that behind the scenes rely on state. You probably need the ability to do workflow management. Maybe you want to manage long-running processes. You want to do temporal scheduling, some cron job to run your service periodically. Maybe you also want to do distributed caching. You want to have idempotence, and to be able to do rollbacks. All of these are developer-level primitives, but behind the scenes they rely on having some state. You want to have these abstractions at your disposal to create good distributed systems. We will use this framework of needs to evaluate how they have been addressed on Kubernetes and with other projects.

Monolithic Architectures - Traditional Middleware Capabilities

If we start with monolithic architectures and how we get those capabilities: when I say monolith, in the context of distributed applications, what I have in mind is the ESB. ESBs are quite powerful. When we check our list of needs, we would say that ESBs had very good support for all the stateful abstractions. You could do orchestration of long-running processes, distributed transactions, rollbacks, and idempotence. They also have very good resource binding capabilities: an ESB will have hundreds of connectors, and can do transformation and orchestration. They even have networking capabilities: an ESB can do service discovery and load balancing, and it has everything around resiliency of the networking connection, so it can do retries. Because by nature an ESB is not very distributed, it doesn't need very advanced networking and release capabilities.

Where an ESB falls short is primarily around lifecycle management. Because it's a single runtime, the first thing is that you are limited to a single language, typically the language the actual runtime is created in, whether that's Java, .NET, or something else. Then, because it's a single runtime, we cannot easily do declarative deployments, and we cannot do automatic placement. The deployments are quite big and heavy, so they usually involve human interaction. Another difficulty with such a monolithic architecture is around scaling: we cannot scale individual parts. Last but not least, around isolation, whether that's resource isolation or fault isolation: none of these can be done with monolithic architectures. From the point of view of our needs framework, ESBs and monolithic architectures don't qualify.

Cloud-native Architectures - Microservices and Kubernetes

Next, I suggest we look at cloud-native architectures and how those needs have been changing. If we look at a very high level, cloud native probably started with the microservices movement. Microservices allow us to split a monolithic application by business domain. It turned out that containers and Kubernetes are actually a good platform for managing those microservices. Let's look at some of the concrete features and capabilities that make Kubernetes particularly attractive for microservices.

At the very beginning is the ability to do health probes. That's something that Kubernetes made popular. In practice, it means that when you deploy your container in a pod, Kubernetes will check the health of your process. Typically, that process model is not good enough: you may have a process that's up and running but not healthy. That's why there is also the option of using readiness and liveness checks. Kubernetes will do a readiness check to decide when your application is ready to accept traffic during startup, and a liveness check to continuously verify the health of your service. Before Kubernetes, this wasn't very popular, but today almost all languages, frameworks, and runtimes have health-checking capabilities where you can easily expose an endpoint.
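As a rough sketch of how this looks in a pod definition (the image name and health endpoints here are hypothetical), the two probe types are declared per container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
    - name: app
      image: example/my-service:1.0    # hypothetical image
      readinessProbe:                  # gates traffic until the app is ready
        httpGet:
          path: /health/ready          # hypothetical endpoint
          port: 8080
        initialDelaySeconds: 5
      livenessProbe:                   # restarts the container if it fails
        httpGet:
          path: /health/live           # hypothetical endpoint
          port: 8080
        periodSeconds: 10
```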

The next thing that Kubernetes introduced is the managed lifecycle of your application. What I mean here is that you are no longer in control of when your service will start up and when it will shut down; you trust the platform to do that. Kubernetes can start up your application, shut it down, and move it around to different nodes. For that to work, you have to properly handle the events the platform sends you during startup and shutdown.
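A minimal sketch of the hooks Kubernetes gives you for this, with hypothetical commands: postStart runs right after the container starts, and preStop runs before the platform sends SIGTERM (followed by SIGKILL once the grace period expires).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-aware
spec:
  terminationGracePeriodSeconds: 30    # time allowed after SIGTERM before SIGKILL
  containers:
    - name: app
      image: example/my-service:1.0    # hypothetical image
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo started > /tmp/ready"]
        preStop:
          exec:
            command: ["sh", "-c", "sleep 5"]   # give in-flight requests time to drain
```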

Another thing that Kubernetes made popular is declarative deployments. That means you no longer have to start the service yourself and check the logs to see whether it has started. You don't have to manually upgrade instances; Kubernetes, with declarative deployments, can do that for you. Depending on the strategy you choose, it can stop old instances and start new ones. If something goes wrong, it can roll back.
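For illustration, a sketch of a rolling update declared on a Deployment (the image tag is hypothetical); Kubernetes replaces old instances with new ones within these constraints:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one extra pod during the rollout
      maxUnavailable: 0     # never drop below the desired replica count
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: app
          image: example/my-service:1.1   # hypothetical new version
```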

Another thing is declaring your resource demands. When you create a service and containerize it, it is a good practice to tell the platform how much CPU and memory that service will require. Kubernetes uses that knowledge to find the best node for your workloads. Before Kubernetes, we had to manually place an instance on a node based on our own criteria. Now we can guide Kubernetes with our preferences, and it will make the best decision for us.
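A sketch of how those demands are declared (the values are hypothetical): requests are what the scheduler uses for placement, and limits are enforced at runtime.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
    - name: app
      image: example/my-service:1.0   # hypothetical image
      resources:
        requests:            # used by the scheduler to pick a node
          cpu: 250m
          memory: 128Mi
        limits:              # hard ceiling enforced at runtime
          cpu: 500m
          memory: 256Mi
```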

Nowadays, on Kubernetes, you can do polyglot configuration management. You don't need anything in your application runtime to do configuration lookup. Kubernetes will make sure that the configurations end up on the same node where your workload is. The configurations are mapped as a volume or as environment variables, ready for your application to use.
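A sketch of both mapping styles, assuming a hypothetical ConfigMap named app-config:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
    - name: app
      image: example/my-service:1.0   # hypothetical image
      envFrom:
        - configMapRef:
            name: app-config          # exposed as environment variables
      volumeMounts:
        - name: config
          mountPath: /etc/config      # the same data exposed as files
  volumes:
    - name: config
      configMap:
        name: app-config
```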

It turns out the specific capabilities I just spoke about are also related. For example, if you want automatic placement, you have to tell Kubernetes the resource requirements of your service. Then you have to tell it what deployment strategy to use. For the strategy to work properly, your application has to handle the events coming from the environment, and it has to implement health checks. Once you apply all of these best practices and use all of these capabilities, your application becomes a good cloud-native citizen, ready for automation on Kubernetes. These represent the foundational patterns for running workloads on Kubernetes. Then there are other patterns around structuring the containers in a pod, around configuration management, and behavioral patterns.

The next topic I want to briefly cover is workloads. From the lifecycle point of view, we want to be able to run different workloads, and we can do that on Kubernetes too. Running Twelve-Factor Apps and stateless microservices is pretty easy; Kubernetes can do that. That's not the only workload you will have. Probably you will also have stateful workloads, and you can do that on Kubernetes using a stateful set. Another workload you may have is a singleton: maybe you want only one instance of your app running throughout the whole cluster, and you want it to be a reliable singleton, so that when it fails, it is started up again. You can choose between a stateful set and a replica set depending on your needs, whether you want the singleton to have at-least-one or at-most-one semantics. Another workload you may have is jobs and cron jobs. With Kubernetes, you can do those as well.
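As an example of that last category, a minimal CronJob sketch (the schedule and image are hypothetical, and the apiVersion may differ depending on your cluster version):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: periodic-task
spec:
  schedule: "*/10 * * * *"        # every ten minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: task
              image: example/my-task:1.0   # hypothetical image
```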

If we map all of these Kubernetes features to our needs, Kubernetes satisfies the lifecycle needs really well. In fact, the list of needs I created is primarily driven by what Kubernetes provides us today. These are expected capabilities from any platform. Kubernetes can do deployment, placement, configuration management, resource isolation, and failure isolation for you. It supports different workloads, except serverless, on its own.

Then, if that's all Kubernetes gives developers, how do we extend Kubernetes? How can we make it give us more features? I want to briefly talk about the two main ways that are used today.

Out-of-process Extension Mechanism

The first thing is the concept of a pod. A pod is the abstraction used to deploy containers on nodes. The pod gives us two sets of guarantees. The first set is deployment guarantees: all containers in a pod always end up on the same node. That means they can communicate with each other over localhost, asynchronously using the file system, or through some other IPC mechanism. The other set of guarantees a pod gives us is around lifecycle. Not all containers within a pod are equal: depending on whether you're using init containers or application containers, you get different guarantees. For example, init containers run at the beginning. When a pod starts, they run sequentially, one after another, and each runs only if the previous container has completed successfully. They are good for implementing workflow logic driven by containers. Application containers run in parallel, throughout the lifecycle of the pod.
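A sketch of those guarantees in a single pod (the names and images are hypothetical): the init container runs to completion first, then the application container starts, and both share a node-local volume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:                 # run sequentially; each must succeed
    - name: fetch-config
      image: example/downloader:1.0     # hypothetical image
      command: ["sh", "-c", "cp /defaults/* /work/"]
      volumeMounts:
        - name: work
          mountPath: /work
  containers:                     # application containers run in parallel
    - name: app
      image: example/my-service:1.0     # hypothetical image
      volumeMounts:
        - name: work
          mountPath: /work
  volumes:
    - name: work
      emptyDir: {}                # shared scratch space on the same node
```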

This is where the sidecar pattern comes in. Sidecar is the ability to run multiple containers that cooperate with each other and jointly provide value to the user. That's one of the primary mechanisms we see nowadays for extending Kubernetes with additional capabilities.

In order to explain the next capability, I have to briefly tell you how Kubernetes works internally. It is based on the reconciliation loop. The idea of the reconciliation loop is to drive the actual state toward the desired state. Within Kubernetes, many things rely on that. For example, when you say, I want two instances of a pod, this is the desired state of your system. There is a control loop that constantly runs and checks whether there are two instances of your pod. If there aren't two instances, whether there is one or more than two, it will calculate the difference and make sure there are two. There are many examples of this: replica sets and stateful sets work this way. Each resource definition maps to a controller; there is a controller for each resource definition. This controller makes sure that the real world matches the desired one. You can also write your own custom controller. Say you have an application running in a pod that cannot reload configuration file changes at runtime. You can write a custom controller that detects every time a config map changes and restarts your pod, so that your application picks up the configuration changes at startup. That would be an example of a custom controller.

It turns out that even though Kubernetes has a good collection of resources, they are not enough for all the different needs you may have. Kubernetes introduced the concept of custom resource definitions. That means you can model your requirements and define an API that lives within Kubernetes, next to other Kubernetes native resources. You can write your own controller, in any language, that understands your model. You could design a ConfigWatcher, implemented in Java, that does what we described earlier. That is the operator pattern: a controller that works with custom resource definitions. Today, we see lots of operators coming up, and that's the second way of extending Kubernetes with additional capabilities.
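To make the ConfigWatcher example concrete, here is a sketch of what the custom resource definition and an instance of it might look like. The group, field names, and semantics are hypothetical; the matching controller logic would live in your operator.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: configwatchers.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: ConfigWatcher
    plural: configwatchers
    singular: configwatcher
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                configMap:
                  type: string       # the ConfigMap to watch
                podSelector:
                  type: object
                  additionalProperties:
                    type: string     # labels of the pods to restart
---
apiVersion: example.com/v1
kind: ConfigWatcher
metadata:
  name: webapp-config-watcher
spec:
  configMap: webapp-config    # when this ConfigMap changes...
  podSelector:
    app: webapp               # ...restart the pods carrying this label
```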

Next, I want to briefly go over a few platforms that are built on top of Kubernetes and that heavily use sidecars and operators to give developers additional capabilities around distributed systems.

What is Service Mesh?

Why don't we start with the service mesh? What is a service mesh? We have two services, service A that wants to call service B. They can be in any language. That's basically our application workload. What a service mesh does, using sidecar injection, is put a proxy next to our service, so you end up with two containers in the pod. The proxy is a transparent one: your application is completely unaware that there is a proxy intercepting all incoming and outgoing traffic. The proxy also acts as a data firewall. The collection of these service proxies represents your data plane. These proxies are small and stateless; in order to get all their state and configuration, they rely on the control plane. The control plane is the stateful part that keeps all the configurations, gathers metrics, takes decisions, and interacts with the data plane. There are different choices of control planes and data planes. It turns out we need one more component: an API gateway, in order to get traffic into our cluster. Some service meshes have their own API gateway and some use a third party. All of these components, if you look into them, provide the capabilities we need.

An API gateway is primarily focused on abstracting the implementation of our services. It hides the details and provides capabilities at the border of the cluster. The service mesh does the opposite: in a way, it enhances visibility and reliability within the services. Jointly, we can say that the API gateway and the service mesh provide all the networking needs. To get networking capabilities on top of Kubernetes, using just Services is not enough; you need some service mesh.
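As one concrete example of such a networking capability, here is a sketch of traffic shifting with an Istio VirtualService. The host and subset names are hypothetical, and the subsets themselves would be defined in a separate DestinationRule.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b
spec:
  hosts:
    - service-b
  http:
    - route:
        - destination:
            host: service-b
            subset: v1
          weight: 90          # 90% of traffic stays on the current version
        - destination:
            host: service-b
            subset: v2
          weight: 10          # 10% canary traffic goes to the new version
```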

What is Knative?

The next project I want to mention is Knative. It's a project started by Google a few years ago, and it's getting very close to GA. It is basically a layer on top of Kubernetes that gives you serverless capabilities. It has two main modules: serving and eventing. Serving is focused on request-reply interactions, while eventing is more for event-driven interactions.

Just to give you a feel for what serving is: in serving, you define a service, but one that is different from a Kubernetes service; this is a Knative service. Once you define a workload with a service, you basically get a deployment, but with serverless characteristics. You don't need to have an instance up and running; it can be started from zero when a request arrives. You get serverless capabilities: it can scale up rapidly, and it can scale down to zero.
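A minimal sketch of a Knative Service (the image and annotation value are hypothetical): the only required input is the container, and scale-to-zero is the default behavior.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "10"   # cap the scale-out
    spec:
      containers:
        - image: example/my-service:1.0          # hypothetical image
```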

Eventing gives us a fully declarative event management system. Let's assume we have some external systems we want to integrate with, some external event producers. At the bottom, we have our application in a container that has an HTTP endpoint. With Knative eventing, we can start a broker, which can be backed by Kafka, or be in-memory, or use some cloud service. We can start importers that connect to the external systems and import events into our broker. Those importers can be, for example, based on Apache Camel, which has hundreds of connectors. Once we have our events going to the broker, then, declaratively with a YAML file, we can subscribe our container to those events. In our container, we don't need any messaging client, no Kafka client for example. Our container gets events through an HTTP POST using CloudEvents. This is a fully platform-managed messaging infrastructure. As a developer, all you have to do is write your business code in a container; you don't have to deal with any messaging logic.
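The subscription itself is a declaration like the following Trigger sketch: it filters events on the broker by CloudEvents attributes and delivers matches to our container over HTTP. The event type and service name are hypothetical, and the API version may vary by Knative release.

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-events
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created   # hypothetical CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service                  # receives events via HTTP POST
```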

From our needs' point of view, Knative satisfies a few of those. From a lifecycle point of view, it gives our workloads serverless capabilities: the ability to scale to zero, and to activate from zero and scale up. From a networking point of view, there is some overlap with the service mesh; Knative can also do traffic shifting. From a binding point of view, it has pretty good support for binding using Knative importers. It can give us Pub/Sub, or point-to-point interaction, or even some sequencing. It satisfies the needs in a few categories.

What is Dapr?

The next project using sidecars and operators is Dapr. Dapr was started by Microsoft only a few months ago, but it is rapidly getting popular. It is basically a distributed systems toolkit as a sidecar; everything in Dapr is provided as a sidecar. It has a set of what they call building blocks, or sets of capabilities.

What are those capabilities? The first set is around networking. Dapr can do service discovery and point-to-point integration between services. Similarly to a service mesh, it can also do tracing, reliable communication, retries, and recovery. The second set of capabilities is around resource binding: it has lots of connectors to cloud APIs and different systems, and it can also do messaging, basically publish/subscribe and other logic. Interestingly, Dapr also introduces the notion of state management. In addition to what Knative and service mesh give you, Dapr has an abstraction on top of a state store. You can have key-value-based interaction with Dapr that is backed by an actual storage mechanism.
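A sketch of how such a state store is configured in Dapr; the backing store is pluggable, and the Redis address and secret here are hypothetical:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis            # swap for another store without changing app code
  version: v1
  metadata:
    - name: redisHost
      value: redis-master:6379   # hypothetical Redis address
    - name: redisPassword
      secretKeyRef:
        name: redis              # hypothetical secret
        key: password
```

Your application would then read and write key-value pairs by calling the sidecar's HTTP state API, for example at http://localhost:3500/v1.0/state/statestore, where 3500 is the sidecar's default HTTP port.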

At a high level, the architecture is that you have your application at the very top, which can be in any language. You can use the client libraries provided by Dapr, but you don't have to; you can use the language's own features to make HTTP and gRPC calls to the sidecar. The difference from a service mesh is that the Dapr sidecar is not a transparent proxy; it is an explicit proxy that you have to call from your application and interact with over HTTP or gRPC. Depending on what capabilities you need, Dapr can talk to other systems, such as cloud services.

On Kubernetes, Dapr deploys as a sidecar, though Dapr also works outside of Kubernetes; it's not Kubernetes-only. It also has an operator, and sidecars and operators are its primary extension mechanisms. There are a few other components to manage certificates, to deal with actor-based modeling, and to inject the sidecars. Your workload interacts with the sidecar, and the sidecar does all the magic of talking to the other services, which can give you interoperability with different cloud providers. It gives you additional distributed systems capabilities.

If I were to sum up what these projects give you, we can say that the ESB was the early incarnation of distributed systems: we had a centralized control plane and a centralized data plane, so it didn't scale really well. With cloud native, there is still a centralized control plane, but the data plane is decentralized, highly scalable, and has good isolation. We will always need Kubernetes to do good lifecycle management, and on top of that, you will probably need one or more add-ons. You may need Istio to do advanced networking. You may use Knative to do serverless workloads, or Dapr to do integration. These frameworks play nicely with Istio and Envoy, but between Dapr and Knative, you probably have to pick one. Jointly, they provide what we used to have on an ESB, in a cloud-native way.

Future Cloud Native Trends - Lifecycle Trends

For the next part, I have put together an opinionated list of a few projects where I think interesting developments are happening in these areas. I want to start with lifecycle. With Kubernetes, we can do good lifecycle management of your application, but that might not be enough for more complex lifecycle needs. For example, you may have scenarios where the deployment primitive in Kubernetes is not enough for a more complex stateful application. In these scenarios, you can use the operator pattern. You can use an operator that does deployment and upgrade, and maybe also backs up the storage of your service to S3. You may also find that the actual health-checking mechanism in Kubernetes is not good enough. If liveness and readiness checks are not sufficient, you can use an operator to do more intelligent liveness and readiness checking of your application, and to do recovery based on that.

A third area would be auto-scaling and tuning. You can have an operator that better understands your application and does auto-tuning on the platform. Today, there are primarily two frameworks for writing operators: Kubebuilder, from a Kubernetes special interest group, and the Operator SDK, which is part of the Operator Framework. The Operator Framework, created by Red Hat, has a few parts: the Operator SDK, which lets you write operators; the Operator Lifecycle Manager, which is about managing the lifecycle of the operator itself; and OperatorHub, where you can publish your operator. If you go there today, you will see there are over 100 operators that manage databases, message queues, and monitoring tools. In the lifecycle space, operators are probably the area where the most active development is happening right now in the Kubernetes ecosystem.

Networking Trends - Envoy

The next project I picked is Envoy, on the networking side. We've seen this morning what's happening with service mesh and Istio. There is the introduction of the Service Mesh Interface specification, which will make it easier for you to switch between different service mesh implementations. There has also been some consolidation of the Istio deployment architecture: you no longer have to deploy seven pods for the control plane; now you can deploy just one. More interesting is what's happening at the data plane, in the Envoy project. We see more and more Layer 7 protocols being added to Envoy: MongoDB, ZooKeeper, MySQL, Redis, and the most recent one, Kafka. I see that the Kafka community is now further improving their protocol to make it friendlier for service meshes. We can expect even tighter integration and more capabilities. Most likely there will be some bridging capability: in your service, you can make a local HTTP call from your application, and the proxy will, behind the scenes, use Kafka. You can do transformation and encryption outside of your application, in a sidecar, for the Kafka protocol.

Another interesting development has been the introduction of HTTP caching. Envoy can now do HTTP caching, so you don't have to use caching clients within your applications; all of that is done transparently in the sidecar. There are tap filters, so you can tap the traffic and get a copy of it. Most recent is the introduction of WebAssembly: if you want to write a custom filter for Envoy, you don't have to write it in C++ and compile the whole Envoy runtime; you can write your filter in WebAssembly and deploy it at runtime. Most of these are still in progress; they are not all there yet. This gives me an indication that the data plane and service mesh have no intention of stopping at supporting just HTTP and gRPC. They are interested in supporting more application-layer protocols, to offer you more and to enable more use cases. Especially with the introduction of WebAssembly, you can now write custom logic in the sidecar, and that's fine as long as you're not putting business logic there.

Binding Trends - Apache Camel

The next project I want to talk about is Apache Camel. That's a project I love. It is a project for doing integrations, with hundreds of connectors to different systems, and it uses enterprise integration patterns. The way it relates to this talk is that the latest version, Camel 3, is getting deeply integrated into Kubernetes, using the same primitives we've spoken about so far, such as operators. In Camel, you can write your integration logic in languages such as Java or JavaScript; here, the example uses YAML. In the latest version, they have introduced a Camel operator: something that runs in Kubernetes and understands your integration. When you write your Camel application and deploy it as a custom resource, the operator knows how to build the container and how to find dependencies. Depending on the capabilities of the platform, whether that's Kubernetes only or Kubernetes with Knative, it can decide which services to use and how to materialize your integration. Quite a lot of intelligence moves out of your runtime and into the operator, and all of that happens pretty fast. Why would I say it's a binding trend? Mainly because of the capabilities of Apache Camel, with all the connectors it provides. The interesting point here is how deeply it integrates with Kubernetes.
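A sketch of what such an Integration custom resource might look like, using the Camel K YAML DSL (the endpoints here are hypothetical, and details vary by Camel K version); the operator takes this and builds and deploys the container for you:

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: timer-to-log
spec:
  flows:
    - from:
        uri: timer:tick?period=5000    # fire an event every five seconds
        steps:
          - setBody:
              constant: Hello from Camel K
          - to: log:info               # hypothetical target endpoint
```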

State Trends - Cloudstate

The next project I picked is Cloudstate, around state-related trends. Cloudstate is a project by Lightbend, primarily focused on serverless and function-driven development. With their latest releases, they are integrating deeply with Kubernetes using sidecars and operators. I think they also have integrations with Dapr, Knative, and the like. The idea is, when you write your function, all you have to do in your function is use gRPC to get state and to interact with state. The whole state management happens in a sidecar that is clustered with other sidecars. It enables you to do event sourcing, CQRS, key-value lookups, and messaging. From your application's point of view, you are not aware of all these complexities. All you do is call a local sidecar, and the sidecar handles the complexity. It can use different data sources behind the scenes, and it has all the stateful abstractions you would need as a developer. It is definitely a project to follow, to see what happens.

We have seen what the current state of the art is in the cloud-native ecosystem, and some of the recent developments that are still in progress. How do we make sense of all that?

Multi-runtime Microservices Are Here

If you look at what a microservice looks like on Kubernetes, you will need to use some functionality from the platform. You will need to use Kubernetes features primarily for lifecycle management. Then, most likely transparently, your service will use some service mesh, something like Envoy, to get enhanced networking capabilities, whether that's traffic routing, resilience, enhanced security, or even just monitoring. On top of that, depending on your use case, you may need Dapr or Knative, depending on your workloads. All of these represent your out-of-process, additional capabilities. What's left to you is to write your business logic, not on top, but as a separate runtime. Most likely, future microservices will be this multi-runtime architecture composed of multiple containers. Some of those are transparent, and some are very explicit ones that you use.

Smart Sidecars and Dumb Pipes

If I look a little deeper at how that might look: you write your business logic in some high-level language. It doesn't matter which one; it doesn't have to be Java only, and you can use any other language. You develop your custom logic in-house, and then all the interactions of your business logic with the external world happen through the sidecar. That sidecar integrates with the platform and does the lifecycle management. It does the networking abstractions for external systems, and it gives you advanced binding capabilities and state abstractions. The sidecar is something you don't develop; you get it off the shelf, configure it with a little bit of YAML or JSON, and use it. That means you can update sidecars easily, because they are no longer embedded in your runtime. It makes patching and updating easier, and it enables a polyglot runtime for our business logic.

What Comes After Microservices?

That brings me to the original question: what comes after microservices? If we look at how application architectures have been evolving, at a very high level (this is a simplification, but hopefully you get the idea): we started with monolithic applications. Microservices gave us the guiding principles on how to split a monolithic application into separate business domains. After that came serverless and function as a service, where we said we can split those further by operation. This gives us the ability for extreme scaling, because we can scale each operation individually. I would argue that maybe FaaS is not the best model. Functions are not the best model for implementing reasonably complex services, where you want multiple operations to reside together when they have to interact with the same dataset. Probably multi-runtime (I call it Mecha architecture), where you have your business logic in one container and all the infrastructure-related concerns as a separate container, jointly representing a multi-runtime microservice, is a more suitable model, because it has better properties. You get all the benefits of microservices. You still have all your domain, all the bounded contexts, in one place. You have all the infrastructure and distributed application needs in a separate container, and you combine the two at runtime. Probably the closest thing to that right now is Dapr; they are following that model. If you're interested only in the networking aspects, using Envoy probably also gets close to this model.

 


 

Recorded at:

May 25, 2020
