Liz Rice on Programming the Linux Kernel with eBPF, Cilium and Service Meshes


Charles Humble and Liz Rice discuss eBPF, a way of making the Linux kernel programmable. They talk about why it exists, how it works under the hood, and what you can and can’t do with it. They also talk about Cilium, an open source, eBPF-based project that provides, secures and observes network connectivity between container workloads, and the new Cilium-based service mesh currently in beta.

Key Takeaways

  • eBPF is a technology that can run sandboxed programs in the Linux kernel, allowing application developers to add additional capabilities to the operating system at runtime.  It is somewhat analogous to JavaScript in the browser or Lua in a gaming engine.
  • eBPF programs can be written directly as bytecode, but are more commonly written in a higher-level language such as C or Rust and compiled to bytecode.  When a program is loaded, the kernel verifies it and can then JIT-compile it to native instructions.  Programs can share state and other data with user space applications via maps.
  • Whilst it has its origins in networking, eBPF has a wide range of other use cases including security and observability.  Many of these use cases are supported via projects such as Cilium and Hubble, allowing Linux sysadmins, as well as kernel developers, to take advantage of the technology.
  • The Cilium team is currently experimenting with a Cilium-based service mesh that is more efficient than the sidecar proxy model currently used in other service meshes.

Transcript

Introductions [00:36]

Charles Humble: Hello and welcome to The InfoQ Podcast. I'm Charles Humble, one of the co-hosts of the show, and editor-in-chief at the cloud native consultancy firm Container Solutions. My guest this week is Liz Rice. Liz is Chief Open Source Officer with cloud native networking and security specialists Isovalent, creators of the eBPF-based networking project Cilium. She is also the chair of the CNCF's Technical Oversight Committee, and is the author of Container Security, a book published by O'Reilly. For today's podcast, the focus is on eBPF: we'll explore what it is, how it works under the hood, and what you can and can't do with it. We'll also talk a little bit about the Cilium project. Liz, welcome to The InfoQ Podcast.

Liz Rice: Hi, thanks for having me.

What does the Linux kernel actually do? [01:22]

Charles Humble: I thought a good place to start would be with a couple of definitions, because I think some of our listeners might not be that familiar with some of what we're going to talk about, and it's maybe an obvious thing to start with, but could you just briefly describe what it is that the Linux kernel actually does?

Liz Rice: Yeah, I think it's really important to level set that, because eBPF allows us to run custom programs in the kernel, but if you're not completely familiar with what the kernel is, that doesn't make a lot of sense. And I think the kernel is one of those things that a lot of developers, a lot of engineers take for granted, and they know, maybe, that there's a thing called user space and a thing called the kernel, but maybe after that, it starts getting a little bit wishy-washy. I certainly remember not having a clear understanding of that in the past.

So what does the kernel do for us? The kernel is the part of the operating system that lets applications do things with hardware. So every time we want to write something to the screen, or get something from the network, or maybe read something from a file, even accessing memory, it all involves hardware. And from user space, where our applications run, you can't directly access that hardware.

We use the kernel to do it on our behalf and the kernel is privileged and able to get at the hardware for us. So our applications, every time they want to do one of these things like read from a file, they have to make what's called a system call, and that's the interface where our application is saying, please read a number of bytes from a file, for example. And most of the time, we're not really aware of that because the programming languages that we use day to day give us higher level abstractions, so we don't really need to get involved with the syscalls, typically, for most developers. But it's good to know, good to have an understanding of that.
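
To make the system call interface concrete, here is a minimal sketch of a C program that talks to the kernel directly through the open, read and write syscalls rather than through a language runtime's higher-level I/O; the file path is just an arbitrary example.

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* open() is a system call: ask the kernel for a handle to the file */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0)
        return 1;

    /* read() is a system call: the kernel fetches the bytes for us */
    ssize_t n = read(fd, buf, sizeof(buf));

    /* write() is a system call: the kernel sends the bytes to the terminal */
    if (n > 0)
        write(STDOUT_FILENO, buf, n);

    close(fd);
    return 0;
}
```

Higher-level APIs such as fopen in C, or file objects in Go or Python, ultimately boil down to calls like these.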

Charles Humble: And then in a cloud native context, it's perhaps worth saying as well that there is only the one kernel, so regardless of whether you're running with virtual machines or with bare metal, however many applications you are running, they are all sharing that same privileged kernel.

Liz Rice: And that kernel is doing all of the communication between the underlying hardware and the applications running on it.

eBPF stands for extended Berkeley Packet Filter. What's a packet filter? [03:45]

Charles Humble: While we're doing this level setting, we should also mention something else. So eBPF stands for extended Berkeley Packet Filter. What's a packet filter?

Liz Rice: Yeah, it's a great question. I quite often say that eBPF's initials don't necessarily help us understand very much about what eBPF is today, but they really reflect the history.

So, packet filtering is this idea that if you have network traffic flowing in or out of your computer, you might want to look at individual packets, maybe do something interesting with individual packets. A lot of energy in networking goes into debugging networking issues, and in order to do that, you might need to filter out and look at just the traffic going to a particular destination, for example. This idea of filtering is that you've got this enormous stream of packets, but you can just look at the ones you are interested in. So initially, the idea of Berkeley Packet Filtering was to be able to specify what kinds of packets you're interested in, perhaps by looking at the address, and you'd get a copy of those packets that you could then examine and use for debugging purposes without having to wade through this enormous stream of everything that's happening on the machine.

Charles Humble: And then, in order to do the actual packet filtering, what is it that we have?

Liz Rice: We have a little bit of code that says, "Does this packet match my filter?" And kernel developers started to think, well, this is like a little, almost like a little virtual machine here. What if we could do more powerful things with these programs that we are running to look at network packets? What if we could run them in other contexts, not just looking at a network packet, but maybe for some other operations that the kernel's doing?
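
As a concrete illustration of that original idea, here is a sketch of a classic BPF filter, written out by hand, that only passes IPv4 TCP packets up to the user-space program that attached it to a raw socket. In practice tools such as tcpdump generate instruction sequences like this from filter expressions; nobody is expected to write them manually, and the helper function here is invented for the example.

```c
#include <linux/filter.h>
#include <linux/if_ether.h>
#include <sys/socket.h>

/* "ldh [12]; jeq ETH_P_IP; ldb [23]; jeq 6 (TCP); ret 0xFFFF / ret 0" */
static struct sock_filter code[] = {
    BPF_STMT(BPF_LD  | BPF_H   | BPF_ABS, 12),            /* EtherType       */
    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 3),   /* IPv4? else drop */
    BPF_STMT(BPF_LD  | BPF_B   | BPF_ABS, 23),             /* IP protocol     */
    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 6, 0, 1),          /* TCP? else drop  */
    BPF_STMT(BPF_RET | BPF_K, 0xFFFF),                     /* match: copy it  */
    BPF_STMT(BPF_RET | BPF_K, 0),                          /* no match: skip  */
};

int attach_filter(int raw_sock)
{
    struct sock_fprog prog = {
        .len    = sizeof(code) / sizeof(code[0]),
        .filter = code,
    };
    /* the kernel now only delivers matching packets to this socket */
    return setsockopt(raw_sock, SOL_SOCKET, SO_ATTACH_FILTER,
                      &prog, sizeof(prog));
}
```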

What is the extended bit? [05:38]

Charles Humble: Right, and that's how you get to the extended bit. It's again, perhaps worth saying the original Berkeley Packet Filter itself is quite old, in tech terms. The first release was 1992. It's the original packet filter from BSD. So what is the extended bit? What does the "e" bit add?

Liz Rice: At one level, extended just means we're going to use this concept of running custom programs somewhere in the kernel. Also, in that bucket of things that differentiates what we might call eBPF from BPF, there was a change to the kind of instruction set that was used for writing those eBPF programs, and there was also the introduction of a thing called maps.

Charles Humble: What are they?

Liz Rice: Maps are data structures that you can share between different BPF programs, or between a BPF program and user space, so it's how we can communicate information between an application and the eBPF programs it is interested in. So for example, if you're getting observability data from an eBPF program that's running in the kernel, it's going to throw the data, maybe all the events that you want to observe, into a map, and the user space application can read the data out of the map at a later point. So yeah, maps, the instruction set, and the variety of different places that we can hook programs in: I think that really characterizes what makes the "e" of eBPF.

Maps give you a way of handling state. Are there different types of maps? [07:10]

Charles Humble: Right, yes. And then the maps, of course, give you a way of handling state, which is interesting. Are there different types of maps?

Liz Rice: There's a variety of different types of map, but they're essentially all key value stores, and you can write into them from user space and from kernel space, and you can read from them in user space and kernel space. So you can use them to transfer information between the two, and you can also share them between different BPF programs. So you might have one BPF program attached to one event and another BPF program attached to a different event, perhaps at two different points in the networking stack, and you can share information between those two programs using a map. So you might, for example, correlate knowledge of the networking endpoints between a program that attaches at the socket layer, which is as close as it can be to the application, and another program acting at the XDP layer, which is as close as it can be to the network interface.
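
For readers who want to see what a map looks like in code, here is a hedged sketch (the map, file and program names are all invented): the eBPF program records the last time each process ID called write(), and a user-space loader reads the same hash map through libbpf.

```c
/* --- kernel side (lastwrite.bpf.c), compiled with clang -target bpf --- */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, __u32);    /* process ID */
    __type(value, __u64);  /* timestamp in nanoseconds */
} last_write SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_write")
int record_write(void *ctx)
{
    __u32 pid = bpf_get_current_pid_tgid() >> 32;
    __u64 now = bpf_ktime_get_ns();

    /* the state lives in the map, where user space can also see it */
    bpf_map_update_elem(&last_write, &pid, &now, BPF_ANY);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";

/* --- user-space fragment, reading the same map via libbpf --- */
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <stdio.h>

static void print_last_write(struct bpf_object *obj, unsigned int pid)
{
    unsigned long long ts;
    int fd = bpf_map__fd(bpf_object__find_map_by_name(obj, "last_write"));

    if (bpf_map_lookup_elem(fd, &pid, &ts) == 0)
        printf("pid %u last called write() at %llu ns\n", pid, ts);
}
```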

Is eBPF analogous to JavaScript in the browser or Lua in a gaming engine? [08:10]

Charles Humble: When we think about a programming language inside another environment, I might think about, say Lua in a gaming engine, or maybe JavaScript in a web browser. Are those useful points of comparison, do you think?

Liz Rice: Actually, they really are. There's a chap called Brendan Gregg who did a lot of the pioneering work for using eBPF for observability, and I believe it's a quote from him that says eBPF is to the kernel as JavaScript is to HTML. It makes it programmable and it allows you to make dynamic changes: in the JavaScript world, you would have a static webpage, and suddenly it becomes dynamic. We have this programmable ability, and that's a nice analogy for eBPF in the kernel.

What's the underlying motivation for this? [08:59]

Charles Humble: What's the underlying motivation for this? Linux is open source, so couldn't you just get the changes you need into the kernel if you need them?

Liz Rice: I think there's a couple of angles to this. One is, Linux is enormous and very complicated. I think it's 30 million-odd lines of code in the Linux kernel, so if you want to make a change to it, it's not going to be a trivial undertaking, and you are going to have to deal with convincing the whole community that a change that you want to make is appropriate and that it's going to be useful for the Linux community as a whole. And maybe you want to do something really bespoke, and that wouldn't necessarily be useful for everybody. So it might not be appropriate to accept a change that you might want to make for a specific purpose into the general purpose Linux kernel.

Liz Rice: Even if you convince everyone that your change is a really good idea, you would make the change and it might get accepted. Maybe you are a super fast programmer and it takes virtually no time to get that code written and get it accepted into the kernel, but then there is this huge delay between code being added to the mainstream Linux kernel repos, and actually being run in production.

It literally takes years. You don't just download the kernel onto your machine. You'll typically take a Linux distribution like RHEL, or Ubuntu, or Arch, or Alpine, or whatever. And those distributions are packaging up stable releases of the kernel, but quite often they're years old. They take time to convince themselves of the stability of those versions of the kernel. The latest RHEL release... I'm trying to remember, I think it's using a kernel that's from 2017 or 2018, I think 2018. We can check on that. So it's three to four years between code making it into a kernel release and actually being included in the distribution that you might run in an enterprise environment. So you probably don't want to wait three or four years for your change to the kernel.

Yeah, if we can just load it dynamically, we don't even have to reboot the machine. You can literally just load a program into the kernel dynamically, and that can be amazing for running custom code, for creating custom behavior, and also for security mitigations. There's a really great example of this in the idea of a packet of death.

What's a packet of death? [11:33]

Charles Humble: What's a packet of death?

Liz Rice: If you have a kernel vulnerability that, for whatever reason, means the kernel isn't able to handle a particularly crafted network packet, perhaps there's a length field that is incorrectly set up and the kernel, because of the vulnerability, doesn't handle that correctly and reads off the end of a buffer, or something like that.

And with eBPF, there has been at least one case of this, where to mitigate a packet of death vulnerability, you can just write an eBPF program, distribute it immediately, load it into production environments, and you are immediately no longer vulnerable to the packet of death that would otherwise have crashed your production machines. Literally in a matter of minutes, rather than waiting for a security patch, you can mitigate this kind of security issue. I think that's a really powerful example of how eBPF can be really useful from a security patching perspective.
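
As a sketch of what such a mitigation might look like, an XDP program can inspect and drop a malformed packet before the vulnerable kernel code ever parses it. The faulty-length check here is invented for illustration rather than taken from a real CVE.

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int drop_packet_of_death(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)   /* bounds checks keep the verifier happy */
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    /* hypothetical rule: drop IP packets claiming a total length
     * shorter than their own header, before anything else touches them */
    if (bpf_ntohs(ip->tot_len) < ip->ihl * 4)
        return XDP_DROP;

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```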

How does the verifier work in eBPF? [12:32]

Charles Humble: It's an interesting thing that you bring up a security angle, because obviously, if I'm running custom code in kernel space, there's not a lot I can't do. There are an awful lot of things I could do as a malicious actor in a system once I've got code loaded there. So how does the verifier work in eBPF? How safe is it? How secure is it?

Liz Rice: When you load a BPF program into the kernel, it goes through a step called verification. And this is checking that the program is safe to run. It's got to be safe from the perspective of not crashing, so it's going to analyze the program and make sure that it's going to run to completion, that it's not doing any null pointer de-references. When you are writing programs, you have to explicitly check every single pointer to make sure that it's not null before you de-reference it, otherwise the verifier will reject your program.

And you are also limited in what you're allowed to do in terms of accessing memory, and in order to get information about the kernel, there are a set of what are called BPF helper functions. So for example, if you want to find out the current time or the current process ID, you'd use a helper function to do that, and depending on the context in which you are running a BPF program, you are allowed to use a different set of helper functions. You wouldn't be allowed to access a helper function related to a network packet if you weren't in the context of processing a network packet, for example. And these helper functions make sure that you are only accessing memory that the particular process related to this function is allowed to access. So that helps, from a security point of view, ensure that one application can't use BPF to read data from another application's process, for example.

So that BPF verification process is very strict, and it can be quite a challenge to get your BPF programs to pass the verification step, but it's one of the, I guess, arts of programming for eBPF, and it's an extremely powerful way of sandboxing what your different BPF programs can do. That said, BPF programs are very powerful, and a correct BPF program could still be written by a malicious user, or loaded by a malicious user, to do something that might be totally legitimate in one scenario and completely malicious in another. So for example, if I let you load a packet filter into my running system, you can start looking at all my network traffic. That's not necessarily what I want you to be able to do. You could be sending it off to a different network destination. So you should no more allow someone to run BPF code than you would allow them to have root access to your machine; it's something that comes with great privilege and great responsibility.
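
Here is a small, hypothetical example of the kind of check the verifier insists on: the pointer returned by the bpf_map_lookup_elem() helper may be null, and dereferencing it without a test means the program is rejected at load time.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} exec_count SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_execve")
int count_execs(void *ctx)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&exec_count, &key);

    /* without this null check the verifier refuses to load the program,
     * complaining about a possible invalid memory access */
    if (!val)
        return 0;

    __sync_fetch_and_add(val, 1);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```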

There are quite a lot of other non-network use cases that you can use eBPF for, right? [15:35]

Charles Humble: We've hinted at this already, but we should probably make it explicit, just because I think the name is slightly confusing. So this isn't purely about networking, there are quite a lot of other non-network use cases that you can use eBPF for, right? Things like tracing, profiling, security and so on.

Liz Rice: Absolutely, it's a really great point, and there's been a lot of really useful observability work. I mentioned Brendan Gregg and the work that he did, and does, at Netflix. He and others have built this huge array of tools that allow you to inspect and measure what's going on; you name it, across the kernel there will be a tool to measure it, whether that's looking at what files are being opened or looking at the speed of IO. There are dozens of these little command line tools that you can run to get data about how your system is performing. They're really, really powerful and show the breadth of what you can do from an observability perspective.
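
In that spirit, here is a rough sketch of the kernel-side code behind an opensnoop-style tool: it hooks the openat() syscall tracepoint and prints which file each process opens. It assumes a generated vmlinux.h for the tracepoint context type, and real tools ship events to user space through maps or ring buffers rather than bpf_printk.

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_openat")
int trace_openat(struct trace_event_raw_sys_enter *ctx)
{
    char filename[128];

    /* args[1] is the pathname argument of openat(dfd, pathname, flags, mode) */
    bpf_probe_read_user_str(filename, sizeof(filename), (void *)ctx->args[1]);
    bpf_printk("pid %d opened %s",
               (int)(bpf_get_current_pid_tgid() >> 32), filename);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```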

And then if we start thinking about observing what's happening in a running system, we can look at what an application's doing. I mentioned the idea of observing what files are being opened, and you could use that from a security perspective. You can say, well, is this file that this application's looking at, is that legitimate? For example. Another really great example of security use of BPF is seccomp.

Charles Humble: Okay, so the word seccomp is a contraction of secure computing, or secure computing mode, and that's the Linux kernel feature that allows you to restrict the actions available within a running container via profiles. How does that work in this context?

Liz Rice: You associate a seccomp profile with an application to say this application is allowed to run this set of system calls. We mentioned syscalls before, this interface between user space and the kernel. And occasionally there are system calls that it doesn't make sense for many applications to have access to.

Charles Humble: Can you give an example?

Liz Rice: Very few applications that you're running day to day should be able to change the system time on your platform. You want that system time to be fixed and known. So a lot of seccomp profiles would disallow the setting of time on the machine. Seccomp is a really common use of BPF that a lot of people don't realize is actually using BPF under the hood.
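
To show that this really is BPF under the hood, here is a hand-written sketch of a classic seccomp-BPF filter that denies settimeofday() with EPERM and allows everything else. Container runtimes normally generate equivalent filters from a JSON profile, and a production filter would also validate the architecture field in seccomp_data before trusting the syscall number.

```c
#include <errno.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>

static int install_filter(void)
{
    struct sock_filter filter[] = {
        /* load the syscall number from the seccomp data */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* settimeofday()? fall through to deny, otherwise skip one */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_settimeofday, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
        /* everything else is allowed */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len    = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* required so an unprivileged process can install the filter */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
        return -1;
    return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}
```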

But you can take that a lot further, and we can write eBPF code to look not just at system calls, but at a much broader range of events. There's some really great work being done around the Linux security module interface. It's called BPF LSM, and this is the interface within the kernel that tools like AppArmor use to police whether or not operations are permissible, from a security perspective. You have a security profile, and with BPF LSM, we can make those profiles much more dynamic. We can be a lot more driven by the context of the application. And so, I think there are going to be some really powerful tools built on BPF LSM, and those are not just about observing whether or not behavior is good or bad, but they can actually prevent things: for example, I'm not going to let this application access that file, above and beyond what the file permissions would permit.
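
As a hedged sketch of what a BPF LSM program looks like, the following attaches to the same file_open hook that LSMs such as AppArmor use, and denies the operation by returning a negative value. The policy itself, blocking file opens for one arbitrary UID, is invented purely for illustration, and it assumes a kernel built with BPF LSM support plus a generated vmlinux.h.

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("lsm/file_open")
int BPF_PROG(restrict_file_open, struct file *file)
{
    __u32 uid = bpf_get_current_uid_gid() & 0xFFFFFFFF;

    /* hypothetical policy: a "quarantined" UID may not open any files;
     * a real tool would drive this decision from maps kept up to date by
     * its user-space agent with context such as pod identity */
    if (uid == 4242)
        return -1;   /* -EPERM: deny, above and beyond file permissions */

    return 0;        /* allow */
}

char LICENSE[] SEC("license") = "GPL";
```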

eBPF has two distinct classes of user - end users and kernel developers [19:04]

Charles Humble: There's an interesting thing here, which I think is worth just pulling out and making explicit. When I think about eBPF, I think about people writing custom code to run in the kernel, in production. But actually, probably a more common use case, like the seccomp example we were just talking about, is something more like an end user, someone who's deploying a pre-written eBPF program in order to modify the behavior of a Linux kernel in some way.

Liz Rice: Yeah, I think for most users, they will find it quite challenging to write BPF programs. Now, personally, I'm someone who loves to get involved, write some code to really understand how things work, and I know enough to know that I can write some basic BPF code, but I can rapidly get to the point where you are dealing with Linux kernel structures and events that happen in the context of the kernel, so you quite quickly need some knowledge about how the kernel's operating, and that's pretty in depth knowledge which most of us don't have.

Although I'm really quite excited about getting in there and looking at eBPF code to understand it, the reality of it is that, for most of us, that's an intellectual exercise rather than something we'd really want to build. I think the use of eBPF as a platform, it's going to be based on people using tools, using projects, using products that are already written, and perhaps they can define particular profiles that are using eBPF to implement that tool.

Cilium would be a really great example of that. So, Cilium is a networking project; it's known as a Kubernetes CNI, although it can be used for networking in non-Kubernetes environments as well. As an example, Cilium can enforce networking security profiles, and you would write your profile in terms of what IP addresses or even domain names a particular application is allowed to access. Cilium can convert that into eBPF programs that enforce that profile, so you don't have to know about the eBPF code in order to use that security profile.

Do you write eBPF programs as bytecode, or do you write it in some high level language and then compile it across? [21:22]

Charles Humble: I want to come back to Cilium in a second, actually, but before we go there, if you want to write programs for eBPF, obviously the underlying format is bytecode. It looks a bit like x86 assembly maybe, or perhaps Java bytecode, if people are familiar with that, but how do you actually write it? Do you write it as bytecode, or do you write it in some high level language and then compile it across?

Liz Rice: There are people who do write bytecode directly; I'm not one of them. So you have to have compiler support to compile to the BPF bytecode, and today, that compiler support is available in Clang if you want to write code in C, and, more recently, you can also compile Rust code to BPF targets. So those are your limited choices for writing the code that's going to run actually within the kernel itself.

There are certainly circumstances where you really just want to write the BPF code, and then there are things like bpftool, which is a general purpose tool. You can do quite a lot of things with bpftool, but one of the things you can do is load programs into the kernel. So you wouldn't necessarily have to write your own user space code, but most of the time we do want to write something in user space as well, that's going to perhaps configure the eBPF code, perhaps get information out of that BPF program.

So we're often going to be writing not just the kernel code, but also some user space code, and there's a much broader range of language support there: there are libraries for Go, there are libraries for Python and Rust, and there's a framework called BCC, which supports C and Python and Lua, to my recollection. So yeah, you have a lot more choice for your user space language than you do for the code that eventually becomes the bytecode.
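
As a sketch of the user-space half in C with libbpf (the object file and program names are placeholders carried over from the earlier map example), this is roughly what opening, loading and attaching a compiled eBPF object looks like; the Go, Rust and Python libraries expose equivalent steps.

```c
#include <bpf/libbpf.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct bpf_object *obj;
    struct bpf_program *prog;
    struct bpf_link *link;

    /* open the ELF object that clang produced from the kernel-side C code */
    obj = bpf_object__open_file("lastwrite.bpf.o", NULL);
    if (!obj)
        return 1;

    /* loading is the point at which the kernel's verifier runs */
    if (bpf_object__load(obj)) {
        fprintf(stderr, "verifier rejected the program\n");
        return 1;
    }

    /* attach the program to the hook point named in its SEC() annotation */
    prog = bpf_object__find_program_by_name(obj, "record_write");
    if (!prog)
        return 1;
    link = bpf_program__attach(prog);
    if (!link)
        return 1;

    /* from here user space would typically read maps or poll for events */
    pause();
    return 0;
}
```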

How does my eBPF program actually hook into the kernel? [23:10]

Charles Humble: And then, how does my eBPF program actually hook into the kernel? Are there predefined hook points that I have, or how does that work?

Liz Rice: You've actually got a huge range of places that you can hook your BPF programs to. There are what are called kprobes and kretprobes, which are the entry point and exit point of any kernel function, so if you know the name of the function, you can hook into it.

You can also hook to any trace points. Some of those trace points are well defined and not going to change from one kernel version to the next, other trace points might move around, so you maybe have to know what you're doing a little bit.

There are events like network events, so the arrival of a network packet, and that takes me to a thing called XDP, which I think is just brilliant. So XDP stands for Express Data Path, and the idea of this was, well, if we've got network packets arriving from an external network, they're coming through a network interface card, and then they get to the kernel, and we want to look at those packets as quickly as possible. Maybe we want to run a program that's going to drop packets, so the earlier we can drop them, the less work has to be done to handle that packet. So the idea of XDP was, well, wouldn't it be cool if we didn't even have to get that packet as far as the kernel? What if the network interface card could handle it for us? So XDP is a type of eBPF program that you can run on the network card. Not all cards support it, not all network drivers support it, but it's a really nice concept, I think, that you could offload a program to run on a different piece of hardware.
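
To complement the packet-handling examples above, here is a minimal kprobe/kretprobe sketch. The traced function, do_unlinkat, is just a common example from libbpf tutorials: one program fires when the kernel function is entered, so it sees the arguments, and the other fires when it returns, so it sees the result. It assumes a generated vmlinux.h.

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/do_unlinkat")
int BPF_KPROBE(unlink_entry, int dfd, struct filename *name)
{
    bpf_printk("do_unlinkat entered, dfd=%d", dfd);
    return 0;
}

SEC("kretprobe/do_unlinkat")
int BPF_KRETPROBE(unlink_exit, long ret)
{
    bpf_printk("do_unlinkat returned %ld", ret);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```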

What's the overhead like if I'm running an eBPF program in my kernel? [24:49]

Charles Humble: That's really cool, actually. I love things like that. What's the overhead like if I'm running an eBPF program in my kernel?

Liz Rice: This is one of those questions that comes up and I always feel like, well, how long is a piece of string? I could write a pathologically poor eBPF program and attach it to every single possible event, and you would definitely notice the difference, but typically, performance is excellent, and in a lot of cases, what your eBPF program is doing on an individual event basis is very small. Maybe we're looking at a network packet and dropping it, or saying, no, send this one over here. Maybe not doing a lot per event, but doing that millions of times. It's typically going to run very quickly because it's running in the kernel; there's no context switch between kernel and user space if we can handle that event entirely within the kernel, and dropping network packets is a really great example of something that, if you don't have to transition to user space, is going to be way more performant. So typically, eBPF tooling is dramatically more performant than the equivalent in user space, because we can avoid these transitions.

How do Cilium and eBPF relate to each other? [26:07]

Charles Humble: You mentioned Cilium earlier, so changing tack slightly, let's talk about Cilium a little bit. So how do Cilium and eBPF relate to each other?

Liz Rice: I think this is something that maybe isn't always obvious to people when they see Cilium, the project. The people who created the Cilium project were involved in the early days of eBPF as well, people like Thomas Graf and Daniel Borkmann, who were working on networking in the kernel, and still do, and specifically looking at eBPF in the kernel and realizing how powerful this could be for networking, for the ability to sidestep some of the things like iptables, which becomes less performant as your tables grow, particularly in an environment like Kubernetes, where your pods are coming up and down all the time, which means IP addresses are coming and going all the time, which means if you are using iptables, you have to rewrite those tables all the time, and they're not designed for making small changes related to one endpoint.

So, Thomas and Daniel and others had realized that this was a really great opportunity for eBPF to rationalize the way that networking could work. So Cilium has its roots in networking. It goes hand in hand with eBPF development because some of the Cilium maintainers are also kernel maintainers, making changes in the kernel as well. So we can see the development of Cilium and eBPF stepping up together over time.

Liz Rice: But today, Cilium is primarily known, I think, for networking, but it also provides a bunch of observability and security features that are sometimes less well known. We have a component called Hubble that gives you really great network flow visibility, so you can see where traffic is flowing within your network, and in terms of Kubernetes identities, because the Kubernetes identity information, the pods and the services, is known by Cilium, we can very easily show you not just that this packet went from IP address A to IP address B, but also what Kubernetes entities were involved in that network flow.

And also security: I mentioned network policy earlier. There's also some really interesting work that we've been experimenting with. Let's say an application makes a network connection. We know what Kubernetes entity was involved, but we also know what process was involved, and we can use that information to find out, well, what was the executable that was running at the time, and was that an expected executable? Did we expect that executable to be opening that network connection? Does this look like a cryptocurrency miner, for example? Do we expect a pod to run for days and then suddenly start creating network connections, or is that perhaps a sign that the pod has been compromised in some way? So combining the network information with some knowledge about what application is running can provide some really powerful higher level runtime security tooling as well.

Why is Cilium a good solution to the problem that service mesh is trying to solve? [29:15]

Charles Humble: Now Google announced that they were using Cilium for a new data plane for GKE, and I know that you've also now introduced a beta for an eBPF-based service mesh as part of Cilium 1.11, so can you talk a little bit about that? I'm presuming there are some efficiency gains there, as against the conventional sidecar proxy model that we typically use. Why is Cilium a good solution to the problem that service mesh is trying to solve?

Liz Rice: Yeah, so different people's interpretation of what service mesh is varies from one person to the next, but if we look at some of the individual capabilities of service meshes, well, one thing it's doing is load balancing traffic. Here are three different versions of the same application, and we're going to canary test between these three different versions, for example. That's load balancing, and it's a network function that we already had in Cilium. Then there's getting observability into traffic. Observability is a big part, I think, of what people expect from a service mesh. For some time, Cilium has worked with the Envoy Proxy. We have observability at layers three and four within Cilium itself, and then we can also use Envoy to get observability at layer seven, so that kind of observability angle was already almost there, and things like identity awareness, Cilium is already identity aware.

With TLS termination and Ingress capabilities, we were already at, well, we've got kind of 90% of what people expect a service mesh to be, so how do we take that last step? So what we're doing in this beta is really saying, well, here is Cilium as the data plane for your service mesh; how do users want to configure that? What's the control plane, what's the management interface that people want to use to configure that?

And I think one of the reasons why it's really, really compelling is, we talked before about how eBPF allows you to avoid these transitions between kernel and user space, and if you look at the path that a network packet takes when you're using a sidecar model service mesh, which is the model that service meshes have all used thus far, every single pod's got its own sidecar, so if we imagine traffic flowing between two different pods, a network packet has to go through the networking stack in the kernel, up to the proxy that's in user space in the sidecar, and back down into the kernel to then be routed to the application. And if it's coming from the application to another application pod, it's going to do that transition into the sidecar as it leaves one pod, and then again as it enters another pod. So we've transitioned that packet in and out of the kernel endless times.

And because Cilium is inherently involved in the networking at either end of the pod, we don't have to keep passing it backwards and forwards through these user space proxies. We can have a single instance of the proxy running on the node and take that network packet straight from one pod, through the kernel, to that proxy; the proxy can decide what to do with it, and then it transitions into the kernel just that one more time. So the early indications, performance wise, are really, really good. I think that was what we expected to see because of the, well, far fewer transition points, and it's good to see that's actually turning out to be true.

Charles Humble: It's working out in practice. Yeah, that's excellent.

Liz Rice: Yes.

What else is new and exciting for you in Cilium 1.11? [33:02]

Charles Humble: Cilium 1.11 came out in December of last year, what else is new and exciting for you in that release?

Liz Rice: I think I mentioned that Cilium is not exclusively used in Kubernetes environments, and a lot of the additional features that we've been working on over the last couple of releases have been enabling, particularly, large scale networks to use Cilium, either in a combined Kubernetes and BGP environment, or perhaps in a standalone networking environment.

So some of the interesting things involve, what if you've got two different Kubernetes clusters running in different data centers and you own the BGP connection between the two? We can enable Cilium to understand the endpoints and the IP address management at either end of that and advertise it across your BGP network. So these are more of the on-prem, high scale capabilities that some of our users have really been asking for.

Where is a good place for listeners to learn more? [33:59]

Charles Humble: That's fantastic. If listeners want to go and learn more about either eBPF or Cilium, where is a good place for them to go and maybe get started?

Liz Rice: If they want to look at web pages, cilium.io and ebpf.io are a great place to start. If they want to find knowledgeable people and communicate with them, there is a really great eBPF and Cilium Slack community. I mentioned before how Cilium and eBPF had grown together in lockstep, and that's really why the Slack community covers both eBPF and Cilium; we have the history of both the eBPF and the Cilium implementation communities there.

It's a really great community and there are a lot of helpful people, if you want to come and ask questions and learn about eBPF. Maybe I'll also mention that myself and my colleague, Duffie Cooley, host a weekly show we call eBPF and Cilium Office Hours, which loosely stands for ECHO, so on Fridays you can come and join us on YouTube. We explore lots of topics related to eBPF tooling and Cilium and that whole world, and we very much welcome people coming and getting involved, chatting with us, asking us questions, particularly on the livestream.

Charles Humble: Fantastic, and I'll make sure that all of those links are included in the show notes for this episode when it appears on infoq.com. Liz, thank you so much for joining me this week on The InfoQ Podcast.

Liz Rice: My absolute pleasure, thanks for having me.
