Phil Estes on Containerd, Including K8s Deprecation of Dockershim, Container Runtime Architecture

The container runtime is software that executes containers and manages container images. Today, when many people think about a container runtime, they're likely thinking of Docker. However, Docker is more a set of tools for building, packaging, sharing, and running a container via the Docker daemon, which then makes API calls to another tool like containerd. Containerd, in turn, makes calls to an implementation like runc that lays down the file system for the container and is the executor for the process. Today, on The InfoQ Podcast, Wes Reisz talks with Phil Estes, one of the containerd maintainers, about container runtimes. The two discuss, in detail, the significance of the announcement that dockershim will soon be deprecated in Kubernetes, the complete container runtime stack, and the work the Open Container Initiative (OCI) is doing today on a third container spec covering registries.

Key Takeaways

  • Docker offers a set of end-user focused capabilities (it helps with volumes, networking, and in some cases orchestration). Docker talks to containerd to manage the complete container lifecycle, from image transfer/storage to container execution and supervision; see the sketch after this list. Runc is the Linux/Unix executor of that contained process.
  • Dockershim is a legacy piece of code that has since been replaced by the Container Runtime Interface (CRI). The drive to deprecate dockershim was really an effort to remove the duplicated effort of maintaining both dockershim and the CRI.
  • CRI-O vs. containerd: both are similar, but they approach the runtime space from different points of view. CRI-O was purpose-built for Kubernetes (the CRI-O API is the CRI). Containerd was built with extension points for use cases other than K8s, so containerd may make more sense if you have use cases apart from Kubernetes.
  • The OCI is working on defining a third spec, the distribution spec, which focuses on how we actually transfer an image and talk to a registry.
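
To make that division of labor concrete, here is a minimal, hypothetical sketch of driving containerd directly through its Go client: pulling an image, creating a container with a snapshot and a generated OCI spec, and supervising the task that runc ultimately executes. The socket path, namespace, container name, and image are illustrative defaults, not the only options.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon, the same component Docker and
	// Kubernetes sit on top of.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd is multi-tenant: every call is scoped to a namespace
	// (Docker uses "moby", Kubernetes uses "k8s.io").
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Image transfer/storage: pull and unpack an image.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create the container: a snapshot becomes its root filesystem, and an
	// OCI runtime spec is generated from the image config.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image), oci.WithProcessArgs("echo", "hello")),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Execution and supervision: the task is the live process, executed
	// underneath by runc.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh
	code, _, _ := status.Result()
	log.Printf("container exited with status %d", code)
}
```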

Transcript

Wes Reisz: The container runtime is software that executes containers and manages container images on a node. Today, when most people talk about or think about a container runtime, you're likely to immediately start thinking of Docker. However, Docker is more a set of tools for building, packaging, sharing, and running a container via the Docker daemon, making API calls to containerd. Late last year, buried in the release notes for Kubernetes 1.20, was the statement, "Docker support in the kubelet is now deprecated and will be removed in a future release." That statement, as you might've imagined or can recall, kicked off quite a bit of concern from the larger community in a myriad of posts and discussions. Since then, we've learned that deprecating Docker in K8s is not invalidating our Docker skillset. It's simply a move towards CRI-compliant runtimes in K8s, of which there are many, without leveraging or relying on dockershim.

Wes Reisz: Today on the podcast, we're talking about containerd with Phil Estes, one of the maintainers for containerd. Hello, and welcome to the InfoQ Podcast. My name is Wes Reisz, one of the co-hosts for the podcast and one of the chairs for the QCon software conference brought to you by InfoQ. The next QCon will be an online format, May 17th to 28th, so check us out at qcon.plus for more info. As I mentioned, today we're talking about containerd with Phil Estes. Phil is a principal engineer at Amazon Web Services. His role there is working on core container technology in the container compute organization at Amazon. He is currently a maintainer for the CNCF containerd project, the chair for the Open Container Initiative (OCI) technical oversight board, and a member of the Open Source Security Foundation (OpenSSF) technical advisory council.

Wes Reisz: Today on the podcast, we're going to be talking about that announcement deprecating Docker in K8s, we're going to be talking about container runtimes in general, and we'll be talking a bit about the Open Container Initiative and a whole bunch more as we go. So as always, thank you for joining us on your jogs, walks, and commutes. Phil, thanks for joining us on the podcast.

Phil Estes: Thanks, Wes. Thanks for having me.

Wes Reisz: We've been setting this up for a while, so it's good to finally get here. So I appreciate all the time spent with me. So you were at IBM for 20 something years and just recently made the move to AWS. How's that going? What's that all about?

Phil Estes: Yeah, I think I surprised a lot of people given my IBM career was extremely long, going all the way back to my college days when I interned at IBM in the summers. So yeah, I had a long and actually great career experience at IBM. I think it's hard to say all the reasons why you make a change, but I think the core of it was just the realization that as I worked on containers and open source, it had really broadened my horizons across the industry and I'd made a lot of connections.

Phil Estes: And I think the realization for me was that if I wanted to do something else, try another aspect of working in the industry, this was probably the time; otherwise I should just sit tight and finish out my years at IBM. And so, a bunch of things kind of came together at the right time. And so, it was an opportunity to try something new. And the fact that AWS was building a lot around containerd in their cloud container offerings really excited me and interested me to keep working on the things that I cared about. So yeah, it's been good so far.

Wes Reisz: The follow-up question there is: are you still going to be working on containerd? Still doing things with the Open Container Initiative? All those things are staying the same?

Phil Estes: Absolutely. Yeah. That was part of the discussions and they were fully on board with me continuing in those roles.

Why do you think the announcement of the deprecation of Docker in Kubernetes caused so much concern? [03:38]

Wes Reisz: That's awesome. So I kicked off this podcast talking about the deprecation of Docker in Kubernetes that was announced in the 1.20 release notes. That caused a lot of discussion, a lot of turmoil, a lot of concern, I guess. Why do you think there was so much concern in the community when it was first announced?

Phil Estes: Yeah, it was interesting. A bunch of us across the industry are CNCF ambassadors, so we have our own channels for chatting and regular meetings. And I think a lot of us who work in that Kubernetes container runtime space were surprised at what seemed like an outsized response. I guess when you work in an area you're kind of used to the idea of switching things out, but operationally, on the enterprise side of things, any talk of deprecating something that seems like a reliable piece of the stack you depended on brings concerns of, "What does this mean for me and how are we going to deal with this?" So I think, again, depending on where you sit in that realm, it seemed like an outsized response, but I think people are still learning about how these pieces fit together, so to speak.

Phil Estes: I mean, I've been talking at conferences about OCI and CRI for years, and I always think, "Oh man, this must be boring." And every time it's like, "Oh, I didn't know how that worked." You keep reaching a new set of people who really didn't know how these pieces play together. So part of it is just people grappling with and understanding how these pieces fit together and realizing, "Okay, this is a major shift to how I operate or how my tools work." And I think that's going to help people understand this shift.

Docker is much more than a runtime. It’s a tech stack. Can you talk about that? [05:13]

Wes Reisz: Same with me. I remember we did our end-of-the-year podcast that we always do with all the podcast hosts. It was right after those release notes came out, and I literally addressed it there. So it caught me off guard too, even though I've been listening to your talks about CRI for years. There was a blog post, "Don't Panic: Kubernetes and Docker," from a group of the Kubernetes community. And it said something I thought was just really clear, to me at least. It said, well, when we talk about Docker, it's not really one thing. It's more of an entire tech stack, and that includes containerd. So for the people in the back, can you walk through what we think of as Docker and how it relates to containerd?

Phil Estes: We could take this pretty far. I'll start hopefully with something simpler than the entire history of container runtimes, but Docker itself has been on a journey across the last six-plus years. I was trying to think if there's a good analogy; it's hard to think of something that people can fully understand and get. But if you think of web browsers, there's the actual implementation of how HTML is parsed and displayed, and one very popular implementation is WebKit, but there are multiple browsers that you can download. You can use Google Chrome; I think Safari is built on WebKit if you have a Mac. So no one thinks, "Oh, I should use WebKit for my browser." You actually choose a more commercially end-user focused browser, but it happens to use WebKit. So that's really what happened.

Phil Estes: We've all used Docker for years, and many people will continue to use Docker the tool, the end-user focused set of capabilities. But similar to how WebKit was abstracted out as its own project with its own life cycle and versions, containerd and runc are these more not-end-user-focused pieces of this maturing, evolving container runtime stack. And so as people have continued to use Docker, they may not even be aware that they're also using containerd, which is also using runc. And so, effectively today, there is this rich stack where Kubernetes as a community can choose to say, "I'd like to plug directly into containerd, because Docker has all these other tools: it can build things, it can run its own networking, it has its own volume plugins. All those things aren't required by Kubernetes, so why not use a lower piece of the stack?"

Phil Estes: And so that's really how the CRI came to be: instead of plugging directly into Docker with this legacy dockershim piece, how about we define an interface so that anyone who wants to plug into Kubernetes as the container runtime can do so. That's where we are today with CRI-O and containerd. And earlier in the life cycle, rkt (Rocket) from CoreOS also played in that space. So I don't know if that gives a little bit of an analogy that maybe people can understand: you're using a tech stack with multiple layers, some of those layers you didn't even know about, but they were there doing most of the grunt work, so to speak, of actually running containers.
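
To make the interface idea concrete, here is a heavily abridged Go paraphrase of the CRI. The RPC names below match the real gRPC services in k8s.io/cri-api, but the Go interfaces and config types are a simplified sketch for illustration, not the generated API.

```go
// Package cri is a simplified paraphrase, not the generated k8s.io/cri-api code.
package cri

import "context"

// PodSandboxConfig and ContainerConfig stand in for the real protobuf messages.
type PodSandboxConfig struct {
	Name, Namespace string
}

type ContainerConfig struct {
	Name, Image string
	Command     []string
}

// RuntimeService is what the kubelet calls on whatever runtime is configured
// (containerd, CRI-O, ...) instead of talking to Docker through dockershim.
type RuntimeService interface {
	// Pod sandbox lifecycle: the pod-level environment the kubelet schedules.
	RunPodSandbox(ctx context.Context, config *PodSandboxConfig) (id string, err error)
	StopPodSandbox(ctx context.Context, id string) error
	RemovePodSandbox(ctx context.Context, id string) error

	// Container lifecycle within a sandbox.
	CreateContainer(ctx context.Context, sandboxID string, config *ContainerConfig) (id string, err error)
	StartContainer(ctx context.Context, id string) error
	StopContainer(ctx context.Context, id string, timeoutSeconds int64) error
	RemoveContainer(ctx context.Context, id string) error
}

// ImageService covers the image half of the runtime's job.
type ImageService interface {
	PullImage(ctx context.Context, image string) (ref string, err error)
	RemoveImage(ctx context.Context, image string) error
}
```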

Wes Reisz: When you talk about an end-user facing set of tools, we're talking about the developer being able to build images, create images, those kinds of things, right? What is the life cycle? Where does Docker, the end-user facing tool, stop, and where does the OCI-compliant runtime take off?

Phil Estes: Even that has evolved a little bit over the years. Where Docker remains today is kind of around that original, pre-CNCF containerd, which really was just helping Docker manage the process life cycle of your container. So, when you typed `docker run`, Docker was assembling that OCI spec. It looked at the image and said, "Oh, it needs this environment, here's the actual executable to run, and here's a bunch of other settings." And it formed that into an OCI runtime spec, the JSON that you pass to runc, and it would hand that to containerd and say, "Here's the container I want to run, and here's its name." And containerd did the work of actually managing that life cycle, with runc being the executor of that contained process. And so today, while there have been some attempts for Docker to use containerd more fully, that's still roughly where the line sits.
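
As a rough illustration of the spec assembly Phil describes, the sketch below builds a bare-bones OCI runtime spec using the real types from github.com/opencontainers/runtime-spec/specs-go and prints the config.json that would land in a runc bundle. The process args, environment, and hostname are illustrative values.

```go
package main

import (
	"encoding/json"
	"log"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	spec := specs.Spec{
		Version: specs.Version,
		Process: &specs.Process{
			// "Here's the actual executable to run," plus its environment.
			Args: []string{"/bin/sh", "-c", "echo hello"},
			Env:  []string{"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"},
			Cwd:  "/",
		},
		Root: &specs.Root{
			// The bundle's root filesystem, laid down by the higher layer
			// (containerd's snapshotters) before runc is ever invoked.
			Path: "rootfs",
		},
		Hostname: "demo",
	}

	// This JSON is the config.json that lands in the bundle runc consumes.
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(&spec); err != nil {
		log.Fatal(err)
	}
}
```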

Phil Estes: Now containerd has push and pull, it has snapshotters, which, hopefully not jumping ahead too much, is how your image is actually laid out on the file system and assembled into a mounted root file system for your process to run inside. All those pieces are used by Kubernetes when it drives containerd via the CRI. But for Docker to use containerd that way, there's a lot of work to do in the Docker engine to extract all the ways that Docker does that itself and start using containerd. And so, Docker today still uses containerd just as that process monitor, or supervisor, which is the easiest way to think of it: "Is this container running? What's its state? Containerd, stop this container." So that's the piece of containerd that Docker is using today.

Wes Reisz: Okay. But there are other tools, right? There's Cloud Native Buildpacks, there's Buildah, there's Podman. How do those tools interact with containerd, or another CRI runtime, differently as compared to Docker? I didn't quite follow that part.

Phil Estes: Yeah. This is probably where it's good to remind ourselves that Docker is not just a container runtime in the traditional sense. There are tools like docker build, there are plugins for volumes and networking. All those things are built at a higher layer. And so obviously others in the industry, especially in the area of build, and I think build is probably one of the most interesting areas, have made tons of different tools for assembling containers. And some of them don't even use Dockerfiles. Docker's creation of the Dockerfile was magical in many ways; it's such a simple way to assemble a container image with very understandable and easy commands. But there are a lot of complicated things people want to do with containers, and they've come up with other interesting methods: the build services in the various clouds, and like you said, Cloud Native Buildpacks.

Phil Estes: There are other tools like Red Hat's Buildah project. BuildKit itself has been extracted as a new project within the Docker ecosystem that Docker now uses, but other people use BuildKit too. If you search around, there are so many tools built around BuildKit today that have nothing to do with Docker's UI or UX. And so to me, that's really the beauty of what the OCI standardization brought us: you can build any tool you want to assemble containers, as long as it's OCI compliant.

Phil Estes: That's really the magic of having a standard: now there can be a ton of interesting tools that build things in different ways, but once you assemble an image and push it to a registry, we all agree what it looks like and how it's assembled, so any container runtime, CRI-O, containerd, Docker, can use those images no matter how you built them. So, I think one of the last articles I had published at InfoQ pictured that world: a bunch of different developers, all using different tools, and nobody having issues with interoperability, because the container image format and the container runtime spec are how we've all agreed to define what these pieces are.

Can you talk a bit about the differences between containerd/CRI-O and runc? [12:30]

Wes Reisz: Okay. So we talked about building, we talked about pushing, getting things to a container registry, and then actually running it. You've talked about this high-level and low-level runtime. So there's something like containerd, you mentioned CRI-O, and then there's this lower-level runc. What's the relationship of containerd or CRI-O with something like runc?

Phil Estes: I think the important point there is that because we've evolved to this stack of unique components, it's hard to know which one we actually call the container runtime. There are people who want to come up with different terms: is it an engine, is it a runtime? To be honest, I stay out of that, because I don't know the magic way that we're going to figure out how to define these things. So runc came out of that OCI specification process. When the OCI was founded, Docker took the code that did that low-level, operating-system-level interaction: create the namespaces, set up the cgroups, here's the process I'm going to run, here's the root file system. It was that very OS-level isolation of what we actually call a Linux container. All that was given to the OCI to become the program runc. And so runc has none of the understanding of what a container image or a registry is-

Wes Reisz: It's the operating system.

Phil Estes: Yeah. By the point runc is pointed at a location on the file system, at a spec file, everything else has to have been done. Volumes, mounts, all that is outside of the purview of runc. And so, you can think of runc as the lowest layer of the stack that only knows what an OCI runtime config is and how to see a bundle on the file system, which is really just that file system of the container. And so everything that has to do with pulling layers from registries, and assembling them, and mounting them, all that has to happen at a higher layer. And that's what CRI-O does, that's what containerd does: really doing all the work to get it ready, so that runc says, "Oh, here's a config, here's a file system. I know how to do those core steps."

Wes Reisz: Then is it correct to say runc is maybe core to Linux, it's a Linux implementation, and for, say, Windows, there's a different runtime operating at that OS level?

Phil Estes: Absolutely. So runc is absolutely Linux specific, or more generically Unix specific; I think some FreeBSD folks and even Solaris had a port at one point. But yeah, when you go to the Windows world, they have something called runhcs, which understands the Windows kernel and all the work Microsoft has done to have isolators that simulate the same idea of a container in the Windows kernel. And then there are actually interesting runc replacements. I'm sure you're well aware of all the sandbox ideas: gVisor, lightweight hypervisors. So there have been interesting replacements for runc, like runq or runv; I forget what gVisor's is called, run-something-s. But yeah, once you're at that layer, you say, "Oh, I like this idea that someone hands me a config and a root file system, but I have another idea than namespaces or cgroups for how to isolate it," and you can actually replace runc with that.

Wes Reisz: That makes sense. And then, just stating the obvious, containerd or CRI-O just see the interfaces. They're interacting with things all the same at that lower level. They don't see it as any different; it's just typical software.

Phil Estes: Both containerd and CRI-O have an interface for how to call runc, how to set up all those things in the right places based on the way they were called, and then they call runc to manage that life cycle: to check a process status, to kill it, to pause it, to unpause it.
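
Those lifecycle operations map almost one-to-one onto containerd's Go task API. The sketch below is illustrative, assuming a task obtained from container.NewTask as in the earlier example.

```go
package supervise

import (
	"context"
	"log"
	"syscall"

	"github.com/containerd/containerd"
)

// superviseTask exercises the lifecycle calls Phil lists, against a task
// created as in the earlier containerd client sketch.
func superviseTask(ctx context.Context, task containerd.Task) {
	// "Is this container running? What's its state?"
	st, err := task.Status(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("status: %s", st.Status) // e.g. running, paused, stopped

	// Pause and unpause the contained process (the cgroup freezer
	// underneath, on Linux).
	if err := task.Pause(ctx); err != nil {
		log.Fatal(err)
	}
	if err := task.Resume(ctx); err != nil {
		log.Fatal(err)
	}

	// "Containerd, stop this container": deliver a signal via the shim/runc.
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
}
```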

In the Kubernetes space, how do you choose between containerd and CRI-O when selecting a runtime? [16:14]

Wes Reisz: We've talked about runc, and we've talked about the UX and UI for tools like Docker. So let's focus specifically on containerd and something like CRI-O. How do you pick? How do you decide what I'm going to set as the higher-level runtime? And I know you didn't say that's the right thing to call it, but I'll call it a higher-level runtime. How do you distinguish between the two? Obviously security is something that you're going to pay attention to, but how do you dive in and pick between at least these two? I know there are more, too.

Phil Estes: Yeah, so they're the main implementers of the CRI. And so in the Kubernetes world, at least, which is a fairly significant end-user community of container runtimes, CRI-O and containerd are your main core options today. There are a few others, but it's interesting in the sense that CRI-O and containerd came from different ideas. So again, containerd didn't have the direct focus and desire to be a Kubernetes-focused runtime. And so containerd makes more sense if you have use cases apart from Kubernetes, because it's built with extension points and ways to be embedded. It has a very clear API that is not just the CRI; I mean, it supports the CRI via a plugin, but people have built other higher-layer tools above containerd by using the containerd API. CRI-O, on the other hand, was purpose-built to be a Kubernetes runtime.

Phil Estes: Its API is the CRI. And so it has some benefits in that, because it has no other purpose in life, it only needs to think about pods and pod specs and implementing the CRI directly as a core Kubernetes runtime. Whereas with containerd, we've had pluggability at the shim layer for how it calls runc. That's allowed the Windows support to be added very easily by the Microsoft team, and we've had gVisor and Firecracker and others come along and implement their shims to call their runtimes. And so CRI-O has remained very focused on being a Kubernetes runtime, and that's what it does best. Especially if you're going to be in the Red Hat world with OpenShift, it's going to be a very clear choice, because Red Hat has done all the work to validate that stack and it plugs directly into that Kubernetes interface. For containerd it's been more of an evolution, because containerd was used by Docker long before it was a CRI-implementing runtime. So I think that's one of the big differences.

Can you talk a bit more about Dockershim and its role with using Docker in Kubernetes? [18:50]

Wes Reisz: Yeah, that makes a lot of sense. So before we dove into the architecture of container runtimes in general, we were talking specifically about Docker and how it was being deprecated from Kubernetes. There's this concept of a dockershim. What is dockershim? That's what's being removed. How was that allowing Docker to run in Kubernetes? And what is the implication, I guess, of removing that in 23?

Phil Estes: If there's one thing I wish I could have been more clear on in my talks the last few years, it's that I was a little too hand-wavy when I talked about the creation of the CRI as this container runtime interface. So if you think about the Kubernetes architecture, you end up having nodes that run a piece of software called the kubelet, and the Kubernetes master, or the API server, is what receives these requests: place this pod, make three replicas of it, place it on whatever nodes it matches, and maybe there's labeling and various constraints for where you want it placed. But at some point that's going to end up at the kubelet on a physical node, or a virtual one. And at that point, the kubelet needs a container runtime to actually do the work of starting a container, because again, Kubernetes never had its own runtime.

Phil Estes: It always relied on Docker from the earliest implementation days. So dockershim was this legacy piece of code for how the kubelet would call Docker to do that work. And when, a few years later, the CRI was created... What I haven't been as clear about is that dockershim didn't get rewritten as a CRI implementation; it just lived alongside the fact that, "Oh yeah, now there's a CRI," which is a much cleaner way to connect your container runtime to the kubelet. And so what it created was this dual maintenance for the kubelet and Kubernetes node community: "Oh, if we fix this, we've got to fix that in dockershim, and we have to do this with the CRI."

Phil Estes: And so as the CRI has grown, or there's interest in features, there's this tension now: do we have to fix dockershim to be able to do that? Or is dockershim not going to be able to do user namespaces, or whatever the feature is? And so the deprecation is a sign that, okay, it's actually too difficult to have a CRI and also have the dockershim that we maintain to run Docker. And with the advent of CRI-O and containerd being fully supported runtimes, it was always: why would you need to continue having this other legacy piece of code that only knows how to talk to Docker?

What drove the decision for deprecation? [21:18]

Wes Reisz: I'm curious. Driving that decision for deprecation, was it just the overhead of maintaining two different pieces of software? I've also read some things about it being more secure, that there are performance reasons. Are those byproducts, or were they also driving concerns for the deprecation?

Phil Estes: I think the main discussion around deprecation was really that maintenance burden on the Kubernetes maintainers. Again, when you think about security or performance, you have to think about it in two totally different code bases. Say, "Hey, I want the kubelet to run X percent faster." Now you have to think about that in terms of, well, how do I make that true with dockershim and with the CRI? You can imagine there was duplication of effort all over, and that became a fairly hefty burden on the maintainers.

From a high-level, what should you watch out for if you’re replacing Docker with containerd? [22:06]

Wes Reisz: From a high level, what does it look like to replace the CRI runtime? What does it look like to actually switch over to start using containerd? And then a follow-up to that: when I do that, is there anything I need to be thinking about?

Phil Estes: The actual technical act of changing the runtime underneath the kubelet is extremely simple. "Simple," again, engineers are horrible at saying how easy things are: you just switch this one config and you're good. But in the core technical sense, there've been plenty of demos at conference talks: "Hey, look, I can switch the runtime. I don't even have to stop Kubernetes. I can restart the kubelet, point it at a new runtime, and it just keeps going." So the technical aspect of it is rather straightforward. What has complexity, potentially, is finding out how many of my developers and DevOps folks have broken through what should have been an abstraction and said, "Hey, I actually know Docker's on this node, and I'm going to go interact directly with Docker from things that are run via my pod in Kubernetes." And we felt that pain when IBM shifted our Kubernetes offering from Docker to containerd.

Phil Estes: We worked with customers, we worked with vendors who all had tentacles reaching down, directly interacting with the Docker API and the Docker socket, because they knew the node ran Docker. So that's the harder work: have I implemented the system I run inside Kubernetes to actually depend on a very specific runtime? Again, the abstraction should have led people not to do that, but it's the real world. And I did mention vendors; the vendors weren't doing anything wrong. If you're Sysdig, if you're a security vendor, you needed to know what was running, because you were interacting with it, maybe to pull some statistical information. And so if that runtime changed, then your tool needed to know: how do I interact with this new thing to do the same job, to get stats, to get information? It's actually been cool to see the vendor community.

Phil Estes: I'd say they've spent the last two years getting to the place where they say, "We don't care what CRI runtime you use. We support them all." And so today versus two years ago, it's a totally different picture. And I think that also plays into this deprecation: it's a safer time. You could have said, "Oh, well, we had the CRI two years ago, why not deprecate it then?" Well, now, today, hopefully you're going to have a lot less pain dealing with the fact that the vendors you've chosen only supported Docker, only knew about Docker as a Kubernetes runtime. So we're in a much better position now.

Wes Reisz: If you're relying on something like the Docker socket underneath, that's something you need to be looking at, because that's an area that's obviously going to cause some concern for you?

Phil Estes: Part of that is something you can automate. You could run tools across your nodes to ask: is anything trying to contact the Docker socket that's not the kubelet? Scan all your pod specs: am I mounting the Docker socket into various pods and giving them the permission to do so? So yeah, people can make an assessment and see how tractable a problem it might be for them.
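
A hypothetical version of that pod-spec scan, using the standard Kubernetes Go client (client-go) to flag pods that hostPath-mount the Docker socket. The in-cluster config and the socket path are assumptions to adapt for your cluster.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes this runs inside the cluster with list permission on pods.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// List pods in all namespaces.
	pods, err := clientset.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Flag any pod that mounts the Docker socket from the host.
	for _, pod := range pods.Items {
		for _, vol := range pod.Spec.Volumes {
			if vol.HostPath != nil && vol.HostPath.Path == "/var/run/docker.sock" {
				fmt.Printf("%s/%s mounts the Docker socket\n", pod.Namespace, pod.Name)
			}
		}
	}
}
```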

What’s next for the Open Container Initiative? [25:19]

Wes Reisz: Phil, I've been watching your talks for probably two, three years now, through QCon for at least, what, four or five years? You've been out championing OCI, talking about OCI. Is your work done there? I mean, people know what the OCI is. You've got containerd, and it's now the default in every major cloud provider. Is the work done with the OCI, and even the reference-implementation type work?

Phil Estes: That's a great question, because I do feel like the OCI is at an inflection point, where, almost, phase one is done. The things that we wanted to solve back in 2015 are in a great spot. People are attracted to the idea that there are specifications, that they can use a tool that says it's OCI compliant, and the interoperability that's come from that. And we all know what a container image looks like and how we define a configuration for a container. So we're at the cusp of what I'm calling phase two. In fact, just a couple of weeks ago at our weekly meeting, we had a reset: there's a bunch of new ideas, and there are almost too many people wanting to come to our weekly call, all of a sudden presenting all kinds of new ideas and new thoughts.

Phil Estes: And most of those revolve around... We're just now finalizing the third spec within the OCI, which is the distribution spec. So we know what an image is, how you configure a runtime, how you give all these details for how a container is run. The distribution spec is: how do I actually transfer that and talk to a registry about transmitting the layers, the config, the manifest? That spec is at its final release candidate, and I expect within the next month there'll be a distribution spec 1.0.

Wes Reisz: I'm curious about that. I mean, there are multiple container registries out there, right? And I had just assumed there was already a spec. I guess I'm ignorant; I didn't realize there wasn't one. How has that even worked to this point?

Phil Estes: As usual with these things, because of the broad use of Docker and Docker's initial registry implementation, and the existence of Docker Hub, there was a de facto standard. I mean, Docker had a defined way that you talk to a registry, with lots of great documentation on it. So everyone kind of said, "Well, if my registry works with Docker push and pull, then I'm good." So we've had that de facto standard, and the OCI took that as a starting point, just like it did with the runtime spec and image spec, and now we've come to the point where we're finalizing the last few details, so that registries will actually have a conformance suite and can say, "Hey, I'm testably conforming to the distribution spec." So it's not based on, "Well, it worked with Docker pull, so if your tool doesn't work, go figure it out." Now we'll have a spec that says: here's how we can validate how you talk to a registry.
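
The core of that registry conversation is plain HTTP. As a sketch of what the distribution spec standardizes, the following Go program issues the manifest GET defined by the /v2/ API. The registry URL and image name are illustrative, and token auth is elided, so Docker Hub will answer 401 here; the shape of the request is the point, and any conformant registry accepts it.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// GET /v2/<name>/manifests/<reference>, per the OCI distribution spec.
	req, err := http.NewRequest("GET",
		"https://registry-1.docker.io/v2/library/alpine/manifests/latest", nil)
	if err != nil {
		log.Fatal(err)
	}
	// Content negotiation: ask for an OCI (or Docker v2) image manifest.
	req.Header.Set("Accept", "application/vnd.oci.image.manifest.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.v2+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Without a bearer token this prints a 401 challenge, which itself is
	// part of the standardized protocol.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```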

Wes Reisz: You said you took what Docker Hub was doing and used that as the foundation. Has it deviated in any way that people should be aware of, or is anything coming up that's worth talking about?

Phil Estes: For most of the things people care about, nothing has really changed. There are nuances that matter if you're a registry operator, and all the major clouds come to these calls; there are nuances about how you do searching, or whether we need a catalog API. And there have definitely been debates over very specific things that would impact you if you're a registry operator. But as an end user, all the same semantics for push and pull, and how they're actually implemented, are basically the same as they were in the Docker definition of those things.

Anything you’d like to comment about with the containerd community?[28:52]

Wes Reisz: Yeah, very nice. We're coming up on the end here, so I thought we might wrap up with a quick shout out to the community. Anything you want to comment on or talk about, the size or anything else, with the containerd community?

Phil Estes: It's interesting. We've always had a good mix of folks from all over the industry, from different cloud providers, from different vendors, and individuals involved. We've had a steadily growing number of maintainers and reviewers. That's a great cross section of the community, but it has been interesting with the dockershim deprecation. It seems like we've had a burst of new interest in what this containerd thing is, with people reporting issues and starting to help out with fixes, changes, and better documentation. So it's been a great community, and it seems to be growing at an even faster pace in 2021. Again, it seems like the whole deprecation discussion got a lot of people curious, to where we've seen a definite uptick in interest. So yeah, we have an awesome community, a great group of maintainers, some of them cross-cutting with Kubernetes, with vendors, with cloud providers. So yeah, we work well together. We have some of the original Docker folks who have now moved on to other companies, and they're still involved. So it's a really strong group of folks.

Wes Reisz: Well, Phil, it took us a while to get to this podcast. I think we started talking about it around Christmas. It's been a while. So thank you for hanging with me, and thank you for sitting down and chatting about containerd and the Open Container Initiative.

Phil Estes: Absolutely. Thanks for having me.

Wes Reisz: All right. Look forward to talking to you soon. Cheers.
