Rancher on Hybrid Cloud, Kubernetes at the Edge, and Open Standards


In this podcast, Shannon Williams, co-founder and president at Rancher Labs, and Darren Shepherd, co-founder and CTO at Rancher Labs, sat down with InfoQ podcast co-host Daniel Bryant. Topics discussed included: the adoption of hybrid cloud across organisations, the evolution of Kubernetes as a key abstraction for portability and cross-cloud security, running thousands of Kubernetes clusters at the edge, and the value of open standards.

Key Takeaways

  • Organisations are adopting hybrid cloud strategies. The use of containers to package and run applications across clouds has seen widespread adoption over the past five years. Containers and Kubernetes are everywhere: the datacenter, the edge, embedded systems, and other locations.
  • Two enterprise use cases for Kubernetes stand out: providing standardised abstractions and APIs to increase portability across vendors and cloud platforms; and providing a framework and homogenised foundation on which to build and implement (cross cloud) security solutions.
  • Open standards support interoperability and drive innovation. The CNCF is becoming the natural home for open cloud technologies. The Rancher team have donated Longhorn, their cloud-native distributed storage platform for Kubernetes that was recently announced as generally available, to the CNCF.
  • With the success of lightweight Kubernetes distributions, such as Rancher’s K3s, engineers are starting to deploy standalone Kubernetes clusters “by the thousands” to edge locations. Rancher has recently released Fleet, a new open source project that is focused on managing large collections (“fleets”) of Kubernetes clusters.
  • Many developers and end users of Kubernetes simply want a platform-as-a-service (PaaS)-like experience. The next 12 months will see the community focus on the simplification of the Kubernetes ecosystem.



00:05 Bryant: Hello, and welcome to the InfoQ podcast. I'm Daniel Bryant, news manager here at InfoQ, and product architect at Datawire. I recently had the pleasure of sitting down with Shannon Williams, Co-founder and President at Rancher Labs, and Darren Shepherd, Co-founder and Chief Technology Officer at Rancher Labs. I was keen to talk about all things hybrid cloud, managing data via in-cluster storage, and Kubernetes at the Edge.

00:25 Bryant: Over the past several years, I've watched the Rancher team pivot successfully from offering an IaaS platform to going all in on Kubernetes platforms. They have had a front row seat in watching organizations navigate the challenges of building a platform on Kubernetes, and they've been pushing the Kubernetes platform to embrace use cases that weren't originally thought about. Along the way, the Rancher team have created technology and contributed to community standards. They have driven innovation in this space.

00:48 Bryant: I wanted to get their opinion on the state of the cloud platform world, and also understand where they see this going over the next few years. Hello, Shannon. Hello, Darren. Welcome to the InfoQ podcast.

00:56 Williams: Hey Daniel. Great to be here.

00:58 Could you briefly introduce yourselves, please?

00:58 Bryant: Hey, could you briefly introduce yourselves please and share a bit about your background as well?

01:02 Williams: Sure. I'll go first. Hi everyone. I'm Shannon Williams. I'm one of the founders here at Rancher Labs, along with Darren. We started the company back in 2014.

01:11 Shepherd: Hi, I'm Darren Shepherd. I'm a co-founder and the CTO. I'm into all the techie stuff. I'm more of the opinionated technical guy.

01:17 Bryant: I love following your Twitter account, Darren. I do it for the snark. It's really good. So I've worked in the same space as both of you actually pretty much a similar kind of timeframe and I've really enjoyed watching Rancher Labs evolve over the years and you've definitely pivoted and evolved in a really positive way, which I think is super interesting.

01:32 Bryant: One thing that's always struck me is you've always got this nice balance between solving real world problems and innovating in interesting areas. So today I was keen to look at some of the hybrid cloud use cases, which I think is the practical space, dive a bit into the Kubernetes platform, which is a bit more the innovation side. And then at the end, look at some of the cool tech and the future bets you're working on too. I think that'd be most interesting.

01:52 Can you share your current insights into what you're seeing in the industry around hybrid cloud?

01:52 Bryant: So I want to fire off with, can you share current insights into what you're seeing in the industry around hybrid cloud? Is it a thing? Are folks going multi-cloud with Rancher? How many folks in the enterprise are adopting this? What do you think is interesting? That kind of thing.

02:06 Williams: Yes. Daniel, that's a great question. The short answer is yes, everyone's doing everything. When we started Rancher, I think we had this idea that containers were going to allow us to basically get a more uniform layer that we could then develop and build applications on. And obviously over the course of about three years from about 2014 until about 2017, there was a lot of, what's that going to look like, what's the software that's going to actually enable this? And by the end of 2017, it certainly felt like we'd all kind of agreed on Kubernetes. And now kind of three years later, sort of three years of Kubernetes as sort of an agreed upon orchestrator, it certainly feels like we're entering the phase where we're now actually seeing real world use cases happen and containers are running everywhere.

02:51 Williams: And I think hybrid cloud almost kind of gives it short shrift, because what we really see now is containers everywhere, Kubernetes everywhere. Kubernetes in the data center, Kubernetes at the Edge, Kubernetes in the cloud, Kubernetes in embedded systems, Kubernetes in devices. I mean, if you can run software on it, we're probably talking to people about how to run Kubernetes and containers on it. So the challenge still is more around how you actually manage all that. The fact is that you can do it, you could set up Kubernetes yourself on anything, but you start to get to a point where complexity and sprawl become the bigger challenges versus the, how do I build and run a cluster?

03:27 Shepherd: Yes, I wanted to add to it. When containers came out, when Docker came out, it was actually the hybrid use case that I pretty much got really excited about it because before that we were working in the IaaS space, that OpenStack, CloudStack, that type of stuff. And so hybrid was this big selling point, we're all going to be able to burst to the cloud or whatever. And honestly, it was just a terrible story. It just didn't work. And there were so many technical issues and limitations.

03:47 Shepherd: And so when containers came out, I mean, I was just super excited about the possibilities. I'm like, "This technology will actually work. This makes a lot more sense than what we were trying to do with VMs, because we never got a portable format with the VMDK or whatever. There was no portable format." So it's exciting to see hybrid cloud is in fact happening, but honestly, it's gone so much farther than that to me. I almost don't want to talk about hybrid cloud, because really Kubernetes is just going everywhere. It's kind of crazy. The amount of demand and interest we have in the Edge space is pretty exciting. It's just Kubernetes everywhere; anywhere there's a computer, we're running it. So yeah, hybrid's happening, but it's almost more than just hybrid. It's pretty exciting to me.

04:28 Bryant: Yes. I'm definitely keen to cover the Edge space because that's super interesting.

04:31 Williams: One of the things we did actually recently, we talked to a lot of users and customers and we kind of asked them 'why' questions, like, why are they using Kubernetes everywhere? Why are they moving to this approach of containerization across all their software? And it's interesting, the three big things we hear are: one, portability, as Darren said, just having a common API, a common framework so that we really have the ability to move something. But a bigger one is actually security. Security comes up over and over again, as it's really hard to secure lots and lots of different platforms. They all have different capabilities, footprints, OSs; each cloud has a different approach to networking. Actually standardizing on something like Kubernetes makes implementing security policy dramatically easier.

05:16 Williams: So you have kind of this broad, "I want hybrid cloud because I want to be able to go anywhere and I want portability," but then as you start to get closer, you're like, "Oh my gosh, this is saving us so much effort in terms of how to secure what we run, who can change it, what can it talk to." And as OPA and other pieces of technology become popular inside of Kubernetes, I think we're making it easier and easier to build policy models and apply them to any infrastructure.
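Williams mentions OPA (Open Policy Agent) making policy models portable across any infrastructure. Real OPA policies are written in Rego and enforced by an admission webhook; purely as an illustrative sketch, with a hypothetical registry allowlist, the kind of rule involved looks something like this in Python:

```python
# Toy sketch of the kind of admission rule OPA/Gatekeeper enforces on a
# Kubernetes cluster. This is NOT OPA's Rego language; it just shows the
# shape of "one policy model applied to any manifest, on any cloud".

ALLOWED_REGISTRIES = ("registry.example.com/",)  # hypothetical allowlist

def violations(pod_manifest):
    """Return a list of policy violations for a pod manifest (a dict)."""
    problems = []
    for c in pod_manifest["spec"].get("containers", []):
        if not c["image"].startswith(ALLOWED_REGISTRIES):
            problems.append(f"{c['name']}: image not from an approved registry")
        if c.get("securityContext", {}).get("privileged"):
            problems.append(f"{c['name']}: privileged containers are denied")
    return problems

pod = {
    "spec": {
        "containers": [
            {"name": "app", "image": "registry.example.com/team/app:1.2"},
            {"name": "debug", "image": "docker.io/library/busybox",
             "securityContext": {"privileged": True}},
        ]
    }
}

# Prints the two violations for the "debug" container.
for p in violations(pod):
    print(p)
```

Because the check runs against a manifest rather than against cloud-specific APIs, the same policy applies unchanged whether the cluster sits in a datacenter, a cloud, or at the edge, which is the portability point being made above.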

05:42 Williams: So the other really kind of interesting one is we're at this point where, in most organizations, the fastest growing role in IT is DevOps, right? We're constantly looking for SREs and DevOps folks, people who can actually help run applications, not necessarily just write applications. And one of the challenges today is that those DevOps teams have to know whatever the platform is that you're going to run against, right? So you get an AWS expert or a vSphere expert or somebody who knows Edge or whatever.

06:06 Williams: So now with Kubernetes, all of a sudden you can start to standardize and reduce the overhead required of this DevOps team. As they begin to get to continuous delivery and canary deployments, it's all programmatic, so there's a big opportunity to take some of the burden that's just sort of exploded in terms of cloud engineering, platform engineering, ops engineering, SREs, DevOps teams, whatever you want to call them. All these people whose job it is to deliver a highly available, resilient platform.

06:32 What are your thoughts around GitOps?

06:32 Bryant: Mm-hmm (affirmative). Very nice. And I was actually going to dive into some of the sort of DevEx questions around this. I saw, actually, Darren, you made an interesting tweet around GitOps a while ago, and GitOps seems to be coalescing as sort of the best practice of how you configure and sync infrastructure between, say, environments. What are your thoughts around GitOps?

06:51 Shepherd: I actually believe a lot in the model. It makes a lot of sense, and I really like the trend. Infrastructure as code has been a thing for a while, but there's actually a newer trend of infrastructure as data; that term has kind of come out. Basically, I do think the model makes a lot of sense, and this is why we've actually introduced another project, Fleet, to kind of facilitate that model. But if you follow my Twitter, I just complain about everything.

07:15 Shepherd: Even though I actually believe a lot in GitOps, there's a lot of complexities and issues with it. Just like with any of these technologies, it's not a silver bullet, because it can actually add a lot of complexity and reduce a lot of visibility. And so that's what we're seeing: it's a really good model, it makes a lot of sense for management, but then as you scale up, there's some complexity that comes with it. But all in all, I think the direction definitely makes sense, and we're heavily pushing in that direction with Rancher.
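The GitOps model being described boils down to a reconcile loop: Git holds the desired state, and an agent continuously diffs it against the live cluster state and converges the two. A minimal toy sketch, where plain dicts stand in for the Git contents and the Kubernetes API, and all resource names are invented:

```python
# Minimal sketch of the reconcile loop at the heart of GitOps tooling
# (Fleet, Argo CD, and Flux all elaborate on this idea). A real agent
# talks to a Git repo and the Kubernetes API instead of these dicts.

def reconcile(desired, live):
    """Return the operations needed to make `live` match `desired`."""
    ops = []
    for name, manifest in desired.items():
        if name not in live:
            ops.append(("create", name))
        elif live[name] != manifest:
            ops.append(("update", name))
    for name in live:
        if name not in desired:
            ops.append(("delete", name))  # prune drift not present in Git
    return ops

desired = {"deploy/api": {"replicas": 3}, "svc/api": {"port": 80}}
live = {"deploy/api": {"replicas": 2}, "deploy/old": {"replicas": 1}}

print(sorted(reconcile(desired, live)))
# → [('create', 'svc/api'), ('delete', 'deploy/old'), ('update', 'deploy/api')]
```

Running this loop continuously is what keeps clusters in sync with Git; the complexity and visibility issues mentioned above show up once hundreds of repos, branches, and clusters feed the same loop.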

07:40 How should engineers go about choosing and evaluating parts of their cloud stack?

07:40 Bryant: Very nice. So moving back onto this building a platform on something like Kubernetes, how do folks, or how should they perhaps, go about choosing and evaluating parts of their stack these days? I love the CNCF, but when I look at their landscape, it's a bit of an eye chart, right? There's so many things on there. Obviously the Rancher Catalog probably brings that sort of choice down a bit, but how do you advise folks on the core elements of the platform stack they need and how they integrate them together?

08:07 Williams: Yes, we sort of split that into two big areas. There's the work that Darren and the engineering team do to kind of decide what becomes part of Rancher. So for people who aren't familiar with Rancher, Rancher is an open source Kubernetes management platform. It's very popular; it's used by thousands of teams, tens of thousands of teams, to basically take and consolidate management of all of their containers, all of their Kubernetes, everywhere it's running: cloud, Edge, wherever you've got it.

08:33 Williams: And when we started, the main function was creating and managing clusters, managing policy, sort of providing operational support to the teams using Kubernetes. And over time, now four or five years into developing Rancher, it's grown to provide also a lot of the critical ecosystem components around Kubernetes. So tools like Prometheus for monitoring or Istio for service mesh or OPA for policy management have become baked-in, integrated components of the platform. And we're able to do that in a lot of ways because Rancher is open source; it's really easy to bring open source components in and then implement them in a way that's also open source. So while we as a company provide enterprise support for about 400 or 500 customers, the vast majority of people who are going to touch Rancher are going to use the open source implementation. In fact, everyone uses the open source implementation, they just don't take support.

09:23 Williams: So what we've found is that in bringing the ecosystem together, the key has really been the opinions that Darren, our other co-founder Will Chan, and our team develop about how you would use these things. How would you actually put them into production? How do you implement them in a way that's scalable and can be maintained? And I think that's been part of the secret sauce of Rancher: taking a lot of the complexity and solving it for people, and then taking new things as they come in.

09:48 Williams: And as you said, taking them through a life cycle. So the first phase of something is, it just exists out there. You can install it on Kubernetes and try it yourself. At some point, it gets to enough scale where it'll show up in the catalog of helm charts and things that are implemented by the community. And then at some point, if it gets beyond there, it'll start to become a supported component of Rancher. So projects like OPA and like Istio have all gone through that. Even our own projects like Longhorn basically go through that same journey where they start as something that's available, it's a beta project, it's used by people. And then over time it comes in.

10:20 Williams: But that final step of sort of bringing things in, that really is where we start to make judgment calls on what we think is stable, what we think is critical, what we think is the best solution. We always do it in a very light-touch way. In any solution, for example, bringing in Istio, we will consider, should we bring in Istio or should we bring in Linkerd, right? Two great projects. They offer pros and cons; there's things we like about both of them, but from an engineering perspective, we probably won't bring them both in as fully integrated pieces.

10:50 Williams: So we'll base that decision on our opinion about which one offers the most value and fits best with our customer understanding, but we never make it a hard choice. It's something that you have to turn on, or if you want, you can easily go use the other option or another option for that. So for everyone who chooses to use Datadog instead of Prometheus, it works pretty well. So the key has really been to have a light coupling and then provide easy paths that make it so the teams can get started, get going, but not necessarily have to use whatever we chose. The super opinionated approach really hasn't worked.

11:22 Are we all trying to create a Heroku-like workflow with our cloud platforms?

11:22 Bryant: It's interesting, because a lot of folks I chat to, they just want Heroku on Kubernetes, right? They just want that git push kind of workflow. I get it, right? I cut my teeth on Heroku, I love it. Ruby on Rails, super easy, but you seem to bump into use cases where it doesn't quite work. And I guess with the power of Kubernetes and the flexibility, as you said, Shannon, in the ecosystem, we are always looking to folks like yourselves to share your opinions on what meshes together well to create that Heroku-like experience on Kubernetes.

11:46 Shepherd: And that's kind of the beauty and downfall of Kubernetes; it's really much more of a platform and an ecosystem than it is a specific solution like Heroku. So I'm optimistic that we're eventually going to enable a simpler experience like Heroku, but as an ecosystem we've spent a lot of time on basically just getting the platform and the framework right. Honestly, I think it's taking a little longer than I would have expected to get back to the original dream that Docker had in the very beginning of a very simple platform for end users, but I'm very optimistic that we're going to get there, and by a much better route, because we're building something that's not just a point solution, but is in fact a very versatile platform that can cater to a lot of use cases and then also address that more narrow use case.

12:31 Could you provide the brief pitch of what Rancher Labs’ Longhorn storage system is?

12:31 Bryant: Would you mind just giving us the brief pitch of what Longhorn is?

12:34 Williams: Sure. Longhorn is an open-source project for creating block storage in Kubernetes environments. So it basically fills the same role as EBS on a Kubernetes cluster, utilizing the disks that are available to that cluster to then create reliable storage that can be snapshotted, backed up, recovered, and is distributed across multiple nodes. So it's an open-source project, and it's part of the CNCF. It's just now hit 1.0; we released 1.0 this spring, and in a lot of ways, we think of it as kind of a starting point for how we build storage into our clusters as just a standard capability. The reality is that block storage, especially for Kubernetes, has loads and loads of use cases. I mean, already we're seeing tons of databases and other things running, but also just for scratch space, you need this stuff.
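For readers unfamiliar with how that EBS-like role surfaces to users: Longhorn installs a Kubernetes StorageClass (named "longhorn" by default), so requesting a replicated volume is an ordinary PersistentVolumeClaim. A sketch of building such a manifest, where the claim name and size are purely illustrative:

```python
import json

def longhorn_pvc(name, size_gi):
    """Build a PVC manifest that asks Longhorn for a replicated block volume."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],  # block volumes attach to one node
            "storageClassName": "longhorn",    # Longhorn's default StorageClass
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

print(json.dumps(longhorn_pvc("mysql-data", 10), indent=2))
```

Because the workload only names a StorageClass, the same claim works on any cluster where Longhorn is installed, which is the "standard capability" framing above.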

13:21 Williams: And so we anticipate it's going to continue to grow. And at the same time, while there are really fantastic proprietary, closed-source products like Portworx, which has really led the space and continues to provide a fantastic implementation, we think it's pretty likely and important that an open-source version of something like that grows and becomes part of it. There are some other interesting projects, like OpenEBS, and the work that's been done around Rook to take Ceph into Kubernetes. We think there are some other approaches, but Longhorn is something we've been working on for a few years, probably four years now. And we've just kind of steadily made it more featureful and easier to use. We see this as a long-term project, but we're really happy about where it's gotten to. Darren, I don't know if you want to add anything to that.

14:02 Shepherd: Yes. So Longhorn is kind of unique in its technical approach and design, and kind of reflects our perspective on a lot of things. It is not designed as basically one large storage system. Most clustered file systems or distributed storage systems are kind of like one large system, and you have to figure out how to scale that. The approach of Longhorn is actually that every volume, every piece of storage that we're exposing, has its own dedicated controller. So we don't have one massive storage controller that's managing everything. We've come from the perspective that it's significantly easier to scale things that basically have no interaction with each other. We can manage millions of things, but it's very hard to build one system that can scale to millions of things, right?

14:50 Shepherd: So the idea of Longhorn is, it's presented very much the same as any of these storage technologies, but under the hood, every volume actually has its own very isolated failure boundary, and it's orchestrated as its own unit. So it opens up a lot of interesting things. There's different use cases, like what you talked about, like DR replication, or just your standard EBS-type thing if you just want reliable storage. But there's also interesting use cases where you kind of want to package the data and the storage technology with the application. I'll give you one example: something like Prometheus. People want to be able to deploy Prometheus very easily and run that. That's a persistent system, but the persistence is not really at the same exact level as, say, a database like MySQL or something like that. The requirements aren't exactly the same, and people don't really view it the same. They just want it to be persistent.

15:44 Shepherd: So something like that, where if I can package that application as a Kubernetes component, it kind of brings its storage technology with it and can satisfy its own requirements. So there's a lot of really interesting use cases and stuff that we're exploring with Longhorn, but with any storage technology, it takes time to develop these things, and also, as the ecosystem matures, the trust for stateful workloads grows. And so the exciting thing, and the reason we've GA'd the product and we're supporting it now, is that the ecosystem has matured as the technology has matured. So I'm really excited for the future of Longhorn; you'll see further integration into the rest of the Rancher stack of what we're going to be doing with Longhorn, kind of opening up different use cases and stuff. So it's pretty exciting.
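The per-volume controller design described above can be caricatured in a few lines: each volume gets its own small controller with an isolated failure boundary, so scale comes from adding independent units rather than growing one shared system. This toy model is purely illustrative; all class names and fields are invented:

```python
# Toy illustration of Longhorn's design point: no monolithic storage
# controller; instead, one small controller per volume, each with its
# own replicas and its own failure boundary.

class VolumeController:
    """One controller per volume; it only knows about its own replicas."""
    def __init__(self, volume, replicas=3):
        self.volume = volume
        self.replicas = {f"{volume}-r{i}": "healthy" for i in range(replicas)}

    def fail_replica(self, replica):
        self.replicas[replica] = "failed"

    def healthy(self):
        # The volume survives as long as at least one replica does.
        return any(s == "healthy" for s in self.replicas.values())

# A thousand volumes are a thousand independent controllers...
fleet = {v: VolumeController(v) for v in ("pg-data", "prom-tsdb", "scratch")}

# ...so a failure in one volume's replica set cannot cascade to the others.
fleet["pg-data"].fail_replica("pg-data-r0")
print({name: c.healthy() for name, c in fleet.items()})
# → {'pg-data': True, 'prom-tsdb': True, 'scratch': True}
```

Managing a million such units is a bookkeeping problem rather than a scaling problem, which is the contrast with "one large system" drawn above.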

16:26 What are your thoughts on open standards, for example, the work being undertaken in the CNCF?

16:26 Bryant: Very nice. What are your thoughts on open standards? Because again, we see the CNCF, and we see the CDF now, the Continuous Delivery Foundation, sort of trying to drive standards. I'm a Java person myself, so I worked in the JCP for a little bit. There's that kind of balance of only really standardizing the boring stuff, the stuff that we really understand well, but also providing enough interop between things. So have the team at Rancher got an opinion on where open standards fit into this space?

16:51 Williams: I think in general, yes. I sit on the board of the CNCF, so I'm obviously somewhat biased. I think it's a great foundation that does really important work, and I really love the direction it's going, which is, it feels like a home where there's room for everything from early projects trying to find their legs to very critical, important things that enormous numbers of teams rely on to do everyday work. I think the challenge that we've struggled with a little bit through this is finding that middle ground. You'll probably hear this if you follow the TOC within the CNCF: we don't really want to be kingmakers. We don't want to be choosing between two approaches. We'd rather sort of let the market choose, but we want to give them all a place to live.

17:31 Williams: And I think that's actually a more important part of what the CNCF does than any kind of standards body, because I've been involved with a lot of standards bodies, and I would say standards bodies tend to try and come to consensus that this is the right way. I don't see that happening inside of the CNCF. There's very little idea that we should come to consensus. It sort of shows up in that eye chart you talk about when we show the whole ecosystem. It really is saying, "Hey, there's lots and lots of ways to do things; let's give a place for open source things to be held in a way where companies can comfortably use them without worrying that the license model will change."

18:05 Williams: And so one of the reasons we decided to donate Longhorn to the CNCF was because we've talked to lots of teams who would like to embed Longhorn in other products. They'd like to use it, and maybe contribute to it and grow it, and putting it in the CNCF, I think it's been great. And I don't know that it's this huge, massive marketing boost or anything, but it gives people who would have otherwise used it an extra level of confidence that nothing's going to change. We've just now submitted K3s, our lightweight Kubernetes distro, to the CNCF to become an incubation project or a sandbox project as well. There's all sorts of debate over whether something like K3s should or shouldn't be in the CNCF. But I think in the long run, the CNCF is better served by being a more open place, because its biggest value is providing confidence to other users and other developers that an asset, an important piece of software that they may want to use, is not just open source in name only, but actually is open source and will remain open source for a long time.

19:00 Williams: I think there's a lot of concern right now that key projects in the space, you think of Istio and Knative, are not part of the CNCF. And certainly, as the CNCF, I think we'd love to see that change if they do want to come there. But in the long run, if things don't change, it will certainly impact how people think about using those tools. Especially when we look at something like embedding a service mesh, a big part of our thinking was, well, where is Istio going to land? Is it going to move out of Google?

19:26 Bryant: Yes. Very interesting point there, Shannon. Anything you want to add there, Darren, on open standards?

19:29 Shepherd: Yes. I think my opinions there are constantly evolving, because as kind of a practitioner and somebody who always wants to innovate and try new things, I want to move as fast as possible. And sometimes standards are viewed as this slow process and the ivory tower or whatever, but I don't have that negative a view on it. There's an interesting thing if you look at it, because you can go the route of standardizing and coming up with a specification, and I think that's worked really well for OCI, for example. They've turned that into an actual specification, and I think that's been extremely beneficial. That's a very core technology. And I think out of all of this, OCI is going to be one of the things that sticks around for the longest time.

20:07 Shepherd: But then on the flip side, if you look at Kubernetes, Kubernetes has no real official specification. It's more of a de facto implementation that we're all collaborating on. So I think there's room for both models, because sometimes you come up with a specification but don't have a good implementation of it, or vendors fight over implementations. Kubernetes has provided another approach, where we can all collaboratively work on the de facto standard; it's not officially a standard, but it's just becoming the standard. So I think it's kind of weird, and I like the approach that the CNCF is taking right now, in that we didn't go too heavily down the specification route. I mean, standards I think have worked out pretty well for browsers, for example, but I'm happy we didn't go too heavily in that area.

20:48 Shepherd: So I think right now that ecosystem is striking a good balance; the Kubernetes ecosystem seems to be growing, and we seem to be getting a good amount of interoperability between components. It seems significantly healthier than, say, OpenStack, where I think vendors had a really hard time competing and working together. In Kubernetes, everyone's kind of finding their little space, and, I don't know, I think whatever we're doing is working. So my thinking is just kind of evolving there.

21:13 Bryant: That's the best way, right? As in, strong beliefs loosely held, as they say, as things change. And something I've learned the hard way in the cloud space: my opinion of last year has completely changed this year. And I think that's a sign of maturity, right? I like to think so, sometimes. So super interesting framing there. Thanks so much, really appreciate the input on the CNCF and where you're going.

21:30 Could you talk about Rancher’s new “Fleet” project, and the Kubernetes multi-cluster use case?

21:30 Bryant: So I wouldn't mind diving into perhaps some of the use cases, some of the tech now, and you actually mentioned a bunch of things I was already interested in. I am intrigued by the Edge use case. I'm keen to talk about K3s later on, but I bumped into your blog post, Darren, about Fleet, and that really caught my interest. At KubeCon San Diego last year, I saw a few folks, probably yourselves actually, but a few other folks as well, talking about the cattle versus pets thing, moving from instances to clusters. And it kind of strikes me as the Google "datacenter is a computer" type thing, yeah? What's your thinking on why you created Fleet, and where's this going in the future? Do you see a cluster for everyone, so to speak?

22:09 Shepherd: Yes. So the thing is, the initial driver for Fleet really had to do with the success of K3s: suddenly people had thousands of clusters, and it's like, well, how do we manage thousands of clusters? And so we had to look at it a little differently. But I definitely do see, if we want to get theoretical or big picture or whatever, a transition. I really do think the Kubernetes cluster is going to be somewhat like the Linux server. We're just kind of moving from administering servers to administering clusters. And so I honestly see a world where we're going to have tons of these, and they're going to be all different sizes and shapes. And we're seeing that, because there's use cases for very large clusters, there's use cases for medium-size or small clusters, there's ephemeral clusters; there's a lot of use cases.

22:49 Shepherd: And the way the Kubernetes technology is built, I mean, it's not really opinionated. It's trying to address so many things, which can be a downside, but it's also very good in that it's extremely flexible. And so it's extremely flexible the same way Linux is; this is just kind of the distributed equivalent of Linux. So I definitely see that the number of clusters we're going to be managing is going to be significantly higher than anyone imagined. This is definitely a problem we need to solve.

23:14 Shepherd: But Fleet represents, to me, kind of a different approach to how to manage these clusters, because I think we are getting to that evolution that was talked about in the early days. Kelsey Hightower talked about this when we were talking about Kubernetes Federation and stuff: before, I was managing a server; then it's a cluster; now, how do I manage clusters of clusters? How do I keep getting to the next level of abstraction?

23:38 Shepherd: And so there's one approach, which is Kubernetes Federation, and then Fleet takes a different approach. When we looked at it, we were just looking at what people are doing, how they're actually managing, and what's working and what's not working. Kubernetes Federation takes the idea of, how do I make a series of clusters look like one cluster? So it's kind of like, how do I manage one cluster that federates all these things? Fleet takes a much different paradigm, which is more like configuration management, to be perfectly honest. It's just, how do I do configuration management? So it's much more oriented towards, what are the packages I have installed? So in Fleet there's a concept of a bundle, but it actually ends up being a Helm package when it's installed.

24:19 Shepherd: So it follows that analogy, and it works very well for people. It's just, what do I have installed in my cluster? It's a model that people can really understand, as opposed to, how do I describe an application which then spreads across multiple clusters? It's a different use case, because we're looking at it from the perspective of, how do I manage a bunch of clusters? Not, how do I deploy applications which then go across clusters, because we actually don't see a huge amount of demand for that right now; not a lot of people have one app that really spans multiple clusters.

24:50 Shepherd: Yes, so it's a little different paradigm, but at the end of the day the technologies and the approaches end up being very similar; it's just how you tweak it and present it. So Fleet is definitely a little more forward-looking. Like everything at Rancher, we announce very early. Once we have the idea, we throw it out there. If it's any good, then we get some reaction. If it's not, then we throw the project away. Fleet got a good reaction and good feedback that seems to match people's use cases, but it's a very early alpha. So that's something we're actively working on.

25:20 Do you hear much talk about running failover clusters or how to implement communication between multiple clusters?

25:20 Bryant: Nice. Nice. You mentioned Istio a few times, Shannon, and Darren talked there about Federation. Do you see much in the way of interop between clusters? I'm not so much thinking of people splitting their apps up between clusters, say, although that is an interesting model, but more a case of failover, like we have in clouds with availability zones. Do you see much talk of running failover clusters, or much communication between those clusters?

25:43 Shepherd: Yes. So there's always a high demand for the DR use case. People always want something for DR. And when we talk about cross-cluster communication, I think Istio is kind of the right way to go to be able to do that, but there are still a lot of issues, because it becomes a much harder problem. With the frameworks or whatever, we can solve the communication side: how do I get communication, how do I route traffic? I can do GSLB, I can do Istio across clusters and whatnot.

26:09 Shepherd: The difficult thing with those solutions is always the data: how do you replicate the data? How do you move the data around? Things like that. I would say we don't have that solved, but that's one of the use cases that is particularly interesting with Longhorn, and why we continue to invest there: to be able to do things like replicating data across availability zones and those types of use cases. So I'm not super comfortable saying, "Oh, we've got that use case nailed down," but we still do see demand for it, and we are continuing to work on it as the technologies evolve.

26:41 Williams: I mean, there are use cases, Daniel, just coming in from every direction right now: use cases for very small footprints, use cases for very big footprints, use cases to, I don't know, connect clusters with networking. It's that exciting moment where a lot of real-world use cases, for things like stores or oil rigs or boats or cars, are colliding with the vast majority of people who are trying to run this stuff in the data center, in the cloud, in VPCs and the like. There's the practical side that we're working on every day, which still feels very data-center-centric, and that's where 85 to 90% of our business comes from. But with all of the adoption, if you look out and see where things are going to be a year later, it looks really distributed. Certainly things like running Kubernetes on real-time operating systems are a really big, interesting prize, and I keep talking about how much more interested we are in Linux even today than we were four or five years ago, and how this is all converging.

27:36 Shepherd: Yes. Kubernetes has moved well beyond just being the platform for greenfield apps or stateless web apps, the simple workloads. People are putting more mission-critical, more stateful applications on it, and various other use cases. Even more traditional, older things, applications you wouldn't immediately have thought would move into containers; it's actually not super hard to move a lot of apps into a container, and then with the capabilities that we have and the flexibility of how you can manage things in Kubernetes, the use cases are just expanding.

28:10 What are you both looking forward to over the next year, say both from a technology point of view, and from an adoption point of view?

28:10 Bryant: So final question from me, what are you both looking forward to over the next year, say both from a technology point of view, and from an adoption point of view?

28:16 Williams: I'm having the time of my life. There's really nothing better than building a company, putting in all the work for five years, and then seeing the stuff actually happen. It's amazing how fast our team is growing. I think last year we grew by about 170%, and we're seeing that growth turn into really great experiences for our clients. We're finding that the technical opportunities to innovate have, if anything, expanded once we got a standard out of Kubernetes that we could build around. So it's this wide-open market.

28:46 Williams: And even for us on a business level, there's a real embrace of open source. We tend to compete with companies like Red Hat and IBM and VMware, companies that have enormous teams, enormous organizations, and a long history of delivering proprietary platforms to organizations. Seeing the enthusiasm from really large companies to embrace not just open source at the Kubernetes level, but actually an open platform around it, and what that would mean for their businesses; that's been really cool. And so, yeah, every day is fun. It's just a fun way to work. Darren, what are you looking forward to?

29:20 Shepherd: I'm actually really optimistic and super excited about a lot of things in the Kubernetes ecosystem right now. If you follow me on Twitter, I'm very negative and I complain about everything, and one of my biggest frustrations is just how hard everything is. It drives me nuts, because I just want to accomplish something and it's so hard. But K3s has renewed my faith in Kubernetes and the ecosystem, in that what we were able to accomplish with K3s was basically to take something ridiculously complicated and package it in a very simple way. And I'm really excited to see that, one, it actually worked, and it has resonated with users; people appreciate it and it's really just exploded. So the thing that I'm actually most excited about over the next year is a simplification of the Kubernetes ecosystem.

30:07 Shepherd: It's too complicated, to me. We've been working in this ecosystem long enough, and with these technologies long enough, and we've seen enough patterns and enough projects, that there are some very simple practices and very simple approaches that work very, very well. So as you see our product going forward, we're trying to simplify this even further with some very basic approaches. I think GitOps is a good indication of a very simple approach that works very well. There are some bumpy parts around it, but there are a couple of patterns like that where we can actually make this base layer of running Kubernetes, and running applications on Kubernetes, significantly easier.

30:48 Shepherd: And when people say, "Oh, I just want a Heroku-like solution," I would love that too. That's what I've always wanted: I'm a developer, I don't want to care about any of this, I just want to run something. But we had to put a lot of hard work into getting a good platform on which we can do that. And so what I'm really excited about is that, as Kubernetes continues to grow and we continue to have more and more users coming in, there's a bigger demand for simpler solutions. It's not so much technologists who love the complexity or love the theory behind it; it's just people who want to get their job done, and there's more and more demand for that. I honestly think it is possible to produce simpler solutions on top of Kubernetes. So that's what I'm excited about: really just simplification of the system.

31:31 Bryant: I think that's a perfect way to end the podcast. Thank you both for your time today.

31:34 Shepherd: Thank you.

31:35 Williams: This was a lot of fun.

