
Day Two Kubernetes: Tools for Operability


Summary

Bridget Kromhout discusses what containers and Kubernetes clusters are at a high level, looks into the practical application of open source tools to simplify cluster management, and shows how to deploy Kubernetes clusters in a repeatable and portable fashion.

Bio

Bridget Kromhout is a Principal Cloud Developer Advocate at Microsoft.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

You're probably here because you want to Kuber some netes. And I might dissuade you, or I might give you some ideas about tools that will help you do that. The traditional second slide - you have to have the second slide to establish bona fides, not sure how to say that, so you can check that off as "she does not know how to say bona fides." But I'm Bridget, I live in Minneapolis. I work for Microsoft on the cloud advocacy team. I co-host the Arrested DevOps podcast with Matty Stratton, which is wonderful, because he is here and I sent him to live tweet Bryan Cantrill's talk, because, tragically, a talk that I really would like to go to is at the same time as mine. So if any of you decide this Kubernetes is not for you and you want to hear about Rust, Bryan Cantrill is very funny.

And I also have the honor/privilege/foolhardiness of running the global devopsdays organization. If you've been to a devopsdays somewhere, I probably helped somewhere along the way with that organization. And here in Silicon Valley, Jennifer Davis, who was at Chef and then a startup and just joined Microsoft, runs the Silicon Valley one. I'm not sure if they've set their dates for next year or not, but you can take a look at devopsdays.org and see about that.

I feel like when people put an outline up for their talk, the problem is that it's basically like you're getting spoilers. So, spoiler alert: we're going to talk a little bit about what even are these container things. Then we're going to talk about some tools in the ecosystem - mostly open source tools. And then, of course, if somebody gives you a mic and lets you stand on a stage, you should definitely prognosticate about the future. So we're going to do a little of that too.

What Even Are Containers & K8s?

Starting with the what even are containers: how did we even get to this place? Quick show of hands, how many people are using containers in some regard right now? I'm going to say 80% of the room. Awesome. Keep your hand up if you're using them in production. Close, maybe 65% to 70%. And how many of you are using Kubernetes in any regard right now? Maybe 40%. And in production? Yes, maybe 25%. And I think this is very natural hype cycle stuff. The future is here, as William Gibson tells us, it's just not evenly distributed.

And I think that probably the most important thing to know about Kubernetes is something my colleague Jessie Frazelle, who worked at Docker for a long time, then at a couple of other places, and is at Microsoft now, said when she was keynoting GitHub Universe recently … And actually, I'm not even positive - I love this picture of her from GitHub Universe - I'm not positive she talked about containers not being real there, though she does say that pretty frequently. I think it's valuable for us to pay attention to what containers actually are. And they're basically cgroups and namespaces.

So when people get really excited about the containers that are essential to your buzzword transformation project, maybe just take them down a notch and say, "Look, we're just talking about controlling what a process can see and what it can use." That's what we're actually doing. It's valuable and important, and we use it a lot, but the reality check of what it actually is matters. At the same time, it's important to remember that we use containers for a reason - not just because we're super into the Linux kernel, though maybe we are, I'm not going to judge. I kind of like FreeBSD myself but, you know, I use Linux when people pay me to. We use containers because they actually do solve problems.
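
Here's a minimal sketch of the cgroups-and-namespaces point, assuming a Linux machine with util-linux and Docker installed; the specific limits are just illustrative.

```bash
# A container is, at heart, namespaces (what a process can see)
# plus cgroups (what it can use). You can poke at both directly.

# Namespaces: give a shell its own PID and mount namespaces.
# Inside it, `ps` only sees processes in that namespace.
sudo unshare --pid --mount --fork --mount-proc /bin/bash

# cgroups: Docker is largely wiring the same kernel features together.
# These flags become cgroup limits on memory and CPU for the process.
docker run --rm --memory=256m --cpus=0.5 alpine:3.8 sh -c 'cat /proc/self/cgroup'
```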

A few years ago, I worked at a startup that started running Docker in production in October 2013, when I think the main thing on Docker's website was giant letters in blink that said, "Under no circumstances should you run this in production." And my boss at the time was like, "YOLO." And it actually solved problems that we had: problems around our deployments not being repeatable, or development not being as consistent as we would like. If you're trying to solve the problem of "it works on my machine," containers are a great way to solve it. And, by the way, that startup got acquired; it's part of Warner Brothers now. So, you know, sometimes YOLO works out. That doesn't mean I necessarily recommend it.

But, yes, I also feel we need the caveat that containers sometimes cause problems too, or at least they are not going to solve all the problems in your organization. This is where people wave their hands and say "DevOps" and think that will definitely solve every kind of deep-seated problem, technical and social, that exists inside your organization. Spoiler alert, so sorry, but that does not magically happen. Containers are going to give you something new to manage and new failure modes to worry about. So, just level setting: before we even get to Kubernetes, you should definitely look and see if containers solve a problem that you actually have. Because if what you have is a bunch of COTS software that you don't modify in any way, it's possible that lifting and shifting it into a container is not going to solve the problem that you have. Or it might, but again, this is going to depend on your use case.

I also think it's worth taking the hype cycle down a notch. I got my CS degree back in the '90s, back when underfunded computer science departments thought it was a good idea to hire undergraduates and give them root on their faculty members' machines. That worked out okay, though I'm not sure I would recommend it today. But we were using containers then. And maybe some of you folks are old enough that you have also used some of these things.

I'm not old enough to remember anything about chroot from the '70s. I remember the '70s, but I don't remember anything about chroot. But I did use FreeBSD jails and Solaris Zones. And you can definitely go watch Bryan Cantrill's video later if you want to hear some quality ranting about Sun and Solaris, because he's good at that. What I would point out here is that the history of containers is all about moving towards more usability for more people. With LXC, containers on Linux became a lot easier to use, building on the cgroups work that a bunch of Google engineers put into the kernel.

And when containers started becoming mainstream - I definitely think it was 2014 to 2015 when Docker, Docker, Docker was required in every conference presentation - they weren't the only folks working on it. I think the genius of Docker is that they made containers significantly more usable and accessible to a wider variety of people. You didn't have to be a kernel engineer or an operations professional; you could be just a developer who wanted to learn about it, take Docker, and start using it. And I think that was pretty valuable. I also think it's important to point out the Open Container Initiative, because having just one vendor in charge of the spec for something this important is not the best use of anyone's time or energy. So it was nice to see, around 2015, that things leveled out and became less stressful from the point of view of, are we going to have vendor lock-in on this whole container thing.

But even before 2015, and this is one of those things when I started looking at things for this presentation, I was surprised, because I thought Kubernetes kind of hit the scene in about 2015. I remembered something about it at OSCON in 2015, but actually that was the one year anniversary party. Kubernetes has actually been around since 2014. And I think that's valuable to consider because a lot of people in this room didn't raise their hands at the "using Kubernetes" question. And at the same time, the hype cycle would have you believe that, well, if you're not already using Kubernetes and possibly some sort of serverless with it or whatever, you're already behind. And I think that that's a false narrative in our industry that we should just stop listening to. Like, you're not behind if you're accomplishing your goals, but it is valuable to look at what this stuff does and what it can do for us.

So I think the summary there, before we get into specific tools, is that these are great tools, and we should use them if they solve a problem that we have. I was just down at a wonderful panel that Jessica was running, and one of the things we talked about is what happens when a tool doesn't solve a problem you actually have. I once worked with a young man who was very convinced that the next thing we should do was write a custom Erlang message bus. And I was like, "Interesting. What problem are you trying to solve with this project?" And without blinking, he said the problem was that he wanted to write a custom Erlang message bus. I was like, "Okay, at least you're honest about it." But you probably don't have infinite time and money and staff in your organization to solve a problem you don't have. So you should definitely use containers if they solve a problem that you have: problems around reproducibility and consistency. But it's a good idea to aim for the minimum viable complexity that meets your organization's actual goals.

Tools in the K8s Ecosystem

I haven't scared you off, you're not all running for the exits, you want to Kuber some netes. Let's talk about some tools in the wider ecosystem, because if you've started looking at Kubernetes, you've probably noticed that it doesn't have all of the pieces you need built into it. You start using it, and you're like, "Oh, we need to pick an overlay network now." Okay, first of all, what's an overlay network, and also how do we decide? Answer: you got me. I think Weave works fine, but at the same time, I have not thoroughly investigated them all - you could, though. There are a lot of rabbit holes you can go down there.

But I do have to put a caveat up there: introducing new tools, introducing new complexity, is not going to magically solve all your problems. My old boss, Tim Gross, was at DramaFever, the little streaming video company that was much like Netflix, if Netflix were much smaller and mostly Korean soap operas, and that started running Docker in production in 2013, you know, as one does. What he likes to call this is conservation of complexity. Say you break your monolith down into microservices: you've exchanged IPC for network latency. You've just moved the complexity around, and that's fine, as long as you don't fool yourself into thinking that the complexity will somehow magically disappear. It super won't. All of these exciting new architectures are going to make some things harder, and you have to make sure that the trade-offs are worth it in your use case. No vendor can answer that question for you. Only you can answer that question.

But, that said, say you decide you're definitely going to use an orchestrator like Kubernetes. And again, Kubernetes isn't even the only orchestrator out there: you could look at Nomad from HashiCorp, you could look at Mesos and Marathon from the fine folks at Mesosphere, and Docker has Docker Swarm. Those are just a few of the options. There are a lot of choices out there, and you have to look at the complexity and your use case and figure out what is actually right for you. Again, a vendor isn't going to be able to answer that question for you, because only the people who actually know your architecture and your needs can answer it.

I would say that if you have these sorts of needs, then you might need an orchestrator that looks suspiciously like Kubernetes. You need to schedule your jobs to run, but you don't necessarily care where they run in your cluster; or you do care where they run, and you need to be able to specify that they do, or don't, run near this other job, maybe sharing resources with it. You probably have some sort of health monitoring needs, and it would be nice if you don't have to bolt that on as an aftermarket addition - spoiler, that's just never going to fit well with your cluster.
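
As a sketch of what "run this job, but not next to that one" looks like in Kubernetes terms, pod anti-affinity expresses it declaratively; the names below are hypothetical.

```bash
# Hypothetical example: keep replicas of "web" off nodes that are
# already running a "web" pod, so one node failure can't take them all.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx:1.15
EOF
```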

And all of the things around how you do your failovers and your scaling - if you're like me and you've done the homegrown platform thing, you probably had a bunch of Docker and Packer and Chef all glued together, maybe on a cloud provider, hitting their APIs. There was definitely janky bash underlying your homegrown platform. If someone tells you there's no janky bash under their homegrown platform, they are lying. There's definitely bash in there.

But if you have components where you've built a lot of things, and you say to yourself, "They work, for some values of work," the question you always have to ask yourself, and it's kind of humbling, is "Should we replace this thing that we lovingly handcrafted?" And I would say the good test is how horrible is it to explain it to a new hire, and how face-palming is it to even talk about it when you're interviewing them? If you don't really want to show them your horrible architecture until they have started, until they've signed on the line that is dotted, yes, that right there is probably a smell. We should probably say goodbye to the thing we spent a lot of time lovingly handcrafting, because there are industry-standard things that didn't exist when we did all that work, but that we could move to now.

Sarah Wells from the Financial Times, the FT, just did a really good talk at Velocity New York last week that you can definitely go and watch the videos if you participate in O'Reilly's whole Safari thing. Or just go look online. I'm sure the video's out there somewhere. She's given that talk a few times. But they moved hundreds of microservices from their homegrown platform to Kubernetes, and it took them a long time. And in their retro at the end, they said "Honestly, we probably should have done it faster, and we should have done it sooner." And so this is just the incidental complexity that you have when you have built all of these things, or at least some of these things and you're kind of gluing them together with duct tape and baling wire. If you're in that position and you're wincing a little bit, yes, you probably want to look at a standard orchestrator.

I would say the coordinated app upgrade thing is one of those places where you have your carrot and your stick. The stick is "Oh man, it's really hard to onboard new people into something that is a little bit hand-whittled and terrifying." The carrot is that it can be so much easier to do upgrades if you don't have to have quite as much manual coordination. People like them some blue-green deployments, they like them some canary releases, and that's stuff you get built in for free with something like this.
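
As a minimal sketch of what that coordinated upgrade looks like once a deployment exists (the deployment, container, and image names are made up):

```bash
# Roll out a new image version; Kubernetes replaces pods gradually.
kubectl set image deployment/web web=myapp:1.1

# Watch the rollout converge, and back out if it goes sideways.
kubectl rollout status deployment/web
kubectl rollout undo deployment/web
```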

That's worth looking at, as well as, again, the service discovery. If you've ever had the "Oh, right after the release, we got paged because we didn't update the whatsit" experience, yes, having this built in is good. So basically the summary is: if you're choosing orchestrators, the portability is nice. I work at Microsoft, so I work at a cloud provider, and I would say the most common word I hear in discussions with customers is multi-cloud. Whether or not they realistically are going to migrate or bridge their entire infrastructure over multiple public and private clouds, people like to think that they at least have the option. And so choosing an orchestrator that you can use in your own data center or data centers, as well as across your cloud providers of choice, is generally considered something that people like.
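
On the service discovery point, the built-in mechanism is a Service object plus cluster DNS; a minimal sketch with hypothetical names:

```bash
# A Service gives the pods behind it a stable name and virtual IP.
# Other pods can reach it at http://web.default.svc.cluster.local
# (or just "web" from the same namespace), no manual config updates.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
EOF
```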

As it turns out, open source projects being worked on by the whole world, and not just you, may not be perfect for your use case. But that doesn't mean you should go build your own orchestrator, your own perfectly customized platform; then you're back to the trade-off that it's harder to onboard people or find people who know how to use it. If you adopt something like Kubernetes, you very well might need to do some custom resource definition work. That's a well-known pattern, and there are good options for building custom pieces that meet that other 10%, 15%, 20% of your workload needs, so it's worth looking at.
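
A custom resource definition is how you teach the API server about that last slice of your workload; here is a minimal sketch with a made-up Backup resource, written against the current apiextensions.k8s.io/v1 shape (at the time of this talk it was v1beta1):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string
EOF

# After that, "kubectl get backups" works like any built-in resource;
# a custom controller watches them and does the actual work.
```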

And then I think it's pretty much table stakes at this point: if you're going to use any kind of orchestration layer, you definitely want it to have auto-remediation of issues. This is, of course, a work-life balance thing if it means you're not getting paged a lot, but I think it's also valuable in terms of resiliency and how many nines you want. And, I mean, if your answer is, "All the nines. I would like all the nines," okay, that's super. Every additional nine of availability is going to be harder and harder to get, but you can get pretty far with Kubernetes out of the box.
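
Auto-remediation in practice is largely the kubelet restarting containers that fail their health checks; a minimal sketch of a liveness probe, with hypothetical image, path, and port:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    image: myapp:1.0
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
EOF
# If /healthz stops answering, the kubelet kills and restarts the
# container; no human gets paged just to bounce a process.
```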

So, a quick look at architecture stuff. People talk about your master, which I think is not the best term, for the obvious reasons, but also because your control plane can and often should be multi-node. These diagrams are from the fine folks at The New Stack, by the way; there are a lot of diagrams out there, but I wanted really simple ones, because I'm just going to cover some stuff at a pretty high level. You do want your control plane to be highly available, probably. However, all of your worker nodes aren't going to suddenly stop working just because you have a problem with your control plane. So that is one benefit to Kubernetes.

And your control plane is going to expose an API, which is the main thing that your apps and your nodes in general are going to talk to. If you are using a hosted option from a cloud provider, you don't operate the control plane nodes, and in most cases that I know about - I can't speak for every cloud provider - you don't even pay for them, because they're considered part of the underlying service. So it's a different model of thinking about it: even if you're operating the control plane yourself, you're not usually going to log into it to do things. You're mostly going to interact with the API, which is pretty rich and featureful.

And then it manages the cluster. The nodes have a container runtime and an overlay network, and they run an agent called the kubelet that they're controlled with. In terms of what runs on the master versus the worker nodes, your Kubernetes objects, like your pods, your replica sets, and so on, could be running on the master or on the worker nodes, depending on the taints and tolerations that you set. If that sounds like a bunch of gobbledygook, it basically just means that when you have a cluster with a lot of jobs happening at once, they might overwhelm your control plane in some way, so you can set it up so that some things, like cluster control things, only run on your control plane, and some things, like the jobs being launched by your software, only run on your worker nodes. And then you can set exceptions. So this is all very, very configurable, which is a good thing, and also a bad thing, in the sense that it's very complex.
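
As a concrete sketch of taints and tolerations (the node name, key, and values are made up): taint a node so ordinary work stays off it, and give only the pods that belong there a matching toleration.

```bash
# Keep ordinary workloads off a node by tainting it.
kubectl taint nodes control-plane-0 dedicated=control-plane:NoSchedule

# A pod that is allowed to land there carries a matching toleration.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cluster-admin-tool
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "control-plane"
    effect: "NoSchedule"
  containers:
  - name: tool
    image: alpine:3.8
    command: ["sleep", "3600"]
EOF
```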

But if we look into a node more specifically, you do need an image registry. You can use one from a cloud provider, or you can run the registry container yourself. It's not generally recommended to run the registry container on your production cluster; you probably want it to be outside the cluster. But when your pods launch, they're going to need to pull your container images from somewhere, so you have to have a container registry.
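
The registry step is just tag-and-push; a minimal sketch against a hypothetical registry name:

```bash
# Build locally, tag for your registry, push, and then reference
# that fully qualified image name in your pod specs.
docker build -t myapp:1.0 .
docker tag myapp:1.0 registry.example.com/team/myapp:1.0
docker push registry.example.com/team/myapp:1.0

# Pods then pull "registry.example.com/team/myapp:1.0" at launch,
# using an imagePullSecret if the registry is private.
```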

There are different levels of abstraction. There are the pods, which contain your application and any sidecars or anything else you're going to wrap with it. Then replica sets control what pods have been launched. And then there's another level of abstraction in Kubernetes that you'll probably use pretty frequently, called a deployment, which just governs your replica sets. But you don't have to use a deployment; there are other things you could use, called DaemonSets, if you want to run something on every node in the cluster, or StatefulSets, if you are very brave, or foolhardy, and want to put your state inside Kubernetes itself. There are a bunch of choices out there.
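
As a sketch of how those layers stack up: one Deployment, which manages a ReplicaSet, which keeps three pods of a hypothetical app running.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # the ReplicaSet it creates keeps 3 pods alive
  selector:
    matchLabels:
      app: web
  template:              # the pod template: container(s) plus any sidecars
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/team/myapp:1.0
        ports:
        - containerPort: 8080
EOF
```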

But, TL;DR, if you're using something like Kubernetes and you're trying to sell it inside your organization, this is probably your reasons slide. Assuming you get it all set up correctly, it can be a lot faster and easier to deploy your applications the same way every time. Is it the right way? Well, you have to decide that, but it'll at least be consistent - maybe consistently terrible, I don't know. And scaling: again, auto-scaling is not magical, and you're probably not going to auto-scale your way out of inadequate data stores or any other inadequate backing service your state lives in, whether that's your temporary or your permanent state. You can scale up your front-end worker nodes all you want, and then you have to determine whether that actually solves the problem, or just puts a lot more back pressure on the things behind what you were scaling. But it is possible. Rolling out new features: it becomes trivially easy to do your A/B testing or your blue-green deployments or your canary releases, however you want to implement that; it becomes a lot easier.

And then, and this is one of the things that, if you're using a cloud provider, you might be very excited about: you can put hard limits in for how far and how fast things will scale. I mean, wait, I work at a cloud provider, you should definitely scale as much as possible. But seriously, though, if you're running a private cloud in your data center, someone has to rack and stack more stuff. So you need to set up the cloud that you've created so that it's not going to try to scale past what actually exists, because I've been told that reality is actually not completely subject to our will. We'll have to work on that.
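
Those hard limits map to things like the horizontal pod autoscaler's floor and ceiling; a one-line sketch with arbitrary numbers:

```bash
# Scale the "web" deployment on CPU, but never below 2 or above 10 pods.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70
```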

So I put the idea of "day 2" in the title of this talk. I like to think of it this way. A lot of times, you do a launch of new software. Day one, it's really exciting, everyone has a party, maybe there's a slow-burning tire fire in the back and some ops people are really sad, but mostly everyone is excited about the new release. However, once you've successfully launched a project - I bet some of you have experienced this - some stakeholder says, "Excellent. That was great. Please do it 18 more times." And if you're wincing and going "Oh, dear": you need to actually be able to upgrade everything that you just did, you need to be able to keep it patched forever, and you need to be able to launch 17 more of them to serve the other differentiated stakeholders who want something slightly custom. You got something working once; now keep it working forever, because the longer it keeps working, the longer it's providing business value. And I feel like, yes, day one is short; day two lasts until the heat death of the universe, or until you turn that software off. It's not done until you've decommissioned it.

Operable K8s: Next Steps

So, if you want to build your Kubernetes so that day two is not going to be painful, there are a few steps that we're going to look at. I feel like this talk needs many caveats, so I'm just going to say this is a big and complex space. You should definitely do your own research. I'm going to show you some tools; that does not mean they are the right tools for you. Again, for all tools: do they solve a problem you actually have? But I am going to illustrate a few tools that you might want to consider. Getting started with Terraform and a hosted Kubernetes on your cloud provider of choice. Managing your configs, because so much YAML, and launching your apps. And then event-driven scripting - we'll talk a little bit more about that if you're not familiar with it.

So the first thing: a lot of times, especially if you're working across public and private clouds, it gets frustrating trying to keep your configs synced if you're not using a tool that works across all the providers. So if you're using something like ARM templates or CloudFormation right now, and you're not using Terraform from HashiCorp, I strongly suggest you take a look at it. It is a free open source tool, and it is worth checking out specifically because you can write your configs to work across clouds - with some caveats, of course, there are some changes you have to make - which can make things a lot easier, especially if you have your private data center but you're pretty sure you're going to do some cloud bursting, or you have cloud somewhere down your roadmap and you don't want to have to do as much rework. Something like Terraform is a really good way to do your configs, especially for Kubernetes, and this is actually kind of cool: if you use the Kubernetes provider inside Terraform, you can deploy pods and services to whatever Kubernetes clusters you have. So you can also use Terraform to deploy your services on top of the Kubernetes cluster.
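
A minimal sketch of that, assuming you already have a kubeconfig for a cluster; the pod here is made up, and this is simply the Terraform Kubernetes provider's pod resource used as an example rather than anything from the talk's slides:

```bash
# Describe a Kubernetes object in Terraform and let `terraform apply`
# converge it like any other resource.
cat > k8s.tf <<'EOF'
provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_pod" "example" {
  metadata {
    name = "terraform-example"
  }

  spec {
    container {
      name  = "web"
      image = "nginx:1.15"
    }
  }
}
EOF

terraform init    # fetches the kubernetes provider
terraform plan    # shows what would change
terraform apply   # creates or updates the pod to match the config
```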

So take a look at that, because especially if you're used to using Terraform just to stand up infrastructure, or cloud-based infrastructure, there's more there that you might not be aware is possible. So that's pretty cool. And for example, because I work at Microsoft, I'll say that there are providers for Azure and Azure Stack, but there are also providers for all of your cloud options. And in most cases it's people who work at the cloud providers who are working with HashiCorp to make the Terraform resources excellent. So it's worth looking at what they have there.

I'll point out, this is an example from Azure, but if you're a Google Cloud user, GKE is their equivalent of this, and if you're an Amazon user, EKS is their equivalent. This is what I mentioned earlier, where the cloud provider runs the control plane for you, and you can just launch worker nodes. It's probably worth looking at these, especially if it's your first foray into testing whether or not you want to use Kubernetes for a specific application, because it obviates a lot of the undifferentiated heavy lifting of becoming an expert in setting up the Kubernetes control plane - and if that's not a big value add for your organization, that might not be where you want to start. So this is an example of the kind of commands that your cloud provider of choice would have for a managed Kubernetes service, where you run just a few commands and you suddenly have a Kubernetes cluster up and running. Again, you do not have to use this cloud provider, but you probably should at least look at the cloud provider of your choice in terms of what's available for the managed option.
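
As a hedged example of that few-commands flow on Azure (the resource group, cluster name, and region are placeholders; GKE and EKS have their own equivalents):

```bash
# Create a resource group, then a managed cluster with three worker
# nodes; the control plane is run (and patched) by the provider.
az group create --name demo-rg --location eastus
az aks create --resource-group demo-rg --name demo-aks \
  --node-count 3 --generate-ssh-keys

# Fetch credentials and talk to the cluster with plain kubectl.
az aks get-credentials --resource-group demo-rg --name demo-aks
kubectl get nodes
```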

Another thing that your cloud provider of choice can do is upgrade your clusters. Upgrading your Kubernetes cluster can be a non-trivial task, so it's at least worth starting there until you build up enough expertise - not expertise to run self-managed, but expertise to decide whether or not you want to run self-managed. That is, again, something that's available in whatever cloud you're using.
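
The upgrade story on a managed service is similarly a couple of commands; again Azure as the example, with a hypothetical target version:

```bash
# See which Kubernetes versions the cluster can move to...
az aks get-upgrades --resource-group demo-rg --name demo-aks --output table

# ...then let the service cordon, drain, and upgrade nodes for you.
az aks upgrade --resource-group demo-rg --name demo-aks --kubernetes-version 1.12.4
```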

I want to talk a little bit about Helm. I've got to tell you, I just spent eight days at two different events in Europe and I was only home for 17 hours, but I was home long enough to open the mail and get my Helm T-shirt, which is exciting. The CNCF was just printing these; I'm not sure how many they have left, but if you're into that, you can go to the CNCF website and look at their Helm T-shirts. Helm is your package manager for Kubernetes. If you've started playing with Kubernetes, as maybe 25% or 30% of you have, you may have noticed that there is so much YAML. I give a three-hour Kubernetes workshop - I most recently gave it on Halloween in London, and I brought candy because it was a Halloween workshop - and one thing I point out is that you end up with so much YAML that I feel like we need a content warning for it. If YAML is upsetting to you, just so you know, Kubernetes has lots of it.

And the hard part is that if you're running dev and test and UAT and pre-production and staging and whatever other lower environments you have, plus production, you don't want to mix up which YAML goes with which, and what all the versions of the YAML are. That just ends up being a pain. So this open source project, which some colleagues of mine as well as people across different employers all work on, is a package manager that makes it easier for you to deal with all that YAML. There are four things that it does, and I'll go into a tiny bit of detail on each one, because I'm excited about it and it's a cool project: managing all that YAML, making it easier to update stuff, making it - and this is really key - easier to share all of the YAML that you've packaged with other parts of your org or with other people across the open source ecosystem, and rollbacks.

And then I feel like the word rollback always, I mean, I spent 15 years on call for production, so the word rollback just kind of makes the back of my neck itch. Because I'm like “yes, but rollbacks are a lie, because you don't have a time machine, so what you can actually do is roll forward to something that you really hope is what things were like in the past when they were working. Ooh, I hope there wasn't a bad schema migration in there.” So rollbacks are kind of not true, but you can at least very easily re-implement the version that you were using before.

So, more detail on this. A chart is just the collection of all the information about your YAML, in YAML, because why not, and it's what you use to describe your applications. This is one of those things where, if your application also needs this stateful service, and these backing data stores, and this session store - if you need to stand up a lot of things that go with your application - you can describe them all in, you guessed it, YAML. And it gives you a single point of authority for how this gets installed.
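
A quick way to see what a chart is, is to scaffold one; `helm create` generates the standard layout (the chart and release names here are made up, and the `--name` flag is the Helm 2 syntax that was current at the time of this talk):

```bash
helm create myapp
# myapp/
#   Chart.yaml          # name, version, description of the package
#   values.yaml         # the knobs: image tags, replica counts, etc.
#   templates/          # the Kubernetes YAML, templated against values
#   charts/             # any dependency charts this one pulls in

# Render and install it into the cluster as a named release.
helm install --name myapp-dev ./myapp
```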

You want to be able to do in-place updates because - and I don't know why this is - every once in a while I go to pay a bill or something, and it's 1 p.m. on a Sunday and the site is down for maintenance, and I'm like, "Yes, this is a dinosaur." Because really, if you say to your employer, "We want to take a maintenance window," they're just, "Is this retro day?" No, we don't do that. So you do want to be able to do in-place upgrades. And the sharing thing is cool, actually. If you go check out Helm, one of the things you'll see is, say you want to install Prometheus: you could read all the Prometheus docs, or you could just use Helm and install it. There are a lot of public Helm charts out there for open source packages that you want to use. And then the rollback thing: basically, you can re-release an older version pretty easily. So that's Helm.
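
For the sharing and rollback points, a sketch using Helm 2 syntax (Helm 3 drops `--name` along with Tiller); the release name is made up:

```bash
# Install somebody else's packaged YAML instead of writing your own.
helm install --name monitoring stable/prometheus

# Releases are versioned, so "rolling back" is re-releasing revision 1.
helm history monitoring
helm rollback monitoring 1
```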

Draft basically makes app development and deployment easier, and it's another open source project. All of these, by the way - the team registered the .sh domains, so it's helm.sh and draft.sh, and you can go look at all of the websites. Draft is nice because it auto-detects the language that you're working in and generates a Dockerfile and a Helm chart for you. So it's an up-and-running sort of thing; it's a lot easier to get started, and all of that can be versioned. Because it detects the language you're writing in, it makes things a lot easier.
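
The Draft workflow is essentially two commands, run from an application's source directory; the detected language just determines which pack it uses:

```bash
# Inspect the source tree, pick a language pack, and generate a
# Dockerfile plus a Helm chart for the app.
draft create

# Build the image, push it, and deploy the chart to your cluster.
draft up
```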

Another open source project is called Brigade. If you just saw the announcements out of GitHub Universe, you might be thinking, "Well, how is this different from GitHub Actions?" And, well, they have some similarities; there's some parallel evolution there. This is worth looking at if you say, "YAML is great, but I need to actually write some code that does a thing when I'm deploying this." Because with Kubernetes, it's not imperative like, say, most of the configuration management tools we're used to; it's declarative, which is to say you define a desired state, and then your controllers just continue to iterate to bring your cluster to that state. But sometimes you actually need to launch specific events. With something like Brigade, you can do that: it's a JavaScript runtime and you can define pipelines in it. So again, if you don't have those needs, cool, but if you do, this is one fairly decent way to address them.
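
A minimal sketch of the event-driven idea, based on the brigadier API as I understand it - treat the exact signatures as an assumption and check the Brigade docs. The script lives in the project repo as brigade.js and runs when the exec event fires (for example via `brig run`):

```bash
cat > brigade.js <<'EOF'
// Each Job is a container that Brigade schedules on the Kubernetes
// cluster when the "exec" event fires for this project.
const { events, Job } = require("brigadier");

events.on("exec", (e, project) => {
  const test = new Job("test", "alpine:3.8");
  test.tasks = ["echo hello from brigade"];
  test.run();
});
EOF
# Commit brigade.js to the project repo; `brig run <project-name>`
# (a hypothetical project name) triggers the pipeline.
```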

And then, especially if you're using something like Brigade, another open source project called Kashti gives you a simple UI where you can basically just see how your deployment went and how your event-based tasks went. If you want a dashboard, we've got a dashboard. Again, none of these things are built into Kubernetes, because even though Kubernetes sounds like a giant monolithic thing, it actually isn't. It's a whole bunch of choices that you have to make all the time. And these are some opinionated choices of projects that are open source and pretty good, and that I think people might like.

The Future

So now we have all sorts of building blocks for how to create our deployments. They're all repeatable, we can move all our environments to that, we have definitely dev'ed some ops, and we're done, right? Let's talk about the future. And, by the way, since I'm quoting Jed Bartlet from The West Wing, a fictional president who doesn't make me sad: if any of you are eligible voters and are in a location tomorrow where you can vote in the U.S. election, I strongly recommend you exercise your franchise and do that. I early voted weeks ago, and I will be on a plane all day tomorrow, so I will be hitting refresh a lot and being worried. I recommend everyone make sure that you do what you think is right for you.

But when I'm talking about the future - and with apologies to Ian Fleming - I will say that diamonds may be forever, but whatever you put into production right now, your children will probably be supporting it after you're sipping a cocktail on a beach somewhere. I mean, assuming we all make it past - what did they say, we have 12 years left before climate change kills us all? I used to worry about the 2038 problem, I really did. I used to worry a lot about the end of epoch time. And I've now moved to worrying about climate change, which is a lot sooner than 2038, because with 2038 I was like, "Unix time will run out, but I'll be retired. This is your problem." And now I'm like, "Oh, everything is my problem now." But I feel everything that we build, we need to build to be maintainable and sustainable. Because if it works at all, if it generates business value for your organization at all, they're going to want to keep building on it, and then later it looks like that "what have you built?" meme, because things are going to keep changing. Which means we want to preserve our optionality. We want to build things that are open source, that are flexible, that we can keep iterating on.

And I also want to point out that this Kubernetes thing is getting really real. I mean, we're four and a half years in, and this is not just trendy conference talk stuff. I am happy, or regretful, to inform you that your bank and airline and government are probably using it right now. How do you feel about those breaking changes now? Hmm. So it's valuable to think about building stuff that is sustainable. I would say: teach yourself things about Kubernetes. Container.training is the free open source workshop that Jérôme Petazzoni of Docker fame created; I've been helping him with it and running the workshop around the world, and I recommend trying some of the exercises yourself. I also recommend considering a managed option if that works for your business case and your needs, just because there's so much complexity in Kubernetes that any of it you can offload to someone else becomes their problem and not yours. So that is kind of a future-y thing.

Another future-y thing - and I will be posting these slides afterwards so you don't have to try to take a picture and painstakingly type that in - if you're interested in Helm, I am actually going to be taking some time off from being on the road and speaking at things, and I'm going to help PM the Helm 3 release that's coming up, which is exciting. It also means that that link right there on GitHub is where we're actively soliciting community feedback. So if you've started using Helm 2 and you're thinking, "Helm 3 is not going to have Tiller anymore. I have questions," there is a bunch of information out there, but we really are interested in your use cases and what the community thinks about this. There are a lot of other changes too, so it's worth taking a look at that. The sweetcode.io link is a blog post from my colleague Matt Butcher.

Another thing that is worth looking at, if we're talking about the future: as much as we cloud providers would like to forget about it, most people at large enterprises have some legacy. Legacy, by the way, is another word for where your customers and your money live. If you think it doesn't matter, just ask people, "Could I just turn this off?" And if they turn white and start shaking, yes, it matters a lot. But because people have legacy, they have data centers, and they're like, "Okay, cool. We want to do some Kubernetes, but how do we use all this capacity over here?" And so it's worth looking at things you can do with Virtual Kubelet and virtual node. These are other open source projects that basically let you add capacity to your Kubernetes clusters from other sources, and it just looks like another node on the cluster. So that is probably worth looking at if you have that sort of problem to solve.

Yes, so I'm basically just going to wrap up by saying I think it's really cool and fun and interesting to do this Kubernetes stuff. And if you're here and you're trying to learn things but you're also a little worried because it's like you're in quicksand and the world keeps changing around you- I got to tell you, some of the first systems I had root on were SunOS 4 and 3. Pour one out for Sun, but if we don't try these new things then we aren't getting the interesting advancements that help us learn more. And this is a quote from my colleague Erik St. Martin, he's one of the people who founded GopherCon, and he also works on the same cloud advocacy team as me. And he says "Hey, you know, with apologies to Halt and Catch Fire, Kubernetes is not ‘the thing’. It's not like we have Kubernetes now, we're done. Technology is over." No. I mean, it's the thing that will get us to the next thing, which from the look of the hype cycle right now I'm going to say is probably service mesh. So, you know, set your phasers to service mesh.

But I think that, hindsight being what it is, we didn't all look ahead and see everything that we have now coming, and it is okay that we just keep iterating. If someone says, "But we did all of that containerization and now you want orchestration," you can say, "They're not separate. They're not different; we're just building on the past." I also think it's really interesting - this quote is from Jeffrey Snover, the creator of PowerShell, and he pointed out just recently that Azure is more than 50% Linux. That's not something that I think the old Microsoft, whatever that was, would have done. Choosing to be an organization that is slightly older than I am, and has been very opinionated about open source in the past, and choosing to change in this direction - I think if you're in an organization where you really want to Kuber some netes and people are saying, "We have this fear and sadness, we're going to keep that," you can say, "Look, if Microsoft can change like this, we probably can too."

And when we're talking about open source, it's really important to be able to point something out to the people inside your organization who say, "Oh, business logic and continuity, we can't open source that," while you're thinking to yourself, "I really wish that my GitHub weren't empty, because it's all private." It might be worth pointing out that a fun startup out of Redmond they may have heard of - I think we're going to go far - just gave all of our patents to the Open Invention Network. If an actor that has definitely not been a good actor in this space can go from "open source is cancer" to "here are all our patents, they're open now," I think those are conversations we can all have inside our organizations. Because you have a previously bad actor who now is actually kind of a good actor that you can point to and say, "This is what we can do." So I think it's worth saying that it's not the world that it was.

It's possible that we live in the darkest timeline, I'm not going to rule that out, but at least we live in a timeline where I can tweet this. This is my pinned tweet. And the TL;DR, if you're like, "I'm not reading random tweets people put on screens", is like "Microsoft hired me and gave me a Mac and told me to get people to use Linux." And I was like "2017, now 2018, is definitely weird, but I kind of like it." And I think this is what we want, is to build our organizations to be more open and let us contribute back to the wider ecosystem.

If you want to learn more about some of the specific projects and things I talked about, my colleagues are going to be doing a free multi-city around the world tour, covering a bunch of these technologies. We just put a free learning platform out there called Microsoft Learn, so you can look at that. And that's it, that's what I've got. I really appreciate all of you being here, and go forth and open source all the things.

 


 

Recorded at:

Jan 10, 2019
