
Q&A with the Kismatic Team: The Past, Present and Future of Kubernetes

InfoQ recently sat down with Joseph Jacks and Patrick Reilly from Kismatic Inc., a company offering enterprise Kubernetes support, and asked about their thoughts on the recent Kubernetes v1.0 launch, the history of the project, and how this container orchestration platform may impact the future of microservice deployment.

Kismatic, a company offering enterprise support for Kubernetes and Docker, is closely involved with the open source Kubernetes ecosystem, and is one of the founding members of the recently announced Cloud Native Computing Foundation (CNCF). Patrick Reilly, CEO of Kismatic, has worked with clustering and container technology for quite some time, and has contributed to the open source ‘kube-ui’ web-based Kubernetes UI and other projects. Joseph Jacks, VP of Technology Strategy at Kismatic, has also worked closely alongside the development of Kubernetes technology, and has publicly voiced support for open foundations as a way to drive forward mass adoption of this technology.

InfoQ sat down with Reilly and Jacks, and asked about the history of Kubernetes, how the platform is currently being used, and what the future holds for this technology.

InfoQ: Kubernetes is famously inspired by Google's Borg and "The Datacenter as a Computer" collection of papers. What does this heritage bring to Kubernetes in comparison with other offerings?

Kismatic: Over the past 7 years, various popular open source distributed systems software projects have been created based on the concepts and principles outlined in these papers. For example, Cloud Foundry (CF) would simply not exist without Google, since the two founding engineers who initially conceived of and created CF (Derek Collison and Vadim Spivak) spent several years at Google working heavily with Borg prior to building CF at VMware. Mesos is another great example of an open source project in this vein, created not in industry but in academia, as a computer science PhD thesis at UC Berkeley. The primary creator of Apache Mesos, Ben Hindman, interned at Google and implemented Mesos based on conversations with Google engineers who provided insights into some of the cluster scheduling aspects of Borg. Thus, we would also not have Mesos without Google. Finally, what makes us extremely excited about Kubernetes is that the very engineers who spent the last 10+ years creating, scaling and managing Borg inside Google, at a scale of millions of machines, are now working in the open on GitHub, giving the world the chance to interact with them transparently. Kubernetes is, in essence, the compression of over a decade of learnings from running large distributed cloud infrastructure at scale. As Urs Hölzle says, “When you spend 15 years working on cloud infrastructure, you learn what works - and what doesn't”.

InfoQ: We've heard talk that the primitives provided by Kubernetes (pods, replication controllers, services) make it easier for a developer to understand the technology in comparison with more coarse-grained IaaS/PaaS primitives. What are your thoughts on this?

Kismatic: This is true. Kubernetes provides some amazingly powerful building blocks for managing the lifecycle of distributed microservices, with Linux containers as the core packaging and isolation mechanism. In working with enterprise customers at Kismatic, we’ve found that the primitives in Kubernetes provide just the right level of automation: the developer can reason about how to make a service highly available and how to route traffic intuitively to the various components of an application, while retaining a great degree of flexibility in how a service is constructed and discretely managed. Kubernetes lets the developer think in terms of datacenter resources and (micro)services instead of machines or config files. You could therefore think of Kubernetes as a single API for managing your data center.
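
To make these primitives concrete, here is a minimal sketch of how they fit together as Kubernetes v1 manifests: a replication controller whose embedded pod template describes the containers to run, and a service that routes traffic to those pods. The names, labels and container image are hypothetical placeholders.

    # A replication controller keeps a desired number of identical pod replicas running
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-rc                 # hypothetical name
    spec:
      replicas: 3                  # Kubernetes recreates pods as needed to maintain this count
      selector:
        app: web
      template:                    # the pod definition: one or more co-located containers
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.9       # hypothetical image
            ports:
            - containerPort: 80
    ---
    # A service gives the pods a single stable virtual IP and DNS name
    apiVersion: v1
    kind: Service
    metadata:
      name: web-service            # hypothetical name
    spec:
      selector:
        app: web                   # traffic is load balanced across pods carrying this label
      ports:
      - port: 80
        targetPort: 80

Both objects would typically be created with kubectl create -f <file>; the service then routes traffic to whichever replicas the controller keeps alive, which is what lets the developer reason in terms of services rather than individual machines.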

InfoQ: Kubernetes is getting a lot of press coverage about the benefits it provides for building microservices. Do you have any useful patterns or references to share in regards to this?

Kismatic: Sure! Many of the patterns we have been strongly advocating at Kismatic are inspired by Kubernetes co-creator Brendan Burns. Brendan recently wrote a blog post based on his DockerCon talk, in which he outlines a few patterns that are very useful in understanding how to effectively design and build a microservice architecture in Kubernetes. Brendan points out that service-oriented architectures (SOA) encouraged the decomposition of applications into modular, focused services; in much the same way, containers should encourage the further decomposition of these services into closely cooperating modular containers. He continues, “By virtue of establishing a boundary, containers enable users to build their services using modular, reusable components, and this in turn leads to services that are more reliable, more scalable and faster to build than applications built from monolithic containers.” The post wraps up with three popular example patterns built around the core atomic unit of scheduling in Kubernetes, the Pod abstraction.
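
To illustrate the composite-container idea, the sketch below shows a pod applying a sidecar-style pattern: a web server container and a content-syncing helper cooperating through a shared volume. The container names and images are hypothetical.

    # Illustrative "sidecar" composite pattern: two cooperating containers in one pod
    # sharing a volume (names and images are hypothetical)
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar
    spec:
      volumes:
      - name: shared-content
        emptyDir: {}                       # scratch volume shared by both containers
      containers:
      - name: web                          # main container serves the content
        image: nginx:1.9
        volumeMounts:
        - name: shared-content
          mountPath: /usr/share/nginx/html
      - name: content-sync                 # sidecar keeps the shared volume up to date
        image: example/content-sync:1.0    # hypothetical image
        volumeMounts:
        - name: shared-content
          mountPath: /data

Because both containers live in one pod, they are scheduled together and can cooperate through the filesystem, while each remains a separately built, reusable component.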

InfoQ: Due to the ease of provisioning and orchestrating applications, Kubernetes could be set to disrupt how traditional Dev and Ops teams work together. Do you have any thoughts on the future skillsets that will be required among full-stack developers, platform teams and SREs?

Kismatic: In a future where Kubernetes is used as a core datacenter and distributed systems building block, we can expect one major thing to happen: developers will be far more productive. The real goal of Kubernetes is to decouple applications from the underlying infrastructure and offer developers an elegant abstraction over the minutiae of back-end infrastructure configuration. We have worked with customers who told us their developers used to spend 70%+ of their time on back-end infrastructure tasks before using Kubernetes. Once they were able to define their core services in Kubernetes and adopt the new model, they were freed up to spend 90%+ of their time on new features and writing code, with less than 10% spent on back-end infrastructure. Platform engineers and SREs also become much more productive, because many of the manual tasks they previously had to spend a great deal of time on (like updating and maintaining configuration management cookbooks and recipes) almost go away completely. Google’s Borg paper outlines how Borg SREs are able to manage tens of thousands of machines each.

InfoQ: Kubernetes v1.0 has recently been launched. How important is this milestone within the Kubernetes ecosystem, and what are your thoughts on the functionality that did (and didn't) make it into this release?

Kismatic: Kubernetes reaching v1.0 is a big deal. In just one year of existence, there have been 15,000+ commits from 420+ contributors! We can’t recall any other open source distributed systems project reaching this kind of momentum in such a short period, barring perhaps the Docker project itself. In February of this year, many contributors from the Kubernetes community met in San Francisco and deliberated on the general scope of the v1.0 release. Many of those ambitions are now a reality. As reported by Google’s Craig McLuckie, here are some of the main features in Kubernetes v1.0, a production-scale release that has been tested up to hundreds of nodes (physical or virtual) and is able to run many thousands of containers. Of course, this is only the beginning:

App Services, Network, Storage:

  • Includes core functionality critical for deploying and managing workloads in production, including DNS, load balancing, scaling, application-level health checking, and service accounts (a sketch of such a health check follows this list)
  • Stateful application support with a wide variety of local and network based volumes, such as Google Compute Engine persistent disk, AWS Elastic Block Store, and NFS
  • Deploy your containers in pods, a grouping of closely related containers, which allows for easy updates and rollback
  • Inspect and debug your application with command execution, port forwarding, log collection, and resource monitoring via CLI and UI.
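
As a brief illustration of the application-level health checking mentioned above, a pod manifest can declare a liveness probe directly on a container; the kubelet then restarts the container when the check fails. The name, image and health endpoint below are hypothetical.

    # Illustrative application-level health check via an HTTP liveness probe
    apiVersion: v1
    kind: Pod
    metadata:
      name: probed-app               # hypothetical name
    spec:
      containers:
      - name: app
        image: example/app:1.0       # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz           # hypothetical health endpoint exposed by the app
            port: 8080
          initialDelaySeconds: 15    # give the app time to start before probing
          timeoutSeconds: 1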

Cluster Management:

  • Upgrade and dynamically scale a live cluster
  • Partition a cluster via namespaces for deeper control over resources. For example, you can segment a cluster into different applications, or test and production environments (see the sketch after this list).
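
A minimal sketch of that namespace partitioning, with illustrative names: the cluster is split into a test and a production environment, and workloads are then targeted at one or the other.

    # Two namespaces partitioning one cluster into test and production environments
    apiVersion: v1
    kind: Namespace
    metadata:
      name: test
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: production

Workloads and queries are then scoped to one environment with the --namespace flag, for example kubectl --namespace=test get pods.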

Performance and Stability:

  • Fast API responses, with containers scheduled < 5s on average
  • Scale tested to 1000s of containers per cluster, and 100s of nodes
  • A stable API with a formal deprecation policy

As for features that did not yet make it into the v1.0 release, we are excited to see the roadmap unfolding very quickly.

InfoQ: Docker has seen amazing growth over the past two years. Do you think Kubernetes could emulate this growth in adoption?

Kismatic: Definitely. As mentioned previously, Kubernetes is already roughly on par with the core growth metrics Docker saw within its first year.

InfoQ: There are quite a few organisations rallying around Kubernetes, e.g. Google, Red Hat, and your own Kismatic, and the announcement of the Cloud Native Computing Foundation (CNCF) alongside the Open Container Initiative (OCI) has been an interesting recent development. How well do the goals of the organisations involved align, and how does the community benefit from this commercial and non-commercial involvement?

Kismatic: There are indeed a number of large and important companies rallying around Kubernetes as the ecosystem continues to develop. We are very excited to be a founding member of both the OCI and the CNCF initiatives. Since early this year we have been campaigning for Google to decouple itself from the Kubernetes project, as the industry benefits greatly from having a neutral entity oversee the project's evolution. It is exciting to see this finally happen with the formation of the CNCF.

InfoQ: Could you tell us a little about the core offerings of Kismatic, and how they compare with those of other commercial Kubernetes vendors?

Kismatic: Definitely. Since our founding, Kismatic has been laser focused on executing our primary strategy as a company: spending most of our time with early Kubernetes adopters and enterprise customers, listening to them, gathering feedback and learning about real-world production requirements. Listening to early adopters of Kubernetes has paid off in spades for us. We initially observed that simplifying the operator experience for Kubernetes would be crucial to broader and more rapid adoption, so in collaboration with Google we created the native WebUI console for visualizing Kubernetes. This dashboard has quickly become the standard way to see pod and cluster utilization levels at a glance and to introspect events and services using point-and-click interfaces.

Last week, we announced Kismatic’s commercial support subscriptions and production plugins for open source Kubernetes. We decided to build our support and commercialization strategy around pure open source Kubernetes after hearing our customers express concern over fragmentation risk and vendor lock-in. Since Kubernetes should run anywhere, we didn't want to impose any opinionated restrictions on the underlying Linux distribution and especially wanted to avoid forking Kubernetes with a separate release cadence and bits.

Kismatic’s enterprise support subscriptions for open source Kubernetes will be initially focused on five major Linux distributions: RHEL, CentOS, Debian, Fedora and Ubuntu. Customers will be able to leverage 24/7 indemnified support SLAs for their production deployments of Kubernetes on any supported environment.

In addition to providing distro-agnostic enterprise support for any open source Kubernetes deployment, Kismatic is opening up a beta program for enterprises who require production-grade integrations for rich security, governance, auditing and access controls of their microservices clusters. We will initially be focusing on the following enterprise plugins that can be deployed in any open source Kubernetes cluster: RBAC support, LDAP/AD integration, Kerberos encryption and rich auditing and logging persistence for compliance purposes.

InfoQ: Thanks for your time. Is there anything else you would like to share with the InfoQ readers?

Kismatic: Thank you very much for the opportunity to share what we are up to and our general thoughts in this area. It was a pleasure to speak with you.

Additional information on Kubernetes can be found on the kubernetes.io website and within the project’s GitHub repository. The Kismatic website and company GitHub account contain more details of the services offered around Kubernetes and the open source contributions being made.
