D2iQ Releases DKP 2.0 to Run Kubernetes Apps at Scale


D2iQ recently released version 2.0 of the D2iQ Kubernetes Platform (DKP), a platform to help organizations run Kubernetes workloads at scale.

The new release provides a single pane of glass for managing multi-cluster environments and running applications across any infrastructure including private cloud, public cloud, or at the network edge.

DKP 2.0 is built on Cluster API, a Kubernetes sub-project that simplifies creating, configuring, and managing multiple clusters, to support Day 2 operations out of the box. It also adds auto-scaling capabilities for workloads to improve availability, and support for immutable operating systems such as Flatcar Linux.
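Cluster API describes clusters declaratively as ordinary Kubernetes resources, which is the model DKP builds on. As an illustration (the names and the AWS provider below are placeholders, not DKP specifics), a minimal Cluster manifest might look like:

```yaml
# Hypothetical Cluster API manifest: a Cluster object that delegates
# infrastructure details and control-plane management to provider-specific
# resources via object references.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:            # provider-specific infrastructure (AWS here)
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: demo-cluster
  controlPlaneRef:              # managed control plane for the cluster
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
```

Because the cluster itself is just another declarative resource, a management plane can reconcile fleets of such objects across different infrastructure providers.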

InfoQ sat down with Tobi Knaup, CEO of D2iQ, at KubeCon+CloudNativeCon NA 2021 and talked about DKP 2.0, its relevance to developers, and the future of Kubernetes.

InfoQ: Why is DKP 2.0 a significant release for D2iQ?

Tobi Knaup: Version 2.0 is always special for any software company. It's the culmination of everything that we learned from our customers running it in production since we released 1.0. We've learned a lot there and built the roadmap for 2.0 together with our customers.

DKP 2.0 is a significant re-architecture of the platform. We did that because we want DKP and especially Kommander, which is one of the products in the platform, to be the central point of control for the enterprise to manage the entire Kubernetes fleet.

Today, we see the world moving towards multi-cloud, hybrid cloud, and edge, so it's important to have that central point of control. Kommander and DKP 2.0 are built on top of Cluster API, so they can manage the life cycle of any Kubernetes cluster on any infrastructure. Once those clusters are brought up, Kommander offers a whole set of Day 2 operations capabilities.

The other bit that's new in 2.0 is Flux for continuous delivery. We adopted it as we think it's a strong technology. It is Kubernetes-native, integrates nicely with other systems, ties into RBAC authentication, and can run on a namespace basis.
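The GitOps pattern Flux implements can be sketched with two of its core resources. The repository URL, namespace, and service account below are hypothetical, but the shape shows the properties Knaup mentions: it runs per namespace, and it ties into RBAC by applying manifests under a scoped service account:

```yaml
# Hypothetical Flux setup: watch a Git repository and continuously apply
# the manifests under ./deploy into the team-a namespace.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: team-a
spec:
  interval: 1m                      # how often to poll the repository
  url: https://github.com/example/app-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: team-a
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy
  prune: true                       # remove resources deleted from Git
  serviceAccountName: team-a-deployer  # RBAC scope for what can be applied
```

The per-namespace service account is what lets a platform team give each application team continuous delivery without cluster-wide permissions.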

The third major piece is that we added support for immutable operating systems. This was driven by conversations we had with our customers who are very security conscious. We work with large enterprises, predominantly federal government agencies, and supporting immutable operating systems helps them improve their security posture.

InfoQ: How can developers benefit from the new features in DKP 2.0?

Knaup: For developers, I think it's exciting to have Flux built in. Another thing worth mentioning, which is not part of the 2.0 announcement but one of our other products, is Kaptain, an end-to-end machine learning (ML) platform based on Kubeflow.

For ML developers, engineers, and data scientists, it is a seamless way to build models without ever leaving the notebook environment. Part of Kaptain is a Python SDK that you can use to train your models in a distributed way without having to know anything about Kubernetes.

The nice thing about Kaptain is that it is built in a modular way, because we know that a lot of organizations will run some components at the edge and some in the cloud, and you can set it up that way. A user can decide to train a model in the cloud on a particular cluster and later deploy it at the edge on another fleet of clusters.

InfoQ: Where do you think Kubernetes is heading?

Knaup: What I think is going to happen next is that many organizations that have been operating data services and stateful applications for a while are starting to ask questions such as: what should we do with all this data, and how can we get insights out of it? A lot of the time, that means building machine learning models and AI that leverage that data. We see a lot of organizations building next-generation products that include AI components.

For example, we work with a healthcare company that builds MRI and CT scanners with Kubernetes built in, and they're planning to have machine learning models built in as well. It makes perfect sense to then run those machine learning workloads on the same cluster as the microservices.

I think the other thing that's interesting about such machine learning apps is that the data and the models need to run where the new data is coming in, and that's often at the edge nowadays. For most enterprises, most of the data they consume and process originates at the edge, not in their cloud or data center. Teams can now run some workloads at the edge and others in the cloud using the same platform and the same user experience.

The third thing I see is that multi-cloud is becoming a reality, as a lot of enterprises want to deploy across multiple cloud providers for many reasons. We help them by providing a control plane they can use to manage their workloads across many cloud providers, Kubernetes clusters, the edge, or private cloud.

D2iQ, formerly known as Mesosphere, shifted away from its Mesos-based DC/OS platform a few years ago to focus on Kubernetes and Day 2 operations for cloud-native applications and platforms.

A free trial of DKP 2.0 can be requested through the company’s website.
