
Q&A with Gabe Monroy of Microsoft on Azure Kubernetes Service from Build 2018


At the recently concluded Microsoft //build conference in Seattle, WA, there were a number of technical sessions on containers and Kubernetes. InfoQ caught up with Gabe Monroy, lead program manager for Containers on Azure, to discuss Azure Kubernetes Service (AKS). As previously reported on InfoQ, AKS recently went into General Availability.

Monroy talked about how the Azure cloud complements Kubernetes services, how the Deis acquisition has strengthened the Kubernetes offering, and how Microsoft is working with the community while also differentiating the service, for instance by integrating Azure Active Directory (AAD).

InfoQ: Straight off the bat, Kubernetes services are being offered on different clouds. How is AKS a differentiator and why should developers/architects care?

Gabe Monroy: When selecting a Kubernetes service, it's important to remember that not everything will run in Kubernetes. Your business will also depend on data stores, queues, functions, and other services, many of which will not run on Kubernetes. Therefore, I'd say the most important differentiator for a Kubernetes service is the cloud platform on which it's running. With Azure, you get access to technology like Azure Active Directory, Azure Traffic Manager, Azure Monitor, and, of course, our unmatched suite of developer tools.

When it comes to the AKS service, some specific differentiating features include:
  • Kubernetes RBAC with Azure Active Directory integration
  • Monitoring and logging integrated directly into the AKS portal
  • HTTP application routing with integrated Ingress and DNS zonefile management
  • GPU-enabled nodes for compute and graphics intensive workloads
  • Custom virtual network support for private networking
  • Persistent volume support with Azure Files and Azure Disks (see the sketch after this list)
  • Regulatory compliance with SOC, ISO, HIPAA, HITRUST
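
To make the persistent volume point concrete, here is a minimal, hypothetical PersistentVolumeClaim that an AKS cluster can satisfy with a dynamically provisioned Azure Disk. The claim name is illustrative, and the managed-premium storage class is an assumption -- it is commonly present on AKS clusters, but check yours with kubectl get storageclass.

    # Hypothetical PVC: asks the cluster to provision an Azure Disk
    # through the managed-premium storage class (assumed to exist).
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data                    # illustrative name
    spec:
      accessModes:
        - ReadWriteOnce                 # an Azure Disk attaches to one node at a time
      storageClassName: managed-premium
      resources:
        requests:
          storage: 5Gi                  # size of the provisioned disk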

InfoQ: You yourself were part of a company called Deis that was acquired by Microsoft. Can you comment on how the work that was being done at Deis has been or is being integrated into the Azure Kubernetes platform(s)?

Monroy: Deis had built a strong reputation in two areas:

  1. Building developer tools for Kubernetes - we developed a Heroku-style PaaS solution called Deis Workflow, which was extremely popular. It was the first PaaS solution that could run atop any Kubernetes cluster. We also built Helm, which evolved into the de facto package management solution for Kubernetes, used across the ecosystem. Helm was recently voted into the CNCF as a top-level project. We continue to actively develop Helm, along with other open source developer tools like Draft and Brigade.
  2. Helping large customers succeed with Kubernetes in production - at Deis we had a well-respected professional services outfit that helped customers like Mozilla, Hearst Corporation, and OpenAI succeed with Kubernetes at scale. Given how early this work was in the life of Kubernetes, we learned a ton about what it takes to operate Kubernetes at scale. This experience has proven invaluable while building AKS.

InfoQ: Azure Container Instances became Generally Available (GA) recently. Can you provide some recommendations to developers/architects about when to use AKS and when to use Azure Container Instances?

Monroy: As of last week, both Azure Container Instances and Azure Kubernetes Service are generally available, which is very exciting.

  • When should you use ACI? When you want containers but don't want the overhead of container orchestration. ACI is a good choice if you just want to run a few containers -- either to completion (i.e. a job that finishes) or indefinitely (i.e. a simple web app). ACI offers easy usage, per-second billing, and no virtual machines to manage. The ACI API is a lot like a virtual machine API, where you can create, read, update, delete, and list instances; they just happen to be containers instead of VMs.
  • When should you use AKS? When you want to run containers and you need container orchestration features, including zero-downtime updates, maintaining a number of replicas, container-to-container communication with service discovery, batch workflows, and more. AKS is also a great choice if you want the benefits of the Kubernetes ecosystem, including portability across environments and access to the large ecosystem of tooling and services in the CNCF space. (A minimal sketch contrasting the two follows below.)

The Azure Compute Decision Tree has a lot of detail on when to select a particular service, and goes beyond just AKS and ACI.
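
As a rough, hypothetical illustration of the difference: the Deployment below is the kind of object you hand to AKS to get replicas and rolling (zero-downtime) updates, while the commented line at the end shows the single-call style of ACI. The names and image are placeholders, not a recommended configuration.

    # AKS side: a Deployment gives you replicas and rolling updates.
    # (Illustrative manifest; apply with: kubectl apply -f hello.yaml)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web
    spec:
      replicas: 3                    # Kubernetes keeps three copies running
      selector:
        matchLabels:
          app: hello-web
      strategy:
        type: RollingUpdate          # old pods drain as new ones come up
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
            - name: web
              image: nginx:1.15      # placeholder image
              ports:
                - containerPort: 80
    # ACI side (no orchestration): roughly a single call such as
    #   az container create --resource-group <rg> --name hello --image nginx
    # creates one container group, with no cluster to manage.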

InfoQ: The Kubernetes ecosystem is evolving quickly, and that is a challenge for developers. For instance, for an enterprise developer who wants to use a favorite monitoring tool such as Nagios or Ganglia, it becomes difficult to a) monitor the Kubernetes cluster and b) use existing, time-tested tools. What is the AKS approach in general to some of these features which are still evolving in the Kubernetes ecosystem, and in particular towards monitoring?

Monroy: AKS aims to provide a stellar out-of-the-box experience for monitoring and log aggregation. In fact, at our //build conference in Seattle this year, we showcased Container Health monitoring inside the Azure portal. This provides a dashboard where you can view containers and associated monitoring metrics, and even drill directly into container logs -- all within the Azure portal, and all enabled by default. This is the experience I want as a developer.

For customers who want to run their own monitoring solutions, AKS supports this via Kubernetes DaemonSets, which can install agents on all of the VMs in your Kubernetes cluster. Helm makes it easy to install off-the-shelf monitoring packages for common solutions. However, choosing to roll your own monitoring solution will always be more work than using the integrated solutions we provide as part of the product. Which path you choose is up to you.
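
As a rough sketch of the DaemonSet approach Monroy describes, the manifest below runs one copy of a monitoring agent on every node in the cluster. The agent image and mounted path are placeholders for whatever your tooling requires; an off-the-shelf chart installed with Helm (for example, helm install stable/prometheus on the Helm 2 of that era) typically generates objects like this for you.

    # Hypothetical DaemonSet: schedules one monitoring-agent pod per node.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: monitoring-agent
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: monitoring-agent
      template:
        metadata:
          labels:
            app: monitoring-agent
        spec:
          containers:
            - name: agent
              image: example.com/monitoring-agent:latest   # placeholder image
              volumeMounts:
                - name: varlog
                  mountPath: /var/log                       # read node and container logs
                  readOnly: true
          volumes:
            - name: varlog
              hostPath:
                path: /var/log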

InfoQ: The perception is that containers and Kubernetes introduce additional security risks. What does AKS do to address this?

Monroy: The issue is not that Kubernetes and containers are insecure, it's that they change the security model in fundamental ways. IP addresses have historically been used to construct access control lists for firewall rules. In the world of containers, IP addresses change so fast they can't be used for firewall rules, so instead security teams need to use label-based firewall rules (e.g. role=frontend can talk to role=backend). Similar changes are true for runtime security, where enforcing process capabilities and syscall filtering becomes important.
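
A label-based rule of the kind Monroy mentions looks roughly like the NetworkPolicy sketched below, which allows pods labeled role=frontend to reach pods labeled role=backend while blocking other ingress. This is a generic Kubernetes example and assumes the cluster's network plugin actually enforces NetworkPolicy objects.

    # Hypothetical label-based "firewall rule": only role=frontend pods may
    # connect to role=backend pods, regardless of their changing IP addresses.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend
    spec:
      podSelector:
        matchLabels:
          role: backend            # the pods being protected
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  role: frontend   # only frontend pods may connect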

In general, I would say the security posture with a well-run container infrastructure is better than legacy VM infrastructure. For example, as of our general availability release, AKS now supports integrated Azure Active Directory for identity and group management. This provides a secure solution for managing user accounts and groups in a central location, which is then respected by Kubernetes. The user identities in Azure AD also form the basis of our Kubernetes RBAC support. Active Directory is a unique feature of Azure, and we're thrilled to see it so deeply integrated into the product.
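
To sketch how those AAD identities plug into Kubernetes RBAC on an AAD-enabled cluster: a RoleBinding can reference an Azure AD group by its object ID, so membership in that group grants the bound permissions. The namespace, binding name, and group ID below are placeholders.

    # Hypothetical binding: members of an Azure AD group get read-only access
    # in the dev namespace. The group is referenced by its AAD object ID.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: dev-readers
      namespace: dev
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view                    # built-in read-only role
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: "00000000-0000-0000-0000-000000000000"   # placeholder AAD group object ID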

InfoQ: Can you provide a roadmap for AKS and the plan for working with the Kubernetes community going forward?

Monroy: We will soon be adding support for multiple node pools, integrated cluster auto scaling, and enhanced security features. We will also be adding many more regions and better integrating AKS into the overall Azure platform. Creating better developer tooling and user experiences will also remain a big focus for us.

One thing I'm particularly excited about is the work we are doing with the Virtual Kubelet in the Kubernetes community. Customers are enamored with the idea of a "serverless Kubernetes" experience where you can use the Kubernetes API on demand, featuring per-second billing and no virtual machines. Look out for more updates on serverless Kubernetes in the near future.

Keynote sessions and other recordings, including a video recording of the session, are available via the Build 2018 website.
