Facilitating the Spread of Knowledge and Innovation in Professional Software Development

AWS Releases Multi-Cloud Kubernetes Autoscaler Karpenter

AWS recently released Karpenter, their open-source Kubernetes cluster autoscaler. It improves upon the existing Kubernetes Cluster Autoscaler by providing an easily configurable, fully automated scheduler. Karpenter monitors for unschedulable pods, launching new nodes to run them and terminating infrastructure that is no longer needed. Karpenter is designed to work with any Kubernetes cluster in any environment.

Karpenter observes the aggregate resource requests of unscheduled pods. Using this information, it decides whether to launch or terminate nodes to optimize cluster performance and cost. According to Channy Yun, principal developer advocate with AWS, Karpenter "performs particularly well for use cases that require rapid provisioning and deprovisioning large numbers of diverse compute resources quickly".

High level architecture drawing of Karpenter workflow (credit: AWS)

Karpenter's default provisioner can be configured with a number of constraints. These include defining taints to limit which pods can run on Karpenter-created nodes and setting defaults for node expiration. The provisioner can also be set up to use Kubernetes well-known labels, allowing pods to request only specific instances based on instance type, architecture, or zone. At the time of release, only the kubernetes.io/arch, node.kubernetes.io/instance-type, and topology.kubernetes.io/zone labels are implemented in Karpenter.
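Putting these constraints together, a provisioner manifest might look like the following sketch. Field names follow the Karpenter documentation at the time of release; the taint key and the specific values are illustrative assumptions, not defaults:

```yaml
# Sketch of a Karpenter Provisioner (values are illustrative).
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Taint Karpenter-created nodes so only pods that tolerate
  # this taint are scheduled onto them (hypothetical taint key).
  taints:
    - key: example.com/dedicated
      value: "true"
      effect: NoSchedule
  # Constrain provisioning via the supported well-known labels.
  requirements:
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-west-2a", "us-west-2b"]
  # Expire provisioned nodes after 30 days.
  ttlSecondsUntilExpired: 2592000
```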

Multiple provisioners can be declared within Karpenter. Karpenter loops through each configured provisioner, and if more than one matches, it selects one of the matching provisioners at random. As such, it is recommended to ensure provisioners are mutually exclusive.
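For example, two provisioners can be kept mutually exclusive by constraining each to a disjoint capacity type, so no pod can match both. This is a sketch under the same assumptions as above; the provisioner names are illustrative:

```yaml
# Sketch: two mutually exclusive provisioners, split by capacity type.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: spot
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]        # only matches pods requesting spot capacity
---
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: on-demand
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]   # only matches pods requesting on-demand
```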

Which node is selected to run a pod can be constrained in a number of ways. Karpenter supports requesting that the node have a certain amount of available memory or CPU. In addition to selecting nodes based on labels, Karpenter also supports node affinity, which selects a node with desired attributes expressed as key-value pairs. For example, to ensure that the selected node is within a specified zone and uses a specific capacity type, you could use the following:

affinity:
   nodeAffinity:
     requiredDuringSchedulingIgnoredDuringExecution:
       nodeSelectorTerms:
         - matchExpressions:
           - key: "karpenter.sh/capacity-type" # AND
             operator: "In"
             values: ["spot"]
           - key: "topology.kubernetes.io/zone" # AND
             operator: "In"
             values: ["us-west-2a", "us-west-2b"]

Karpenter also supports the use of Kubernetes topology spread constraints, which request that pods be spread out across the available topology domains (such as hosts or zones) to reduce the impact of an outage. This is done with the maxSkew property. A maxSkew of 1 indicates that the pod count may differ by no more than one across those domains.
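Topology spread constraints are declared on the pod itself using the standard Kubernetes topologySpreadConstraints field. The following sketch spreads matching pods evenly across zones; the pod name, label, and image are illustrative:

```yaml
# Sketch: request even spread across zones (maxSkew: 1 means the
# per-zone pod counts may differ by at most one).
apiVersion: v1
kind: Pod
metadata:
  name: spread-example        # hypothetical pod name
  labels:
    app: web                  # label the constraint selects on
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx            # illustrative image
```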

Karpenter can also be used to delete nodes that are empty of pods, or to give nodes a set expiration. Karpenter adds a finalizer to provisioned nodes, which changes the behaviour of kubectl delete node: the node is first drained, and then the underlying instance is deleted.

Karpenter is available under the Apache License 2.0 via GitHub. It is installed via Helm and requires some permissions within your environment to provision compute resources.
