
Config Management Camp: BOSH, CoreOS and Kubernetes

Andrew Clay Shafer, senior director of technology at Pivotal, presented at Config Management Camp on BOSH, the tool used to deploy the Cloud Foundry PaaS, while Kelsey Hightower, developer advocate at CoreOS, talked about CoreOS and Kubernetes, the open source project started by Google to manage clusters of Linux containers.

BOSH: Configuring Services

Andrew Clay Shafer introduced Cloud Foundry BOSH (slides), a project started by VMware and now part of Pivotal, that unifies release engineering, deployment, and lifecycle management of small and large-scale cloud software, and can provision and deploy software over hundreds of virtual machines. It can be used to deploy software other than Cloud Foundry, such as Hadoop, and supports multiple infrastructure providers such as VMware vSphere, Amazon Web Services EC2 and OpenStack. It also performs monitoring, failure recovery, and software updates with zero-to-minimal downtime.

Shafer made clear that BOSH and Cloud Foundry target large deployments:

Do not use BOSH and Cloud Foundry to deploy a single instance of WordPress. These technologies can help you to scale to thousands and tens of thousands of virtual machines, but think about what you need.

BOSH allows developers to version, package and deploy software in a reproducible manner. It is a distributed package and process orchestrator with idempotent resource abstractions that declare a desired state. That state is managed at the service level, not at the server level.
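
BOSH deployments are described declaratively in a manifest. The following is an illustrative, heavily abbreviated sketch of that shape, written as a Python dictionary rather than the YAML that BOSH actually consumes; the deployment, release and network names are invented and the keys only approximate the classic manifest format.

    # Illustrative sketch only: an abbreviated, BOSH-style deployment manifest
    # expressed as a Python dict. All names and values are hypothetical.
    manifest = {
        "name": "example-deployment",
        "releases": [{"name": "example-release", "version": "latest"}],
        "networks": [{"name": "default", "type": "dynamic"}],
        "resource_pools": [{"name": "small", "network": "default", "size": 4}],
        "update": {"canaries": 1, "max_in_flight": 2},   # rolling-update policy
        "jobs": [{
            "name": "worker",            # a service, described by its desired state
            "instances": 4,              # BOSH converges the deployment to 4 instances
            "resource_pool": "small",
            "networks": [{"name": "default"}],
        }],
    }

    print(manifest)                      # real manifests are written in YAML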

For developers needing to manage a large distributed system, Shafer stated that config management is necessary but not sufficient:

If you are running installers with your config management tool, you do not understand what configuration management is.

He also answered the question about how to ensure services start in a specific order:

Don't do it. When you have an opportunity to do an automation project, one way to do it wrong is automating what you are doing right now. Take the opportunity to revisit the decisions you made before and think about how you do things.

Managing Containers at Scale with CoreOS and Kubernetes

Kelsey Hightower provided an overview of CoreOS and Kubernetes, including a live demo (slides). CoreOS is a container-optimized operating system, minimal in both size and features, with no package manager and with automatic updates. Hightower stated his guiding principle for infrastructure design:

How would you design your infrastructure if you couldn't login? Ever.

Kubernetes is a project that covers container management, scheduling and service discovery. Containers are neither golden images nor lightweight virtual machines, but Unix processes with a runtime environment defined by cgroups, namespaces and environment variables. Applications running in Kubernetes are managed through an API, with agents watching endpoints for state changes in real time and consensus provided by etcd. Kubernetes controllers enforce the desired state, including the number of containers to run, in a purely declarative model.
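
The declarative model boils down to a reconciliation loop: a controller compares the desired state against the observed state and acts on the difference. The Python sketch below only illustrates that idea; it is not Kubernetes code, and all the function names are made up.

    import time

    def reconcile(desired_replicas, running_pods, start_pod, stop_pod):
        """One reconciliation step: converge observed state towards desired state."""
        diff = desired_replicas - len(running_pods)
        if diff > 0:
            for _ in range(diff):
                start_pod()                      # too few pods: create more
        elif diff < 0:
            for pod in running_pods[:-diff]:
                stop_pod(pod)                    # too many pods: remove the surplus

    def control_loop(get_desired, get_running, start_pod, stop_pod, interval=5):
        """Hypothetical driver: real controllers watch etcd instead of polling."""
        while True:
            reconcile(get_desired(), get_running(), start_pod, stop_pod)
            time.sleep(interval)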

The main resource types in Kubernetes are nodes, pods, replication controllers and services. The nodes run containers and proxy service requests, and can do dynamic service lookups. CoreOS uses flannel, an overlay network that gives a subnet to each machine for use with Kubernetes, to manage the network communication between containers across hosts.
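
To illustrate flannel's model, each host receives its own slice of a larger overlay network, so containers on different machines can reach one another by IP address without port mapping. The short Python sketch below just carves a hypothetical /16 overlay into per-host /24 subnets; the addresses and host names are made up.

    import ipaddress

    # Hypothetical overlay network; flannel keeps a similar configuration in etcd.
    overlay = ipaddress.ip_network("10.1.0.0/16")

    # One /24 subnet per machine; containers on a host get addresses from its subnet.
    host_subnets = overlay.subnets(new_prefix=24)
    hosts = ["core-01", "core-02", "core-03"]        # made-up host names

    for host, subnet in zip(hosts, host_subnets):
        print(host, subnet)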

Pods represent a logical application, grouping together one or more containers with shared volumes and a shared network namespace, optionally managed by replication controllers. A replication controller manages a replicated set of pods, creating them from a template and ensuring that only the specified number of pods are running. It also allows rolling updates from the command line interface, with kubectl rollingupdate.
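
A replication controller is itself just another declarative resource: a pod template plus a replica count and a label selector. The sketch below shows the rough shape of such a definition as a Python dictionary; the field names loosely follow the Kubernetes API of the time, and the image, ports and labels are invented for illustration.

    # Illustrative only: approximate shape of a replication controller definition.
    replication_controller = {
        "kind": "ReplicationController",
        "metadata": {"name": "memcached"},
        "spec": {
            "replicas": 3,                            # desired number of pods
            "selector": {"app": "memcached"},         # label query matching the pods
            "template": {                             # pod template for new pods
                "metadata": {"labels": {"app": "memcached"}},
                "spec": {
                    "containers": [{
                        "name": "memcached",
                        "image": "memcached:1.4",     # hypothetical image tag
                        "ports": [{"containerPort": 11211}],
                    }],
                },
            },
        },
    }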

Kubernetes services enable service discovery for pods through a proxy that runs on each node. Each service gets its own IP address, avoiding port collisions, and load is distributed with a basic round-robin algorithm across the backends selected using label queries; every resource in Kubernetes can carry labels.
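
Conceptually, the per-node proxy resolves a service's label query to a set of backend pods and round-robins traffic across them. The Python sketch below only illustrates that selection logic, with made-up pod names and addresses; it is not the actual proxy implementation.

    import itertools

    # Hypothetical pod inventory: every resource can carry arbitrary labels.
    pods = [
        {"name": "memcached-1", "ip": "10.1.1.2", "labels": {"app": "memcached"}},
        {"name": "memcached-2", "ip": "10.1.2.7", "labels": {"app": "memcached"}},
        {"name": "pgview-1",    "ip": "10.1.3.4", "labels": {"app": "pgview"}},
    ]

    def select_backends(pods, selector):
        """Return the pods whose labels match every key/value pair in the selector."""
        return [p for p in pods
                if all(p["labels"].get(k) == v for k, v in selector.items())]

    # A service is a stable entry point in front of a label query.
    backends = itertools.cycle(select_backends(pods, {"app": "memcached"}))
    for _ in range(4):
        print(next(backends)["ip"])                  # naive round-robin across backends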

The source code used in the demo is available on GitHub, showing how to run pods with pgview and memcached and how to perform a rolling upgrade across multiple pods. InfoQ has covered Kubernetes in a previous article.
