A Comparison of Some Container Orchestration Options
A recent article compares some of the container orchestration options available today. They range from open-source projects that can be self-hosted to containers-as-a-service offerings, from vendors that span startups to enterprise players.
The orchestration options share common features: container provisioning, launching and discovery, system monitoring and crash recovery, declarative system configuration, and mechanisms for defining rules and constraints around container placement and performance. Beyond these, some offer features that cater to more specialized needs.
The open source orchestrators include Docker Swarm, Kubernetes, Marathon and Nomad. These can be installed on-premises in your own datacenters or on most public clouds. Among these, Kubernetes is also available as a hosted solution as part of Google Container Engine. It schedules logical units called pods - a group of containers that are deployed together for a particular task. Pods can be used to compose higher abstractions like Deployments. Each pod can have both standard and user-defined health checks for monitoring. Kubernetes has also seen adoption in projects like OpenStack, both community-driven and vendor-supported.
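To illustrate the pod abstraction described above, a minimal Kubernetes manifest might look like the following sketch; the pod name, image, and probe settings here are illustrative placeholders, not taken from the article:

```yaml
# A minimal pod sketch: one container plus a liveness probe,
# Kubernetes's mechanism for user-defined health checks.
# Name and image are hypothetical examples.
apiVersion: v1
kind: Pod
metadata:
  name: example-web
spec:
  containers:
  - name: web
    image: nginx:1.11
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```

A Deployment would wrap a pod template much like this one and add replica management on top.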
Docker Swarm is Docker’s native offering for orchestration. With Docker 1.12, it added the “swarm mode” feature for orchestrating across multiple hosts, while Docker Swarm remains a separate product. It can be accessed via the Docker API and used with tools like Docker Compose for declarative orchestration of services and containers. Docker Swarm forms part of a bigger offering - the Docker Datacenter - that is aimed at enterprise container deployments.
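As a sketch of the declarative style mentioned above, a Compose file describes services that the Docker tooling then reconciles against the running state; the service name and image below are hypothetical:

```yaml
# A minimal Docker Compose sketch declaring one service.
# Service name, image, and ports are illustrative placeholders.
version: "2"
services:
  web:
    image: nginx:1.11
    ports:
      - "8080:80"
```

The same declarative file format is what makes multi-container setups reproducible across hosts managed by Swarm.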
Both Swarm and Kubernetes use YAML configuration files. Even though both are open source, Kubernetes does not have any dependencies on Docker and is one of the Cloud Native Computing Foundation (CNCF) projects. Both tools can, however, run on-premises as well as on public clouds like AWS.
The Marathon orchestration framework is based on the Apache Mesos project. Apache Mesos provides resource management and scheduling abstractions via APIs across datacenters that might be spread out physically. Systems on Mesos can use the underlying compute, network and storage resources much as virtual machines use underlying resources via a hypervisor. Marathon runs on top of Mesos and provides container orchestration capabilities for long-running applications. It supports both the Mesos and Docker container runtimes.
Amazon EC2 Container Service (ECS) and Azure Container Service are two hosted solutions, with the latter being the newer of the two. ECS supports running containers on AWS infrastructure only and can leverage AWS features like elastic load balancing and CloudTrail for logging. The ECS task scheduler groups tasks into services for orchestration. For persistent data storage, users can use data volumes or Amazon’s Elastic File System (EFS). Azure’s container service uses Mesos as the underlying cluster manager, and offers a choice of Mesosphere’s Datacenter Operating System (DC/OS), Kubernetes or Docker Swarm for orchestration.
Hashicorp’s Nomad is an open source offering that can support Docker containers as well as VMs and standalone applications. Nomad works on an agent model, with an agent deployed on each host that communicates with the central Nomad servers. The Nomad servers take care of job scheduling based on which hosts have available resources. Nomad can span datacenters and also integrates with other Hashicorp tools like Consul.
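The job-scheduling model described above can be sketched as a Nomad job file, written in HashiCorp's HCL configuration language; the job, group, and task names and the image are hypothetical placeholders:

```hcl
# A minimal Nomad job sketch using the Docker driver.
# The servers place the two instances on hosts with enough
# free resources to satisfy the declared requirements.
job "example" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 2

    task "server" {
      driver = "docker"

      config {
        image = "nginx:1.11"
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```

The datacenters list is what lets a single job span multiple datacenters, as noted above.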
The article concludes by stating that one of the deciding factors in choosing an orchestrator is whether lock-in to a particular infrastructure provider (like AWS or Azure) is acceptable.