Rancher Labs Makes Longhorn Generally Available

Rancher Labs, creator of the Rancher Kubernetes management platform, has made Longhorn, a cloud-native block storage solution, generally available (GA). Longhorn provides a vendor-neutral persistent storage solution that supports the development of stateful applications within Kubernetes.

Longhorn is open-source, distributed block storage built using microservices. It was initially released in beta in 2019, and since then users have been stress-testing it as a Cloud Native Computing Foundation (CNCF) Sandbox project. The GA version of Longhorn delivers a set of features important for enterprise use cases: thin provisioning, snapshots, backup and restore, non-disruptive volume expansion, cross-cluster disaster recovery volumes with defined RTO and RPO, live upgrades of Longhorn software without impacting running volumes, and Kubernetes CLI integration along with a standalone UI.

Longhorn aims to help teams simplify the deployment of persistent storage without the cost of proprietary alternatives. It also aims to reduce the resources required to manage data and operate environments, enabling teams to improve throughput and stability.

To keep up with the growing scale of cloud and container-based deployments, distributed block storage systems are becoming increasingly sophisticated. The number of volumes a storage controller serves continues to increase. Modern cloud environments require tens of thousands to millions of distributed block storage volumes, and so storage controllers have evolved to be highly complex distributed systems.

Distributed block storage is simpler than other forms of distributed storage, such as file systems, since no matter how many volumes the system has, each volume can only be mounted by a single host. Because of this, it should be possible to partition a large block storage controller into a number of smaller storage controllers, as long as the volumes can still be built from a common pool of disks and there is a way to orchestrate the storage controllers so they work together coherently.

When each controller only needs to serve one volume, the design of the storage controller is greatly simplified. Because the failure domain of the controller software is isolated to individual volumes, a controller crash will only impact one volume. Instead of building a highly sophisticated controller that can scale to 100,000 volumes, Longhorn makes storage controllers lightweight so that 100,000 separate controllers can be created. Orchestration systems like Swarm, Mesos, and Kubernetes can be used to schedule these separate controllers, drawing resources from a shared set of disks and working together to form a distributed block storage system.

Longhorn users can create distributed block storage mirrored across local disks. Longhorn also serves as a bridge for integrating enterprise-grade storage with Kubernetes: users can deploy it on existing NFS, iSCSI, and Fibre Channel storage arrays, as well as on cloud storage systems such as AWS EBS, gaining additional features such as application-aware snapshots, backups, and remote replication.

The microservices-based design of Longhorn also allows each volume to have its own controller, so that the controller and replica containers for each volume can be upgraded without causing a noticeable disruption in IO operations. Longhorn can create a long-running job to orchestrate the upgrade of all live volumes without disrupting the ongoing operation of the system. To ensure that an upgrade does not cause unforeseen issues, Longhorn can choose to upgrade a small subset of the volumes and roll back to the old version if something goes wrong during the upgrade.

Longhorn allows users to pool local disks or network storage mounted in compute or dedicated storage hosts and create block storage volumes for containers and virtual machines. Users can specify the size of a volume, its IOPS requirements, and the number of synchronous replicas to maintain across the hosts that supply storage for the volume. Replicas are thin-provisioned on the underlying disks or network storage.
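To illustrate how these options are typically exposed in Kubernetes, the sketch below shows a StorageClass that requests three synchronous replicas per volume. The provisioner name and parameter keys are assumptions based on recent Longhorn documentation and may differ between releases.

```yaml
# Sketch of a Longhorn StorageClass; provisioner and parameter names are
# assumptions based on recent Longhorn releases and may vary.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
provisioner: driver.longhorn.io   # Longhorn's CSI provisioner
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"           # synchronous replicas per volume
  staleReplicaTimeout: "2880"     # minutes before a faulty replica is cleaned up
```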

Users can schedule multiple replicas across multiple compute or storage hosts, and Longhorn monitors the health of each replica and performs repairs, rebuilding the replica when necessary. Storage controllers and replicas can be operated as Docker containers. A volume with three replicas, for example, will result in four containers.
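A stateful workload would then typically request a Longhorn-backed volume through a standard PersistentVolumeClaim that references such a StorageClass; the resource names below are illustrative. Provisioning this claim with three replicas would, as described above, result in one controller container and three replica containers for the volume.

```yaml
# Illustrative PVC and Pod consuming a Longhorn-backed volume; names are hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  accessModes:
    - ReadWriteOnce              # a block volume is mounted by a single host
  storageClassName: longhorn     # the StorageClass sketched earlier
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: longhorn-pvc
```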

Multiple storage frontends can be assigned for each volume. Common frontends include a Linux kernel device (mapped under /dev/longhorn) and an iSCSI target. A Linux kernel device is suitable for backing Docker volumes, whereas an iSCSI target is better suited to backing QEMU/KVM and VMware volumes.
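Frontend selection is exposed on Longhorn's Volume custom resource. The sketch below is an assumption based on the longhorn.io/v1beta1 CRD available around the GA release; field names may differ in other versions.

```yaml
# Hypothetical Longhorn Volume resource selecting the block-device frontend;
# the apiVersion and field names are assumptions based on the v1beta1 CRD.
apiVersion: longhorn.io/v1beta1
kind: Volume
metadata:
  name: sample-vol
  namespace: longhorn-system
spec:
  size: "10737418240"      # 10 GiB, expressed in bytes as a string
  numberOfReplicas: 3
  frontend: blockdev       # exposed under /dev/longhorn/; "iscsi" is the alternative
```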

Volume snapshots and AWS EBS-style backups can be created, with up to 254 snapshots per volume, and snapshots can be backed up incrementally to NFS or S3-compatible secondary storage. Only changed bytes are copied and stored during backup operations. Users can schedule recurring snapshot and backup operations, specifying how often they run, the exact time at which they are performed, and how many snapshots and backup sets are retained.
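At the time of the GA release, recurring snapshot and backup schedules could be attached to dynamically provisioned volumes through StorageClass parameters. The recurringJobs format below is an assumption based on the Longhorn v1.0 documentation and may have changed in later releases.

```yaml
# Sketch of scheduled snapshots and backups via StorageClass parameters;
# the recurringJobs key and its JSON format are assumptions from Longhorn v1.0.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-scheduled
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  # Hourly snapshots retaining 24, plus a daily backup retaining 7.
  # Backups require a backup target (NFS or S3-compatible) to be configured.
  recurringJobs: '[{"name":"snap", "task":"snapshot", "cron":"0 * * * *", "retain":24},
                   {"name":"backup", "task":"backup", "cron":"0 2 * * *", "retain":7}]'
```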

The Longhorn source code is available on GitHub.
