Weaveworks Releases Ignite, AWS Firecracker-Powered Software for Running Containers as VMs

Software startup Weaveworks celebrated its fifth birthday by releasing an open source project called Weave Ignite, billed as a "GitOps-managed virtual machine (VM) with a container UX." Ignite builds on Firecracker, the AWS open source virtualization technology that underpins AWS Lambda. InfoQ spoke to the team behind the project to learn more.

In a blog post about Weave Ignite, Weaveworks CEO Alexis Richardson explained how it works.

Ignite makes Firecracker easy to use by adopting its developer experience from containers. With Ignite, you pick an OCI-compliant image (Docker image) that you want to run as a VM, and then just execute "ignite run" instead of "docker run". There’s no need to use VM-specific tools to build .vdi, .vmdk, or .qcow2 images, just do a docker build from any base image you want, and add your preferred contents.

When you run your OCI image using ignite run, Firecracker will boot a new VM in c.125 milliseconds (!) for you, using a default 4.19 linux kernel.
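As a concrete sketch of the workflow Richardson describes, the commands below build an ordinary Docker image and then boot it as a Firecracker-backed VM with Ignite. The image name is a placeholder, and the flag names are those documented in the Ignite README at the time of writing, so exact names may differ between versions.

```
# Build a plain OCI/Docker image from any base -- no .vdi, .vmdk or .qcow2 tooling.
docker build -t example.com/my-app:dev .

# Boot that image as a Firecracker microVM instead of starting a container.
# (Flag names per the Ignite README; adjust for your version.)
ignite run example.com/my-app:dev \
  --name my-vm \
  --cpus 1 \
  --memory 512MB \
  --ssh
```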

Richardson called out a handful of use cases, including quickly booting many secure VMs for testing or ephemeral workloads, launching complete stacks at once, and running legacy apps in lightweight VMs. 

Amazon released the open source virtualization technology Firecracker in November 2018. Journalist Matt Asay remarked that what Weaveworks has built with Ignite is an ideal representation of how open source should work.

While the tech itself seems uber-cool, there’s something else in this that is awesome: AWS didn’t build it. They built Firecracker and made it open, such that other developers like @weaveworks could build on it. This is how real open source works.

Demonstrating that Weave Ignite can run anywhere, an engineer at Walmart Labs wrote a blog post showing how to get Ignite working on Google Cloud.

To learn more about Weave Ignite, InfoQ reached out to Weaveworks and spoke to CEO Alexis Richardson and Ignite creator, Lucas Käldström.

InfoQ: Does Ignite create a "real" VM that can store persistent state, host its own containers, etc? Would this look different from VMs created by traditional hypervisors?

Weaveworks: Yes, Ignite creates a real VM. It is slightly different from “traditional” VMs, however. For example:

  1. Firecracker is by design a minimal KVM implementation

  2. Instead of using “bootable” disks like `.iso` files and supporting tools like Packer that produce vendor-specific `.vdi` or `.vmdk` files, Ignite uses a root filesystem from OCI images, the container industry standard.

  3. Ignite supports declarative configuration and operations through GitOps.

Also, please see our FAQ.md
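To make the “real VM” point concrete, the following sketch boots a VM from one of the Ubuntu images Weaveworks publishes and logs into it over SSH; the flags and subcommand names are taken from the Ignite README at the time of writing and may vary between versions.

```
# Boot a VM from an OCI image published by Weaveworks (flags per the Ignite README).
ignite run weaveworks/ignite-ubuntu --name demo-vm --cpus 1 --memory 512MB --ssh

# Log in: inside is a full Linux machine with its own kernel and writable root
# filesystem, so you can store persistent state or even run containers inside it.
ignite ssh demo-vm

# The VM's disk contents survive a stop/start cycle (subcommand names assumed
# from the container-style CLI).
ignite stop demo-vm
ignite start demo-vm
```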

InfoQ: Please explain Firecracker for those who are unfamiliar.

Weaveworks: Firecracker is a minimal virtualization implementation for Linux (using KVM).

Firecracker is purpose-built for a new era of serverless workloads, and hence its design is optimized for security and speed. In other words, Firecracker boots and monitors a VM, given a Linux kernel and a disk.

From https://firecracker-microvm.github.io/: Firecracker implements a virtual machine monitor (VMM) that uses the Linux Kernel-based Virtual Machine (KVM) to create and manage microVMs. Firecracker has a minimalist design. It excludes unnecessary devices and guest functionality to reduce the memory footprint and attack surface area of each microVM. This improves security, decreases the startup time, and increases hardware utilization.
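For comparison, driving Firecracker directly means talking to its REST API over a Unix socket and supplying your own kernel image and root filesystem. The sketch below follows the Firecracker getting-started guide; the kernel and rootfs paths are placeholders you would need to build or download yourself.

```
# Start the Firecracker VMM with an API socket.
rm -f /tmp/firecracker.socket
firecracker --api-sock /tmp/firecracker.socket &

# Point it at an uncompressed kernel and an ext4 root filesystem (placeholder paths).
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/boot-source' \
  -H 'Content-Type: application/json' \
  -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"}'

curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/drives/rootfs' \
  -H 'Content-Type: application/json' \
  -d '{"drive_id": "rootfs", "path_on_host": "./rootfs.ext4", "is_root_device": true, "is_read_only": false}'

# Boot the microVM.
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/actions' \
  -H 'Content-Type: application/json' \
  -d '{"action_type": "InstanceStart"}'
```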

InfoQ: How does adopting a "developer experience from containers" make Ignite easier to use than raw Firecracker?

Weaveworks: Ignite is to Firecracker as Docker is to runc, the OCI container runtime implementation.

Like runc, Firecracker is intended as a low-level component. If you run a container today, you don’t use runc directly, but use a higher-level tool like Docker, containerd or Kubernetes. Similarly, unless you are a Linux kernel or KVM developer, you will most probably have a hard time figuring out how to use Firecracker efficiently and correctly. By taking the developer experience from containers, and integrating with a container runtime like Docker and the OCI image specification, Ignite gives the user the same experience running a VM as running a container, which is orders of magnitude simpler than requiring the user to create virtual block devices and Ethernet interfaces.
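A rough side-by-side of the two user experiences is sketched below; the Ignite subcommands shown mirror their Docker counterparts as documented in the project README, and names may differ by version.

```
# Container workflow with Docker:
docker run --name web -d nginx
docker ps
docker exec -it web sh

# Analogous VM workflow with Ignite -- each "run" boots a Firecracker microVM:
ignite run weaveworks/ignite-ubuntu --name web-vm --ssh
ignite ps
ignite ssh web-vm
```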

InfoQ: What components are needed to get Ignite working on my machine?

Weaveworks: Basically Docker on Linux - see instructions here.

In detail: first and foremost, run Linux with KVM enabled. This is an essential requirement, as Firecracker is by design built on KVM, a Linux-only feature. Secondly, install a container runtime for Ignite to integrate with, such as Docker (currently the only supported runtime, with more coming soon). Finally, download the Ignite binary. That’s it!
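A minimal install sketch under those assumptions is shown below; the release URL, version number and asset name are illustrative, so check the weaveworks/ignite releases page for current binaries.

```
# 1. Confirm the host is Linux with KVM available (Ignite and Firecracker need /dev/kvm).
lsmod | grep kvm
ls -l /dev/kvm

# 2. Install a container runtime Ignite integrates with (currently Docker); on Debian/Ubuntu:
sudo apt-get install -y docker.io

# 3. Download the ignite binary and put it on the PATH
#    (version and asset name illustrative -- see the GitHub releases page).
curl -fLo ignite https://github.com/weaveworks/ignite/releases/download/v0.4.0/ignite-amd64
chmod +x ignite && sudo mv ignite /usr/local/bin/

# Sanity check (Ignite generally needs root to manage VMs).
sudo ignite version
```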

InfoQ: Is GitOps an evolution of infrastructure-as-code? Can you tell us a bit about what GitOps is all about?

Weaveworks: GitOps is a way to automate Kubernetes cluster management and application delivery. Many people understand and use some GitOps concepts, but few are extracting full value from it. Fully realised, it is the most profound improvement you can make to operations.

In GitOps we manage an entire live system by continually observing runtime state and comparing it with desired state (stored as declarative configuration). If the observed state has drifted from the desired state, we use orchestrators like Kubernetes, Flux and Flagger to converge the system back to the correct state, and fire alerts if we can’t converge. This lets us provision and manage fleets of clusters and apps directly from config and, subject to policy, 100% automatically. And with Weave Ignite we now have the first VM technology that is managed from config too - just like Kubernetes.
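Ignite applies the same model to VMs: VM definitions live as declarative manifests in a Git repository, and a reconciler keeps the running Firecracker VMs in sync with them. The invocation below is a sketch based on the GitOps mode shipped in early Ignite releases (later versions moved this into a separate daemon), and the repository URL is a placeholder.

```
# Commit declarative VM manifests to Git...
#   git add my-vm.yaml && git commit -m "Add demo VM" && git push
# ...then run the reconciler against the repository (placeholder URL; the exact
# subcommand comes from early Ignite releases and may differ in newer versions):
ignite gitops https://github.com/example/vm-config

# Ignite now watches the repo, boots VMs for new manifests, and converges running
# VMs toward the declared state when the manifests change.
```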

Weave products use GitOps to create clusters, scale them up and down, upgrade and patch them, and manage some disaster recovery (D/R) too. We can do fleet automation, managing large numbers of clusters, templates and configurations. We can also replace deployment scripts with automated continuous application deployment. We can execute progressive delivery - canaries, A/B testing with feature flags - and control policy too. All this works with any CI tool, image registry, and Git repo.

Yes - GitOps is an evolution of DevOps and infrastructure as code (IaC) - but with important improvements. What’s changed? GitOps arguably takes the original vision of configuration-based management to the max:

  1. Instead of provisioning “boxes” of infra plus installed software, we manage complete running software stacks, including applications, services, mesh, canaries ... and boxes.

  2. We deploy immutable containers and config files. CI and dev never touch the runtime directly; they go via the immutability firewall.

  3. We continually check the system for drift. We have a complete description to compare with.

  4. All changes to the running system, no matter how fine-grained, are driven by config changes.

  5. Conversely, we do not use multiple interfaces, e.g. kubectl, ssh, UIs, CLIs, or aggregating facades like OpenShift.

  6. GitOps MUST use Git+orchestration and not Git+CI scripts. We do not use CI scripts for CD because these can break and leave us in an uncertain state. We update Git using manual or CI-based changes, but we do not let CI orchestrate deployment, because only Kubernetes and other runtime orchestrators can enforce convergence and atomicity.

  7. We manage progressive delivery and feature flags this way too - see the YAML here.

  8. The whole environment includes non-programmatic assets, e.g. playbooks and dashboards. When we update apps, we also want to update monitoring, alerting and ops docs, all under a single version control regime.

  9. Because we don’t want developers to write config files, we use higher-level programming languages like TypeScript to generate YAML from code safely, and to manage templates for fleets, pipelines and policy-based ops actions. Unlike CI-based scripting models, this scales.

InfoQ: Did you need to submit any upstream changes to the Firecracker project to get Ignite working?

Weaveworks: No. It just worked :)

InfoQ: You called out a handful of possible use cases, including how Weaveworks uses it for its own cluster management products. What developer use case is most intriguing to you?

Weaveworks: To pick one: testing. Imagine if you could spin up Kubernetes clusters at zero cost for testing, CI and other flavours of dev. That said, spinning up secure Kubernetes clusters fast on Ignite makes several cases easy. See our introductory blog post. You only need to run `ignite run` a few times, and then install Kubernetes on those machines using your preferred Kubernetes installer, e.g. kubeadm, the de-facto community-built tool that Weaveworks has been developing in the open since the beginning, or, for enterprises, the Weaveworks Kubernetes Platform. Ultimately, a whole GitOps data center could run anywhere using modern cloud native tools.
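As a rough sketch of that testing workflow: boot a few Ignite VMs, then run the standard kubeadm flow across them. The image name and sizing flags here are illustrative assumptions; any VM image with kubeadm and a container runtime baked in would work.

```
# Boot three microVMs to act as Kubernetes nodes (image name illustrative).
for i in 1 2 3; do
  ignite run weaveworks/ignite-kubeadm --name "node$i" --cpus 2 --memory 2GB --ssh
done

# Standard kubeadm flow: init the control plane on node1, then join the others
# (endpoints and tokens elided).
ignite ssh node1   # inside the VM: kubeadm init ...
ignite ssh node2   # inside the VM: kubeadm join <endpoint> --token <token> ...
ignite ssh node3   # inside the VM: kubeadm join <endpoint> --token <token> ...
```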
