
Config Management Camp Panel: Provisioning Cloud Infrastructure as Code

By Carlos Sanchez, Feb 12, 2015

Config Management Camp featured a panel with Mitchell Hashimoto, founder of HashiCorp and creator of Terraform, Gareth Rushgrove, senior software engineer at Puppet Labs, and John Keiser, development lead at Chef, discussing the creation of infrastructure from code using cloud resources and APIs.

Before the panel, each of the participants showed a demo of their respective projects: a deployment run with Terraform, Puppet creating and managing EC2 instances and resources, and cloud provisioning with Chef.
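The Puppet demo relied on declaring cloud resources directly in a manifest. As a rough illustration, a minimal sketch using the puppetlabs-aws module's `ec2_instance` type might look like the following (the instance name, AMI ID, and security group are hypothetical placeholders):

```puppet
# Sketch of declaring an EC2 instance with the puppetlabs-aws module.
# Values below are placeholders, not from the demo itself.
ec2_instance { 'demo-web-01':
  ensure          => present,
  region          => 'us-west-2',
  image_id        => 'ami-123456',   # hypothetical AMI
  instance_type   => 't1.micro',
  security_groups => ['demo-sg'],
}
```

Running `puppet apply` against such a manifest creates the instance if it is absent and leaves it alone if it already matches, the same convergent model Puppet applies to on-host resources.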

Asked about the user problem that infrastructure as code and their respective products solve, Mitchell said that it is not actually creating the machines and infrastructure, but managing them: orchestrating their lifecycle and creating them in the right order. What HashiCorp is building differs from OpenStack and CloudFormation, Mitchell said, in that it manages change rather than just creating resources once. For instance, Terraform can do rolling deployments, updating only a percentage of the servers at a time. He also commented on the scalability challenges that his company tries to solve:
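The change-management behavior Mitchell described hinges on how Terraform plans replacements. A minimal sketch of a resource with a `create_before_destroy` lifecycle rule is shown below; the AMI and count are hypothetical, and the percentage-based rollout he mentioned would be orchestrated on top of primitives like this rather than by this block alone:

```hcl
# Sketch: Terraform replaces instances by creating new ones first,
# so an update rolls through the fleet instead of tearing it all down.
resource "aws_instance" "web" {
  ami           = "ami-123456"   # hypothetical AMI
  instance_type = "t2.micro"
  count         = 10

  lifecycle {
    create_before_destroy = true
  }
}
```

When the AMI changes, `terraform plan` shows the replacements before anything is touched, which is the "managing change" step that a create-once tool skips.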

If you have a million servers, Puppet would be doing API calls for days. HashiCorp thinks about these types of problems.

In a lot of cases people are logging into web interfaces to manage infrastructure, and large organizations need to answer the common question of who has permission to do that. For Gareth, a lot of problems get solved when the infrastructure is put in source control, versioned, and run through a Continuous Integration pipeline where it is built and tested.

John shared that for him it all started with a wiki page about the infrastructure, containing instructions on what to do and a series of commands to run. That was the motivation for creating a technical description of the infrastructure. Having a real description of what the infrastructure was, and how machines related to each other, would allow people to just repeat those steps all the way to production, and across multiple clouds.
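The "technical description" John described is the idea behind the chef-provisioning `machine` resource, which turns the wiki-page runbook into an ordered recipe. A minimal sketch, with hypothetical machine and recipe names, might be:

```ruby
# Sketch of a chef-provisioning recipe: machines and their relationships
# are declared as resources, so the steps are repeatable on any cloud
# a driver supports. Names below are hypothetical.
require 'chef/provisioning'

machine 'db' do
  recipe 'postgresql'
end

machine 'web' do
  recipe 'apache'
  # Declared after 'db', so the database exists before the web tier;
  # the recipe order captures how the machines relate to each other.
end
```

Because the description is code rather than prose, the same recipe can be replayed against different provisioning drivers instead of a person retyping commands from a wiki.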

Asked what makes a good API, John said it depends on who is using the API, and cited Amazon as a provider of good APIs, allowing several operations at once and having the plus of documentation. Providers have very different APIs, and even though everyone has a machine or server abstraction, Gareth noted that networking is very different and the representation of resources varies.

An API that works, no matter whether it is XML, JSON or anything else, is what Mitchell wants to find. When HashiCorp tested Terraform, their tests spanned hundreds of machines in parallel and found issues in every cloud provider except Google. Sometimes they found major flaws in cloud providers' distributed systems that would cause resource failures, even unintended deletion of servers.
