The Five Qualities of Application Delivery Done Right

This essay explains the goals of proper application delivery, and how to take gradual steps toward those goals.

Every company needs a way to move applications from development to production. Focusing on several key qualities of application delivery makes it possible to build a workflow that is easy for developers to use and powerful for operators to orchestrate. From a business standpoint, the goal is a workflow that promotes agility, reduces wasted resources, and ensures security and governance across the whole process.

Automated: An automated application delivery workflow limits human error and saves resources. Developers should simply push application code and the rest of the process from development to the target environment (staging, QA, production) should be completely automated.

Flexible: Company cultures and the applications they create are not identical. Applications might be written in Python, Rails, Node.js, Go, or a variety of other languages. They might be packaged as VMs or containers. They might be deployed on Amazon Web Services, OpenStack, Google Cloud Platform, bare metal servers, or a hybrid blend of infrastructure providers. An application delivery workflow should be consistent across these combinations and flexible enough to adapt to different technologies and environments. Application delivery is a workflow, and it must treat all technologies within that workflow equally.

Scalable: A successful application delivery workflow is the same for one developer deploying to one server and one thousand developers deploying to ten thousand servers. To reach these scales, the system should be a composition of many smaller components with well-defined scopes rather than a monolithic system. This allows specific sub-problems within application delivery to be solved with specific solutions. Service configuration, environment provisioning, and application maintenance are different problems with different scopes within the larger challenge of application delivery. These problems should be solved with individual tools which are then united to create a resilient, scalable, and cost-effective application delivery workflow.

Secure: A modern system requires access to a multitude of secrets - database credentials, API keys for external services, credentials for service-oriented architecture communication, and more. Understanding who is accessing which secrets is difficult and often platform-specific. In a modern cloud environment where services and hosts change frequently, security must take a different approach. Root secrets should be encrypted and stored in a central location, and services should receive dynamically generated, permissioned secrets. That way, in the event of a breach, only the affected service is compromised rather than the entire system. With the addition of key rolling and revocation, responding to a breach is vastly simpler (a brief sketch of this pattern follows this list of qualities).

Transparent: Automated application delivery does not guarantee error-free application delivery. In failure scenarios it's essential that the workflow is transparent so teams can understand the exact state of the infrastructure, identify what or who created the change, and debug issues. Transparency requires that all infrastructure changes are versioned, auditable, and repeatable.
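
As a brief illustration of the secrets pattern described under "Secure" above, here is a minimal sketch using HashiCorp Vault's CLI. The mount points, paths, role name, and lease ID are assumptions for illustration rather than a prescribed setup; the point is that static root secrets live encrypted in one central place while services receive short-lived, individually revocable credentials.

# Store a root secret encrypted in Vault's central store (assumed path)
vault write secret/web/mongo password="s3cr3t"

# An authorized service or operator reads it back
vault read secret/web/mongo

# With a dynamic backend (here an assumed "readonly" role on a PostgreSQL
# backend), each service receives its own short-lived credentials
vault read postgresql/creds/readonly

# If a credential leaks, revoke just that lease rather than the whole system
vault revoke postgresql/creds/readonly/8f2d9a7e-example-lease-id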

These five qualities combine to form an application delivery workflow that reduces errors, saves human and monetary resources, and increases transparency across both development and operations teams. The technologies (programming languages, package types, infrastructure providers) within the application delivery process are less important than the workflow itself.

The first step towards automating application delivery is to automate environment creation — compute, network, and storage.

Manual environment provisioning slows operations teams, isolates knowledge, and doesn't scale

If you had to re-create your production environment from scratch today, could you do it? No operations team wants to be in this situation, but it happens. When teams rely on manual environment provisioning, recovery from disaster scenarios is difficult. Scaling deployments beyond 50 servers is a challenge. Employee turnover could be disastrous if information is isolated in manual processes in one or two operators' heads.

Automating infrastructure provisioning addresses these challenges. It forces the process to be codified, and thus transparent to the organization. Automation allows the same number of operators to manage more machines. Finally, it enables teams to easily recover from disaster scenarios since a complete environment can be provisioned in one run. Once a full environment can be provisioned with a single configuration, commonly referred to as "infrastructure as code", staging environments become a breeze. Blue / green deploys are manageable. Operators can work quickly and with certainty.

Environment provisioning automation, infrastructure as code, and modern operations

Environment provisioning is the process of starting physical or virtual machines and creating networking rules. To automate provisioning, the environment can be codified into "infrastructure as code". For example, here is a Terraform configuration which sets up a virtual private cloud with two instances on Amazon Web Services. First, the VPC resource is created, then a public subnet is created within the VPC, and finally two instances are launched within the subnet. Terraform enables users to interpolate values from one resource into another, which simplifies configurations and reduces the number of hardcoded values. For example, the subnet resource grabs the VPC id from the VPC resource with `vpc_id = "${aws_vpc.main.id}"`:

resource "aws_vpc" "main" {
   cidr_block = "172.31.0.0/16"
   enable_dns_hostnames = true
}
 
resource "aws_subnet" "main" {
   vpc_id = "${aws_vpc.main.id}"
   cidr_block = "172.31.0.0/20"
   map_public_ip_on_launch = true
}

resource "aws_instance" "web" {
 instance_type = "t2.micro"
 ami = "ami-12345abc"
 count = "2"
 subnet_id = "${aws_subnet.main.subnet_id}"
}

This Terraform configuration can be run repeatedly with the same result, which makes it easy to provision staging environments, recover from disasters, and scale deployments. It's a powerful feeling to be able to provision a complete environment with hundreds or thousands of resources with one command, `terraform apply` in this case. To experience it for yourself, jump into HashiCorp's interactive tutorial to automate environment provisioning.
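
As a usage sketch (assuming the configuration above is saved in the current working directory and AWS credentials are available in the environment), the typical loop is to preview the change, apply it, and tear it down when finished:

# Show the execution plan without modifying any resources
terraform plan

# Create the VPC, subnet, and two instances described in the configuration
terraform apply

# Destroy the environment when it is no longer needed
terraform destroy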

Infrastructure provisioning automation tools

Any organization that is manually managing 25+ servers and multiple networking rules across one or more environments should consider automation tools. The simplest tools are the various infrastructure provider APIs, which can be called from custom scripts. Higher-level automation tools allow the details of an environment to be codified into a configuration file, which represents the desired state of the environment. Some tools are provider-specific, such as CloudFormation (AWS) and Heat (OpenStack), while Terraform supports multiple cloud platforms and can manage several of them in a single configuration. Here is a comparison to learn more about the similarities and differences between automated environment provisioning tools.

Once environment creation is automated, the next step is to automate the configuration of services within that environment. At HashiCorp, we recommend an immutable infrastructure workflow and configuring services with pre-baked machine images or containers.

The choice between immutable infrastructure and runtime configuration drives operational decisions

The term "Immutable infrastructure" has been thrown into the spotlight recently, partially due to growing popularity of container technologies such as Docker. Immutable means "unchanging over time". In an immutable infrastructure workflow an existing server is never deliberately changed in place. Instead, when an application or configuration update is necessary, a new image is created, new servers are provisioned with the new image, and then the old servers are destroyed. This is a departure from the runtime configuration management workflow, which keeps the same server in place and applies configuration updates on top of existing configurations. Immutable infrastructure is certainly not a new workflow, but as organizations consider containers, they are gravitating towards immutable infrastructure for stateless services. For stateful services such as databases, operators certainly do not want to be frequently destroying those instances. Keeping the same machine and updating it in place is still the best workflow for services that store data.

Setting up an application delivery pipeline with immutable infrastructure is not an easy task. Building images is the easiest part of the process – but you also need a way to manage images, version images, provision machines with those images, and handle service discovery. However, if you are willing to invest resources into setting up the system, your infrastructure can reach greater scale with certainty.

When and why to implement immutable infrastructure

By nature, immutable infrastructure reduces the number of moving pieces in runtime environments. However, that complexity is not magically removed from the overall system; it is moved to the build phase. Moving the majority of configuration from runtime to build-time allows teams of operators to reduce deployment times, reduce the risk of a failed configuration process, improve the security of production systems, stabilize server configurations across environments, and, most of all, scale deployments. Some runtime configuration still takes place, such as reading in environment variables or discovering dependent services. However, these are significantly faster tasks that can be handled with a service discovery tool such as Consul, Zookeeper, or etcd. Runtime configuration should take seconds, rather than the minutes required to install packages. These tools are addressed in a later section.

Let's take a simple example to show how immutable infrastructure simplifies runtime environments. To deploy an Apache web server in a runtime configuration workflow, a base server is first acquired and provisioned, then a configuration management tool (or script) installs a specified (or the latest) version of Apache on that server and pulls in application code. If 25 Apache web servers need to be deployed, Apache is downloaded and installed 25 times. If that process fails on one of the servers, there is an unhealthy host out in production. If the version of the Apache package changes upstream between provisioning runs, there are multiple versions of the same software in production, which may be incompatible.

In an immutable infrastructure workflow, a machine image is built with Apache already installed, and then 25 servers are provisioned using that image. Apache is downloaded and installed once at build-time. If configuration fails at build-time, there is no effect on production traffic. Deployment is faster since configuration has already occurred. There is no room for package drift because everything is baked into the image at build-time. Regardless of the number of provisioned servers, they all have the same configuration. When you are managing hundreds, thousands, or tens of thousands of servers, the level of certainty and consistency that immutable infrastructure provides is powerful.

Building and managing immutable images at scale with Packer and configuration management

It is important to note that immutable infrastructure and configuration management tools such as Puppet, Chef, Ansible, and Salt are not mutually exclusive. In fact, configuration management tools make building immutable images much easier. Teams can use existing Chef cookbooks, Ansible playbooks, or Puppet manifests to configure deployable artifacts (AMIs, Docker containers, OpenStack images, etc.) at build-time. The transition from runtime configuration to immutable infrastructure is most of all an ordering change: the configuration step simply moves from after deployment to before it.

Packer is a popular tool for building machine images and deployable artifacts. Below is an example Packer configuration which builds a fully-configured web AMI using a Chef cookbook.

   "builders": [{
       "type": "amazon-ebs",
       "access_key": "{{user `AWS_ACCESS_KEY_ID`}}",
       "secret_key": "{{user `AWS_SECRET_ACCESS_KEY`}}",
       "region": "us-east-1",
       "source_ami": "ami-9eaa1cf6",
       "instance_type": "t2.medium",
       "ssh_username": "ubuntu",
       "ami_name": "web {{timestamp}}"
   }],
   "provisioners": [{
       "type": "chef-solo",
       "cookbook_paths": ["cookbooks"],
       "roles_path": "roles",
       "run_list": [
         "role[web]"
       ]}
   ],

The "builders" stanza defines the type of deployable artifact that Packer will create. The "provisioners" section describes how that deployable artifact will be configured. In the above example Packer will launch a t2.medium instance on AWS, run the "web" role on that instance, snapshot the instance to create a machine image, and finally format the image as an AMI. This workflow is the same for all of Packer's builders — Amazon EC2, Google Compute Engine, Docker, VMware, OpenStack, and more. This AMI can then be used to provision EC2 instances with all the configurations and packages needed to run your web service. The only runtime configuration that would take place in an immutable workflow is setting environment variables and discovering other services in the infrastructure. This is usually done with a service discovery tool such as Consul, Zookeeper, or etcd.

The natural next question after building images is how to manage and store these images. Packer makes it easy to post-process images and send their metadata to artifact registries such as Atlas by HashiCorp, Docker Hub, or Amazon S3.

   "post-processors": [{
       "type": "atlas",
       "artifact": "hashicorp/web",
       "artifact_type": "aws.ami",
       "metadata": {
         "created_at": "{{timestamp}}"
       }
     }]

This example Packer post-processor takes the AMI created above and stores it in Atlas under the namespace hashicorp/web. Artifacts are automatically versioned to simplify management. In the above example, Atlas increments the version number whenever a new version of `hashicorp/web` is created. A complete history of each artifact is stored: when each version was created, who created it, and the status of that build. To start building and managing images yourself, jump into HashiCorp's interactive tutorial to learn how to automate the building and management of machine images.

Application delivery for immutable infrastructure is challenging

So if immutable infrastructure has all these benefits, why aren't all companies using it? First off, immutable infrastructure takes organizational buy-in. But even if an organization believes in the merits of immutable infrastructure, creating a system to build, deploy, maintain, and manage immutable infrastructure is not an easy task. Building images or containers is easy. Deploying and maintaining images and the provisioned hosts is challenging. However, once the build-time system is in place, it is significantly easier to manage runtime environments since there are fewer moving pieces. For individuals managing personal websites or companies managing small infrastructures, the investment in immutable infrastructure is probably not worth it. But for companies managing large-scale infrastructures, the benefits certainly outweigh the costs of building the system.

Once both environment creation and service configuration are automated, the next step is to automate service discovery so newly provisioned services can be seamlessly added into an elastic, service-oriented application.

Automated service discovery enables rapid deployment and modern scale

Twelve-Factor Apps have many qualities, but the main premises with regard to application delivery and infrastructure are environment-independent services and a service-oriented architecture. Although there are many benefits to designing applications this way, the complexity can be daunting. Service discovery tools such as Consul, etcd, or Zookeeper are designed to overcome many of the infrastructure challenges of building Twelve-Factor apps.

Service-oriented architecture is the concept of breaking a monolithic application into smaller functional services, which makes it significantly easier to scale an application in a cost-effective manner. In this view an application is a collection of services running on separate servers that communicate with each other, rather than a monolithic application running on one server. Containers change this a bit in that containerized services can run on the same server, but are still isolated processes that must discover and communicate with each other.

Environment-independent services can move between environments, such as staging, QA, and production, without requiring manual configuration updates. Instead, each environment has a set of variables that the service automatically reads to properly configure itself. For example, when a service is in a staging environment, it should communicate with the staging database. In production the service should communicate with the production database. So in each environment, there's an environment variable set for the database IP.

These variables can be manually edited, or they can be dynamically read into a configuration with service discovery to reduce the opportunity for human error.

Remove hardcoded values from service configurations

Splitting a monolithic application stack into a separate database service and web service is usually the first step towards SOA. This holds several benefits, the largest being that if your web service experiences unexpected traffic and goes down, it won't take the database service down with it. However, this separation means that your web service needs to have the IP of the database in a configuration file. Here's an example web service configuration which hardcodes the IP of the production Mongo database:

MONGO_HOST = '10.2.40.1'
MONGO_PORT = 27017

In staging the configuration might be:

MONGO_HOST = '10.3.22.1'
MONGO_PORT = 27017

While it's great that the web service is modular enough to write to the production or staging database just by changing a configuration field, it's risky and error-prone to manually hardcode the value. Eventually someone will forget to update the IP, a staging service will communicate with a production database, and potentially disastrous behavior will follow. Instead of hardcoding the database IP, it should be dynamically populated with a service discovery system:

{{range service "prod.database"}}
MONGO_HOST = '{{.Address}}'
MONGO_PORT = {{.Port}}{{end}}

And in staging:

{{range service "stage.database"}}
MONGO_HOST = '{{.Address}}'
MONGO_PORT = {{.Port}}{{end}}

These templates use Consul Template, which queries the Consul service registry for the "database" service tagged "prod" or "stage" respectively, and then re-renders the service configuration file with the proper values.
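
As a usage sketch (the template path, output path, and restart command are illustrative assumptions), Consul Template watches the registry and re-renders the configuration file whenever the underlying service changes:

# Render web.conf from the template and restart the web service on each change
consul-template -template "/etc/consul-templates/web.ctmpl:/etc/web/web.conf:service web restart"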

The way this works is that a Consul agent runs on each node (VM or physical machine) and is configured to identify the service(s) running on that node. Here's an example Consul agent configuration:

{
   "service": {
       "name": "database",
       "port": 27017,
       "tags": ["prod"]
   }
}

This registers the node as running a "database" service tagged "prod" on port 27017. When a node with this configuration is provisioned, it joins the Consul cluster and its information is added to the service registry, which can be queried by all other services in the architecture. Each service in the architecture has a configuration which identifies it to the cluster and allows for dynamic discovery. The Consul service registry and key/value store is the source of truth for the architecture, from which all other services read to automatically configure themselves.
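
As a usage sketch (the data directory, configuration directory, and join address are illustrative assumptions), the agent is started with that service definition, and the registered service can then be discovered through Consul's DNS interface:

# Start the agent with the service definition above and join an existing cluster
consul agent -data-dir=/tmp/consul -config-dir=/etc/consul.d -join=10.0.0.10

# Look up production database instances through Consul's DNS interface
dig @127.0.0.1 -p 8600 prod.database.service.consul SRV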

The end result of a service discovery system is that a service can move through environments without any manual intervention. New nodes can be brought up and immediately join a cluster, old nodes can be destroyed and immediately removed from the cluster. Service discovery makes an architecture more resilient, and enables operations teams to deploy more frequently without worrying that a configuration will be out of date.

The road to modern ops and application delivery done right

Building a robust, scalable application delivery workflow is not easy or quick. But breaking the problem space down into manageable components - environment provisioning, service configuration, service discovery, security, and scheduling - makes the overall goal achievable.

Scheduling was not discussed in this article, however it is the next step in modernizing application delivery. Tools like Nomad, Mesos, and Kubernetes allow organizations to split applications and infrastructure into separate immutable layers, which speeds up deployment times and increases resource density.

HashiCorp has developed the Road to Modern Ops, an interactive curriculum dedicated to guiding organizations from manual processes to modern, automated operations.

About the Author

Kevin Fishner is the Director of Customer Success at HashiCorp. He has extensive experience working with customers across HashiCorp's open source and commercial products. Philosopher by education (Duke), engineer by trade. @KFishner
