
How to Make the Leap: Building Cloud-Ready Applications into the Architecture

Key Takeaways

  • Cloud-Ready != Cloud-Native: The latter takes a lot more re-engineering than the former.
  • Cloud-Ready is mostly about altering deployments and, potentially, mindset.
  • Tooling is there to help in the form of Cloud Management Platforms (CMPs).
  • The benefits are largely about portability, right-sizing cloud deployments, and future-proofing flexibility so you can get more life out of those aging applications.
  • Start small, dream big. You don’t have to do it all at once, so pick something manageable to start with.

In an enterprise, application portfolios can be complex. It’s not uncommon for a set of supported applications to include diverse needs and architectures, stretching from collaborative entries like email, wikis, and blogs to financial transaction assets, HR-heavy employee portals, and customer-facing marketing websites. A modern approach to finding the right hosting for this range of needs would have some applications running in a datacenter while others run in the public cloud. Studies show that hybrid cloud approaches like this are popular with as many as 73 percent of organizations.

But what does that mean, exactly? How do you take all these very different kinds of applications and make them ready for cloud, public or private, when they’ve been running on bare metal for years and the people who wrote them left the company long ago?

Cloud-Ready vs. Cloud-Native

A cloud-native application is one that was originally written to run on a public cloud, often implying a container-based deployment. This style of architecture has horizontal auto-scaling built in from its initial design and often relies on cloud-provided services like load balancers, object storage, managed databases, and queueing systems so that the application can focus on the business logic it is uniquely tasked with providing. Continuous integration/continuous delivery toolchains are often associated with these applications so that Agile software development methodologies can quickly churn out new iterations.

That said, this is not the kind of application under discussion here.

The classic enterprise application has multiple components like web servers, application servers, and database servers. Many of these applications were originally written during the client-server era, with the intent of running them on bare metal hardware. Despite their age, these types of applications can be made cloud-ready. Fundamentally, the components talk to each other over TCP connections using IP addresses and port numbers, often resolved via DNS. Nothing about that structure prevents these applications from running on virtual machines or even containers instead, and if they can run on either, they can be deployed to any public or private cloud.
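To make that indirection concrete, here is a minimal sketch of how a classic tier finds its neighbor: a hostname and port in configuration, resolved through DNS at deploy time. The config keys and hostname are illustrative, not from any particular application.

```python
import socket

# Hypothetical config for a classic two-tier app: the app server finds
# its database by hostname and port, nothing more. (Names are illustrative.)
config = {"db_host": "localhost", "db_port": 3306}

# DNS turns the hostname into an address at deploy time; repointing the
# record (or rewriting this config entry) moves the tier onto a VM or a
# cloud instance without touching application code.
addr = socket.gethostbyname(config["db_host"])
print(addr)  # an IPv4 address, e.g. 127.0.0.1 for localhost
```

Because the application only ever sees a hostname and a port, the hosting underneath can change from bare metal to VM to cloud instance with no code change at all.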

While applications like this cannot take full advantage of the services that public clouds offer like their cloud-native brethren, there are times when a classic enterprise application can be made cloud-ready and get benefits without a complete rewrite. The limiting factor in this transition is typically not the code, because, as discussed above, the context within which the code operates does not change when it runs on a public cloud. Instead, the limiting factor tends to be the deployment mechanics, and examining that aspect of an application lifecycle is what can turn a classic multi-tier enterprise application into a cloud-ready one.

How Are You Deploying Today?

When considering the cloud readiness of an existing client-server era application, ask yourself the following question: How is the application being deployed today?

It ended up in production somehow, and digging into the details of how will reveal how easy or difficult it will be to make the application cloud-ready. Did it get deployed manually five years ago by someone who has since retired? That’s probably a sign that the application has a lot more issues than simply cloud readiness.

Does it have a set of scripts that prepare its Linux or Windows environment and then automate the installation of custom bits and dependencies? More recent applications will have scripts that automate tasks like tuning operating system kernels and injecting IP addresses into configuration files. Those procedures and/or scripts are the key to transitioning to cloud deployments and provide the roadmap to cloud readiness. A working knowledge of them is critical background when preparing one of these classic applications.
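The "inject IP addresses into configuration files" step such scripts perform can be sketched as a simple template-rendering pass. The file format, placeholder names, and values below are all hypothetical; the point is that only the values change per environment.

```python
# A minimal sketch of environment-specific config injection. The template
# keys and values are hypothetical, stand-ins for whatever a real
# deployment script fills in per environment.
template = "db.host={DB_HOST}\ndb.port={DB_PORT}\n"

def render(template: str, values: dict) -> str:
    # The same rendering works whether the target is bare metal, a VM,
    # or a cloud instance; only the values differ per environment.
    return template.format(**values)

conf = render(template, {"DB_HOST": "10.0.0.12", "DB_PORT": "3306"})
print(conf)
```

A script like this, or its shell/sed equivalent, is exactly the artifact that maps most directly onto a cloud deployment workflow.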

In between those two extremes would be an application that has no automation, but a well-documented run book that details the steps needed to prepare a particular environment for the application. While not as easy to adapt as a fully scripted scenario, nor so difficult that walking away is warranted as in the manual case, this situation is a gray area that needs to be scrutinized case by case.

Pets vs. Cattle

Another deployment aspect to consider before beginning a cloud-ready journey involves a mindset change. For these older applications, physical hardware was a scarce resource: obtaining new hardware took months, and that scarcity shaped application architectures that saw a software release as a risk to be weighed against the care and feeding of the physical servers. Because of this, we treated servers like pets, gave them names, and did everything we could to keep them up and running at all times. Releasing new software on these pets introduced risk that was difficult to recover from, given the expense of reinstalling a physical machine from scratch.

Contrast that with the choices available with virtualization and the potential to improve application deployments in a way that extends their lifespan. In a VM world, compute resources can be treated as disposable entities instead of scarce ones. Instead of taking the time to upgrade the operating system of a VM, for example, you create a new one with the new operating system, insert it into the load balancer pool, and kill the old one. This idea of treating VMs like cattle leads to benefits like horizontal auto-scaling and faster release cycles because the scarcity of compute resources is minimized.
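The replace-then-retire ordering is the essence of the cattle pattern, and it can be modeled in a few lines. The load balancer pool below is just a Python list and the VM names are made up; a real setup would drive a cloud or CMP API instead.

```python
# Toy model of the "cattle" upgrade described above: bring up a
# replacement, add it to the load balancer pool, and only then retire the
# old instance, so the pool never dips below capacity. All names are
# hypothetical; a real pool lives behind a cloud or CMP API.

def rolling_replace(pool, old_vm, new_vm):
    pool.append(new_vm)   # new instance starts taking traffic first
    pool.remove(old_vm)   # old instance is drained and destroyed after
    return pool

pool = ["web-v1-a", "web-v1-b"]
rolling_replace(pool, "web-v1-a", "web-v2-a")
print(pool)  # ['web-v1-b', 'web-v2-a']
```

Note the ordering: add before remove. Reversing it would briefly shrink the pool, which is exactly the pet-era risk this pattern exists to avoid.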

Some, but certainly not all, classic enterprise applications lend themselves to this modernization in parallel to preparing them for cloud readiness. In particular, applications that make use of load balancers at varying levels or that could tolerate their insertion are good candidates for this potential improvement. Regardless, understanding the difference between the two approaches is important while going through the cloud readiness process.

Cloud Management Platforms

Cloud Management Platforms (CMPs) were built specifically to help with this cloud-readiness preparation. They provide a mechanism where someone knowledgeable about the deployment of a specific application can build a blueprint or profile that describes each application component and how the components interact with one another.

CMPs typically provide implementations of commonly used components like the HAProxy, Apache, and MySQL shown above. Organizations that prefer their own implementations of those tiers can either create their own components or inject scripts at each tier to customize installation.

For example, in the screen shot above, custom MySQL configuration may be required beyond the basic installation the CMP provides, such as specific monitoring or security software installed on the operating system. This is where knowledge of the existing deployment process can be put to good use: identifying the additions to basic components that are currently in play.
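What a blueprint captures can be sketched as plain data: the tiers, how they connect, and any per-tier customization hooks. The tier names, components, and the post-install script below are hypothetical; real CMPs each have their own blueprint format.

```python
# A sketch of what a CMP blueprint might capture, expressed as plain
# data. Tier names and the post_install hook are hypothetical.
blueprint = {
    "name": "classic-three-tier",
    "tiers": [
        {"name": "lb",  "component": "haproxy", "connects_to": ["app"]},
        {"name": "app", "component": "apache",  "connects_to": ["db"]},
        {"name": "db",  "component": "mysql",
         # Knowledge of today's deployment lands here: extras beyond the
         # CMP's stock component, e.g. monitoring or security agents.
         "post_install": ["install_monitoring_agent.sh"]},
    ],
}

# Deployment order falls out of the connections: a tier is brought up
# after everything it connects to (here, simply the reversed list).
order = [tier["name"] for tier in reversed(blueprint["tiers"])]
print(order)  # ['db', 'app', 'lb']
```

Encoding the deployment as data like this is what lets the CMP replay it against any cloud's API on the operator's behalf.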

The core advantage of CMPs is that they abstract the details of specific public and private clouds in a way that eases the transition of a classic enterprise application to a cloud-ready one. For example, the IT staff does not have to become expert in the Google Cloud Platform, Amazon Web Services, and VMware APIs, but can instead funnel that effort into the CMP representation of the application as an abstraction. While an application can be made cloud-ready by writing deployment scripts directly against a cloud API, should the choice of cloud change in the future, so must the scripts. By using a CMP abstraction that hides those API details, the work is done once, and typically with less effort than a direct API approach.

Benefits You Get and How To Start

Portability and ease of use are huge benefits when transforming a classic enterprise application into its cloud-ready equivalent via CMPs. With some additional effort, horizontal auto-scaling and blue/green deployments can be added to the new deployment scripting, providing further gains for some applications. Some CMPs even offer the ability to run benchmarks against cloud deployments so that VM sizes for each application tier can be selected correctly, and different clouds can be compared against each other on both price and performance criteria. That ensures applications run on the most efficient cloud for the business needs they serve, while making it easy to change course later using the same CMP abstraction applied during the initial onboarding.
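The price/performance comparison some CMPs automate reduces to simple arithmetic: given a benchmarked throughput and an hourly price for each candidate VM size, pick the lowest cost per unit of work. All figures below are invented for illustration.

```python
# Hedged sketch of the price/performance comparison described above.
# Throughput and price numbers are made up; a CMP would gather the
# throughput figures by benchmarking each candidate deployment.
candidates = {
    "cloud-a/medium": {"req_per_sec": 1200, "usd_per_hour": 0.10},
    "cloud-a/large":  {"req_per_sec": 2100, "usd_per_hour": 0.20},
    "cloud-b/medium": {"req_per_sec": 1500, "usd_per_hour": 0.12},
}

def cost_per_million_requests(spec):
    # Hours needed to serve one million requests, times the hourly price.
    hours = 1_000_000 / spec["req_per_sec"] / 3600
    return hours * spec["usd_per_hour"]

best = min(candidates, key=lambda k: cost_per_million_requests(candidates[k]))
print(best)  # 'cloud-b/medium'
```

With these invented numbers, the cheapest VM per hour is not the winner; the mid-priced option on the other cloud delivers the lowest cost per million requests, which is exactly the kind of non-obvious result such benchmarking surfaces.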

Most organizations start this process by porting the deployment mechanism for a low-demand application without complex high-availability needs, and whose original deployment experts are still in house. This lowers the risk of a first attempt at a cloud-ready conversion. With an early success, the remainder of the application portfolio can be assessed and prioritized for cloud readiness. Typically, this means deferring higher-demand applications, those with critical availability requirements, and those whose expertise has long since left the building.

About the Author

A 20+ year tech industry veteran, Pete Johnson is the Technical Solutions Architect for Cloud in the Global Partner Organization at Cisco Systems Inc. He can be found on Twitter at @nerdguru.
