Transitioning to Cloud Native Applications and Beyond

Organizations are rapidly adopting cloud technologies, but migration is still proving to be a challenge. What should you look out for? What applications make the most sense to migrate? How should applications get refactored to be cloud friendly? What are some lessons learned by those making the move? In this series of articles, you'll get practical advice from those who have experience helping companies successfully move to cloud environments. This is an area that deserves significant attention, and we hope that you'll participate in the conversation.

This InfoQ article is part of the series “Cloud Migration”. You can subscribe to receive notifications via RSS.

 

Enterprises have continued to accelerate their adoption of cloud infrastructure. As this shift continues, it is important to understand what it means for applications that run in cloud environments.

Traditionally, applications relied on redundant and highly optimized infrastructure – not software architecture – to maintain high availability and reliability. The result was infrastructure that was more often than not severely underutilized. This excess capacity is essentially wasted money. Given the need of modern organizations to decrease costs, increase efficiency and improve flexibility, this is unacceptable and has been a driving force in the move to the Cloud.

Cloud Native Applications

In the Cloud, the application itself has a greater responsibility to be architected in a way that increases uptime, fault tolerance, and scalability. Although many cloud providers talk about the reliability of their services, the true paradigm shift is that responsibility for uptime moves to the application level, because the application is able to be aware of, and to control, the infrastructure it relies on. This is fundamental to what makes an application “cloud native.”

Cloud native applications first and foremost should be architected and designed with failure in mind. Netflix has been a much-touted expert in the creation of cloud native applications and often blogs about how it designs for failure. There are many patterns and practices not only to recover from unexpected failures, but to make sure an application degrades gracefully as the unexpected happens. Netflix provides a great set of tools to simulate failures and latency in order to build cloud native applications.
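To make graceful degradation concrete, here is a minimal sketch (in Python; the downstream service name and fallback values are hypothetical) of calling a dependency with retries and exponential backoff, then falling back to a safe default instead of failing the whole request:

```python
import time
import urllib.error
import urllib.request

# Precomputed defaults to serve when the dependency is unavailable (hypothetical values).
FALLBACK_RECOMMENDATIONS = ["popular-title-1", "popular-title-2"]

def fetch_recommendations(url, retries=3, backoff_seconds=0.5):
    """Call a downstream service, retrying with backoff; degrade gracefully on failure."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=2) as response:
                return response.read().decode("utf-8")
        except (urllib.error.URLError, TimeoutError):
            time.sleep(backoff_seconds * (2 ** attempt))  # back off before the next attempt
    # The dependency stayed down: serve a sensible default rather than an error page.
    return FALLBACK_RECOMMENDATIONS

if __name__ == "__main__":
    # "recommendations.internal" is a hypothetical internal service name.
    print(fetch_recommendations("http://recommendations.internal/for-user/42"))
```

Patterns such as circuit breakers build on the same idea at a larger scale, cutting off calls to an unhealthy dependency before they pile up.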

A great place to begin your understanding of how to architect and design cloud native applications is by reading and learning about the 12-factor methodology. The 12-factor methodology provides a series of factors that can be applied, in any language, to the creation of modern software that is delivered as a service.

Let's review a few of the 12-factor principles that will help in architecting cloud native applications. All processes should be stateless and share-nothing. Locally persisted data, whether for state or other reasons, limits the ability of a process to be distributed and/or easily scaled out. This means you should never architect a process that expects a piece of data to persist locally between requests. Instead, all data should be persisted outside the process using an attached service such as a database or another data store.
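As a minimal illustration, assuming a Redis instance reachable through a REDIS_HOST environment variable and the third-party redis-py client, a stateless process keeps nothing in local memory between requests and pushes all of its state to the attached backing service:

```python
import os

import redis  # third-party client, explicitly declared as a dependency

# Connection details come from the environment, never from the code itself.
store = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)

def record_page_view(user_id):
    """Increment the counter in the backing service so any process instance can handle the request."""
    return store.incr(f"page_views:{user_id}")

def get_page_views(user_id):
    value = store.get(f"page_views:{user_id}")
    return int(value) if value is not None else 0
```

Because no request depends on what a previous request left behind in the process, any number of identical instances can be started or stopped without losing data.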

Another fundamental principle is that dependencies should be explicitly declared and isolated. In practice, that means the application is environment agnostic and doesn’t expect some tool or library to exist beyond what is explicitly declared. Aligned with dependency declaration and isolation is the concept of config (typically environment variables or a config file) that stores anything specific to an execution environment, like development vs. production credentials. Anything from connection strings to credentials to unique hostnames should be kept in config, distinct from the code that uses it. These are just a few examples of patterns and practices that should be understood and implemented in all modern applications so they are suitable to be deployed on the Cloud.
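A small sketch of the config principle, assuming the settings live in environment variables (the variable names below are hypothetical) rather than being hard-coded next to the logic that uses them:

```python
import os

class Config:
    """Everything that varies between execution environments is read from the environment."""

    def __init__(self):
        # Required settings fail fast if they are missing from the environment.
        self.database_url = os.environ["DATABASE_URL"]
        self.payment_api_key = os.environ["PAYMENT_API_KEY"]
        # Optional settings can fall back to a sensible local default.
        self.hostname = os.environ.get("SERVICE_HOSTNAME", "localhost")

# Dependencies, in turn, are declared explicitly (for Python, in requirements.txt or
# pyproject.toml), so the application never relies on a library that merely happens
# to be installed on the host.
```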

What YOU can do

Prior to starting that next project, make sure you and your team are educated on what makes a 12-factor application; learn more at the 12-factor app site.

The future of modern applications

So what is next when it comes to cloud native applications? There are several emerging trends and technologies that have been picking up momentum in software engineering. We at CenturyLink Labs have been working on containerization and the role containers play in the ways we will architect, distribute, and deploy software in the future. Containers are becoming increasingly popular because, once an application is containerized, it is very easy to package, ship, and run anywhere. Unlike traditional virtualization, containers provide lightweight virtualization at the application level and thus are much better at squeezing every last ounce of utilization from available capacity.

Another great aspect of containers is that they can start and stop quickly (sub-second), making it easy both to scale out and to recover from failures simply by starting up new containers. The ability to start and stop quickly is one of the core principles of 12-factor applications, called disposability, which wasn’t covered in detail above but which containers make tremendously easy.
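Disposability is easy to sketch: container runtimes such as Docker send SIGTERM when they stop a container, so a process that finishes its in-flight work and exits promptly can be started and stopped freely. A minimal Python example (the worker loop is hypothetical):

```python
import signal
import sys
import time

shutting_down = False

def handle_shutdown(signum, frame):
    """Docker and most orchestrators send SIGTERM before forcibly killing a container."""
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_shutdown)
signal.signal(signal.SIGINT, handle_shutdown)

while not shutting_down:
    # Do one small, repeatable unit of work per iteration so shutdown is always quick.
    time.sleep(1)

# Flush buffers and close connections here, then exit cleanly so a replacement can start.
sys.exit(0)
```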

IBM has published a great paper on the performance of virtual machines and Linux containers that has a ton of detailed data, if you are the type who is interested in comparing and contrasting containers and VMs.

The promise of containers is that they will deliver increased speed, cost savings and flexibility. They also open up new ways of thinking about software architectures, like using ephemeral containers to accomplish asynchronous processes and workflows. If you find this area of containers interesting, be sure to check out the CenturyLink Labs dray.it project.

What YOU can do

If you have yet to learn about containers, check out various resources like Docker and the CenturyLink Labs blog. Consider doing a POC within your company to get some practical hands-on experience.

Immutable Infrastructure and Deployments

Another emerging practice that containers make easier, and one that is being much discussed and debated in the engineering community, is the concept of immutable infrastructure and deployments. That is the practice of never modifying the infrastructure or the application that is running. Instead, the application or infrastructure is always destroyed and recreated from scratch. But why would you want to do this?

Fundamentally it is very simple: it is always harder to change things and keep track of those changes than it is to create new things. It is simply a matter of understanding and controlling all the variables involved in a complex system. For small applications this might not be a big deal, but the problem is compounded in most modern complex applications that are constantly evolving through a high rate of iteration of features and patches, and it can be nearly impossible if humans are making manual changes. The important thing is that the system needs to be in a “known state”, specifically one that has been thoroughly tested and thus can be easily recreated regardless of the environment you’re trying to recreate it in.
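A rough sketch of the destroy-and-recreate idea, assuming the Docker SDK for Python and a hypothetical, versioned image built by CI; the running container is never patched in place, it is simply replaced:

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

SERVICE_NAME = "billing-api"                            # hypothetical service name
NEW_IMAGE = "registry.example.com/billing-api:2.4.1"    # immutable, tested artifact from CI

# Remove whatever is currently running; it is never modified in place.
for container in client.containers.list(all=True, filters={"name": SERVICE_NAME}):
    container.stop()
    container.remove()

# Recreate the service from the known, tested image.
client.containers.run(NEW_IMAGE, name=SERVICE_NAME, detach=True, ports={"8080/tcp": 8080})
```

Rolling back becomes the same operation with the previous image tag, which is part of the appeal of treating deployments as immutable.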

What YOU can do

If you haven’t yet invested heavily in configuration management tools (Puppet, Chef, Ansible, et al.), you might want to study up on immutable infrastructure and deployment practices, as they might be an easier way to keep things in a known state and to help with the deployment speed, stability and testability of your applications.

Microservices

The final trend to discuss is the emergence of microservice-based architectures. Microservices are a service-orientation model in which every minor capability of an application is split into a distinct, self-contained program (service). This means most modern web applications would need well over ten microservices to function. Containers once again make this easier to accomplish initially and provide a great level of encapsulation.

However, microservices are not a panacea for distributed application architectures; it is increasingly recognized that they really only play a role in certain situations, specifically when an application and its team are very large and the complexity of many people working on the same code base starts to impact productivity. If you have this problem, it might be a good point to start refactoring your architecture toward microservices. At the start of the next feature, instead of adding it to the same code base, break it out as a microservice, and slowly evolve your application from its monolithic or macro-service orientation down to a microservice application.
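As a minimal sketch of breaking a new capability out as its own service (the capability and names are hypothetical), the feature lives behind its own small HTTP interface instead of inside the monolith's code base:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class RecommendationHandler(BaseHTTPRequestHandler):
    """A single, self-contained capability exposed over HTTP."""

    def do_GET(self):
        body = json.dumps({"recommendations": ["item-1", "item-2"]}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The monolith (or other services) calls this over the network instead of in-process,
    # and the service can be containerized, deployed and scaled independently.
    HTTPServer(("0.0.0.0", 8080), RecommendationHandler).serve_forever()
```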

As with anything, if you break things up into smaller chunks rather than one large monolithic application, it is easier to troubleshoot, change, and iterate, especially when different people can focus on and own the specific capability that each microservice provides.

What YOU can do

If you are experiencing the problems a large code base and a large team often create, educate yourself more on microservices. A great talk on the benefits and pitfalls of microservices can be found in Scaling Gilt: from Monolithic Ruby Application to Distributed Scala Micro-Services Architecture. As with all new architecture paradigms, there are trade-offs that need to be understood, and microservices are no different. Another great blog post, Monolith First by Martin Fowler, talks about the many pitfalls of starting new applications with a microservices approach.

Learn and Explore

It is important to understand the trade-offs of any new technology, pattern or practice. Nothing is free, and although some technologies or practices might improve things in one area, they might make other areas more complex. Many of the new trends discussed still do not have mature tooling around them. Enterprises should stay at the forefront of understanding these trade-offs, constantly testing, exploring and learning by doing real-world POCs and by having dedicated individuals whose role is to understand emerging technologies and their potential benefits and impacts.

Cloud native applications, application containers, immutable infrastructures and microservices are all destined to have substantial impacts on how modern applications are created, delivered and operated in the coming years and should move the needle in improving the fundamental cost, efficiency and flexibility of any business.

About the Author

Ross Jimenez believes all great technology starts with great people. He has been hacking the web for 20+ years and is a veteran Technology Leader with numerous positions at Hewlett-Packard, Compaq Computer and Sandia National Laboratories. He is currently the Engineering Director at CenturyLink Innovation Labs.

 

