
Taking an Application-Oriented Approach to Cloud Adoption


Key Takeaways

  • An infrastructure-centric approach to cloud adoption won't generate the massive benefits you hope for.
  • Don't be afraid to use first-party cloud services that remove the need to manage infrastructure.
  • Ensure that your failure planning considers application failure, service failure, infrastructure failure, and facility failure.
  • Carefully consider sizing requirements and take advantage of elasticity and, where appropriate, long-term contracts that offer savings.

Recently I was involved in a migration of an enterprise IT portfolio—made up of infrastructure and applications—to the cloud. I noticed we were too focused on the infrastructure aspects and downplayed the cloud’s impact on the applications themselves. I believe that application architecture has a bigger role to play in the cloud era. Based on my experience in cloud implementations, I came up with a set of principles focused on application architecture. Following these principles will help you reap the real benefits of cloud computing. If you simply take an infrastructure-centric approach, moving into the cloud will be just another transition rather than a transformation.

Loosely coupled. The cloud allows us to scale capacity in and out based on demand, but this is only achievable when our systems and subsystems are stateless. Systems and subsystems should be loosely coupled so that each can be scaled in or out independently, according to its own load.

If application and web servers are loosely coupled, both can scale in and out independently. To achieve this, use a cloud-native load balancer or a queue between them. This lets each tier scale to any size and removes the dependency constraint. A queue is also one of the better options for connecting systems in a hybrid cloud scenario.
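To make this concrete, below is a minimal sketch of queue-based decoupling using boto3 and Amazon SQS. The queue name and message fields are illustrative assumptions, not prescriptions.

    import json
    import boto3

    sqs = boto3.resource("sqs")
    queue = sqs.get_queue_by_name(QueueName="orders")  # hypothetical queue name

    # Web tier: enqueue the work and return immediately, so it scales on its own load.
    queue.send_message(MessageBody=json.dumps({"order_id": 42, "action": "fulfil"}))

    # Worker tier: poll, process, delete -- it scales independently of the web tier.
    for message in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=10):
        payload = json.loads(message.body)
        # ... process the payload ...
        message.delete()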

Servers with a single responsibility. I borrowed this concept from object-oriented programming. In general, we have a tendency to utilise one server for multiple purposes. However, the cloud allows us to create servers of many sizes, from very small to very large, which makes it practical to deploy only one codebase or executable unit on a given server. By doing this, changes to one application component do not impact any other components. To realize this pattern, follow blue-green deployment methodologies so that deploying one component does not create downtime for others.
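To sketch what the blue-green switch can look like in practice, the snippet below repoints an AWS Application Load Balancer listener from the "blue" target group to the "green" one using boto3; both ARNs are placeholders.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Flip all traffic from the blue fleet to the green fleet in a single call;
    # the blue fleet stays running for an instant rollback if needed.
    elbv2.modify_listener(
        ListenerArn="arn:aws:elasticloadbalancing:region:acct:listener/app/prod/xxx",  # placeholder
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/green/yyy",  # placeholder
        }],
    )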

Automated deployment. The cloud gives us the ability to provision resources on demand, but we can’t take full advantage of it until the application can run on dynamically provisioned infrastructure without any manual intervention. That means no interactive logins to the server for application deployment; configuration and settings should be applied programmatically. In other words, disable host login and apply all configuration and settings through scripts or the APIs provided by the cloud provider.
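As a minimal sketch, the boto3 call below launches an instance whose entire configuration is applied by a bootstrap script at start-up, with no interactive login; the AMI ID, instance type, and script contents are assumptions.

    import boto3

    USER_DATA = """#!/bin/bash
    # Everything the instance needs is applied here, never by hand over SSH.
    yum install -y myapp
    systemctl enable --now myapp
    """

    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,  # boto3 base64-encodes this for RunInstances
    )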

Use native cloud services. Many cloud implementations still focus on a hosted model in which the cloud is used primarily as Infrastructure as a Service (IaaS). In a self-managed model, it is our responsibility to define the triggers for scaling in and out. With many native cloud services, the cloud provider scales the underlying infrastructure and is responsible for hardware provisioning, setup and configuration, replication, and in some cases software patching and cluster scaling. The true benefit of the cloud can only be realised by using native cloud services.

For example, use AWS Lambda, Azure Functions, SQS, or similar cloud-native services to get away from defining infrastructure; pass that burden to the cloud provider! Use a managed database service such as AWS RDS, DynamoDB, or Azure DocumentDB rather than a self-managed database. One drawback of this principle is that it binds the application to a particular cloud platform. In reality, if you insist on a cloud-interoperability model, you will not be able to take full advantage of the cloud. Just as different operating systems provide similar capabilities (file system access, networking, codecs, and so on) in their own unique ways, each cloud provider brings its own unique proposition for common functionality.
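For instance, a serverless function needs nothing beyond its handler; there is no fleet to size, patch, or scale. The event shape and logic below are purely illustrative.

    import json

    def handler(event, context):
        # AWS Lambda invokes this on demand; scaling the infrastructure is the provider's job.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"}),
        }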

Treat local storage as ephemeral. Cloud VMs hosting an application can be thrown away at any time as part of scaling or deployment exercises, so it’s important that applications store nothing of value locally on the VM. The VM’s local storage should be treated as ephemeral: it will be thrown away along with the VM. Traditionally, applications store configuration, log files, and images on local storage. This practice should change, and any persistent information should be moved to a durable block or object storage service. Cloud applications should support blue-green deployment, and that is only possible if the currently executing code is not tied to local storage.
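A minimal sketch of that rule: anything worth keeping goes straight to object storage, and the local copy lives only in a throwaway path. The bucket and key names are assumptions.

    import boto3

    s3 = boto3.client("s3")

    def persist_upload(local_tmp_path: str, key: str) -> None:
        # The local file lives only in /tmp; the durable copy is the one in S3.
        s3.upload_file(local_tmp_path, "my-app-assets", key)  # hypothetical bucket

    persist_upload("/tmp/profile-42.png", "images/profile-42.png")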

Always design for failure. In the cloud, we don’t know exactly where our application is running. Hardware is prone to failure, and software updates and patches are prone to error. It’s better to architect and design your application to handle failures than to try to make the platform infallible, which is never possible. Eliminate single points of failure (SPOFs) and build resiliency at every level. An application should keep functioning even when the underlying hardware has failed.

AWS Availability Zones (AZs) and Regions, and similarly Azure Locally Redundant Storage (LRS), Zone-Redundant Storage (ZRS), Geo-Redundant Storage (GRS), and Read-Access Geo-Redundant Storage (RA-GRS), all make it easier to design redundant capabilities. Building resilient cloud infrastructure is straightforward and far less expensive than by traditional means. Database storage should be designed to tolerate the failure of at least one region or availability zone. An application can be made highly available by applying disaster-recovery best practices such as the pilot-light, warm-standby, and hot-site deployment models. You can find more details in the AWS whitepaper Using AWS for Disaster Recovery.
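Designing for failure also applies inside the application itself: assume any downstream call can fail, and retry with exponential backoff rather than failing the whole request. A generic sketch:

    import random
    import time

    def call_with_retries(fn, attempts=5, base_delay=0.5):
        for attempt in range(attempts):
            try:
                return fn()
            except Exception:
                if attempt == attempts - 1:
                    raise  # retries exhausted; let the caller handle the failure
                # Exponential backoff with jitter avoids a thundering herd of retries.
                time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)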

Resilient to reboot and relaunch. Design your application to be resilient to reboots and relaunches: systems must remain functional as components are rebooted and relaunched. Don’t assume the health, availability, or fixed location of any component. Bootstrap your instances with dynamic configuration; when an instance launches, it should ask “who am I, and what is my role?” Additionally, keep launch configurations short so that new infrastructure can quickly accept work.
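One way an instance can answer that question is to look itself up at boot, for example by reading its own EC2 tags. The “Role” tag key here is an assumption for illustration, and the metadata call is shown in its simplest (IMDSv1) form.

    import boto3
    import requests

    # The instance metadata service tells the instance its own id.
    instance_id = requests.get(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
    ).text

    ec2 = boto3.client("ec2")
    tags = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [instance_id]}]
    )["Tags"]
    role = next((t["Value"] for t in tags if t["Key"] == "Role"), "worker")
    print(f"Bootstrapping as: {role}")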

Apply security at every layer. Build security on the assumption that nothing in the cloud is inherently secure, and add security to every layer. In a cloud environment, security works under a shared-responsibility model between provider and consumer: the cloud provider secures the underlying infrastructure, but the consumer is responsible for the workload itself. Consumers should apply an appropriate level of encryption to data both in transit and at rest. Always enforce the principle of least privilege, and don’t give users (or other systems!) permissions they don’t need. Take advantage of the security features provided by the cloud provider.

AWS Key Management Service (KMS) and Azure Key Vault are managed key services that make it easy to create and control the encryption keys used to encrypt data; they can also use Hardware Security Modules (HSMs) to protect the keys themselves. KMS and Key Vault are integrated with various cloud services and are readily available for use. Setting up such HSM infrastructure on our own is complex and requires special skills, and integrating it with other applications is not easy either, so take advantage of native services where possible. Security is one of the biggest challenges in cloud adoption and integration; to overcome it, adopt managed key services to secure each layer of the implementation.

Use the authentication and authorization services provided by the cloud (AWS IAM and Azure Active Directory) to control and secure cloud resources. Don’t use them for authentication or authorization within the application itself; that’s an anti-pattern, because an application may have hundreds or thousands of users, and managing cloud-resource permissions for all of them would be tedious and unnecessary. If your application is an intranet application, prefer Active Directory authentication. Multi-factor authentication is highly recommended for protecting key cloud resources.

One of the key security risks with applications is hard-coding passwords or other credentials (e.g., the Access Key ID and Secret Access Key in AWS) in source code. This not only makes password and key rotation painful, but also exposes the secrets to unwanted people once the code is committed to a source-code repository. Always use the temporary credentials generated by services like AWS Security Token Service (STS) to access cloud resources from an application.
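A minimal sketch of that last point with boto3: the application assumes a role through STS and receives short-lived keys, so no long-term secret ever appears in source code. The role ARN is a placeholder.

    import boto3

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/app-reader",  # hypothetical role
        RoleSessionName="app-session",
    )["Credentials"]

    # These credentials expire automatically, so rotation comes for free.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )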

Size your infrastructure. The initial size of the infrastructure loses some of its importance in the cloud because of built-in scaling capabilities, but sizing still matters. With the hundreds or thousands of servers an enterprise needs, an organization can reduce cost by opting for reserved capacity, rather than paying higher on-demand prices for its minimum workload. For example, a three-tier application needs at least four servers (two database servers, one application server, and one web server), so you could enter a long-term commitment for those four servers. A little homework on requirements can save a lot of money. That said, cloud prices keep falling, so be wary of entering a long-term contract that restricts future scaling; a one-year term is often enough to gain the pricing advantage. The cloud lets us surpass the constraints of fixed infrastructure, and on-demand pricing is well suited to variable demand through dynamic scale-in and scale-out.
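A back-of-the-envelope calculation shows the shape of the saving. All prices here are hypothetical; only the arithmetic matters.

    ON_DEMAND_HOURLY = 0.10   # assumed on-demand price per server-hour
    RESERVED_HOURLY = 0.06    # assumed effective one-year reserved price
    HOURS_PER_YEAR = 24 * 365

    baseline_servers = 4      # the three-tier minimum described above
    peak_extra = 6            # extra servers, assumed needed 10% of the year

    all_on_demand = (baseline_servers + peak_extra * 0.10) * ON_DEMAND_HOURLY * HOURS_PER_YEAR
    mixed = (baseline_servers * RESERVED_HOURLY
             + peak_extra * 0.10 * ON_DEMAND_HOURLY) * HOURS_PER_YEAR
    print(f"all on-demand: ${all_on_demand:,.0f}/yr, reserved baseline: ${mixed:,.0f}/yr")
    # With these assumed prices: roughly $4,030/yr vs $2,628/yr.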

Cache at edge locations. Caching is a traditional technique for improving application performance for repetitive requests. AWS edge locations and Azure Points of Presence (PoPs) take caching to the next level and reduce latency. Wherever possible, use cloud-native services that support content delivery through edge locations, such as Amazon CloudFront or Azure CDN. Adopt AWS API Gateway or Azure API Apps to expose your REST APIs to external and internal consumers; they take away the operational burden and provide features like security, caching, and management in a few clicks.
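For CDN-served content, the origin objects should carry cache headers that edge locations can honour. A small sketch, with hypothetical bucket and key names:

    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="my-static-site",                # hypothetical origin bucket behind the CDN
        Key="css/site.css",
        Body=b"/* compiled stylesheet */",
        ContentType="text/css",
        CacheControl="public, max-age=86400",   # edge locations may cache this for a day
    )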

Tag resources for accountability. Your overall cloud objective is business agility, achieved by making both applications and infrastructure agile. Along with agility, the cloud promotes accountability within each business unit: it gives us the ability to associate cost with each business transaction. Cloud providers allow you to tag every individual piece of infrastructure and be billed accordingly. Years ago, Melvin Conway observed that organizational structures have a strong impact on the systems they create; using your organizational structure for tagging would certainly make each business unit and application owner more accountable and transparent!
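In practice this means every resource carries tags that mirror the organization structure; the keys and values below are assumptions.

    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],  # hypothetical instance id
        Tags=[
            {"Key": "BusinessUnit", "Value": "payments"},
            {"Key": "Application", "Value": "checkout"},
            {"Key": "Owner", "Value": "team-checkout"},
            {"Key": "Environment", "Value": "production"},
        ],
    )
    # Cost-allocation reports can then roll spend up by BusinessUnit or Application.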

From an environment perspective (e.g. development, testing, staging, production), have a separate account for each. This ensures each environment is exposed to the right people and maintained well, and it also avoids accidental damage.

Applying the above principles doesn’t require a major rewrite for most applications. Many of them can be addressed through setup, configuration, and a stringent deployment process; others can be implemented with minor changes to the application without touching its core functionality. In the past, the assumption was that once purchased, server hardware would be used on a permanent basis. The cloud is all about pay-per-use: switch things off if you don’t need them. Don’t replicate your current deployment model in the cloud. Revisit your application and infrastructure patterns when adopting the cloud, and avoid simple lift-and-shift exercises. At minimum, the above principles should be followed for any new cloud deployment.

I would suggest that cloud computing shouldn’t be treated as just another technology platform; it should be used to generate competitive advantage. Cloud resources are like credit availability for a business: a small company can think big and realize its ideas, because realization no longer requires an upfront investment and long lead times. Use this to build applications and products that give you a competitive edge.

About the Author

Amit Kumar is a Manager-Architect at DXC (formed by the merger of CSC and HPE Enterprise Services), with more than 15 years’ experience in the IT industry and a Masters in Computer Application. Amit is an AWS Certified Solutions Architect - Professional and a TOGAF-certified EA practitioner. He is passionate about cloud computing and has spent more than 18 months transforming client IT infrastructure and applications for the cloud. For the last six years, Amit has been leading and mentoring a group of architects at DXC India. He acts as a consultant to both project delivery teams and DXC’s end clients, and conducts training on solution architecture, application guidance, and ArchiMate. Among his notable contributions is the demarcation of architecture definition into four steps (strategy, requirement, definition, and validation). He can be found on LinkedIn.
