
Multi-Cloud Is a Safety Belt for the Speed Freaks


With the fast pace of cloud changes (new services, providers entering and exiting), cloud lock-in remains a popular refrain. But what does it mean, and how can you ensure you're maximizing your cloud investment while keeping portability in mind?

This InfoQ article is part of the series "Cloud and Lock-in". You can subscribe to receive notifications via RSS.

Key takeaways

  • A strategy of multiple infrastructure clouds will become commonplace.
  • If you use the same platform to write and deliver applications, you can then run an application on whatever IaaS that platform will run on.
  • "Multi-cloud" is less about coordination across clouds, and more about portability among them.
  • When you hear "speed" as a core requirement for success, it often refers to faster time to market and the ability to iterate quickly to improve your application.
  • Speed doesn't negate the need for a disciplined process and smart safeguards.

 

Cloud bursting! On-premises! Hybrid cloud! Off-premises! Multi-cloud! These are phrases I’ve heard over the past 10 years when covering cloud as an analyst, strategist, and now evangelist. Each of them makes logical sense, especially on a big whiteboard with boxes and arrows going to and fro. In recent times, it’s the last - multi-cloud - that I’ve seen in actual practice the most.

Back when I was at 451 Research, we spent a lot of time coming up with a “hybrid cloud” definition. To be honest, I forget what we decided on: I think something NIST-y that included the use of two resources on distinct “clouds” to support one application or service. For example, you might have a travel website/mobile app that runs its front-end in a public cloud like Azure but relies on on-premises access to a (traditionally hosted, but not “real cloud”) reservations system. In that example, you have at least two "premises" where all your gear is running. And, indeed, Gartner recently predicted that "[a] strategy that includes multiple infrastructure as a service (IaaS) and platform as a service (PaaS) providers will become the common approach for 80% of enterprises by 2019, up from less than 10% in 2015."

We can start betting, then, that organizations are going to be relying on many different clouds to serve up their IT. That's more or less the scenario people mean when they say "multi-cloud," but I've noticed that the semantics of the phrase also imply a standardized set of technologies and practices. This is akin to the "write once, run anywhere" promise from the Java days: if we agree to use the same platform to write and deliver our applications, I can then run my application on whatever IaaS that platform will run on. The notion of "multi-cloud" doesn't seem to require coordination across different clouds - though I think most people want that, and most vendors are delivering it - but instead focuses on the portability of your application and your organizational knowledge. This portability addresses one of the big cloud boogeymen: lock-in.

"Lock-in" of course is a delightful story we tell about ourselves. We're always questing to avoid lock-in, but then mysteriously help fuel locked down systems like Apple and AWS. We're actually fine "locking into" to technologies, we just want the freedom to leave, as Simon Phipps put it in a previous era. Whether it's technical debt ("we'd love to add your feature, but it'll take two months"), your process ("we'd love to add you feature, but it's very low priority in the backlog"), your industry regulations ("that's nice, but PCI-Level-Infinity precludes us from adding your feature") or the technologies you use ("our database simply can't scale that high, affordably")...you're always locking into to something. What you want to do, as ever with enterprise architecture, is manage your risk/benefit scales.

In the case of "multi-cloud," then, you're hedging with your application and organizational-knowledge portability in exchange for locking into the cloud native method of application development and delivery. What you get in return is the ability to run your cloud platform on any IaaS, public or private, that comes up. You'll probably want to choose a platform based on open source, like Cloud Foundry, which is ensconced in an irreversible foundation. Then, even if you use a commercial, "open-core" distro of it, you'll have a huge degree of portability because the same application packaging, services and microservices architectures, and overall runtime will remain similar no matter which distro you choose in the future.

At this point, many people look to build their own platform to maximize their future freedom to leave. The theory is that if you build it, you own it, and you can control it. If you have the ongoing resources and skills to support that, it might be an attractive option - and the kids love building platforms and frameworks! But, as I'm fond of pointing out, it's probably not worth doing if you're trying to focus only on building things that are differentiating and valuable to your customers.

Thus far, organizations using Cloud Foundry haven't found lock-in to be an issue and, in fact, have been more focused on the benefits of finally selecting a cloud platform that supports rapid application development and delivery. At a recent analyst breakfast during the Cloud Foundry Summit, a panel including folks from Comcast, CoreLogic, and Express Scripts expressed this very sentiment: they'd picked a commercially supported cloud platform (over building their own) because they didn't want to be in the business of creating and maintaining their own stack. So far, it seems to be working out for them all. Comcast, for example, said they can now deploy features in days instead of the weeks it used to take.

When you hear people talking about "speed" as a core requirement for success nowadays, they're talking about faster time to market and the ability to iterate quickly to improve your application. Getting features out the door and having the processes in place to continually improve them are core capabilities in the era of transient advantage we're all stewing in now. As with any fast-moving process, you need proper safety measures in place to make sure your delightfully speedy productivity doesn't turn into darkly toxic mania. A disciplined process that prevents zig-zagging, focuses on continuous improvement, and forces you to always consider what's best for the user is a requirement. Technology-wise, you want the usual "ilities," especially portability, which brings us right back to "multi-cloud." As you're going through the frisson of "digital transformation," making sure you're maximizing portability and IaaS choice with multi-cloud is one steady and even-keeled box you should be checking off.

About the Author

Michael Coté works at Pivotal in technical marketing. He's been an industry analyst at 451 Research and RedMonk, worked in corporate strategy and M&A at Dell in software and cloud, and was a programmer for a decade before all that. He blogs and podcasts at Cote.io and is @cote on Twitter.

 

 
