Virtual Panel: Kubernetes and the Challenges of Multi-Cloud

Key Takeaways

  • Kubernetes is experiencing phenomenal growth since it solves specific pain points with respect to application portability and deployment.
  • Kubernetes is already a reality in eliminating vendor lock-in and enabling cloud portability with the choice of offerings on the different clouds.
  • Although Kubernetes is already established in multiple clouds, multi-cloud means more than that.
  • The application and distributed-system patterns that lend themselves to multi-cloud, including some examples and case studies.
  • How the Kubernetes community is coming together to address the challenges related to multi-cloud.

At the recently concluded, sold-out KubeCon + CloudNativeCon 2018 conference in Seattle, attended by about 8,500 people, the many Kubernetes services offered by the major cloud providers were discussed everywhere from the opening keynote to the technical sessions.

Although each major cloud provider has its own Kubernetes offering, and each tries to distinguish that offering with complementary services, the goal for the Kubernetes community is application portability across these offerings. In reality, however, can a single application or solution span multiple clouds, or does multi-cloud simply mean the organizational reality of dealing with multiple clouds?

InfoQ caught up with experts in the field at KubeCon 2018 in Seattle: Dr. Lew Tucker, former cloud CTO at Cisco; Dr. Sheng Liang, CEO of Rancher Labs; Marco Palladino, CTO of Kong; and Janet Kuo, KubeCon + CloudNativeCon 2018 co-chair and software engineer at Google. They discussed the multi-cloud aspects of Kubernetes and the challenges that remain.

The panelists start off by discussing the growth of the Kubernetes community, and how it has enabled application development and deployment in the cloud. They talk about what multi-cloud means and the synergies with the Kubernetes platform and community. 

They discuss some distributed-system patterns that lend themselves to multi-cloud, including some existing projects and case studies built around Kubernetes.

Finally, the panelists identify some of the challenges that remain to be addressed by the Kubernetes community in order to make multi-cloud a reality, including the potential roadmap for the project to address some of the challenges.

InfoQ: Let’s talk about the Kubernetes community in general, and the recently concluded KubeCon 2018 in particular. As people who have been involved with the community and the conference from their inception, to what do you attribute this growth, and how does it affect developers and architects in particular going forward?

Lew Tucker: It’s true – Kubernetes is seeing amazing growth and expansion of the developer and user communities.  As an open source orchestration system for containers, it’s clear that Kubernetes makes it easier for developers to build and deploy applications with resiliency and scalability, while providing portability across multiple cloud platforms.  As the first project to graduate from the Cloud Native Computing Foundation,  it is quickly becoming the de-facto platform for cloud native apps. 

Sheng Liang: Kubernetes became popular because it solved an important problem really well: how to run applications reliably. It is the best container orchestrator, cluster manager, and scheduler out there. The best technology combined with a very well-run open source community made Kubernetes unstoppable.

Marco Palladino: The growth of Kubernetes cannot be explained without taking into consideration the very disruptive industry trends that transformed the way we build and scale software: microservices and the rise of containers. As businesses kept innovating to find new ways of scaling their applications and their teams, they discovered that decoupling and distributing both the software and the organization would provide a better framework to ultimately enable the business to grow exponentially over time. Large teams and large monolithic applications were, therefore, decoupled -- and distributed -- into smaller components. While a few companies led this transformation a long time ago (Amazon, Netflix and so on) and built their own tooling to enable their success, other large enterprise organizations lacked the will, the know-how, and the R&D capacity to approach a similar transition.

With the mainstream adoption of containers (Docker in 2013) and the emergence of platforms like Kubernetes (in 2014) shortly after, every enterprise in the world could approach the microservices transition by leveraging a new, easy-to-use, self-service ecosystem. Kubernetes, therefore, became the enabler not just of a specific way of running and deploying software, but also of an architectural modernization that organizations had previously been cautious to adopt for lack of tooling, which was now accessible to them. It cannot be ignored that Docker and Kubernetes, and most of the ecosystem surrounding these platforms, are open source -- to understand Kubernetes adoption, we must also understand the shift in enterprise software adoption decisions from top-down (like SOA, driven by vendors) to bottom-up (driven by developers).

As a result of these industry trends, which in turn further fed Kubernetes adoption, developers and architects can now leverage a large ecosystem of self-service open-source technologies that is unprecedented compared to even half a decade ago.

Janet Kuo: One of the key reasons is that Kubernetes has a very strong community, made up of a diverse set of end users, contributors, and service providers. End users don’t choose Kubernetes because they love container technology or Kubernetes itself; they choose it because it solves their problems and allows them to move faster. Kubernetes is also one of the largest and most active open-source projects, with contributors from around the globe. This is because there is no concentration of power in Kubernetes, which encourages collaboration and innovation regardless of whether someone works on Kubernetes as part of their job or as a hobby. Lastly, most of the world’s major cloud providers and IT service providers have adopted Kubernetes as their default solution for container orchestration. This network effect makes Kubernetes grow exponentially.

InfoQ: Some of us can recount the Java days, when the platform’s mission was to eliminate vendor lock-in. Kubernetes has a similar mission, where vendors seem to cooperate on standards and compete on implementations. Eliminating vendor lock-in might be good overall. However, what does cloud portability or multi-cloud mean, and does it even matter to the customer (keeping Kubernetes aside)?

Tucker: Layers of abstraction are typically made to hide the complexity of underlying layers, and as they become platforms, they also provide a degree of portability between systems.

In the early Java days, we talked about the promise of “write once, run anywhere” reducing or eliminating the traditional operating-system lock-in associated with either Unix or Windows. Realizing that promise often required careful coding, but the value was clear. Now, with competing public cloud vendors, we have a similar situation. The underlying cloud platforms are different, but Docker containers and Kubernetes provide a layer of abstraction and a high degree of portability across clouds. In a way, this forces cloud providers to compete on the services offered. Users then get to decide how much lock-in they can live with in order to take advantage of vendor-specific services. As we move more and more towards service-based architectures, it’s expected that this kind of natural vendor lock-in will become the norm.

Liang: While eliminating cloud lock-in might be important for some customers, I do not believe a product aimed solely at cloud portability can be successful. Many other factors, including agility, reliability, scalability, and security, are often more important. These are precisely some of the capabilities delivered by Kubernetes. I believe Kubernetes will end up being an effective cloud-portability layer, and it achieves cloud portability almost as a side effect, much like how the browser achieved device portability.

Palladino: Multi-cloud is often approached from the point of view of a conscious, top-down decision made by the organization as part of a long-term strategy and roadmap. The reality, however, is much more pragmatic. Large enterprise organizations really are aggregations of a large variety of teams, agendas, strategies, and products that happen to be part of the same complex “multi-cellular” organism. Multi-cloud within an organization is bound to happen simply because each team or product inevitably makes different decisions about how to build its software, especially in an era of software development where developers are leading, bottom-up, all the major technological decisions and experimentation. The teams closest to the end user and to the business are going to adopt whatever technology (and cloud) best allows them to achieve their goals and ultimately scale the business. Traditional central IT -- far from the end users and from the business, but closer to the teams -- will then need to adapt to a new hybrid reality that is being developed beneath them as we speak. Corporate acquisitions of products and teams that are already using different clouds will also lead, over time, to an even more distributed and decoupled multi-cloud organization. Multi-cloud is happening not necessarily because the organization wants it to, but because it has to. Containers and Kubernetes -- being extremely portable -- are therefore a good technological answer to these pragmatic requirements. They offer a way to run software across different cloud vendors (and bare metal) with semi-standardized packaging and deployment flows, thus reducing operational fragmentation.

Kuo: Cloud portability and multi-cloud give users the freedom to choose best-fit solutions for different applications in different situations, based on business needs. Multi-cloud also enables more levels of redundancy. That redundancy gives users more flexibility to build with the best technology, and also helps them optimize operations and stay competitive.

InfoQ: Kubernetes distributions and offerings on different clouds will try to differentiate their respective offerings, which seems the natural course of “Co-opetition”. What are the pitfalls that the Kubernetes community should avoid, based on your past experiences with similar communities?

Tucker: It’s natural to expect that different vendors will want to differentiate their respective offerings in order to compete in the market. The most important thing for the Kubernetes community is to remain true to its open-source principles and put vendor- or infrastructure-based differences behind standard interfaces, to keep the platform from fragmenting into proprietary variants. Public interfaces, such as the device-plugin framework (for things such as GPUs) and CNI (the Container Networking Interface), isolate infrastructure-specific differences behind a common API, allowing vendors to compete on implementation while offering a common layer. Vendors today also differentiate in how they provide managed Kubernetes. This sits outside the platform and leaves the Kubernetes API intact, so it is still up to the user whether or not to adopt a vendor’s management framework in their deployment model.

Liang: The user community should avoid using features that make their application only work with a specific Kubernetes distro. It is easy to stand up Kubernetes clusters from a variety of providers now. After creating the YAML files or Helm charts to deploy your application on your distro of choice, you should also try the same YAML files or Helm charts on GKE or EKS clusters.
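Liang's advice can be illustrated with a deliberately portable manifest. The sketch below (all names and the image are illustrative) sticks to the core `apps/v1` and `v1` APIs and avoids distro-specific annotations, so the same file should apply unchanged with `kubectl apply -f` against a GKE, EKS, or on-premise cluster:

```yaml
# A portable Deployment plus Service, using only core Kubernetes APIs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.15
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  # ClusterIP behaves the same everywhere; LoadBalancer details and
  # annotations differ per cloud provider, so they are avoided here.
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

The moment a manifest grows provider-specific annotations or references to a distro-only resource type, portability testing across GKE, EKS, and other clusters becomes the only way to catch the lock-in.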

Palladino: The community should be wary of any cloud vendor hinting at an “Embrace, Extend, Extinguish” strategy that will in the long term fragment Kubernetes and the community and pave the road for a new platform “to rule them all.”

Kuo: Fragmentation is what the Kubernetes community should work together to avoid. Otherwise, end users cannot get consistent behavior on different platforms, and they lose the portability and the freedom to choose -- which is what brought them to Kubernetes in the first place. To address this need, first identified by Google, the Kubernetes community has invested heavily in conformance, which ensures that every service provider’s version of Kubernetes supports the required APIs and gives end users consistent behavior.

InfoQ: Can you mention some obvious distributed-system or application patterns that make an application a shoo-in for multi-cloud Kubernetes? Is it microservices?

Tucker: Microservice-based architectures are a natural fit with Kubernetes. But when breaking apart monolithic apps into a set of individual services, we’ve now brought in the complexity of a distributed system that relies on communication between its parts. This is not something that every application developer is prepared to take on. Service meshes, such as Istio, seem like a natural complement to Kubernetes. They offload many networking and traffic-management functions, freeing the app developer from having to worry about service authentication, encryption, key exchange, traffic management, and more, while providing uniform monitoring and visibility.

Liang: While microservices obviously fit multi-cloud Kubernetes, legacy deployment architecture fits as well. For example, some of our customers deploy multi-cloud Kubernetes for the purpose of disaster recovery. Failure of the application in one cloud does not impact the functioning of the same app in another cloud. Another use case is geographic replication. Some of our customers deploy the same application across many different regions in multiple clouds for the purpose of geographic proximity. 

Palladino: Microservices is a pattern that adds a significant premium to our architectural requirements, because it leads to more moving parts and more networking operations across the board -- managing a monolith is an O(1) problem, while managing microservices is an O(n) problem. Kubernetes has been a very successful platform for managing microservices, since it provides useful primitives that can be leveraged to automate a large variety of operations, which in turn removes some -- if not most -- of that “microservices premium” from the equation. The emergence of API platforms tightly integrated with those Kubernetes primitives -- like sidecar and ingress proxies -- is also making it easier to build network-intensive, decoupled, and distributed architectures. With that said, any application can benefit from running on top of Kubernetes, including monoliths. With Kubernetes running as the underlying abstraction layer on top of multiple cloud vendors, teams can now consolidate operational concerns -- like distributing and deploying their applications -- across the board without having to worry about the specifics of each cloud provider.

Kuo: Microservices is one of the best-known patterns. The sidecar pattern is also critical for modern applications, integrating functionality like logging, monitoring, and networking into the application. The proxy pattern is useful for simplifying application developers’ lives: it becomes much easier to write and test applications without needing to handle network communication between microservices. This is commonly used by service-mesh solutions, like Istio.
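To make the sidecar pattern Kuo describes concrete, here is a minimal sketch (container names and images are illustrative): a logging sidecar runs in the same Pod as the application, sharing a volume, so log shipping is decoupled from the application code:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs            # scratch space shared by both containers
    emptyDir: {}
  containers:
  - name: app             # the main application writes its logs here
    image: my-app:1.0     # illustrative image name
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper     # sidecar tails the shared log file and forwards it
    image: busybox:1.30
    command: ["sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```

Because the sidecar has its own image and lifecycle, the logging (or monitoring, or proxying) concern can be upgraded independently of the application container.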

InfoQ: What does the emerging Service Mesh pattern do to enable multi-cloud platform implementation (if any)?

Tucker: I believe Kubernetes-based apps built on microservices and a service-mesh architecture will likely become more prevalent as the technology matures. The natural next step is for a service mesh to connect multiple Kubernetes clusters running on different clouds. A multi-cloud service mesh would make it much easier for developers to stitch together the very best components and services from different providers into a single application or service.

Liang: The emerging service mesh pattern adds a lot of value for multi-cloud Kubernetes deployment. When we deploy multiple Kubernetes clusters in multiple clouds, the same Istio service mesh can span these clusters, providing unified visibility and control for the application.

Palladino: Transitioning to microservices translates to a heavier use of networking across the services we are trying to connect. As we all know, the network is inherently unreliable and cannot be trusted, even within a private organization’s network. Service mesh is a pattern that attempts to make our inherently unreliable network reliable again by providing functionality (usually deployed in a sidecar proxy running alongside our services) that enables reliable service-to-service communication (like routing, circuit breakers, health checks, and so on). In my experience working with large enterprise organizations implementing a service mesh across their products, the pattern can help with multi-cloud implementations by routing workloads across different regions and data centers, and by enforcing secure communication between services across different clouds and regions.
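To illustrate the circuit-breaker functionality Palladino mentions: in Istio (one popular mesh implementation), such a policy is declared as mesh configuration rather than coded into each service. The sketch below is illustrative (the `reviews` host name and the thresholds are made up) and uses the `v1alpha3` networking API current at the time of writing; it caps connections to a service and temporarily ejects backends that keep returning errors:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews               # illustrative service name
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100   # cap concurrent connections to the service
    outlierDetection:         # Istio's circuit-breaker mechanism
      consecutiveErrors: 5    # eject a backend after 5 consecutive errors
      interval: 10s           # how often backends are scanned
      baseEjectionTime: 30s   # minimum ejection duration
      maxEjectionPercent: 50  # never eject more than half the pool
```

Because this lives in the mesh's configuration, the same policy applies to a service regardless of which cluster or cloud its backends happen to run in.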

Kuo: Container orchestration is not enough for running distributed applications. Users need tools to manage those microservices and their policies, and they want those policies to be decoupled from the services, so that policies can be updated independently of the services. This is where service-mesh technology comes into play. The service-mesh pattern is platform-independent, so a service mesh can be built between clouds and across hybrid infrastructures. There are already several open-source service-mesh solutions available today; one of the most popular is Istio. Istio offers visibility and security for distributed services, and ensures a decoupling between development and operations. As with Kubernetes, users can run Istio anywhere they see fit.

InfoQ: Vendors and customers usually follow the money trail. Can you talk specifically about customer success stories or case studies where Kubernetes and/or multi-cloud matters?

Liang: Rancher 2.0 has seen tremendous market success precisely because it is capable of managing multiple Kubernetes clusters across multiple clouds. A multinational media company uses Rancher 2.0 to stand up and manage Kubernetes clusters in AWS, Azure, and their in-house vSphere clusters. Their IT department is able to control which application is deployed on which cloud, depending on its compliance needs. In another case, Goldwind, the third-largest wind-turbine manufacturer in the world, uses Rancher 2.0 to manage multiple Kubernetes clusters in its central data center and in hundreds of edge locations where wind turbines are installed.

Palladino: I have had the pleasure of working very closely with large enterprise organizations and seeing the pragmatic challenges that these organizations are trying to overcome. In particular, one large enterprise customer of Kong decided to move to multi-cloud on top of Kubernetes due to the large number of acquisitions executed over the past few years by the organization. Each acquisition would bring new teams, products and architectures under the management of the parent organization, and over time, you can imagine how hard it became to grow existing teams within the organization with so much fragmentation. Therefore, the organization decided to standardize how applications are being packaged (with Docker) and executed (with Kubernetes) in an effort to simplify ops across all the teams. Although sometimes very similar, different cloud vendors actually offer a different set of services with different quality and support, and it turns out that some clouds are better than others when it comes to certain use cases. As a result, many applications running within the organization also ran on different clouds depending on the services they implemented, and the company was already a multi-cloud reality by the time they decided to adopt Kubernetes. In order to keep the latency low between the applications and the specific services that those products implemented from each cloud vendor, they also decided to start a multi-cloud Kubernetes cluster. It wasn’t by any means a simple task, but the cost of keeping things fragmented was higher than the cost of modernizing their architecture to better scale it in the long-term. In a large enterprise, it’s all about scalability -- not just technical but also organizational and operational.

Kuo: There are a wide variety of customer success stories covered in Kubernetes case studies. My favorite is actually one of the oldest -- the one about how Pokémon Go (a mobile game that went viral right after its release) runs on top of Kubernetes, which allowed the game’s developers to deploy live changes while serving millions of players around the world. People were surprised and excited to see a real use case of large-scale, production Kubernetes clusters. Today, we’ve learned about many more Kubernetes customer success stories -- such as the ones we just heard from Uber and Airbnb on the KubeCon keynote stage. Diverse and exciting use cases are now the norm within the community.

InfoQ: From a single project or solution viewpoint, is it even feasible for the different components or services to reside in multiple clouds? What are the major technical challenges (if any) that need to be solved before Kubernetes is truly multi-cloud? 

Tucker: Yes. Work on a dedicated federation API, separate from the Kubernetes API, is already in progress (see the Kubernetes documentation and the project on GitHub). This approach is not limited to clusters residing at the same cloud provider. But it’s still very early days, and many of the pragmatic issues beyond the obvious ones, such as increased latency and differing cloud-service APIs, are still under discussion.

Liang: Yes it is. Many Rancher customers implement this deployment model. We are developing a number of new features in Rancher to improve the multi-cloud experience:

  • A mechanism to orchestrate applications deployed in multiple clusters that reside in multiple clouds.
  • Integration with global load balancers and DNS servers to redirect traffic.
  • A mechanism to tunnel network traffic between pods in different clusters.
  • A mechanism to replicate storage across multiple clusters.

Palladino: The major challenges are federation across multiple Kubernetes clusters, a control-plane API that can help run multiple clusters, and security (users and policies). Synchronizing resources across multiple clusters is also a challenge, as is observability across the board. Some of these problems can be addressed by adopting third-party integrations and solutions, but it would be nice if Kubernetes provided more out-of-the-box support in this direction. There is already an alpha version of an experimental federation API for Kubernetes, but it is unfortunately not mature enough to be used in production (with known issues).

Thinking about multi-cloud Kubernetes is akin to thinking about managing several separate Kubernetes clusters at the same time. The whole release lifecycle (packaging, distributing, testing, and so on) needs to be applied and synced across all the clusters (or a subset of them, based on configurable conditions), while at the same time having a centralized plane to monitor the status of operations across the entire system. Both application and user security also become more challenging across multiple clusters. At runtime, we may want to enforce multi-region routing (for failover) between one cluster and another, which also means introducing more technology (and data-plane overhead) to manage those use cases. All of the functionality we normally use in a single cluster -- CI/CD, security, auditing, application monitoring, and alerting -- has to be rethought in a multi-cluster, multi-cloud environment, not just operationally but also organizationally.

Kuo: Kubernetes already does a good job of abstracting the underlying infrastructure layer, so that cloud-provider services, such as networking and storage, are simply resources in Kubernetes. However, users still face a lot of friction when running Kubernetes in multi-cloud. Technical challenges, including connectivity between different regions and clouds, disaster recovery, and logging and monitoring, still need to be solved. We need to provide better tooling to make the user experience seamless, and the community is dedicated to providing those resources.

InfoQ: Please keep this brief, but can you talk about some products or technologies that may not be mainstream yet, but that might help obviate some of the issues we’ve been talking about and make development and deployment on multi-cloud easier?

Liang: Our flagship product, Rancher, is designed specifically to manage multiple Kubernetes clusters across multiple clouds.

Palladino: Open-source products like Kong can help consolidate security, observability, and traffic control for services across multi-cloud deployments, by providing a hybrid control plane that can integrate with different platforms and architectures, and by enabling large legacy applications to be decoupled and distributed in the first place. Open-source ecosystems, like the CNCF, also provide a wide variety of tooling to help developers and architects navigate the new challenges of the multi-cloud and hybrid architectures (monoliths, SOA, microservices, and serverless) that the enterprise will inevitably have to deal with. It’s very important that, as the scale of our systems increases, we avoid fragmentation of critical functions (like security) by leveraging existing technologies instead of reinventing the wheel. And open source, once again, is leading this trend by increasing developer productivity while at the same time creating business value for the entire organization.

Kuo: One of the common patterns I see is to manage everything using the Kubernetes API. To do this, you most likely need to customize the Kubernetes API, for example with a Kubernetes CustomResourceDefinition. A number of technologies built around Kubernetes, such as Istio, rely heavily on this feature. Kubernetes developers are improving the custom-API feature to make it easier to build new tools for development and deployment on multi-cloud.
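As a minimal illustration of the CustomResourceDefinition mechanism Kuo refers to, the sketch below registers a hypothetical `Backup` resource type (the `example.com` group, the kind, and all fields are illustrative), using the `apiextensions.k8s.io/v1beta1` API current at the time of writing:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
---
# Once the CRD is registered, instances are managed through the ordinary
# Kubernetes API and kubectl, just like built-in resources:
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nightly-db-backup
spec:
  schedule: "0 2 * * *"       # illustrative custom fields
  target: postgres-main
```

This is why the pattern travels well across clouds: a controller watching `Backup` objects uses only the Kubernetes API, so the same custom resources and tooling work on any conformant cluster.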

InfoQ: Final Question. Summing up, where do you see the Kubernetes community headed, and how important is multi-cloud in defining the roadmap? Any other random thoughts about Kubernetes and Kubecon that developers and architects should care about?

Tucker: Most user surveys show that companies use more than a single cloud vendor, often including both public and private clouds. So multi-cloud is simply a fact. Kubernetes, therefore, provides an important common platform for application portability across clouds. The history of computing, however, shows that we continually build up layers of abstraction. When we look at multi-cloud, it may be that other “platforms”, such as serverless or a pure services-based architecture built on top of Kubernetes, are where we are really headed.

Liang: Multi-cloud started as a wonderful side benefit of Kubernetes. I believe multi-cloud is now a core requirement when people plan for Kubernetes deployment. The community is working in a number of areas to improve multi-cloud support: Kubernetes conformance, SIG Multicluster, and Federation. I’m extremely excited about where all these efforts are headed, and I encourage all of you to take a look at these projects if you are interested in multi-cloud support for Kubernetes.

Palladino: Kubernetes and containers enabled entire organizations to modernize their architectures and scale their businesses. As such, Kubernetes is well-positioned to be the future of infrastructure for any modern workload. As more and more developers and organizations deploy Kubernetes in production, the more mature the platform will become to address a larger set of workloads running in different configurations, including multi-cloud. Multi-cloud is a real and pragmatic topic that every organization should plan for in order to continue being successful as the number of products and teams -- with very unique requirements and environments -- keeps growing over time. Like a multicellular organism, the modern enterprise will have to adapt to a multi-cloud world in order to keep building scalable and efficient applications that ultimately deliver business value to their end users.

Kuo: The Kubernetes community has a special interest group for cloud providers to ensure that the Kubernetes ecosystem is evolving in a way that’s neutral to all cloud providers. I’ve already seen many Kubernetes users choose Kubernetes for the benefit of multi-cloud. I envision that more enterprise users will join the Kubernetes community for that very same reason.  


Panelists talk about their own personal experience with the Kubernetes community and how it’s enabling cloud development and deployment. All panelists conclude that Kubernetes is already a reality in eliminating vendor lock-in and enabling cloud portability with the choice of offerings on the different clouds. However, they also recognize that multi-cloud means more than a common platform on multiple clouds. 

The panelists talk of the pragmatic approach of the Kubernetes community in solving specific pain points related to application development and deployment in a truly open-source, community-driven fashion. This community approach is unlikely to fall into the danger of “embrace and extend” that has been the bane of many projects in the past.

Finally, the panelists talk about application patterns like service mesh, and technologies such as Istio, in the context of examples that require Kubernetes and its roadmap to evolve to be truly multi-cloud.

About the Panelists

Lew Tucker is a former VP/CTO at Cisco Systems and has served on the boards of directors of the Cloud Native Computing, OpenStack, and Cloud Foundry Foundations. He has more than 30 years of experience in the high-tech industry, ranging from distributed systems and artificial intelligence to software development and parallel system architecture. Prior to Cisco, Tucker was VP/CTO of Cloud Computing at Sun Microsystems, where he led the development of the Sun Cloud platform. He was also a member of the JavaSoft executive team that launched Java and helped to bring it into the developer ecosystem. Tucker moved into technology following a career in neurobiology at Cornell University Medical School, and has a Ph.D. in computer science.

Marco Palladino is an inventor, software developer, and Internet entrepreneur based in San Francisco. As the CTO and co-founder of Kong, he is Kong’s co-author, responsible for the design and delivery of the company’s products, while also providing technical thought leadership around APIs and microservices within both Kong and the external software community. Prior to Kong, Marco co-founded Mashape in 2010, which became the largest API marketplace and was acquired by RapidAPI in 2017.

Janet Kuo is a software engineer for Google Cloud. She has been a Kubernetes project maintainer since 2015. She currently serves as co-chair of KubeCon + CloudNativeCon.


Sheng Liang is co-founder and CEO of Rancher Labs. Rancher develops a container-management platform that helps organizations adopt Kubernetes. Previously, Sheng Liang was CTO of Cloud Platform at Citrix, and CEO and founder of a company acquired by Citrix.
