
Q&A with Google VP Eyal Manor about Anthos, Kubernetes, and Multicloud

Anthos, formerly known as Cloud Services Platform, is Google's platform for building and managing modern cloud-native applications across multiple environments. It was unveiled at the Google Next conference, where it was the centerpiece of the main keynote, and was covered earlier by InfoQ.

As explained in the keynote, the Kubernetes project evolved from a Google-internal project called Borg. The announcement blog post outlines how Anthos builds on Kubernetes to support hybrid cloud and even multi-cloud. As Kubernetes installations span multiple environments, including multiple clouds, enterprises need a centralized view across these clusters and a way to manage them and push configuration and policy information to them uniformly. Anthos is aimed at solving this problem.
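To make that problem concrete, the following is a minimal, hand-rolled sketch of pushing the same configuration to several clusters with the official Kubernetes Python client. It is not Anthos itself: the cluster context names and the namespace are hypothetical, and Anthos Config Management instead drives this kind of change declaratively from a Git repository.

```python
# Hand-rolled illustration of pushing uniform config across clusters.
# The kubeconfig context names and the namespace are hypothetical.
from kubernetes import client, config

CLUSTER_CONTEXTS = ["gke-us-east1", "gke-on-prem", "gke-europe-west1"]

# One namespace definition, including labels used for policy, applied everywhere.
namespace = client.V1Namespace(
    metadata=client.V1ObjectMeta(
        name="payments",
        labels={"team": "payments", "env": "prod"},
    )
)

for context in CLUSTER_CONTEXTS:
    # Build an API client for each registered cluster from the local kubeconfig.
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=context))
    api.create_namespace(namespace)
    print(f"applied namespace 'payments' to cluster context '{context}'")
```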

The on-prem clusters connect over IP to Google's API endpoints, specifically Stackdriver for monitoring and alerting, and GKE Connect for registering clusters with the Google Cloud Platform Console.

The basic component diagram is as follows (Courtesy: Google Cloud docs).

Kubernetes and the Istio service mesh, along with other open-source components, make it possible for developers to build multi-cloud applications based around a "write once, run anywhere" philosophy.

A central component in making hybrid and multi-cloud possible with Google Cloud is Cloud Interconnect, as outlined below (Courtesy: Google Cloud docs).

Cloud Interconnect, in conjunction with the other components, provides a unified model for computing, networking, storage and service management for a hybrid cloud by pushing config and policy information.

InfoQ caught up with Eyal Manor, VP of product and engineering, Google Cloud, as a follow-on to his appearance in the Google Next keynote, to dive deeper into the architecture of Anthos.

InfoQ: At KubeCon 2018, multi-cloud was left, right, and center. It was covered in my virtual panel. Simply put, is Anthos a multi-cloud platform based around Kubernetes and Istio? Can you deep-dive a bit into the architecture?

Eyal Manor: Yes, Anthos is a multi-cloud platform based on Kubernetes and Istio. Anthos makes multi-cloud capability a reality through the use of open APIs and an extended management interface from which customers can easily deploy pre-configured Kubernetes applications. Anthos also lets enterprises automate policy and security at scale across deployments, using Istio and Anthos Config Management. Anthos also includes a container marketplace, the GCP Marketplace, with OSS and vetted third-party ISV solutions, as well as monitoring and logging out of the box, providing unified controls and observability.

Google Kubernetes Engine (GKE) was the first managed Kubernetes service available in the industry, and it makes a great infrastructure base for Anthos. One of the main goals of hybrid is meeting people where they are on their cloud journey. Anthos’ ability to provide unified controls and manageability across other Kubernetes installations ensures the right support for all phases of cloud adoption.

InfoQ: Workloads on-premises will need Google Kubernetes Engine (GKE). What will be required to run Anthos, controlled by GKE, on other cloud providers? Will that even be possible? Would you expect Azure and AWS to launch similar hosted offerings based on Anthos?

Manor: Anthos’ hybrid functionality is generally available both on Google Cloud Platform (GCP) with GKE, and in the customer’s data center with GKE On-Prem. Right now we have the GKE Connect agent, which allows us to connect to EKS environments or Kubernetes workloads running anywhere. We have further plans to expand these capabilities in the near future.

GKE On-Prem requires vSphere v6.x, and we know that cloud providers offer that as an API, which would allow customers to have the GKE On-Prem experience on that cloud. We are talking to many customers and partners about which other platforms they would like to see GKE support next.

InfoQ: In your blog you characterize Anthos as “write once, run anywhere.” As developers and architects, we’ve heard this before, although in a different context, and the initial reality was quite different. Can you address this skepticism here?

Manor: Anthos is based on open platforms and open APIs, the only practical way for developers to enable multi-cloud and multi-environment compute. In the past there were protocol specs; now open-source software is becoming the standard that works consistently anywhere with the same developer experience. One option is working with proprietary, closed systems to ensure support and reliability; the other is finding open, consistent platforms backed by a high-trust reliability and support partnership. Anthos takes the latter, customer-friendly approach. Likewise, rather than structuring a customized operating model to fit each disparate environment, Anthos offers a single, consistent operating model that works across on-prem and cloud, maximizing the benefits of modernization. Anthos reduces time for developers and ISV providers by enabling portability.

InfoQ: Part B of the above question: the CAP theorem, formulated by Eric Brewer, suggests that the possibility of a network partition will cause issues with application uptime. How will you address this with Anthos?

Manor: The great thing about working at Google is I can just ping the person who devised the CAP theorem, as he's working on Anthos alongside me! Eric points out that applications can continue to run on premises without a connection back to GCP. However, we do need a connection to manage upgrades and other operational changes. So some updates will not be possible during a partition, but local updates will work and the application should continue to run.
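As a rough illustration of the behavior Brewer describes, the sketch below keeps serving local work and simply skips management synchronization while the connection back to the control plane is down; the management URL is purely hypothetical.

```python
# Sketch: the workload keeps running locally during a partition; only
# management-plane sync (upgrades, config pulls) is deferred.
import time
import urllib.error
import urllib.request

MANAGEMENT_ENDPOINT = "https://example.com/management/heartbeat"  # hypothetical

def serve_local_request() -> str:
    # Local request handling does not depend on connectivity back to GCP.
    return "handled locally"

def management_reachable() -> bool:
    try:
        with urllib.request.urlopen(MANAGEMENT_ENDPOINT, timeout=2):
            return True
    except (urllib.error.URLError, OSError):
        return False

while True:
    print(serve_local_request())
    if not management_reachable():
        print("management plane unreachable; deferring upgrades, keeping local state")
    time.sleep(30)
```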

InfoQ: You talk about migration of workloads. Is this similar to vMotion, which is live migration of workloads without interruption?

Manor: No, this is different from vMotion and in a lot of ways solves a fundamentally different customer pain point. vMotion allows users to move an existing workload from one location to another on the same platform without interruption; once the move happens, the VM is still running in the same environment in the same form factor. With Anthos Migrate, Google Cloud offers live migration of VMs, and remains the only hyperscale cloud provider to offer this technology.

When you lift an application from on-prem and drop it onto the cloud in the same fashion, you are just moving your platform problem elsewhere.

Anthos Migrate offers customers a way to migrate workloads (physical servers, VMs from on-prem, Google Compute Engine, and/or other clouds) directly to containers in Google Kubernetes Engine (GKE), a modernized environment. Available in beta, Anthos Migrate migrates stateful workloads to containers in GKE within minutes, automatically transforming workloads to run as containers in Kubernetes pods, helping reduce risk, labor, and downtime for modernization projects.

Once migrated and upgraded to containers, workloads can be further modernized in GKE with added services such as CSM and Stackdriver logging and monitoring, as well as other OSS and third-party solutions from the GKE Marketplace.

InfoQ: Eric Brewer and Jennifer Lin wrote a white paper on Anthos. One of the fundamental tenets in the paper is decoupling. Can you address application development with Anthos? Is Anthos aimed at cloud-native applications only? How does Anthos simplify application development beyond the application portability provided by Kubernetes?

Manor: Anthos lets customers build and manage modern hybrid applications across their environments. Non-cloud-native workloads can also take advantage of Anthos: it addresses existing applications through its service mesh and migration capabilities and is not limited to cloud-native applications. Powered by Kubernetes and other industry-leading open-source technologies from Google, Anthos transforms a customer’s architectural approach, lets them focus on innovation, and allows them to move faster than ever without compromising security, increasing complexity, or reducing developer productivity. It lets customers modernize their applications on premises or in the cloud, using new or existing hardware, while Google Cloud offers a managed environment for their applications. The result is faster time to market, lower administrative overhead, and increased capacity for innovation. Anthos gives customers one platform that they can run anywhere and a view of what’s happening across environments. It’s built on open-source technology created by Google, so it’s portable, consistent, and extensible, helping future-proof investments. Anthos also simplifies serverless application development through integration with Cloud Run, which abstracts Kubernetes away, and through the marketplace and a multi-cluster, multi-cloud management plane.

InfoQ: While stateless applications are fairly "easy" to deploy anywhere, when you start bridging across clouds, the albatross always comes back to storage, networking and identity. Worth asking the same question I asked in the virtual panel: what are the major technical challenges (if any) that need to be solved before Kubernetes is truly multi-cloud?

Manor: At the end of the day, Kubernetes and platforms built on it, like Anthos, are running applications that customers directly, or another vendor, provide. Customers have to deal with the same challenges as if they were running those applications in a similar environment but without Kubernetes. The good news is that Kubernetes actually makes things easier; by having a standard control plane and API, we are finding that customers are able to run core workloads, like identity systems, in multiple locations much more easily. Google's experience running distributed systems at scale means we have a library of patterns to help address running these services, and our philosophy of Customer Reliability Engineering lets us extend our SRE team to share responsibility for them with our customers. (SRE stands for Site Reliability Engineering.)

InfoQ: Can you briefly talk about what’s on the horizon from the Kubernetes community that Anthos can leverage, and what’s the immediate and long-term roadmap for Anthos?

Manor: The Kubernetes community has developed a great set of primitives - Pods, Deployments, custom resource definitions - inspired by Google's pioneering work with Borg and Omega. Now, we're seeing more people focus on what they can build on top of these primitives. Service meshes like Istio let developers focus on the application business logic, not on the infrastructure that runs them. We're delighted to see adoption continue to increase while we are reducing production complexity for early users and improving ease of use.
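As a concrete example of those primitives, here is a minimal sketch that creates a Deployment through the official Kubernetes Python client; the names and image are illustrative, and any conformant cluster reachable from the local kubeconfig would accept the same object.

```python
# Minimal Deployment created via the Kubernetes API; names and image are illustrative.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="gcr.io/google-samples/hello-app:1.0",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```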

Likewise, with the Knative project, we've worked with industry partners to create a standard set of serverless building blocks, which means that each vendor can focus more on a great end-user developer experience. Anthos bundles many Cloud native technologies in a managed way that is easy to use and support, and also brings along multiple partner products as part of the solution or in the container marketplace.
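Cloud Run for Anthos builds on these Knative building blocks. As a rough sketch, a Knative Service resource can also be created directly through the Kubernetes API; the service name and image are illustrative, and the cluster is assumed to have Knative Serving installed.

```python
# Sketch: creating a Knative Service (serving.knative.dev/v1) as a custom object.
from kubernetes import client, config

config.load_kube_config()

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello", "namespace": "default"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{"image": "gcr.io/google-samples/hello-app:1.0"}]
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=knative_service,
)
```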

I believe there is much opportunity ahead for the community to further automate and accelerate software engineering for everyone. Google engineers have long had great internal platforms and toolchains to develop, build, test, and release code, accelerating developer velocity and productivity; we are working to bring the best of that functionality to the community.

In summary, Anthos is a multi-stage evolution from Google's internal Borg project to Kubernetes to multi-cloud. It's aimed at "write once, run anywhere" application development and management, enabled primarily by the Kubernetes platform.

The white paper authored by Googlers Eric Brewer and Jennifer Lin discusses application modernization enabled by Kubernetes.

More detailed information on Anthos is available in the Google Cloud docs.
