
The Future of Cloud Native API Gateways


Summary

Richard Li talks about the evolution of API gateways over the past ten years, and how the original problems being solved have shifted in relation to cloud native technologies and workflow. He talks about some challenges using API gateways in the cloud with Kubernetes, and some strategies for exposing Kubernetes services and APIs at the edge of a system.

Bio

Richard Li is co-founder and CEO of Datawire. He is a veteran of multiple technology startups including Duo Security, Rapid7, and Red Hat. He is a recognized Kubernetes and microservices expert and has spoken at numerous conferences including ApacheCon, the Microservices Practitioner Summit, KubeCon, and O’Reilly Velocity.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Li: I'm Richard Li. I've been working with API gateways and infrastructure in the cloud for the past five years, and so today I'm going to talk about the future of cloud-native API gateways. Just as Mark [Rendle] did in his previous talk about the evolution of APIs, I thought it would be helpful, when we talk about the future, to talk a little bit about how we got to the point where we are now.

What I thought was interesting when I was watching Mark's [Rendle] talk was that he traced the evolution of APIs decade by decade. I've chunked up my presentation around the history of API gateways in five-year increments. As it turns out, when I was working on this, the evolution of this infrastructure has really happened in these five-year increments, which I think is pretty neat.

Just a definitional term, I will be using the term edge throughout this presentation. When I talk about the edge I really mean the boundary between the data center and your clients, your browser, your mobile devices. This is distinct from the edge of IoT devices, which is a thing I don't know very much about, but I'm told that there is an edge there, too. I'm sure there is. The edge that I'm talking about is that data center edge, and where you typically deploy your load balancers and your API gateways.

The core thesis of this presentation is that the evolution of the edge has really been driven by your application architecture. What I'm going to do is I'm going to start with 1995, talk a little bit about application architecture then, and then talk about the edge as it pertains to that. Then we're going to go through this over the next 20 years, and we're going to go through this relatively quickly.

Application Architecture In the '90s

Application architecture in the '90s: J2EE. How many of you guys have programmed EJB? Look at that, EJB is very popular. Servlets, JSPs, EJBs, all in your monolithic application. Your client was a browser or a Java applet, and there was, believe it or not, a debate at the time as to whether browsers or Java applets were going to be the preferred client to your servers.

When you expose this at the edge, you had, essentially, a hardware load balancer. That hardware load balancer was managed by system administrators. The developers had no idea what this thing was. They just knew that they wrote some code and somehow it got connected to the internet. There was this hardware load balancer, and its primary purpose was high availability and scalability. It didn't have a lot of functionality. It was just really about load balancing, different load balancing algorithms, and some sort of health-checking mechanism, so that if one of your instances of your application server crashed, presumably your load balancer was smart enough not to actually send requests to that particular application server.

2000

2000. Five years later, similar application architecture. J2EE, EJB. Most people were starting to move more towards the servlet and JSP thing. HAProxy actually came onto the scene in 2001. Nginx came on right after that. Nginx was a little more focused around this web server use case. HAProxy, though, was really interesting, because HAProxy was a piece of software that was written explicitly as a proxy to help load balancers manage the incoming traffic.

The core functionality in the initial use case of HAProxy was this: your load balancers might set cookies for session persistence and load balancing. HAProxy could then offload some of that routing computation from your frontend load balancer, which was setting the cookie. It would look at the cookie in the header and, based on that cookie, route to a particular instance. If the cookie wasn't there, it would redirect back to the load balancer. This is an instance where you start to see software load balancers working with hardware load balancers, or in some cases replacing them altogether.

What I think is really interesting, and I'm going to talk a little bit more about this, is that this is where we start seeing the separation between layer 4 and layer 7. Folks may be familiar with the OSI network model, which is actually obsolete, but there's nothing better, so we still talk about layer 4 and layer 7. Layer 4 is your transport layer. The protocols in your transport layer are typically TCP and UDP, and your unit of communication is a connection. Layer 7 is your application layer, so example protocols in your application layer are HTTP, gRPC, Redis, Kafka, a lot of the stuff that Mark [Rendle] just talked about, and your unit of communication is a request or message. HAProxy, looking at the cookie and then routing on it, was actually doing layer 7 routing, in conjunction with your load balancer, which was doing some layer 4 stuff as well as some layer 7 stuff.

Again, to recap, the 2000s started seeing more of the software load balancer configuration. It was still managed by system administrators, but because it was software, they were early DevOps adopters, even though DevOps wasn't a term back then. It was still focused around high availability, scalability, load balancing, and health checks. For the first time you also saw much more sophisticated observability functionality, because the software load balancer would actually produce very rich logs. You can see this today: HAProxy produces very rich logs, and they started this back in 2001.

2005

There's a woman named Darcy DiNucci who actually coined the term Web 2.0 in 1999. Around 2005, Tim O'Reilly started to popularize this term. Web 2.0 was about moving the web from a one-way communication mode, where you would just publish content and people would read it, to more of an interactive mode, where you suddenly got more database-backed websites where people could write comments and interact with the site. Instead of just publishing your catalog, people actually started to buy stuff from the catalog. Web 2.0 became this big thing.

Ecommerce also really started to take off. Amazon tripled revenue from 2000 to 2005 on its way to becoming the huge retail goliath it is today.

The dominant architecture that really started to drive the web was Ajax. Ajax let you separate that presentation layer from data interchange, and it let you do a whole bunch of other things. By creating this much richer, real-time experience, Ajax really transformed the web.

One of the consequences of Ajax was that it really changed the network traffic at your edge. Instead of a single request coming in and a bunch of HTML going back out the other end, you'd have a single request come in, a bunch of HTML and JavaScript go out the other end, and then a lot of traffic back and forth using XML to actually update the data in your browser. This created a much chattier network traffic model, with longer-lived connections and a lot more requests. Because of this, the load balancer started to evolve into the application delivery controller.

The application delivery controller was, again, mostly used by system administrators. Not only did it do the high-availability stuff of previous load balancers, but it also started focusing on what the marketing term called application acceleration: SSL offload, optimized TCP stacks, caching, compression, as well as load balancing. With ecommerce you had to worry about SSL offload. You would tune your TCP stack to support these longer-lived, high-request connections, and so load balancers started gradually being replaced by application delivery controllers, which would then route to your frontend servers, which would then talk to your backend servers.

The canonical example of an ADC is the F5, and essentially it was a big piece of hardware with a big light on it. The one that I used to use actually had a bigger light. I couldn't find a picture on the internet with the really big light, but that was my favorite thing about the F5.

2010

Five years later, the iPad comes out, although this is not really a story about the iPad. In 2010 APIs started becoming much more popular. Google launched its API in 2005, and then you started seeing companies like Stripe, SendGrid, and Twilio productizing APIs around specific domain functionality, and that evolved into the first-generation API gateway. This is really the first time we actually started seeing an API gateway.

The purpose of this API gateway was API management. All these businesses saw that APIs were a thing. They wanted to figure out how to expose their APIs to the rest of the world, and so API gateways evolved with modules for monetization, analytics, and publishing and documenting the APIs through a developer portal. API gateways were really adopted by both sys admins and API developers. It didn't really route to your entire application; it was really about your APIs, which were thought about separately from your overall application. Apigee and MuleSoft were some of the early companies in this space.

2015

In 2015 things start to get a little bit more interesting. 2015 is the era of cloud scale. Companies like Twitter started to get a lot of users, and they had to figure out, "How do we scale the entire monolithic application as more and more users come to it?" This is a picture of the Twitter Fail Whale, where Twitter was over capacity, and the poor little birds couldn't carry the whale. What did Twitter actually do?

A whole bunch of companies started independently evolving the monolith model, and what they did was shard the monolith into these mini services. They would take some of the computation of the monolith and parcel it out into independent services that they could scale and manage independently. An example would be, instead of putting search in the monolith, you would put search in its own service. Then when an incoming request came in and it was a question about search, the monolith would say, "This is a search request. I'm going to send it to the search service." The monolith wouldn't do a lot of computation, and that would be much more efficient in terms of actually scaling the entire application.

One of the other benefits they discovered later on is that this actually let you decouple releases, and this was the precursor to microservices, which we'll get to in a minute. They would take the monolith, shard off that functionality into these mini services, and scale up the mini services independently of the monolith. If you think about it, the monolith in this model by design isn't doing a lot of computation. It's basically doing maybe a little bit of authentication and routing.

This gives rise to the second-generation API gateway. It doesn't take a huge amount of imagination to ask, "What if, instead of doing all this basic stuff that all the services need in my monolith, we pulled it out?" You can argue that monoliths were really the first instance of the second-generation API gateway. Actually, a fun fact: Yelp has 3 million lines of Python code. They're doing microservices, but their API gateway is still their monolith, at least as of the last time I talked to them, about a year ago.

What people started doing with second-generation API gateways was they said, "Let's refactor all this cross-cutting functionality, authentication, rate limiting, and routing, into an independent monolith or API gateway that's designed for this kind of stuff. Then route that traffic to my mini-lith, which is shrunk down, as well as these other mini services." That gave rise to the second-generation API gateway, also used by system administrators and API developers, centralizing these cross-cutting application concerns with functionality like authentication, rate limiting, and monitoring, as well as routing.

2020

Now we're in 2020. The era of cloud native is upon us. What's happening with that? Microservices are a big thing. Microservices become the standard architecture as people evolve into cloud native. Microservices are where you take your mini services and make them into much smaller services. I'm not going to actually explain microservices; I'm going to assume that many other talks will do a much better job of explaining them. For the purposes of the edge, what's interesting is that you have service proliferation, so you have a lot more services. Each of the microservices is built and operated by an independent application team, and they're also fully elastic. They independently scale up and down based on your demand.

When you think about your app architecture, it's really about a spectrum of services. Your services are running in different locations. They might be running in Kubernetes, they might be running as functions, they might be running as virtual machines. They also speak different protocols, because there's no one-size-fits-all protocol. Some of them might speak gRPC. Some might speak HTTP. Some might speak WebSockets. It all depends on your particular service. They have different load balancing requirements. If you have legacy services running on your virtual machines that were written a long time ago, maybe they need sticky sessions, while a more modern container-based service might just want a round-robin load balancing algorithm. They have different authentication requirements, too. You really need an API gateway that can actually route to this spectrum of services.

API gateways are now evolving into what I characterize as cloud gateways. The three trends that are happening right now when you look at API gateways are these. Organizations are taking API-gateway-type management capabilities, so authentication, developer portal, and metrics. They're marrying them with ADC-like traffic management capabilities, like rate limiting, timeouts, retries, all that sort of stuff. Then they're integrating these two areas of functionality with real-time service discovery. Because your services are all over the place, with functions, virtual machines, and Kubernetes, your API gateway needs to discover, in real time, the network location of all of your containers and virtual machines in an elastic environment.

As an example, if I have a container and it needs to scale up, and it suddenly produces 20 containers, well, my API gateway needs to know that I need to actually load balance to these other 20 containers. Then I need to be able to do different kinds of load balancing, which is a traffic management capability, but I also need to do authentication, which is more of an API gateway type functionality. We're seeing today that people are really evolving their cloud gateways to include these three categories of functionality.

Just to recap, a spectrum of services means cloud gateways are merging this load balancer, ADC functionality, plus API management, plus service discovery.

That's not the biggest change in my mind when it comes to microservices, though, because the biggest change with microservices is that it's changing your workflow. Developers, as they start adopting microservices, are actually very frequently on call.

There was an entire track here yesterday around full-cycle development. I think that's the popular term du jour, but essentially, application teams with microservices have full responsibility and authority for delivering not just code but an actual service. Instead of having one team code, another team do deployment with your engineering services, another team worry about release, and then an operations team responsible for running this thing in production, you actually have an application team that's responsible for that full cycle of development.

The reason organizations do this is to try to increase agility by accelerating that feedback loop. Netflix has a blog post on this, and you should also watch the talks from yesterday around full-cycle development. This is really, I think, the biggest change when you start thinking about microservices: this notion of transitioning your workflow.

When it's a workflow change, this is actually really different from everything we've talked about. I'm going to revise a thesis I actually had at the beginning of the talk. I think the evolution of the edge in the future is not just going to be driven by your application architecture, but it's also being driven by your application development workflow. We're doing all this because, as you start thinking about the cloud and everything that's evolving, we're going faster and faster. Trying to drive agility is what creates competitive advantage in businesses.

Full Cycle Development

How does the edge play into full-cycle development? If you think about it, all of your services are running, I'm going to say in a cluster, but they're running behind the edge, whether they're in development, testing, or production. All of your incoming traffic, whether it's from developers or customers, is flowing through the edge. The edge sits at a really interesting place in your architecture, where it's actually mediating all this traffic.

What I'm going to talk about is the four phases of a full-cycle development lifecycle, which I've simplified, and how people are using the edge today and how I expect them to use the edge to accelerate and make that full-cycle development model very productive for them. I'm going to start with some less controversial things that people are doing today, and then I'm going to do some more controversial stuff, where I could be totally wrong, but where I think people are going.

Let's start with release. Sam Newman talked yesterday about the importance of decoupling release from deployment. Deployment is the operation of actually getting your microservice deployed in the cluster, and release is the operation of exposing it to customers. You want to make sure that a deployment has basically no customer impact at all times, because then you can iron out any technical issues, and you can choose when to release it at some point in time.

Cloud gateways actually control release through routing. Canary releases are where you say, "Here's version 1.1 of my service. I'm going to route 5% of my traffic to version 1.1, 95% to version 1. If it works well, then I'm going to move forward. If it doesn't work well, I can roll back." One of the benefits of using routing for rollback is that rollback is instantaneous. You're literally just changing a rule, as opposed to doing a redeploy, and so this is another benefit of decoupling release from deployment.
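To make that concrete, here is a rough sketch of what such a weighted routing rule could look like as a Kubernetes-style manifest. The Route kind, the example.dev API group, and the field names are all made up for illustration; real gateways express the same idea with their own resource types.

apiVersion: example.dev/v1          # hypothetical API group, for illustration only
kind: Route
metadata:
  name: catalog-canary
spec:
  prefix: /catalog/                 # requests matching this path prefix
  split:
    - service: catalog-v1           # stable version keeps 95% of traffic
      weight: 95
    - service: catalog-v1-1         # canary version receives 5%
      weight: 5

Rolling back is then just setting the weights back to 100 and 0 and reapplying the manifest, which is why routing-based rollback is effectively instantaneous.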

What we see is that full-cycle developers are defining policies for release. You want to ramp traffic from 5% to 100% over the next day, and roll back if you see more than a particular threshold of errors. Or you might roll back manually by watching the graphs. Cloud gateways can play a really important role in release, and this is actually how we see most modern releases happen: through routing rules.

From an operational standpoint, you want to make sure your services never crash. How do you make sure your service is actually available? The two areas where cloud gateways support the operational aspects of deploying and running services are metrics and traffic management. Metrics means real-time data about your application traffic, so your latency, your throughput, your error rates, all that sort of stuff, which is generally collected by a modern cloud gateway.

Then traffic management. This goes back to my point around how ADC-type functionality is being merged into the overall API gateway. Strategies for availability, such as timeout, automatic retry, circuit breaking, rate limiting, all these things are traffic management techniques designed to increase the availability of your system. Full-cycle developers should design a policy for traffic management.

I was working with a large payroll provider, and they were deploying a whole bunch of services at the edge. One of the things they found was that early versions of their microservices would sometimes crash unexpectedly. They didn't really want to slow the development cycle, so what they did was implement an automatic retry policy at the edge, which actually improved the overall resilience of the application so they could continue with the pace of deployment. They probably should fix whatever underlying issues there were, but this was a way they could mitigate the overall availability issues they had in their application.

Policy

I see this happening all the time today. Organizations are integrating cloud gateways to help with release and run. I've been using the word policy a lot. What's a policy? A policy is a statement of intent. In other words, policies are declarative, and if you're using Kubernetes, you're already writing policies today. To use Kubernetes, you basically write a set of policies that tell Kubernetes what to do, and then it does those things.

In my world, I see that Kubernetes has pretty much taken over container management, so I'm going to show a bunch of examples now that are Kubernetes-centric. You could probably apply these to other systems, but I would imagine that if you're going to containers and you need container management, Kubernetes is probably your default choice today.
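To ground the idea of a declarative policy, a plain Kubernetes Deployment is a good example: you state the desired outcome and Kubernetes keeps reality matching it. A minimal sketch, with a placeholder image name:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
spec:
  replicas: 3                        # intent: keep three copies of this service running
  selector:
    matchLabels:
      app: search
  template:
    metadata:
      labels:
        app: search
    spec:
      containers:
        - name: search
          image: registry.example.com/search:1.0   # placeholder image name

You never tell Kubernetes how to start or restart those pods; you only declare the intent, and the system reconciles toward it. Edge policies follow the same pattern.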

We've just talked about release and traffic management policies. How do you actually implement these policies? This is not a real policy implemented by a piece of software, but you can imagine it's patterned after Kubernetes custom resource definitions. We've created an edge policy type, and it's got a name. It's got a prefix. It says everything that goes to the backend service should be routed to the /target URL, and we've implemented both a retry policy and a release policy. What you can do is have your application development teams define these policies on a per-microservice basis.
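Since the slide itself isn't reproduced in this transcript, here is a guess at what that manifest might look like. The EdgePolicy kind and every field name are illustrative only, patterned after Kubernetes custom resource definitions rather than taken from any real gateway's API:

apiVersion: example.dev/v1           # hypothetical API group
kind: EdgePolicy
metadata:
  name: backend
spec:
  prefix: /backend/                  # requests matching this prefix...
  service: backend                   # ...go to the backend service
  rewrite: /target                   # ...at the /target URL
  retryPolicy:
    maxRetries: 3                    # retry transient failures automatically
    perTryTimeout: 2s
  releasePolicy:
    strategy: canary
    increment: 10                    # shift traffic in 10% steps
    interval: 1h                     # roughly ramping to 100% over a day
    maxErrorRate: 5                  # roll back if errors exceed 5%

Because a manifest like this can live alongside the microservice's own code, the team that owns the service also owns its routing, retry, and release behavior.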

The reason this is actually incredibly powerful is that each microservice may need its own specific retry, availability, and release policies. You want to be able to tailor that on a per-microservice basis, but you also want to make sure that your full-cycle development teams are able to define those policies. When you do something like this, you're actually decentralizing the management and giving that power to your full-cycle development teams, and your operations teams don't need to get super involved with the minutiae of exactly what needs to happen for a particular microservice.

Code and Deploy

We've just talked about the release aspect of full-cycle development, which you can manage with release policies. We talked about run with traffic management policies and metrics. I'm going to talk about code and deploy.

When you go to microservices, what's the problem with development? There are a bunch of problems, but probably one of the biggest challenges with development is the fact that when your application consists of hundreds of microservices, and you've got an API gateway, and a database here, and another database there, and a message queue, and Elasticsearch, how do you set up your dev environment? You can't run it locally, because you don't have enough CPU and memory on your laptop, and it's actually really complicated, especially if you have lots and lots of developers trying to make changes, adding new dependencies and new services. You don't really want to run it in a cloud staging environment, because you get a really slow dev loop: you make a code change, and then you have to figure out how to get your code change from your laptop all the way up into the cloud. It doesn't work really well with IDEs, or profilers, or any of the tools that you run on your laptop, so what can you do?

If you think about it, we just talked about how cloud gateways should be routing to a bunch of different targets: Kubernetes, virtual machines, and functions. Why not have your cloud gateway also route to your laptop? If you can do that, you can run the microservice you're coding on locally, using your IDE and preferred tool set. You run everything else in the cloud, and then you can tell your cloud gateway, "Route 1% of the requests from the internet, copy them, and send them to my laptop." Or, in the case of my laptop, I might inject an authentication header into the requests I send, with a special JWT that belongs to Richard, and say, "Route Richard's requests to Richard's laptop." That way I can do local development, even though I'm integration testing and using other services that I have a dependency on.
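Sketching that idea in the same made-up resource style, a header-based rule might look something like this, where requests carrying a particular developer's identity (for example, derived from a JWT claim) are steered to a tunnel that terminates on their laptop:

apiVersion: example.dev/v1           # hypothetical API group
kind: EdgePolicy
metadata:
  name: backend-richard-dev
spec:
  prefix: /backend/
  headers:
    x-dev-user: richard              # only requests tagged for Richard match this rule
  service: richard-laptop-tunnel     # a tunnel service forwarding to the developer's machine
  # requests without the header keep matching the normal backend policy

This is roughly the workflow that tools like Telepresence (mentioned in the Q&A below) aim to support; the manifest is only meant to show the routing concept.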

Let's talk about deployment. The deployment cycle for Kubernetes is actually pretty complicated. You have your source in GitHub or wherever your source control is, then you have to build a container in CI. That container, once it's built, gets pushed into a registry, usually somewhere in the cloud. Then Kubernetes has to download that container from the registry and apply a bunch of deployment metadata to it, and then you have to create a route in your gateway so that it actually becomes available.

Why expose application development teams to such abstraction? One of the things that I think is really interesting is that there's actually an entire cohort, a community of folks who are actually trying to solve this problem. That community is actually your function-as-a-service, or serverless community.

Function-as-a-service is about focusing on your application logic. The framework takes care of servers, deployment, and routing; the developer just writes some application logic and sticks it in a code repository. If you think about it, the cloud gateway provides a very natural way to deploy your function. The last step in your deployment process is actually publishing a route as part of your release process. If you synchronize these and figure out a way to get your cloud gateway to integrate with your functions, you actually have a really powerful way to accelerate your deployment lifecycle.

What if you could route, again, from a GitHub repository? Not just route to your laptop, Kubernetes, or functions; you could actually route from a GitHub repository. Here's an example. You have a project resource. It specifies a host and a prefix, it points to the GitHub repository, and what we can do is associate a release strategy with that particular GitHub repository. Your GitHub repository contains all your application code, your deployment metadata, and your serverless dependencies. Then your cloud gateway actually reads your GitHub repository and deploys that code. By associating the release policy with the repository, you still maintain that decoupling between deployment and release.
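Again, as a rough reconstruction rather than a real API, such a project resource might look something like this, with a made-up Project kind and placeholder repository and host names:

apiVersion: example.dev/v1           # hypothetical API group
kind: Project
metadata:
  name: recommendations
spec:
  host: api.example.com              # public host served by the cloud gateway
  prefix: /recommendations/          # path prefix for this project
  github:
    repo: example-org/recommendations   # repo holding code, deployment metadata, dependencies
  releasePolicy:
    strategy: canary                 # release stays decoupled from deployment
    increment: 10
    maxErrorRate: 5

The gateway, or a controller working with it, watches the repository, builds and deploys new commits, and only then applies the release policy to decide how much live traffic the new revision receives.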

If we go back to full-cycle development: from a coding perspective you can route to the laptop; from a deployment perspective you can route from code, your GitHub repository, with a serverless framework; from a release perspective you define a release policy; and then from an operational perspective you have traffic management policies and metrics. I'm putting traffic management policy and release policy in a darker color because I think these are well-established techniques. How full-cycle development integrates with cloud gateways in these other areas, I think, is an area where there is a lot of exploration.

Recap

Just to recap from a policy perspective, you can see how, with two relatively simple manifests, you can define your entire edge policy as well as your project and release policy.

Microservices are not just a change in architecture, but actually a change in the workflow of building software applications. The edge is a critical component of that workflow. Really importantly, with full-cycle development, the edge needs to be managed by developers and not just operations. This is a huge shift in how you should be thinking about your API gateway.

Full-cycle developers can use the edge to go much faster. I'm not totally sure if this will be called an API gateway anymore; maybe it's a cloud gateway. Some people use the term edge stack. We're going to have to see what sticks.

Getting Started

I just did a whirlwind history, plus where we are today and where we expect to go. How do you actually get started? I created an acronym, LESS, since acronyms hopefully are more memorable: Locate, empower, strangler fig, share.

What I suggest is that you start by locating a self-contained edge business function. You find a particular business capability, not a technical domain, that you want to improve. I recommend you start at the edge and work your way in toward foundational services, because there are fewer dependencies at the edge. If it's a foundational service and you have 30 other services depending on it, actually turning it into a microservice is quite challenging. Pick something at the edge where there are fewer dependencies.

Ensure there's a clear and compelling business justification for this effort. For example: with a better recommendation engine, we think we can actually get 5% more in sales, and so we need to iterate on it much more quickly. Having a compelling business justification is the first part.

Once you've located an edge service, you empower a full-cycle development team. One of the great things about full-cycle development is that the team is supposed to operate independently, without any dependencies on the mother ship. Essentially, give a team the autonomy to iterate and experiment to figure out what works really well for them. Think about it as creating a spinoff. You just create a team and say, "Guys, figure it out. Build a microservice that solves this business problem, and figure out the right deployment, release, and best practices, everything for getting this microservice going."

Then you apply the strangler fig pattern to your monolith. Essentially you can route all your traffic to an API gateway, which functions as a facade to your monolith and to your microservices. Then as you refactor and move functionality from your monolith into these microservices you can do that transparently without impacting your end user.
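As a sketch, in the same illustrative resource style as before, the facade can be as simple as one specific route for the carved-out capability and a catch-all that still points at the monolith:

apiVersion: example.dev/v1           # hypothetical API group
kind: EdgePolicy
metadata:
  name: strangler-facade
spec:
  routes:
    - prefix: /recommendations/      # the capability extracted into a microservice
      service: recommendations
    - prefix: /                      # everything else still goes to the monolith
      service: monolith

As more functionality moves out of the monolith, you add more specific prefixes ahead of the catch-all, and clients never notice the difference.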

Finally, you want to share those best practices. When we talk to organizations that are adopting Kubernetes and some of these cloud-native development techniques, the number one challenge that teams come to us with is, "We have 5 or 10 developers who really know what they're doing, but our organization has 500 or 1,000 engineers. How do we get this knowledge into the rest of those engineers?" There's no real easy answer. The most popular technique, believe it or not, is lunch and learns. I'm sure lunch and learns are great, but I think there's probably a lot more that people need to think about.

For the first full-cycle development team, you want to pick people who actually like to share their best practices with the rest of the organization, because they're going to be counted on to be technical leaders. Over time, what we see is that people start creating a platform team that helps codify those best practices as part of the platform. If, for example, canary releases become a really popular and powerful technique that works for your organization (canary releases don't work for everyone), then you want to figure out how to build that into your platform so that any team can access the power of canary releases for their particular microservice.

Recap

To recap, the edge has really evolved in response to application architecture and workflow: from monolithic web applications back in 1995, to cloud-scale services just a few years ago, to microservices today. You can see that there's been a transition from hardware load balancers, to software load balancers and ADCs, to these REST API gateways, to whatever we want to call where we are today. Cloud native is a new application architecture with microservices, and it's also a new development workflow with full-cycle development. Start with LESS. Locate an edge service, empower a full-cycle development team and take advantage of the fact that they can operate independently, strangler fig your monolith, and then share those best practices in your organization.

I'm Richard Li. You can find me on Twitter, and thanks for joining my talk.

Questions and Answers

Participant 1: I have two questions. The first one is security. How will you handle security at the edge? Because if you're routing something to your local computer, there are security issues there, sensitive data, or something like that. The second one: I don't understand why API gateways or cloud gateways are not thinking about data. Because if you're doing a rollback, you're just rolling back infrastructure, or a service, or routes, but you're not rolling back the data, because you lost the data. 5% of your traffic can be a lot of orders you're losing in your ecommerce platform, or something like that. How can we replay that data and guarantee you're rolling back everything, not just a service, not just a route, not just infrastructure?

Li: Your first question around security. I think there's a couple ways to actually answer that question. One simple example that we've seen is that people actually adopt this initially just in the staging environment so it's more internal. Honestly, coding live in production isn't necessarily where I would suggest that organizations actually start. You probably want to create a shared staging development cluster for your entire development organization, and then they could actually do live integration testing there before they actually roll something into production. Depending on what you want to do, you'll probably want to do authentication and that sort of thing. Depending on your view around different authentication techniques, that may or may not be secure enough for you.

Your second question around data, that's a great question. I think there's no one-size-fits-all answer. One of the things that we see is that if you're doing a rollback, you're basically taking your live traffic, and it's going to a new version of a service. That service is still persisting data, and when you do a rollback you're rolling back to version 1. As long as the data model is compatible between version 1 and 1.1, you can do that rollback without an issue. If the data models between 1.1 and 1.0 aren't the same, then there are different techniques you have to use for rollback and testing. For example, canary releases don't work very well when you're making stateful changes. That's where traffic shadowing, where you take the incoming request, copy it, and test it against your 1.1 service, and then your 1.1 service needs to either drop the data it persists or persist it to a staging database, those kinds of techniques become more important.

Participant 2: In terms of API gateways, how do you see API gateways evolving when it comes to microservices, orchestration, and composition?

Li: There was a cohort of API gateways, pioneered probably by Netflix's Zuul, where you did a lot of composition and request mutation inside your API gateway. What we see is that a lot of that is really business logic, and it's being moved out to microservices, where they actually do that aggregation. Generally speaking, putting more and more business logic into the API gateway, which used to be a trend, we're seeing that being pulled out into microservices. Then essentially what you do is have a couple of microservices send their requests to an intermediary microservice, which does the request mutation, whatever it is, and then passes the response back to the edge, instead of putting that directly into your API gateway. That way you can do independent release cycles. You're not as coupled to your API gateway, so we see that as where people tend to be going, unless it's truly cross-cutting functionality that every single one of your microservices needs.

Participant 2: I have two questions actually. Where does the API gateway sit as part of the platform? Does it sit internal to the network, or does it sit external to the network, generally?

Li: If you're using Kubernetes, Kubernetes is very interesting, and one of the concepts of Kubernetes that is really powerful is that it introduced its own private network address space. Every single pod inside Kubernetes has its own IP address. The consequence of this architectural decision is that you actually need a network bridge between the internal Kubernetes network and the external internet. If you're familiar with Kubernetes, the term for this is an ingress controller. By definition, because of the way Kubernetes works, you will have to deploy an ingress controller, which is where we see most people deploying their API gateway. Those API gateways tend to be deployed inside Kubernetes, in the same cluster as everything else.
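For reference, the standard Kubernetes Ingress resource (networking.k8s.io/v1) is the built-in way to express that bridge; the host and service names below are placeholders. An API gateway acting as the ingress controller is what watches and implements rules like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: edge
spec:
  rules:
    - host: api.example.com          # external hostname at the data center edge
      http:
        paths:
          - path: /search
            pathType: Prefix
            backend:
              service:
                name: search         # internal Kubernetes Service to route to
                port:
                  number: 80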

Participant 2: One last question. One of the functionalities is that you do real-time service discovery in the API gateway. If you have a laptop, you talked about VMs, functions, and laptops, do you have any strategy for how to do real-time service discovery with the laptop?

Li: If you want to do that, there are a bunch of different techniques. One would be, if you're using DNS for service discovery, you can do things where you register your laptop with your cluster DNS. That would be one thing. Generally speaking, for service discovery you're going to use some sort of service discovery mechanism. I'd say DNS is probably the most popular, or you can use a distributed key-value store like Consul. In either event, you want to use the APIs available to you, they all have APIs, where you can register new services and new devices. The laptop is actually just the same as registering a function, from the perspective of service discovery.

Participant 3: In the local development space in Kubernetes you've obviously got things like Telepresence, and I've used that a lot. My question is really, have you seen any projects developing in the route-to-code space? Because the GitOps-type practice of routing to code makes sense, but have you seen any projects developing there?

Li: I think what I see is that Knative is actually doing a lot of really interesting things in that space. That's the project I'm most familiar with. I'm less familiar with OpenFaaS, and serverless, and some of the others, but certainly with Knative you see a lot of that actually happening in the project. The challenge with a lot of these serverless frameworks that I've seen is just that your first-time user experience and how you actually bootstrap someone into that entire ecosystem is complex.

 


 

Recorded at:

Apr 07, 2020
