
Next Generation Client APIs in Envoy Mobile


Summary

Jose Nino guides the audience through the journey of Mobile APIs at Lyft. He focuses on how the team has reaped the benefits of API generation to experiment with the network transport layer. He also discusses recent developments the team has made with Envoy Mobile and the roadmap ahead.

Bio

Jose Nino works as a Software Engineer at Lyft.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Nino: Let's start with an idealized representation of networked applications. At its most basic, we have clients that communicate over the network to a server with a request, and the server responds back to the client over the network. In the middle here we have APIs, and that's what we've been talking about all day today. They are the contracts that establish how clients communicate with servers, and they're the boundary that creates the standards for that communication. However, I'm sure most of you have experience with architectures that don't look like this. They probably look a little bit closer to that, or maybe even nastier. Or, on a particularly bad day when you are on call, it probably looks like that.

My name is Jose Nino, and I am an infrastructure engineer at Lyft where I've worked for the past four years building systems that take this reality and try to make it feel more like this idealized version. For the first three years that I worked at Lyft I worked on the server networking team. There we recognized that there were two dimensions to API management. There was the what data do we send via our APIs – this is the shape of our APIs – and then there was the how do we send that data. I am going to call that the transport of our APIs.

Server-side, after some evolution, we standardized on an IDL. The technology that we used there is protobuf. Whenever a new service pops up at Lyft, the product engineers need to define their models in protobuf. Then we are able to compile the protobuf to generate both the client and server stubs. Here, by client I mean another service in our backend infrastructure that's trying to communicate across the network.

In the transport dimension, my team created Envoy a little bit over five years ago and open-sourced it a little bit over three years ago. Envoy at its most basic is a network proxy, and it can be run standalone or as a sidecar. Importantly, for our backend infrastructure, we standardized transport by using Envoy at every hop of the network. Every service runs with Envoy as a sidecar. What this has resulted in for us is a common network substrate that all services use to communicate with one another.

Together in the backend, what we've produced here is an ecosystem of strongly-typed APIs that are guaranteed by our proto models and a common universal network primitive guaranteed by Envoy.

This would have been the end of my talk circa the beginning of 2019. We felt pretty good about the state of our back end. Obviously there's always more room for improvement, but what a group of us started thinking at the beginning of 2019 is that we had left the most important hop out of this ecosystem. That hop is the mobile client, because traditionally we treated clients as independent from the backend infrastructure. We had built unique solutions for what we thought were unique problems.

What we identified here was a technology gap because in spite of all the work that we had done server-side to increase consistency and reliability, we recognized that increasing reliability to 99% on the server side is really meaningless if we leave out the mobile clients because they are the ones that are being used by our end users to interact with the platform that we build.

We started thinking, "What do we want from our client APIs?" Really what we wanted were the same guarantees that we had given for the server. Most importantly, when problems do occur, we wanted the same tooling and observability in order to identify them.

In brief, what we were proposing is that we don't need to treat the mobile clients any different from our backend infrastructure. We want the mobile clients to be another node in the network, another part of this mesh.

This is going to be the focus of the rest of the talk, hence the title of Next Generation Client APIs in Envoy Mobile. About a year ago I formed a new team called client networking. There we started evolving how client APIs are defined in shape at Lyft, and also how we transport them. Today I'm going to take you a little bit through that evolution of both the shape of the API and how that evolution culminated in Envoy Mobile, which is a new networking library that takes Envoy and brings it to the mobile clients.

API Shape

Let's start with the evolution of the shape of the API at Lyft. This is the earliest workflow that engineers used at Lyft to define their client APIs. They discovered a new product feature that they wanted to have, wrote a tech spec about it, and then went and handwrote all the API code, both for the client and for the server.

We quickly realized that there were problems with this approach. For instance, programming errors could lead to different fields in the payloads where, for example, iOS might have a particular key but Android might have a misspelled key. Because this was free-form JSON that we were sending over the wire, the JSON would end up in the server and then the server would catch on fire. Then our server engineers would say, "Don't worry about it. We can fix this with just a feature flag tagging your client." We recognized that this was not an ideal situation. We clearly had a problem: we had no visibility into issues with our payloads until they hit the server.

A few of our Android engineers took it upon themselves to try and fix this problem, and they introduced YAML schemas to our API definitions. What they tried to do with that was to have guarantees about the fields that should go in the payload, but this had a big problem: it was only deployed to our Android clients. We still had inconsistencies between our iOS and Android clients. Really, what this meant was that there was no source of truth for our client APIs. There was no guarantee that both of our clients would behave the same way.

What we were shooting for here was consistency between our client APIs. We wanted a single source of truth that determined that the shape of the API was going to be the same between the two clients and the server. Really what we wanted was to do the same thing that we had done for server internal APIs. We wanted to provide that consistency guarantee.

What we envisioned was a workflow that looked similar to what the internal APIs looked like. We wanted to go from tech spec to a commit in our IDL repo. Then from our IDL repo we wanted to trigger CD to automatically generate both the client and the server stubs. We didn't want to stub just at the server. We wanted to do that also for the client.

We used protobuf again to solve this problem. I want to stop and describe a little bit about why we chose protobuf. First, protobuf is strongly-typed. We had compile-time guarantees about what the runtime payload would be. Second, protobuf also gives us JSON serialization and deserialization. This means that we had a path for migrating the legacy APIs that were in free-form JSON into these new, strongly-typed APIs. Lastly, we were already using it server-side. We were going to have consistency guarantees not only between our two mobile clients but also between the mobile clients and the server. We could also benefit from all the operational knowledge that we had already derived by working with protobuf on the server side for years.
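
To make the serialization point concrete, here is a minimal sketch using the SwiftProtobuf library. The generic functions work with any generated message type; a concrete model such as a RideRequest emitted by protoc is an assumption for illustration, not one of Lyft's actual definitions.

```swift
import Foundation
import SwiftProtobuf

// Works for any protoc-generated message type (e.g. a hypothetical
// RideRequest generated from a proto definition).
func wireEncode<M: Message>(_ message: M) throws -> Data {
    // Compact binary protobuf encoding for the wire.
    return try message.serializedData()
}

func decodeLegacyJSON<M: Message>(_ type: M.Type, from json: Data) throws -> M {
    // The same generated type also parses legacy free-form JSON
    // payloads, which is the migration path off JSON.
    return try M(jsonUTF8Data: json)
}

func encodeAsJSON<M: Message>(_ message: M) throws -> Data {
    // And it can re-emit JSON for peers that cannot speak binary.
    return try message.jsonUTF8Data()
}
```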

That's what we did for our public client APIs. We made a pipeline so that when engineers wrote a commit into their proto, extending the models that a particular public API had, that would trigger generators that then would create not only our server stubs but also our client stubs. With the client stubs we went a little bit further and precompiled those files into modules based on the API package that they belonged to. We did this to reduce compilation times and also to better organize our APIs so that mobile engineers could work with them easily.

Using Generated APIs In Swift

Let's take a look at what the code generators do once they run. Here I'm going to show you Swift code, but you can assume that the Kotlin code for Android clients looks basically the same.

First, we generate the models for the requests and the responses. What these strongly-typed models ensure is that we detect problems with the payload at compile time and not as a fire in the server. Second, we generate API stubs that are consistent between iOS and Android, and, extremely importantly, we abstracted the implementation of the transport of these APIs. In other words, now that we're migrating to these generated APIs, my team as a platform team is able to go and change the underlying infrastructure of this transport layer to bring in new changes. Keep that in mind going into the rest of the talk.
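
Lyft's generated stubs aren't public, but the shape of the idea is roughly the following sketch. The RidesAPI protocol, the models, and the HTTPTransport abstraction are illustrative names, not Lyft's actual identifiers; the point is that call sites depend only on the protocol, so the transport underneath can be swapped.

```swift
import Foundation

// The transport is hidden behind a protocol, so the platform team can
// swap URLSession, OkHttp, or Envoy Mobile underneath without touching
// product code. (All names here are hypothetical.)
protocol HTTPTransport {
    func send(path: String, body: Data,
              completion: @escaping (Result<Data, Error>) -> Void)
}

struct RideRequest: Codable { var originLat: Double; var originLng: Double }
struct RideResponse: Codable { var rideID: String }

// Generated, strongly-typed API stub.
protocol RidesAPI {
    func requestRide(_ request: RideRequest,
                     completion: @escaping (Result<RideResponse, Error>) -> Void)
}

// The generated implementation only knows the abstract transport.
final class GeneratedRidesAPI: RidesAPI {
    private let transport: HTTPTransport
    init(transport: HTTPTransport) { self.transport = transport }

    func requestRide(_ request: RideRequest,
                     completion: @escaping (Result<RideResponse, Error>) -> Void) {
        do {
            // JSON here for brevity; the real models are protobuf.
            let body = try JSONEncoder().encode(request)
            transport.send(path: "/v1/rides", body: body) { result in
                completion(result.flatMap { data in
                    Result { try JSONDecoder().decode(RideResponse.self, from: data) }
                })
            }
        } catch {
            completion(.failure(error))
        }
    }
}
```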

Lastly, we also generate mocks for our client APIs. These mocks conform to the same protocol as the production code. What this does is encourage our client engineers to go and test all of our public APIs. This is another piece where we're trying to prevent runtime errors by having prevention at CI time, when we run jobs on our pull requests.
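
Continuing the hypothetical RidesAPI sketch above, a generated mock might look like this: it conforms to the same protocol as the production stub, so tests can substitute it freely.

```swift
// Generated alongside the production stub (illustrative, not Lyft's
// actual generated code).
final class MockRidesAPI: RidesAPI {
    // Tests configure the canned outcome.
    var stubbedResult: Result<RideResponse, Error> =
        .success(RideResponse(rideID: "test-ride"))
    private(set) var recordedRequests: [RideRequest] = []

    func requestRide(_ request: RideRequest,
                     completion: @escaping (Result<RideResponse, Error>) -> Void) {
        recordedRequests.append(request)
        completion(stubbedResult)
    }
}
```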

I've been describing some problems that we had and a solution that we reached. What are some of the benefits that we got here? We finally have a single source of truth, not only between our client APIs on Android and iOS but also between the mobile clients and the server. Now engineers don't need to debate or investigate discrepancies when we have incidents.

This is because we have created highly testable, consistent, strongly-typed APIs that can be checked for errors at compile time rather than causing runtime problems on the wire. Most importantly for the evolution of transport at Lyft, we have abstracted that implementation. Again, my team can go in and make improvements under the hood without worrying about massive migrations with long tails, and without having to disrupt the workflows of a lot of client engineers. By having this consistency and this guarantee, we could start, as a platform team, working on providing the other guarantees that we wanted for our client APIs.

Performance

One of the things that we did at first when we had this consistent platform is that we started optimizing the encoding of our APIs. Earlier today in the brief history of the APIs we heard about protobuf's binary encoding. That's something that we used at Lyft. Remember that before we had clients that talked over free-form JSON to services, and then services responded with JSON. Now that we had our generated APIs we could swap the encoding that we actually sent over the wire for a more efficient format.

What we did is we changed from JSON to binary encoding. Importantly, we did this without having client engineers worry about what was going on in the transport layer, because now that we generated these APIs we could actually negotiate the content type between client and server. The client could send a protobuf request, and if the server did not understand it, they could negotiate a JSON request instead. This allowed us to decouple migrations of our clients and our servers understanding the wire encoding.
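
The talk doesn't spell out the exact mechanics, but standard HTTP content negotiation headers are enough to sketch the idea; the header values and URL below are assumptions.

```swift
import Foundation
import SwiftProtobuf

// Sketch: the client sends binary protobuf but advertises that it can
// accept either encoding back, so an older server can answer in JSON.
func makeRideURLRequest(binaryBody: Data) -> URLRequest {
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/rides")!)
    request.httpMethod = "POST"
    request.httpBody = binaryBody
    request.setValue("application/x-protobuf", forHTTPHeaderField: "Content-Type")
    request.setValue("application/x-protobuf, application/json",
                     forHTTPHeaderField: "Accept")
    return request
}

// Branch on whichever encoding the server actually chose to send back.
func decodeResponse<M: Message>(_ type: M.Type,
                                data: Data,
                                contentType: String) throws -> M {
    if contentType.contains("json") {
        return try M(jsonUTF8Data: data)
    } else {
        return try M(serializedData: data)
    }
}
```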

What we saw were huge improvements here. On average, we saw about a 40% reduction in payload sizes. In green are JSON payloads, and in blue are binary-encoded payloads. What this means in turn is that we saw a big improvement in request success and also a big reduction in request latency, because mobile networks are particularly sensitive to the size of the payload. It makes sense that if we reduced the payload sizes, we got better performance out.

Most importantly, this change was transparent to our engineers. This means that we didn't have to run migration after migration as we rolled out the improvements that we were working on. Now that we have the platform, again, we can go as a platform team and improve things under the hood.

Let's take a step back and see where we are in the journey that I set out to describe. We achieved consistency across our mobile platforms and with the server via this protobuf defined, automatically generated APIs. We started to improve performance a little bit by experimenting with the wire format of the payloads that we sent over. By using these generated APIs, like I've said, we had a platform to start executing more on these guarantees that we wanted to provide.

Extensibility

Another thing that we wanted to take advantage of, for instance, is that protobuf allows us to define extensions on fields. What these extensions allow us to do is create declarative APIs. Instead of just having fields in the API, we can start declaring what the behavior of the API should be. For example, if we have a polling API, we might be able to define the polling interval of that API in the message that is actually being defined. We were already pretty satisfied with going from JSON to binary encoding, but if we want to go even further and reduce our payload sizes, we might enable gzip compression. With protobuf extensions we can do that declaratively.
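
Lyft's actual annotations aren't public, so here is only a hypothetical sketch of the shape of the idea: the code generator turns a proto option into a static property on the generated call, which the transport layer then reads in one place. All names below are made up.

```swift
import Foundation

// Options that a proto extension might declare on an API
// (hypothetical names and fields).
struct CallOptions {
    var gzipRequestBody = false
    var pollingInterval: TimeInterval? = nil
}

// What codegen could emit for a polling API annotated with, say,
// `option (api.polling_interval_s) = 30` plus a gzip annotation.
enum GetDriverLocationCall {
    static let options = CallOptions(gzipRequestBody: true, pollingInterval: 30)
}

// The transport layer enacts the declared behavior in one place,
// instead of every call site re-implementing it on iOS and Android.
func apply(_ options: CallOptions, to request: inout URLRequest) {
    if options.gzipRequestBody {
        request.setValue("gzip", forHTTPHeaderField: "Content-Encoding")
        // ...compress request.httpBody before sending...
    }
}
```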

To enact these behaviors we need to use the transport layer. On the server side we already had Envoy for our internal APIs, communicating between services and giving us this unified network primitive, but on the mobile clients we had two different libraries: historically, URLSession for iOS and OkHttp for Android. The problem with having these three places where we need to enact behavior is that it made it hard to focus our engineering efforts on solving these problems.

We went back to these guarantees and we asked ourselves, "How are we going to deliver these guarantees?" To be able to provide them, we couldn't stop at just having a platform, a unified platform for the shape of our APIs. We needed an ecosystem on par with what we had with the server. We needed to control both the shape and the transport of our client APIs. In other words, we had to focus not only on the shape but also the transport.

API Transport

What we theorized here is that if we unified the two different libraries in our mobile clients, then we could more effectively use our resources. We would be going from three different implementations of our transport to two: the mobile clients and the server. Then we started thinking, "We could go one step further." Given that we had worked to achieve true standardization on the server side, there was nothing that stopped us from believing that we could benefit from powering our clients with the same transport layer: Envoy.

In the beginning of 2019 my team started investing in this new networking library called Envoy Mobile. We believe that it has the potential to redefine transport in mobile client applications the same way that Envoy did for backend infrastructure.

With this project, and by extending the last mile of our network topology to be part of this mesh, what we achieve is true standardization of the network. Similar to what Kubernetes has done for container orchestration, we want Envoy to do the same thing for the network transport. We believe that we can do that with Envoy Proxy on the server and with Envoy Mobile on the mobile clients.

Why is this standardization so important? We believe that for the same reasons standardization was important in the server, it can also bring us the same benefits between the client and the server. These are just some of the tenets that we used to think about this, the first one being: write once and deploy everywhere. This is what I've been talking about. Instead of having your engineering resources split amongst three implementations – iOS, Android, and the server – with this one unified transport we can focus our resources on just that one platform.

Second, we can share common tooling for common problems. For example, with observability, instead of having one observability stack for your mobile clients and another for your server, by having the same emission of metrics from both places we can start utilizing the same infrastructure in both places.

Third, by homogenizing the network and the behavior guarantees that we provide, we make life easier for system operators. Instead of having to reason about three different systems and how they interact with each other, by having this universal network primitive we actually reduce the cognitive load that they have.

All these three reasons sounded very compelling to us to go ahead and build this new transport layer, so that's what we did. For the past nine months we've been working on Envoy Mobile, and in that time we've had three releases. The first release was version 0.1. That was just a proof-of-concept release. We wanted to see: can we actually compile Envoy, which is meant to run on servers, and run it in our mobile applications? Can we actually route requests through it? Importantly, even when we released this initial proof-of-concept demo, we went and open-sourced it, because we believe that this library has a lot of potential. The power of an open-source community has been clear with Envoy. We wanted to do the same for Envoy Mobile.

With the v0.2 release we started to lay the foundation of a library. We actually started adapting how Envoy operates to provide a platform that mobile applications could use. Then, with the v0.3 release, which we're going to cut in the next couple weeks, we're actually going to call it the first production-ready release, not only because we've hardened the library a lot but also because we've started running production experiments in the Lyft app.

Envoy

That raises the question: how do we take this thing that was a network proxy – Envoy – and turn it into a mobile networking library, Envoy Mobile? Let's take a deeper look into the architecture of the library, starting with its build system. I know everyone loves to talk about build systems. We chose Bazel for two particular reasons.

First, it has actually pretty good cross-platform support. Here we needed to compile five languages and target two mobile platforms over a lot of different chips and architectures. Bazel provided us a toolchain that actually allowed us to do this. Second, and perhaps a more practical reason, was that Envoy was already built with Bazel. We could leverage in Envoy Mobile a lot of the same solutions that Envoy already had.

This gives us a high-level overview of the organization of the library. This organization also leads us to understand how the library is architected. On the left, in red, you have the platform targets. These are targets that are actually compiled for iOS and for Android. They contain the Swift and the Kotlin code that actually allows the mobile clients to interact with the Envoy Mobile engine.

Then in the middle, in blue, we have the C bridging code. We decided to write this bridging code in C because we see a lot of interoperability opportunities for Envoy Mobile to power not only mobile devices but perhaps other things that want to use Envoy as a library. That's a topic for another talk. Then on the right we have the green targets. These are the native C++ targets: not only the C++ code that we have written in Envoy Mobile to adapt Envoy to become a platform in mobile clients, but also Envoy itself. Envoy, remember, is at the core of Envoy Mobile.

That gives us an overview of one of the dimensions of the library: how is it organized, and how is it architected? Another important dimension is: how do we take this thing that was supposed to be a multi-threaded process (the Envoy proxy) and turn it into a single-threaded context in a sandboxed mobile application, Envoy Mobile? In other words, how do we take something that was supposed to be a proxy and turn it into an engine?

That concern led us to this dimension, and that is the threading contexts that occur in this mobile library. The first threading context, up top, is the application threads. These are the threads that interact with the engine and actually issue network requests. In the middle we have the Envoy thread, which actually runs the Envoy engine. Then at the bottom we have the callback threads. These are threads that the engine can use to dispatch control back to the application when responses come in from the network.

If we overlay the architecture dimension on top of the threading dimension, we get this matrix that's going to allow me to explain to you how the engine actually works. The first thing that happens in the application threads and in the platform layer is that we create the Envoy engine object. This is the object that allows the application to run Envoy and provide initial configuration. From the application's Envoy engine we start the Envoy main thread. Unlike Envoy running on the server, Envoy Mobile does all of its work in the main thread.

This was enabled by one of Envoy's key design principles, which is that most code in Envoy is actually single-threaded code. When Envoy is running in the server, Envoy's main thread is mostly responsible for life cycle updates, and then when requests come in they are actually attached to worker threads. If worker threads need to communicate with Envoy's main thread, Envoy has an event dispatcher. This is what allows us to cross threading barriers both from the worker threads to the main thread and also from the main thread over to worker threads.

The second important concept here is that when Envoy is making HTTP requests it uses an HTTP manager. This HTTP manager is the basis of Envoy's HTTP stack. We'll see how this becomes really important in a few slides.

What we did in Envoy Mobile is we actually hoisted these two constructs (the event dispatcher and the HTTP manager) and bolted them together into what we call the Envoy Mobile engine, because the event dispatcher allows us to cross the threading contexts that I've been describing, and the HTTP manager allows us to actually make HTTP calls out into the network. What I want to highlight here is that we have lifted these constructs from Envoy itself. While Envoy Mobile is a newer library, the real underlying implementation of our engine is the well-trodden path of Envoy constructs that have a lot of industry production experience.

After we have the engine started, the application threads can create HTTP streams. The HTTP streams can then issue calls from the application thread into the Envoy engine via the event dispatcher and then into the HTTP manager. Then from the HTTP manager we go out into the internet.

I want to zoom in here on the HTTP manager because, as I said, the HTTP manager is the foundational basis of the HTTP stack in Envoy. It's also a big extension point for Envoy because it has this concept of HTTP filters that attach onto the HTTP manager. What these filters do is enact behavior onto every single HTTP request that goes out of the engine and every HTTP response that comes back in. Really, I want to put a pin in this because it is exactly what we were looking for before: a place where we can enact those behaviors that we had declared onto our APIs.

After we have the request go out into the internet and our services do whatever they need to do, the response comes back into the engine. When the response comes back into the engine we have some bridging code that calls callbacks back into the platform. This is done via platform-specific callback mechanisms. It allows us to then, again, dispatch from the main thread into the callback threads and give control back to the application layer.

An important design decision that I'd like to discuss here is that both the request path and the response path are unsynchronized. At no point is there a moment of synchronization. We did this deliberately because we didn't want any platform code to hang onto operations while we were issuing network requests.

The only point where we added synchronization is for stream cancellation, but even that is done in the native layer. This means that even the one part of the puzzle that is a little bit complex doesn't have to have two different implementations in iOS and in Android. It's all done in the shared native code. What this results in is a dramatically simpler implementation in the library, as well as easier usage for an end user of the library. This is an overview of how the engine works.

Let's now take a look at what the platform code looks like when you're actually using Envoy Mobile. Here I want to highlight again that while I'm showing you Swift code, the Kotlin code looks exactly the same. This was, again, deliberately done because we wanted the consistency guarantees between the two platforms.

The first thing that happens at the platform layer is that we build an Envoy client to make the network calls. Internally this starts the engine and all the processes that I just showed you. At the point of creating the Envoy client we also allow you to configure the client differently depending on what your needs might be. You might want different logging levels or different stats flushing intervals. I'll get to why stats show up here.

Then you can build a request with the request builder. This is the object that we use to actually start the HTTP streams and make requests from your client out into the internet. With requests we have also exposed places where you can modify things. For example, you can add different headers to the requests in a programmatic way.

Lastly, you build a response handler that will receive the callbacks in that diagram I showed you, where the engine gives the response back to the platform. This then allows product engineers to write whatever their business logic is. For example, it has a callback for when headers are received. Then that gives control back to the business logic.
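
Putting those three steps together, a request through Envoy Mobile's early Swift API looked roughly like the sketch below. The builder and handler names approximate the v0.2/v0.3-era API and shifted between releases, so treat this as a sketch rather than the exact surface; the host and header are placeholders.

```swift
import Envoy  // the Envoy Mobile framework module

// 1. Build the client; internally this boots the Envoy engine
//    on its own thread.
let client = try EnvoyClientBuilder()
    .addLogLevel(.debug)
    .addStatsFlushSeconds(60)
    .build()

// 2. Describe the request.
let request = RequestBuilder(method: .get,
                             scheme: "https",
                             authority: "api.example.com",
                             path: "/ping")
    .addHeader(name: "x-demo", value: "true")
    .build()

// 3. Register callbacks; these fire on the callback threads.
let handler = ResponseHandler()
    .onHeaders { headers, statusCode, _ in
        print("status: \(statusCode)")
    }
    .onData { data, endStream in
        // hand the bytes back to business logic
    }
    .onError { error in
        print("request failed: \(error)")
    }

// Start the stream through the engine and out to the network.
client.send(request, handler: handler)
```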

That was a deep dive into how Envoy Mobile works and the API that we expose for clients to use. Let's go back to where we started the conversation: why we went ahead and wanted to create a new transport layer. It's because we wanted to deliver consistent behavior; not only to be able to describe it consistently, but to actually enact that behavior consistently across our mobile clients.

In our generation pipeline, what I didn't show you before is that we abstracted the implementation of the actual generated API. Initially, underneath that were API calls into URLSession for iOS and OkHttp for Android, but now that we had Envoy Mobile as this transport layer, we could just go in and swap those two libraries for API calls into Envoy Mobile. Again, this was transparent to our mobile engineers. They didn't have to know what was being generated under the hood.

We had reached the same point in our mobile client development that we had reached with the server. We now had a full ecosystem where we controlled both the shape of our APIs with generated clients and also the transport of our APIs with Envoy Mobile.

Let's get back to our chart. Let's see how standardization of our transport layer is helping us achieve some of these missing goals that we had. Going back to the story, we already had a way to declare consistent behavior for our APIs via protobuf annotations. Like I said before, by having these annotations we could declare the behavior of our APIs. We could declare that a particular payload should be gzipped and get the benefits of that compression, but we didn't have a way to enact it, because we would have had to implement it in three different places: compression in the iOS client and in the Android client, and then decompression in the server.

Previously, we couldn't execute on this; we had three different places where we needed to do it. But now, by having Envoy Mobile as our transport layer in the client and Envoy in the back end, we finally had a place to actually enact the declarative behavior that we had attached to our client APIs.

Remember I put a pin in HTTP filters, because this is the place where we could enact that behavior. Every single HTTP request coming out of Envoy Mobile and coming back into Envoy Mobile has to go through these HTTP filters, and these HTTP filters allow us to really enact the behavior that we want from our mobile APIs.

We can focus our engineering resources because we have to implement it only once. Now we have a client that wants to send a request. In their API they have declared, "Please gzip this request. I want full compression." That request goes out from the mobile client via Envoy Mobile, and it goes through a compression filter in the HTTP stack. The request is compressed and sent over the wire. Then when it gets to our edge infrastructure where Envoy is running, we have, again, the compression filter. This filter understands that the payload has been compressed and is able to decompress it and pass it on to the service that requires it.

This is just the beginning of what we want to do with Envoy Mobile and Envoy Mobile filters. What we're creating here is an advanced suite of network behavior for our mobile clients. This is really unparalleled in scope among other libraries, especially because we're bringing consistency in this behavior not only between our mobile clients but also between the mobile clients and the server. Let's go and imagine some of its use cases.

Use Cases

For example, we might have deferred requests. What happens when a service goes down and a user is trying to use your mobile application? You might have to fail that request. It would be really nice if we could actually notice that the network condition is poor, not send that request, resolve things in the client to make your customer believe that the request has succeeded, and hold that request in an Envoy filter. Then later on, when the service is back up and healthy, we could actually dispatch all those requests and create consistency with the server. All of this without your customer noticing that they went through a tunnel or lost connectivity when they went into an elevator.

Let's imagine another use case, for security. We might want to have request signing. I know that at Lyft we receive a lot of spoofed requests from malicious clients that want to commit fraud on our platform. A common thing that we could do is sign requests in our official mobile clients and then have the server verify that those requests are valid. Again, this could be really hard to implement if we had to implement a request signing library in Android, another request signing library in iOS, and also a way to verify the signatures on the server. Now imagine if we could just implement one request signing filter in Envoy. By default we get not only the mobile client implementation to sign these requests but also the server-side implementation to verify that these requests are valid and not fraudulent.
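
As a generic illustration of what such a filter pair might compute (this is not Lyft's actual scheme; the algorithm, header format, and key handling are all assumptions), an HMAC over the request body is a common approach:

```swift
import CryptoKit
import Foundation

// Client side: the mobile filter computes an HMAC over the request
// body and attaches it as a header (name and scheme are assumptions).
func signatureHeader(body: Data, secret: SymmetricKey) -> String {
    let mac = HMAC<SHA256>.authenticationCode(for: body, using: secret)
    return Data(mac).base64EncodedString()
}

// Server side: the edge filter recomputes the HMAC and compares it
// before letting the request through to the service.
func verify(body: Data, header: String, secret: SymmetricKey) -> Bool {
    guard let received = Data(base64Encoded: header) else { return false }
    return HMAC<SHA256>.isValidAuthenticationCode(received,
                                                  authenticating: body,
                                                  using: secret)
}
```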

This is just the beginning. This is just a list of some ideas that my team has and wants to start implementing. Again, this would be intractable if we had to implement this in three places rather than just one.

Observability

Let's switch gears away from HTTP filters for a little bit and talk about observability. I said at the beginning that it isn't enough to have consistent behavior when things are going well; it's also important that when things do go wrong, we have the right observability stack to analyze how they're going wrong and go fix them. One of Envoy's main driving features is its unparalleled observability. At Lyft we use Envoy on the server side to observe network behavior end to end. We have dashboards that we auto-generate for every single service in order to know what's happening. With Envoy Mobile we wanted to bring that same observability that we had in the backend infrastructure all the way to the missing hop, our mobile clients.

The goal is that an engineer that is curious about a particular thing in our network doesn't have to worry about getting observability from all sorts of places. We want them to be able to reason about the network end-to-end, all the way from our mobile clients into the edge of our infrastructure and our service mesh using the same metrics. For example, you might imagine someone is interested in looking at request volume. Instead of having to go and do some analytics queries to know what the mobile clients are doing, you could just query for the same metrics that you would query on a backend service.

That is what we did with Envoy Mobile in the Lyft app. This is a place where Envoy's extensibility also helped us, because HTTP filters are not Envoy's only extension point; it has a plethora of other extension points that we can use. One of these extension points is the stats sink. It allows us to determine different places where we might flush our time-series metrics.

One of the stats sinks that has been implemented by the community is the metrics service sink. This allowed us to build a very simple gRPC service that receives the influx of metrics from all of our mobile clients, then aggregates them and flushes them to our already existing statsd infrastructure. Here we not only leveraged the fact that Envoy has existing time-series metrics and existing observability extension points, but we were also leveraging already existing observability infrastructure that we had at Lyft for the back end.

This is what we got. I want to emphasize just how important this graph is. Up top we have metrics coming from Lyft's mobile clients. Down below we have the exact same metric that we are receiving at the edge. This is insight that we did not have before leveraging Envoy Mobile in our mobile clients and the existing observability infrastructure that we have. This is really showing us the true power of having this unified network primitive all the way from our mobile clients to our server. We can understand the network end to end using the same concepts. This is where we start delivering on that reduction of the cognitive load our operators need in order to understand the system.

Onwards

Let's go back to the laundry list that I started with at the beginning. I hope I have shown you that, by having an ecosystem where we control both the shape of our APIs and the transport, my team has started to provide the same guarantees that we have server-side in our client APIs as well.

This is only the beginning, because Envoy Mobile is the first open-source solution that provides the same software stack not only between the edge and the backend with Envoy Proxy, but now also all the way to the mobile clients with Envoy Mobile. I hope that I've shown you some of the range of potential that we have with this paradigm: not only the functionality of filters, but also protocol experimentation.

If you were in the panel earlier today, Richard Li from Datawire and I were talking over lunch about QUIC experimentation. In experiments done by Google, Facebook, and Uber, HTTP/3 over QUIC has historically shown dramatic performance improvements, especially in networks with low connectivity and low bandwidth. These are the types of problems that only large companies like Google or Facebook were able to tackle before, because they had the engineering resources to go and deploy three different implementations of QUIC: one for iOS, one for Android, and one for the backend. Now that we have the same transport implementation in all three places, a team of four, like mine, can actually go and do these big projects.

Lastly, we also want to open-source the code generators for our mobile client APIs that go from the proto files to the Swift and Kotlin stubs, because we want to give the community not only the transport layer but also the shape of the API. We want to give the community this whole ecosystem in order to go and enhance their client APIs. At the end of the day, what we believe about next generation client APIs is that they are model-based APIs defined in a strongly-typed IDL, like protobuf, so that platform engineers like myself can go and iterate on behavior by using a common transport layer, which is Envoy Mobile.

I wanted to go back down memory lane. This is a picture of my team taking their first Lyft ride ever dispatched through Envoy Mobile. At that point we actually had network requests going from the mobile client all the way to the database flowing through Envoy. Now, in just a couple weeks, we're going to start doing production experiments with Envoy Mobile.

What we believe is that Envoy Proxy on the back end and Envoy Mobile on our mobile clients are the basis for this ecosystem of next generation APIs. We will continue investing heavily in it at Lyft. More importantly, however, we believe in the power of the open-source community. That's why we open-sourced Envoy Mobile from the very beginning. I hope you check the project out and join us.

 


Recorded at:

Mar 25, 2020
