
API Showdown: REST vs. GraphQL vs. gRPC – Which Should You Use?

This episode of the InfoQ podcast is the API Showdown, recorded during QCon Plus in November 2021. What is the single best API technology you should always use? Thomas Betts moderated the discussion, with the goal of understanding some of the high-level features and capabilities of three popular technologies for implementing APIs. The discussion covers some of the pros and cons of GraphQL and gRPC, and why you might use them instead of a RESTful API.

Key Takeaways

  • The winner is, of course, “it depends.” Each technology has strong benefits, but those come with trade-offs. Your needs may best be served by a combination of technologies.
  • REST, or at least JSON over HTTP, is the most ubiquitous standard for web-based APIs. This means it’s easy to get started, you can use a wide variety of languages, and it works natively with web browsers.
  • In their own way, GraphQL and gRPC address some of the limitations of REST.
  • GraphQL allows a client to specify just the information they need, which can greatly reduce duplicate or unnecessary data being transmitted. But, it requires additional setup and training.
  • gRPC is built for fast transport, leveraging HTTP/2. This requires a well-known contract, typically defined using Protocol Buffers, that is shared by the client and server.


Welcome to the API Showdown. I'm joined by three panelists, each of whom chose to be a representative of one particular API technology, more or less. Any good architect knows the only time you should ever use "always" or "never" is to say we will never deal in absolutes. I don't want to paint any of the panelists into a corner as only an expert on one topic. It's just a helpful way to create a discussion with diverse opinions, because we really want to understand: what are the tradeoffs you need to consider when evaluating your options?


Representing gRPC, we have Alex Borysov. Alex is currently an engineer at Netflix and was previously at Google, which gave us the g in gRPC.

Michelle Garrett is part of the team building Twitter's large scale GraphQL API. Twitter picked her up after she did an amazing job implementing GraphQL at Condé Nast, where surprisingly, they did not put the GQ in GraphQL.

Matt McLarty is a Global Field CTO for MuleSoft and co-author of two O'Reilly books on microservice API architecture. He's not only behind a microphone here at QCon; he's also behind a microphone as co-host of the 'APIs Unplugged' podcast.

The Origin Story of REST

I want to start off with brief origin stories. APIs have existed for as long as we've needed two systems to be able to talk to each other. For various reasons, this has led to the invention of countless protocols, patterns, and paradigms for developing APIs. We're now at a point where with any of the three technologies you represent, someone can come in and just implement that technology right now. At some point in the past, they did not exist. Tell me, why was your technology created? We don't need a whole history. Give me the one problem it was trying to solve and what was the innovative idea that it brought to the table. Matt, you're up first to tell us about REST.

McLarty: I'm only choosing REST because CORBA was not an option. It's a great point; you could probably write entire books on the history of REST and APIs. Roy Fielding had the dissertation, which defined the style of Representational State Transfer. He was really showing in a thesis, here's how you can define an architectural style, and here's one derived from how the web works, which we call REST. In terms of solving the problem, I think what's really interesting is that the adoption of REST was very organic, across a lot of different uses. At the time, there was a lot of energy behind SOAP in the enterprise: here's a way of using web technologies to connect things, and all these very prescriptive standards about how to write SOAP messages. REST's rise was really a response to, a rebellion against, SOAP. It started just being like, here's a practical way of doing things. What if we just use HTTP verbs and define things as resources over the web? If I had to generalize what problem it was meant to solve, it's the web itself. If we go underneath the browser on the web, how do we just plug things together using web protocols? That's why I think part of the power of REST, and part of the ubiquity of REST, has been its wide adoption in many different use cases: people learning it in one area and then applying it in another.

It used to be you'd just connect to the eBay API or something, using REST in the early days. Then it was like, what if we use a REST API to do deployments or do some management over web networks. Then it was, there are these mobile devices; how are we going to expose systems to mobile? We can use this REST thing that seems to be working in other places. It really has been a very organic thing. Some of the reasons that new protocols have emerged have probably been based on its ubiquity, and the compromises that come with that, and the ambiguity that comes with that. The story of REST is really the story of the web.

Why We Have GraphQL

Betts: Michelle, why do we have GraphQL? What does it give us?

Garrett: GraphQL was originally developed by Facebook in 2012. Then the spec was first released publicly in 2015, so it's been around, open source, for about six years now. The history of GraphQL at Facebook is quite widely written about; you can look it up online and find out all about it, but I'll give you a brief TL;DR. Ultimately, the first ever implementation of GraphQL was the Facebook mobile newsfeed API. The problem that Facebook engineers were trying to solve with GraphQL was the pain point of having to make loads of different API requests back and forth to different endpoints in order to get all of the data that was necessary to render a view, which in this case is the very complicated Facebook newsfeed, where everything is interconnected and nested with each other. This was coupled with a problem at the time of people shifting to mobile and using really bad 3G mobile networks.

GraphQL was invented to solve those problems at Facebook. The fresh idea that GraphQL brought to the table was really thinking about data in terms of a graph instead of multiple endpoints. Unlike REST APIs, which expose data via multiple different endpoints, GraphQL exposes all of the data through a single endpoint that is flexible. When you build a GraphQL API, you're really trying to build a unified data graph, which client developers can then query a subset of based on what their needs are. Really, the key feature of GraphQL, and what everyone loves about it, and what is innovative about it, is that it empowers client developers to select exactly the data that they want from the API, and get back exactly what they've asked for without anything else. They get it all in one go. That's GraphQL.
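As a sketch of that idea (the fields and types here are invented for illustration, not from any real API): where a REST client might call /users/42 and then /users/42/posts, a GraphQL client sends one query naming exactly the fields its view needs:

```graphql
query {
  user(id: "42") {
    name
    posts {
      title
      likeCount
    }
  }
}
```

The response mirrors the shape of the query: only the name, and the title and likeCount of each post, come back, all in a single round trip.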

Betts: Nice and shiny.

gRPC's Origin Story

Alex, that means you get to tell us about the newest kid to the party, gRPC.

Borysov: Contrary to popular belief, g doesn't stand for Google. I'll send you a link for what g stands for. Thomas mentioned that gRPC was the newest kid, but in fact, it has older siblings at Google. gRPC is just a new version of Stubby, the API technology Google used internally for a long time, for over a decade. This new version is based on HTTP/2. It accumulated all the knowledge and lessons learned during these long years of running truly distributed systems inside Google, something that we call microservices now. In a way, gRPC was a set of best practices for efficient remote communication, implemented in one open source RPC framework. gRPC was released shortly after the HTTP/2 specification was published. One of the ideas was to bring all those performance improvements and capabilities of HTTP/2 into a simple, performant, easy-to-use, API-first framework that can be utilized in a number of scenarios, from high-throughput backends to communication with devices on unstable mobile networks with limited CPU and memory footprints.

The idea was not to create an architectural style. The idea was not to create a query language. The idea was to offer a collection of technologies that help you, as a developer, build and run distributed systems with a large number of remote calls, and hide all the HTTP/2 protocol implementation details. Give you, as a developer, a framework with an efficient wire format for serialization and deserialization, and built-in features you will need to run your system, and help you focus on modeling your business-oriented actions as opposed to HTTP protocol details.

When gRPC is the Right Solution

Betts: Most of the attendees here in the API architecture track are probably architects, or fulfill the architect role sometimes in their job, and you always have to look at these tradeoffs and wonder, what's the right tool for the job right now? I'm not saying that one tool is always correct. Can you give us one example of, when I see this problem, my tool is the right solution in that situation? Alex, what problem do you see where gRPC is the right tool for the job?

Borysov: If you're designing a low-latency, highly scalable distributed system, you should at least consider using gRPC. The default out-of-the-box implementation uses Protocol Buffers as an IDL and serialization mechanism. I said by default because, again, gRPC is actually encoding agnostic. Technically, you can use gRPC without protobuf, but let's stick with protobuf. Protobuf is wire efficient. You're not sending giant string payloads. You're not spending resources on parsing those strings. In some languages, string parsing can be extremely inefficient. If your backend is used by mobile clients, they will be more tolerant of bad network connections because you just send less data over the wire. Those clients will need to spend less CPU on deserialization, which helps with battery life.

On the other hand, if you're building a backend system that includes, or at some point will include, hundreds or thousands of interconnected microservices, gRPC will offer efficiency and speed, and will also provide built-in features like deadline propagation, cascading cancellations, retries, request hedging, and so on. Or perhaps at some point you will need streaming APIs: your application needs to stream multiple requests or multiple responses in one RPC interaction. Maybe your application streams data from sensors or stock prices, and sub-millisecond latency is very important for your business; you may need flow control, or backpressure, for those streaming APIs. Streaming RPCs with flow control have first-class support in gRPC. You will probably need TLS. At some point, you will need load balancing, custom authorization, monitoring. gRPC provides building blocks with default implementations and extension points for all those concepts. The best thing is, you don't have to think about all those features until you need them. gRPC is very easy to start with. You can give it a try just for the expressive, language-neutral IDL and auto-generated client libraries, and later start using those additional features and extensions when you need them.
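As a sketch of what such a contract might look like in proto3 (the service and message names are invented for illustration), here is a unary method and a server-streaming method side by side:

```protobuf
syntax = "proto3";

service StockTicker {
  // Unary: one request, one response.
  rpc GetQuote (QuoteRequest) returns (Quote);
  // Server streaming: one request, a flow-controlled stream of responses.
  rpc StreamQuotes (QuoteRequest) returns (stream Quote);
}

message QuoteRequest {
  string symbol = 1;
}

message Quote {
  string symbol = 1;
  double price = 2;
  int64 timestamp_micros = 3;
}
```

The `stream` keyword on the return type is all it takes to declare a streaming RPC; the framework handles the flow control underneath.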

When GraphQL is the Right Tool for the Job

Betts: Michelle, what about you? What is someplace where you were starting from scratch, but were going to go with GraphQL, because that's the right thing for this job?

Garrett: I have two use cases. The first one is where you have one client application with many data sources. The second use case is when you have many client applications and one shared data source. Let's talk about the use case where you have multiple clients that need to use the same set of data in different ways. For example, you might have a web application and a mobile application, which both need to access the same set of data, but the view looks different in both of those applications. The interface that they need for that data is completely different. Trying to share a REST API between these two clients can result in really bloated endpoints with lots of data that is unnecessary to one client or the other. If you use GraphQL, on the other hand, it allows both of these clients to share the same set of core data, but they can use it in a way that actually makes sense for them. They can ask specifically for the data fields that they want, and they're not going to be burdened by the data needs of the other applications that are sharing the same dataset. That's number one.

My second beautiful use case for GraphQL is when you have multiple data sources for a single application, and you want to streamline those into a single interface. Say you're building a client application, and it has loads of data sources, say like, three REST APIs, a JSON file, and a database. Maybe one of those REST APIs is a horrible legacy API that people are scared to touch, and it's undocumented, and the naming is horrible. You can't really change anything in it because it has to remain backwards compatible for other applications. In this case, you have disparate data sources and GraphQL is the perfect tool for the job to unite those data sources in a data layer. You can wrap all of them in a single GraphQL layer. This layer allows you to hide the ugliness of the underlying services. You can name fields, how you like. You can hide implementation details. You can relate data from the different data sources together in a way that makes sense, but you don't have to expose that they come from different services to the client. Basically, you create a nice streamlined data layer for your application.
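As a sketch of such a unifying layer (the domain and field names here are invented), a single schema can present clean names while each field is resolved from a different backend:

```graphql
# One clean graph for the client; the comments note which hypothetical
# backend each field would be resolved from.
type Article {
  id: ID!
  headline: String   # resolved from the legacy REST API's "art_hdln_txt"
  body: String       # resolved from the content REST API
  viewCount: Int     # resolved from the analytics database
}

type Query {
  article(id: ID!): Article
}
```

The client only ever sees `Article` and its friendly field names; which service, file, or database backs each field stays an implementation detail of the resolvers.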

When REST is the Right Thing to Do

Betts: Matt, we're going to ignore the fact that I think Michelle just called your API ugly. When would REST be just the right thing to do? Is it simply that that's what everybody knows, and that's what we're comfortable with, and that's what we use? Or is there a place where that's all we should be using?

McLarty: In this moment of being personally insulted, I'm just going to reflect for a second and invert things a little bit. Honestly, I think this question to me raises the question of, what's the right way of deciding. What criteria do you use to decide on the protocol? I think we've heard some really good technical justifications. We've heard some design justifications factoring in consumer needs. The reality is, depending on what perspective you're coming from, you could arrive at different answers. I just wanted to pause and step out of character for a moment and say, you really need to think about what your consumers need first, I think. Especially, you may be in a situation where you own both ends of the pipe, and so you've got a little more liberty to make decisions based on other criteria. For me, step one is, what do my consumers need? You might have the perfect technical justification to go with one protocol over the others, but if your consumers really need it in one format, then that should override things, or maybe they need multiple.

The answer for REST is like everything else, or those plus everything else. I think that what we've seen is, first of all, how do you define REST? Because even within the community, there's lots of disagreement around that. I agree, there are some ugly REST APIs that could use some abstraction, but that's not really dictated by the protocol. That flexibility is, again, its strength, and also leads to some of the pitfalls. I would say in general, where you meet the web, or maybe more abstractly, when you don't know who your consumers are, when you're really aiming at an unknown audience, I think it is very beneficial to say REST is the great equalizer here. If you don't know who's going to consume it, it's table stakes to have a RESTful interface. You know that there are going to be people who know how to use that. Then, maybe there'll be reasons to specialize on other protocols later. Even in the examples, we'll see some organizations use a RESTful interface as an abstraction on top of a gRPC channel internally, as a way of opening up to the web. Or, you'll see RESTful APIs being building blocks for a GraphQL endpoint that's aggregating things. There are lots of common combination scenarios as well. My thing would be, think about what your consumers need, and if you're not totally sure of what your consumers need, then that's probably where REST is an obvious choice.

When REST is not the Right Fit

Betts: You've all given me a good answer of, I'm the right choice for this job. We've also seen tools used incorrectly. I've seen lots of software written that was, I wrote C++-style F# or something like that, because that's how I knew how to write stuff, and I should have written functional code but I know how to write object-oriented. When is your tool the wrong tool for the job? Admit it, and don't tell me "we're always right, and I can solve every problem," because we all know that it's not always the right fit. The answer, of course, is it depends. Give me a good, "I wouldn't do it if this is what you have; you should go and choose something else."

McLarty: There's a couple of intersections that are really becoming predominant. Event-driven comes around the circle every few years, and we're in a heavy event-driven period. There are lots of asynchronous use cases where you can model asynchronously, with webhooks and things like that, over REST, but there are places where you can certainly optimize better with a purely asynchronous protocol. I think the big one for me is this convergence of the analytics world and big data world mashing up with the user-facing application microservices world. Having lived in the distributed computing world, there were a lot of assumptions made that big data, analytics, whatever, is just more data to handle. There are clearly a lot of cases where you're connecting things with massive amounts of data, at a scale of data that you don't necessarily want to put through the straw of a more message-oriented RESTful interface. That's probably the big one, I would say. If you're a distributed systems architect or engineer, and you're just now getting into the world of analytics and ETL and big data, watch out for that.

When GraphQL is the Wrong Choice for The Job

Garrett: I think that GraphQL is really geared towards product engineering, and that is where it really shines. That's where it has the most adoption. One of the creators of GraphQL, Lee Byron, has said in the past that GraphQL really isn't the right choice for server to server communication. If you're looking to build a way for your backend services to speak to each other, I don't think GraphQL is the right choice for that. Because I think the power of GraphQL is in what it gives to product developers, which is flexibility and great tooling. I think that the benefits are less in this scenario. You might want to look more towards Thrift, or gRPC, or REST, something else.

When gRPC is the Wrong Tool for The Job

Borysov: I'll start with some obvious answers for gRPC, for example, when your language is not supported. Code generation comes with the downside that only 11 languages are supported. If your language is not on the list, you're out of luck. Or, which can be even more important, if your language is on the list but your consumers cannot use or don't want to use a language that is supported, they should probably look, or you should probably build, your API with something else. In those cases, you can create reverse proxies; you can use projects like gRPC-Gateway, which creates REST endpoints for gRPC services, or you can use Envoy with the gRPC-JSON transcoder, which translates RESTful requests into gRPC. Those solutions are more like workarounds. If you know that a significant number of your API consumers will not be able to use native gRPC libraries, it's probably not the best tool to use.

Also, if you're building one-off APIs that you know you will sunset soon, for example migration APIs or something, the schema overhead might not be worth it. Or, if you're building a service that only talks to a web browser, and it doesn't call any backend services, it just calls the database, for example, it's a simple application, or you're building a monolith for a good reason; in those scenarios, you can get more benefits from another technology, GraphQL or REST. Yes, you may still be able to use gRPC, and you will still benefit from the language-neutral contract and from code generation. But to talk to the web browser, you will need a gRPC-Web proxy, and communication between the proxy and the web browser will still be HTTP/1.1. If your only integration point is a web browser, gRPC is probably not the best choice. You should start with understanding where your complexity is. If your complexity is in QPS, throughput, and latencies, and you're concerned about optimizing tail latencies, go with gRPC. If your complexity is in a complex domain, where you have a rich UI application that pulls data from dozens of services, GraphQL can be the better choice.

Getting started with gRPC

Betts: Alex, you mentioned the idea of code generation being critical for gRPC. What does it take to get started? If I'm just exploring this, I may know some things about REST, and I may know how to just make an HTTP request because you just have to make an HTTP request, and that's very easy to learn. If we assume that all this is client server communications, even if it's server to server, one of those is a client, one is a server, what does it take from the server side, and what does it take from the client side to say I now have two components in my system that can talk to each other?

Borysov: First, your consumer and producer, client and server, have to agree on the protocol. gRPC is an API-first framework, so an API contract is not an afterthought; it's the very first thing you start with. Again, for simplicity, let's pretend it is only protobuf. Unlike REST or GraphQL, in gRPC you model actions. You model your methods. You define your service, which is just a collection of actions that your server can do. Your server can be resource oriented, but it doesn't have to be. You define what it can do, and those actions can be defined around data entities or not. Then, even though gRPC is based on the HTTP protocol, gRPC does not expose any HTTP details, so this protobuf schema is very simple. You define your request as a proto message, your response as a proto message, and your method. Then you use the protocol buffer compiler with a gRPC plugin to generate classes for both consumers and producers.

For example, let's say we define the QCon service with two actions, attend session and attend lunch, the two most important things you can do during the conference. Attend session takes a request and returns a response, and your API definitions will be compiled into classes in a language of your choice. For example, let's say we use the proto compiler with the gRPC Java plugin. For the backend, it generates an abstract class called QConServiceImplBase, and the backend developer would need to extend this class and override the attend session and attend lunch methods; no annotations, no mapping, you just need to implement the methods in your language. For the client, the proto compiler generates client stubs and client libraries with these methods, attend session and attend lunch. In fact, it generates two subsets of client libraries: synchronous and asynchronous. In Java, it will generate three clients: a blocking client, a callback-based client, and a future-based client. gRPC encourages you to use non-blocking clients, but you can start with the simple blocking client and then switch to asynchronous later on if you need to.
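The service described here might be sketched in proto3 like this (the message and field definitions are invented for illustration; the transcript only names the two methods):

```protobuf
syntax = "proto3";

// A collection of actions the server can perform -- not resources.
service QConService {
  rpc AttendSession (AttendSessionRequest) returns (AttendSessionResponse);
  rpc AttendLunch (AttendLunchRequest) returns (AttendLunchResponse);
}

message AttendSessionRequest {
  string attendee_id = 1;
  string session_id = 2;
}

message AttendSessionResponse {
  bool confirmed = 1;
}

message AttendLunchRequest {
  string attendee_id = 1;
}

message AttendLunchResponse {
  bool confirmed = 1;
}
```

Running protoc with the gRPC Java plugin over a file like this is what would produce the server-side abstract base class and the blocking, callback-based, and future-based client stubs described in this section.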

You can just start by instantiating those generated clients; you use your data entities, you build the request, and then call the method to invoke the server. All the heavy lifting, serialization, deserialization, opening streams, managing connections, working with HTTP/2 frames, is all hidden. gRPC doesn't expose any of those details. The framework has already made all the decisions on how to layer your RPC model on top of the protocol.

Getting started with GraphQL

Betts: Michelle, what is GraphQL? Again, I've seen little tutorials of, here's how to get started with GraphQL. How complicated is it really? Do I need a new server? Do I just write some code and deploy it out?

Garrett: With GraphQL, everything starts with a query, or a mutation. If we're talking in CRUD terms, a GraphQL query is equivalent to reading data, and a mutation is when you want to create, update, or delete some data. Let's go with queries because they're the most straightforward. A GraphQL query is essentially like a shopping list of all the data that you want to get back from the GraphQL API. You write down a list of all the fields that you need, and you can pass any important arguments, like a user ID. All the fields that you write down have to correspond to the GraphQL schema, which is a strictly typed contract that describes all of the data that it is possible for you to ask for from the GraphQL API. GraphQL APIs have just a single endpoint, usually /graphql, and usually a POST endpoint; when you want to make the request, you post your GraphQL query to the endpoint, and the server will receive your query. Then, on the server side, there are things called resolvers. You define a resolver for every single field in the GraphQL schema, and it's essentially a function that tells GraphQL how to populate that data. You can pull that data from anywhere you like. You can pull it from other REST APIs or the database, or you can just return a string, whatever you want to do. Basically, the server will read through the query, look at all of the fields, and populate those fields with data, one by one, based on the resolver functions. Then, once it's done that, it'll return the result to the client.
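A minimal, hand-rolled sketch of the resolver idea (this uses no real GraphQL library; the field names, values, and data sources are invented for illustration). A real server parses the query text and walks the schema; here the "query" is simply the list of requested field names:

```python
def resolve_name(args):
    # Stand-in for pulling data from a REST API or database.
    return "Ada Lovelace"

def resolve_follower_count(args):
    # A resolver can pull from a completely different source.
    return 1815

# One resolver per schema field: it tells the server how to get the data.
RESOLVERS = {
    "name": resolve_name,
    "followerCount": resolve_follower_count,
}

def execute(requested_fields, args):
    """Populate only the fields the client asked for, one by one."""
    return {field: RESOLVERS[field](args) for field in requested_fields}

# The client's "shopping list" asks for exactly two fields:
result = execute(["name", "followerCount"], {"id": "42"})
print(result)
```

Each client gets back exactly the fields it listed and nothing else, which is the property the panel highlights throughout this discussion.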

Getting started with REST

McLarty: I want to once again step out of character a little bit, because there is an element of design that happens independent of the protocol, and I think that's an important point to make. The first thing you should worry about isn't, let me run and start writing my protobuf definition or my GraphQL schema or my JSON schema. There's an element of design of just, again, like I said before, thinking through who your consumers are, thinking about, whether it's resources or schemas or messages, what's the actual business activity that's going to take place in the interaction? That's so important. If we look at REST, and if we want to just use the basic definition of CRUD over HTTP using resources, everyone's familiar with it; there are a bazillion implementations of clients and servers there. It's great, the developments that have been made in Swagger and the OpenAPI specification to help drive more metadata, and there's flexibility to plug in JSON Schema and security schemes. There's all that technological goodness, and I'm sure Kin did a phenomenal job of explaining a lot of the details around that.

I think to get started with REST, if you're thinking in the bigger context of, I'm actually creating an API that's going to be a new channel for my company or for my business, we talk about this idea of APIs as products a lot. There's a great book on that, "Continuous API Management", from my friends who wrote it. There's this whole aspect of thinking through, how are people going to find my API? How will it be discovered? What are the different segments of consumers involved? How am I going to handle versioning, and all that? The reason I bring it up now is because I think there's a temptation to just say, let me get the MVP quickstart thing out there. The sooner you start to think through these things, putting the technology aside for a second and considering all these product considerations, the better. Because, what's that Mary Poppendieck quote? "The biggest failure in technology is not having something crash, it's building the wrong thing." The more thinking you put into tuning what you're designing for, I think that's key. That to me is a big part of getting started.

Betts: I think that's a great callback for people who saw Christi Schneider's presentation on designing for extensibility. She talked about a lot of the same things without specifically limiting it to APIs. It really is, if you have some product that you're trying to adapt over time, how do you version it? How do you document it? How do you teach people about it? How do you get them to adopt it?

McLarty: On the theme of architecture, what's the difference between architecture and design? It could be just architecture are those decisions you make that have long lasting implications. There are things you can do upfront that will snooker you on creating landmines down the road. It's definitely important to put that thought in.

How CQRS Gets Implemented

Betts: There's one narrow technical question. Since it is an architecture track, one of the themes we tend to hear about is CQRS, Command Query Responsibility Segregation. I want to know, how does that get implemented in each of these things? I think, Matt, it's pretty obvious, you write a command, you write a query, or you have different endpoints for those things, but is that no longer RESTful, and do we care?

McLarty: CQRS grew out of domain-driven design; there was this whole evolution of CQRS. I've heard people sarcastically say CQRS was just a thing Microsoft was pushing because they couldn't get SQL Server to run effectively without separating commands and queries. I think notionally it's been a pattern that's been in place for a long time. You could absolutely separate your GETs, if you want to have a proxy that does filtering and dynamic routing, and then goes back and hits different backends. It's really, at what point do you want to make the separation happen? Is it right at the network layer? Do you want it at some application layer? Do you want to add some data separation? Typically, I would say, working in the enterprise space that I'm usually working with customers in, there are a lot of different optimizations that happen along the chain. I'm not seeing a ton of organizations being purist around CQRS. Those that are tend to be going all in on an event modeling and event sourcing approach, where they're optimizing specifically for that type of separation. It's doable, but I think the main point is that there are lots of different points in the stack where you might want to have separation for optimization purposes.

Betts: Michelle, there's a Q in GraphQL, but there's no C. Do commands not even work through GraphQL? Is that something somebody else should be responsible for?

Garrett: I've honestly not heard this acronym before, so maybe you can tell me.

Betts: You have a path where you go down and you write all of your queries against a read model. Like Matt said, SQL Server was being slow, so you wrote materialized views, so they read faster, or whatever you did, but that you don't write a document and read a document the same way. GraphQL seems to be, I'm going to aggregate all my data together and I can ask all these different questions, just give me this little bit of data. It's not meant for, please send a message to the next person in the queue, or something like that.

Garrett: Yes, that sounds reasonable to me.

McLarty: I think it's interesting that you've got queries, mutations, and subscriptions as well. I've been banging this drum around commands, queries, events, whatever you want to call them, like there's these three different interaction patterns. Again, that goes back to the fact that maybe these protocols aren't all that different. We're solving similar problems, and there's a unified, conceptual view of the world that is expressed in all three.

Betts: I know RPC is remote procedure call, it doesn't imply what you're doing. You can do whatever you want.

Borysov: Absolutely, it does not. You can easily define separate services: a service to read your data, and a separate service to mutate your data. You will have a query service and a command service; they can be implemented by the same gRPC server, but those services are just namespaces. If your consumers read, they use your query service; if they write, they send commands using your command service.
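A sketch of that separation in proto3 (all names invented for illustration): two services acting as namespaces, one for queries and one for commands, possibly implemented by the same server:

```protobuf
syntax = "proto3";

// Read side: queries only.
service SessionQueryService {
  rpc GetSession (GetSessionRequest) returns (Session);
}

// Write side: commands only.
service SessionCommandService {
  rpc ScheduleSession (ScheduleSessionRequest) returns (ScheduleSessionResponse);
}

message GetSessionRequest { string session_id = 1; }
message Session { string session_id = 1; string title = 2; }
message ScheduleSessionRequest { string title = 1; }
message ScheduleSessionResponse { string session_id = 1; }
```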

The Next Best Option to gRPC

Betts: We started off by saying, I'm the one technology to win them all. If you had to choose somebody else, and you're going to say a two sizes fits pretty well option, which of the other choices in the room would you pick? If you are not the right fit, who do you want with you?

Borysov: If I have a rich UI that aggregates data from multiple sources, I'll go with GraphQL. I might end up supporting two IDLs, but there's a price to pay. On the other hand, if I have a subset of consumers who can't or don't want to use gRPC clients, I will go with REST.

An Option to GraphQL

Betts: Michelle, if I said we couldn't just use GraphQL everywhere, who's your wingman?

Garrett: I'm going to pick REST as my wingwoman. Although REST and GraphQL are often pitted against each other (and I'm sorry that I called it ugly earlier on), I think that they have a lot of harmony together. They're often found together in real-world implementations of GraphQL. There are just so many REST APIs in the world, and I don't think REST APIs are going to go away. Wrapping a REST API in a GraphQL layer, while maintaining the underlying REST API services, is a really common pattern, and is a sweet spot for the two of them to work together.

REST vs. gRPC vs. GraphQL, Which Is Better?

Betts: Matt, it's up to you, who's winning this?

McLarty: One of the reasons I'm here is the article I wrote, which is specifically calling out these false dichotomies. I can't sit up here with a straight face and say, I will pick one over the other, because it is silly to have like, gRPC is better than REST and REST is better than GraphQL. In that article, I was stressing this ubiquity and the value of that, but then the tradeoffs. I'm going to go with Kafka. It depends. We know it depends.

Betts: I thought it might have been CORBA once again, coming back up.

You can watch the video of this discussion on InfoQ.

