
Why LinkedIn chose gRPC+Protobuf over REST+JSON: Q&A with Karthik Ramgopal and Min Chen


LinkedIn announced that it would be moving to gRPC with Protocol Buffers for inter-service communication in its microservices platform, where previously the open-source Rest.li framework was used with JSON as the primary serialization format.

InfoQ spoke with Karthik Ramgopal, distinguished engineer at LinkedIn, and Min Chen, principal staff engineer at LinkedIn, to learn more about the decision and the company's motivations behind it.

InfoQ: What were the main reasons for choosing gRPC and Protocol Buffers over REST with JSON?

Karthik Ramgopal/Min Chen: We chose gRPC to replace the current REST framework Rest.li for the following reasons:

  • Superior capabilities - gRPC is a highly capable framework, which we explain in detail in our blog post, with support for advanced features like bidirectional streaming, flow control, and deadlines, none of which Rest.li supports (see the sketch after this answer).
  • Efficiency - gRPC is also a highly efficient framework, with performance baked into its implementation, such as fully async non-blocking bindings and advanced threading models. We have also validated this via synthetic benchmarks as well as production ramps of gRPC and Rest.li services running side by side.
  • Multi-language support - Rest.li is primarily implemented in Java, with patchy or non-existent support for other programming languages. gRPC has high-quality support for several programming languages, which was important when considering LinkedIn's infra-support requirements.

Beyond all these, gRPC has the support of a large and vibrant OSS community and wide usage across the industry. Rest.li, while open source, is contributed to and used primarily by LinkedIn.
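
A minimal sketch of what two of those capabilities look like in grpc-java, assuming a hypothetical ProfileService generated from a .proto definition (the service, request, and response names are illustrative, not LinkedIn's actual API): a per-call deadline that propagates downstream, and a server-streaming call consumed through the generated blocking stub.

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.Iterator;
import java.util.concurrent.TimeUnit;

public class DeadlineAndStreamingSketch {
    public static void main(String[] args) {
        // Plaintext channel to a hypothetical local service.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8443)
                .usePlaintext()
                .build();

        // ProfileServiceGrpc is a stub generated from an assumed .proto file.
        ProfileServiceGrpc.ProfileServiceBlockingStub stub =
                ProfileServiceGrpc.newBlockingStub(channel)
                        // Deadlines propagate to downstream calls and cancel
                        // server-side work once exceeded.
                        .withDeadlineAfter(200, TimeUnit.MILLISECONDS);

        // Server-streaming RPC: results arrive as they become available
        // instead of being buffered into one large response.
        Iterator<ProfileUpdate> updates = stub.streamProfileUpdates(
                ProfileRequest.newBuilder().setId("42").build());
        while (updates.hasNext()) {
            System.out.println(updates.next());
        }

        channel.shutdownNow();
    }
}

Deadlines and streaming of this kind have no first-class equivalent in Rest.li, which is the gap the first bullet above refers to.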

InfoQ: You previously adopted Protocol Buffers as a serialization format in the Rest.li framework. What have you learned from this experience, and what other serialization formats have you evaluated?

Karthik Ramgopal: While we do go in-depth in our blog post, the primary learning was that there are massive latency and throughput wins to be had at scale by switching from JSON to Protobuf. In addition to Protobuf, we evaluated CBOR, MessagePack, SMILE, Avro, Kryo, Flatbuffers, and Cap’n Proto. We ultimately picked Protobuf since it offered the best trade-off between runtime performance (latency, payload size, throughput), developer experience (IDE authoring, schema validation, annotation support, etc.), and multi-language/environment support.
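
As a rough illustration of the payload-size part of that trade-off, the sketch below serializes the same logical record with a hypothetical generated Protobuf Profile message and with Jackson as JSON; the field names and quoting that JSON repeats on the wire are what Protobuf's numeric field tags avoid. The Profile class is an assumption for illustration, not real LinkedIn code.

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

public class PayloadSizeSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical class generated from a Profile message definition;
        // any generated message exposes toByteArray()/parseFrom().
        Profile profile = Profile.newBuilder()
                .setFirstName("Ada")
                .setLastName("Lovelace")
                .setHeadline("Distinguished Engineer")
                .build();
        byte[] protoBytes = profile.toByteArray();   // compact binary: field tags, no field names

        // The same logical record as JSON: names and quoting repeat on the wire.
        byte[] jsonBytes = new ObjectMapper().writeValueAsBytes(Map.of(
                "firstName", "Ada",
                "lastName", "Lovelace",
                "headline", "Distinguished Engineer"));

        System.out.printf("protobuf: %d bytes, json: %d bytes%n",
                protoBytes.length, jsonBytes.length);

        // Schema-driven parsing back into a typed object.
        Profile parsed = Profile.parseFrom(protoBytes);
    }
}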

InfoQ: On the Rest.li GitHub page, you announced that the framework will no longer be developed and will be deprecated. What advice would you give to the current users of the framework?

Karthik Ramgopal/Min Chen: Consider moving off Rest.li to gRPC. Please contact us via LinkedIn if you want some pointers to speed up the migration via automation.

InfoQ: In the blog post, you stated you observed up to 60% latency improvement for some of the services. Can you provide further details about this?

Karthik Ramgopal: Most of the latency improvement comes from a smaller payload size and less CPU time spent in serialization/deserialization. The 60% number was for services with very large and complex payloads where these costs were the predominant contributors to latency. We also saw significant improvements to tail latency (p95/p99) in many services on account of a substantial reduction in GC when using Protobuf.
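
One way to observe the serialization CPU and allocation costs the answer refers to is a quick, non-rigorous measurement of Protobuf round trips using HotSpot's per-thread allocation counter; a proper comparison would use a benchmark harness such as JMH. Profile is again the hypothetical generated message from the earlier sketch.

import java.lang.management.ManagementFactory;

public class SerdeCostSketch {
    public static void main(String[] args) throws Exception {
        // HotSpot-specific MXBean exposing per-thread allocation counters.
        com.sun.management.ThreadMXBean threads =
                (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
        long tid = Thread.currentThread().getId();

        Profile profile = Profile.newBuilder().setFirstName("Ada").build(); // hypothetical message

        long allocBefore = threads.getThreadAllocatedBytes(tid);
        long t0 = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            Profile.parseFrom(profile.toByteArray());   // serialize + deserialize round trip
        }
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        long allocated = threads.getThreadAllocatedBytes(tid) - allocBefore;

        // Fewer allocated bytes per round trip means less GC pressure, which is
        // what shows up as better p95/p99 latency in production.
        System.out.printf("1M round trips: %d ms, %d MB allocated%n",
                elapsedMs, allocated / (1024 * 1024));
    }
}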

InfoQ: You also mentioned that there are over 50,000 Rest.li endpoints currently in production at LinkedIn. This is an impressive number, so can you explain why there are so many?

Karthik Ramgopal: As one of the largest professional networks, we have a complex “economic graph” of entities. This includes our enterprise businesses (Recruiter, LinkedIn Learning, Sales Navigator, etc.), which have their own entities. Additionally, we have our internal applications, tooling systems, etc. Further, we typically use a 3-tier architecture with BFFs (Backend For Frontends), midtiers, and backends. We encourage CRUD-based modeling with normalized entities for standardized modeling and ease of discovery. For the past 10 years, all of these use cases have been modeled using Rest.li, which accounts for the high number of endpoints.

InfoQ: What was the main goal/reason for the gRPC+Protobuf adoption project? Is LinkedIn working or planning to work on other initiatives supporting similar goals?

Karthik Ramgopal/Min Chen: The reasons for adoption are consistent with why we picked gRPC+Protobuf over REST+JSON.

We are working on migrating all our stateful storage and streaming systems from Avro to Protobuf. We are also moving some common infrastructure functionality (AuthZ, call tracing, logging, etc.) from Java libraries into sidecars that expose a gRPC-over-UDS API, to reduce the cost of supporting multiple programming languages. We are also revamping our bespoke in-house service discovery and load balancing to adopt the industry-standard xDS protocol, so that it works with both the gRPC xDS SDK and the Envoy sidecar.
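
The sketch below shows, under stated assumptions, what those two pieces of plumbing can look like in grpc-java: a channel to a node-local sidecar over a Unix domain socket (Netty's epoll transport), and a channel whose endpoints and load-balancing policy come from an xDS control plane via the xds: target scheme (this requires the grpc-xds artifact and an xDS bootstrap configuration). The socket path and service name are placeholders, not LinkedIn's actual configuration.

import io.grpc.Grpc;
import io.grpc.InsecureChannelCredentials;
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;
import io.netty.channel.epoll.EpollDomainSocketChannel;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.unix.DomainSocketAddress;

public class SidecarChannelsSketch {
    // Channel to a local sidecar over a Unix domain socket, avoiding TCP
    // overhead for node-local calls such as AuthZ, tracing, or logging.
    static ManagedChannel udsChannel(String socketPath) {
        return NettyChannelBuilder
                .forAddress(new DomainSocketAddress(socketPath))
                .eventLoopGroup(new EpollEventLoopGroup())
                .channelType(EpollDomainSocketChannel.class)
                .usePlaintext()
                .build();
    }

    // Channel whose service discovery and load balancing are driven by an
    // xDS control plane instead of a bespoke in-house client.
    static ManagedChannel xdsChannel(String service) {
        return Grpc.newChannelBuilder("xds:///" + service,
                InsecureChannelCredentials.create()).build();
    }
}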
