Give REST a Rest with RSocket

Key Takeaways

  • Representational State Transfer (REST) has become the de facto standard for communicating between microservices. The author argues that this is not a good thing — in fact, it’s a very bad thing, particularly for microservice communication.
  • REST was implemented as a hack on top of HTTP. An often-cited reason to use RESTful web services is that they are easy to debug because they are “human readable”. Not being easy to read is a tooling issue.
  • Some of the things we would want in a protocol designed for microservice communication include binary serialization, bi-directional communication, multiplexing, and the ability to exchange metadata.
  • Engineers want the ability to process data as it comes; they want to be able to stream data. For data that is sent via streams, application flow control is needed.
  • We need a modern material to replace HTTP for creating modern services. Open source RSocket is designed for services. It is a connection-oriented, message-driven protocol with built-in flow control at the application level.

Representational State Transfer (REST) has become the de facto standard for communicating between microservices. That is not a good thing — in fact, it’s a very bad thing. How did this come to pass? Well, at the time REST emerged, there were even worse options. When Roy Fielding proposed REST in 2000, it was a kale sandwich in a field of much worse-tasting sandwiches.

People were using SOAP, RMI, CORBA, and EJBs. JSON was a welcome respite from XML, and it was easy to use URLs to spit out some text. Plus, JavaScript was starting to really take off in browsers, and it was much easier to deal with REST than with SOAP. Unlike in the recent microservice trend, most applications were traditional monolithic three-tier applications. Most of the external traffic they handled came from browsers, so when they had to expose something, REST was an easy choice. Many people also began to move from bigger commercial offerings like WebSphere to Jetty and Tomcat, which didn’t even have the facilities to deal with EJBs, so REST was a convenient choice.

What does this have to do with microservices? Early microservice pioneers moved to microservices for a different reason than people do today. They moved because they had to deal with massive scale: they had so many users that they couldn’t serve everything from a single monolith. And unlike at many enterprises today, cost wasn’t the motivating factor — time was. They needed to get their services out yesterday. As they got more and more users, their monolith wasn’t cutting it, so they cut their app up into smaller pieces that they could deploy on thousands of servers, and eventually on virtual machines.

Furthermore, they could deploy their applications very quickly. Companies that adopted this model were able to survive. During this race, though, there wasn’t much time to consider what they were doing. These early pioneers had to deal with exponential user growth and competition, so it makes sense that they opted for tactical solutions. One of those was using REST to communicate between services.

Why REST Is Bad for Microservices

When programming an application, your programming language eventually ends up as machine code. This is obvious. Even “interpreted” languages like Java and JavaScript end up there as well; instead of compiling directly to machine code, they use a JIT, or just-in-time, compiler. In some cases, JIT’ed code can be faster than what an engineer can write and tune by hand — VMs are truly a miracle of modern computer science.

Why then do we waste this miracle? Instead of sending binary messages optimized for machines, over a protocol optimized for services, we send messages optimized for humans. We send around things like JSON and XML using a protocol that was designed for sending books. Think how ridiculous this is! You have a binary program that turns a binary structure into text, sends it over the network as text, to a machine that parses it and turns it back into a binary structure so an application can process it.

Avoiding cache misses on a modern CPU is critical. Unfortunately, parsing tons of JSON and Strings is going to cause cache misses!

An often-cited reason to use REST is that it’s easy to debug because it’s “human readable”. Not being easy to read is a tooling issue. JSON text is only human readable because there are tools that allow you to read it – otherwise it’s just bytes on a wire. Furthermore, half the time the data being sent around is either compressed or encrypted, neither of which is human readable. Besides, how much of this can a person “debug” by reading? If you have a service that averages a tiny 10 requests per second with a 1-kilobyte JSON payload, that is equivalent to roughly 860 megabytes of data a day, or about 250 copies of War and Peace every day. No one can read that, so you’re just wasting money.

Then, there is the case where you need to send binary data around, or you want to use a binary format instead of JSON. To do that, you must Base64 encode the data. This means that you essentially serialize the data twice — again, not an efficient way to use modern hardware.
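
To make the double-serialization concrete, here is a minimal sketch using only the JDK (the field values and the "payload" wrapper are made up for illustration): a small binary structure is serialized once into bytes, then a second time into Base64 text so that it can travel inside JSON.

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class DoubleSerialization {
        public static void main(String[] args) {
            // First serialization: pack two longs into a compact 16-byte binary payload.
            byte[] binary = ByteBuffer.allocate(16)
                    .putLong(1234567890L)                 // e.g. an account id
                    .putLong(System.currentTimeMillis())  // e.g. a timestamp
                    .array();

            // Second serialization: Base64-encode the bytes so they can ride inside JSON text.
            String base64 = Base64.getEncoder().encodeToString(binary);
            String json = "{\"payload\":\"" + base64 + "\"}";

            System.out.println("binary: " + binary.length + " bytes");                              // 16 bytes
            System.out.println("JSON:   " + json.getBytes(StandardCharsets.UTF_8).length + " bytes"); // ~38 bytes
            // The receiver now has to parse JSON, Base64-decode the string, and only
            // then gets back the 16 bytes it actually wanted.
        }
    }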

At the end of the day, REST was implemented as a hack on top of HTTP, and HTTP is itself being used as a hack to transport data between services. HTTP was designed to schlep books around the Internet. It shouldn’t be used for services to communicate with one another. Instead, use a format that is optimized for your application — the thing that is processing all the data.

What Is Good Microservice Communication?

If we suppose for a moment that REST isn’t the best choice for service to service communication, then what is? Let’s look at some of the things we would want in a protocol designed for microservice communication.

For starters, we want things to be bi-directional. That’s a huge problem with REST — clients can only call servers. When both sides have equal ability to call each other, you can create interactions between applications in a natural manner. Otherwise you are forced to devise clunky workarounds such as long-polling to simulate server-initiated calls. You can partially get around it with HTTP/2, but the call still needs to be initiated by the client. What you want is the ability for clients and servers to be free to call each other as necessary.

Another requirement is that the connection between services must support multiple requests on the same connection – at the same time. This is called multiplexing. Now, with a single connection, there needs to be some way to distinguish one request from another, unlike HTTP, where one request starts when another one ends. With multiplexing, you are going to need to keep track of the different requests. A good way to do this is to represent each request with a binary frame that holds the request along with metadata about it; that metadata can then be used to route the frame to the correct location.
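
As a sketch of the idea (the layout below is purely illustrative, with made-up field sizes, and is not the actual RSocket frame format), a frame carries a stream id to tell interleaved requests apart, a frame type, and length-prefixed metadata and data:

    import java.nio.ByteBuffer;

    // Illustrative toy frame, not a real wire format: stream id + type + length-prefixed
    // metadata and data. The stream id lets many in-flight requests share one connection.
    public final class ToyFrame {
        public static ByteBuffer encode(int streamId, byte frameType, byte[] metadata, byte[] data) {
            ByteBuffer buf = ByteBuffer.allocate(4 + 1 + 4 + metadata.length + 4 + data.length);
            buf.putInt(streamId);         // which logical request this frame belongs to
            buf.put(frameType);           // e.g. request, response, cancel, ...
            buf.putInt(metadata.length);  // metadata length prefix
            buf.put(metadata);            // routing keys, tracing ids, mime type, ...
            buf.putInt(data.length);      // data length prefix
            buf.put(data);                // the payload itself
            buf.flip();                   // make the buffer readable from the start
            return buf;
        }
    }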

When sending data over a single connection, you need the ability to fragment requests. A large request with a single connection will block all the other requests behind it, aka head-of-the-line blocking. What is needed, instead, is to fragment the requests into smaller sizes and send those over the network. Since data being sent is framed, it can be broken into smaller frame fragments, and then reassembled on the other side. This way, requests can interleave with each other. No longer can a large request block a smaller request. This will create a much more responsive system.
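
A sketch of the fragmentation side might look like the following; the 16 KB fragment size is an arbitrary choice, and a real protocol would also carry a flag marking whether more fragments follow.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public final class Fragmenter {
        private static final int MAX_FRAGMENT = 16 * 1024;  // arbitrary illustrative limit

        // Split one large payload into fragments; each fragment becomes its own frame,
        // so frames from other requests can be interleaved between them and the whole
        // payload is reassembled on the other side using the shared stream id.
        public static List<byte[]> fragment(byte[] payload) {
            List<byte[]> fragments = new ArrayList<>();
            for (int offset = 0; offset < payload.length; offset += MAX_FRAGMENT) {
                int end = Math.min(offset + MAX_FRAGMENT, payload.length);
                fragments.add(Arrays.copyOfRange(payload, offset, end));
            }
            return fragments;
        }
    }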

Also, the ability to exchange metadata about a connection is useful. Sometimes there is data to send that isn’t necessarily part of a business transaction — things like configuring the overall tracing level or exchanging information for dictionary-based compression. These are things that don’t have to do with business logic but could be controlled at a connection level. The ability to exchange metadata would provide for that.

Often in application code, a function or method will be called that takes a list, returns a list, or both. This happens in microservices all the time, as well. REST doesn’t deal with these situations well and this leads to all sorts of hacks and complexity.

What’s needed is a protocol that can deal with iterative data easily and naturally — like you do in your application. It doesn’t make sense to read an entire list of data, process it and then return a list of data once everything is processed. What you want is the ability to process data as it comes. You want to be able to stream data. If there is a long list of data, you don’t want to wait for that data to be processed — you want to send the data off as it becomes available and get the responses back as they occur.

This will create a much more responsive system. It can be used for all sorts of things, from reading bytes from a file and streaming them over the network, to returning results from a database query, to feeding browser click-stream data to a back-end. If first-class streaming support is present in the protocol, it isn’t necessary to bring in another system like Spark to do stream processing, nor is it necessary to add something like Kafka unless you want to store the data.
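
As a taste of what first-class streaming can look like in code, here is a minimal request-stream sketch written against the open-source rsocket-java library (assuming the rsocket-core and rsocket-transport-netty artifacts are on the classpath; the port, tick interval, and payload text are arbitrary). The client starts processing elements as they arrive instead of waiting for a complete list.

    import io.rsocket.RSocket;
    import io.rsocket.SocketAcceptor;
    import io.rsocket.core.RSocketConnector;
    import io.rsocket.core.RSocketServer;
    import io.rsocket.transport.netty.client.TcpClientTransport;
    import io.rsocket.transport.netty.server.TcpServerTransport;
    import io.rsocket.util.DefaultPayload;
    import reactor.core.publisher.Flux;

    import java.time.Duration;

    public class StreamingExample {
        public static void main(String[] args) {
            // Server: answer each request-stream with a stream of ticks.
            RSocketServer.create(SocketAcceptor.forRequestStream(request ->
                            Flux.interval(Duration.ofMillis(100))
                                .map(i -> DefaultPayload.create("tick " + i))))
                    .bind(TcpServerTransport.create(7000))
                    .block();

            // Client: open one connection and process elements as they arrive.
            RSocket requester = RSocketConnector
                    .connectWith(TcpClientTransport.create("localhost", 7000))
                    .block();

            requester.requestStream(DefaultPayload.create("give me ticks"))
                     .take(5)
                     .doOnNext(payload -> System.out.println(payload.getDataUtf8()))
                     .blockLast();
        }
    }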

For data that is sent via streams, the next thing needed is application flow control. Byte-level flow control works for something like TCP because everything is the same size, and generally, the same cost to process from the perspective of the network card. However, in an application, not everything is the same cost. There could be a message that is 10 kilobytes that takes 10 milliseconds to process, but another message that is 10 bytes that takes 10 seconds.

Another scenario found in microservices is that downstream services process data more slowly than it is sent to them. Because the bytes are still read off the socket, the TCP buffers are never full, so TCP-level flow control never kicks in. There needs to be some way to control the flow of traffic to keep from overwhelming downstream services and to keep them responsive.

The application must be able to control the rate at which messages flow, independent of the underlying network bytes. For an application developer it is difficult to reason about how many bytes a message is, especially across languages; it is simple, on the other hand, to reason about how many messages are being sent. This way, the service can arbitrate between network flow control and application flow control. Sometimes an application can process data faster than the network, and other times the network can move data faster than the application can process it. Having application flow control also ensures that tail latency stays stable — again creating a responsive application. And it removes the need for unbounded queues, a dangerous hack found in other applications.
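
Here is a minimal sketch of what message-level flow control looks like from the application's point of view, using Project Reactor's Reactive Streams types (RSocket's flow control is built on the same request-n idea); the batch size of 16 is an arbitrary choice. The subscriber expresses demand in messages, not bytes, and only asks for more once it has kept up.

    import org.reactivestreams.Subscription;
    import reactor.core.publisher.BaseSubscriber;
    import reactor.core.publisher.Flux;

    public class MessageFlowControl {
        public static void main(String[] args) {
            Flux.range(1, 1_000)  // stand-in for a stream of incoming messages
                .subscribe(new BaseSubscriber<Integer>() {
                    final int batch = 16;  // arbitrary illustrative batch size
                    int processed = 0;

                    @Override
                    protected void hookOnSubscribe(Subscription subscription) {
                        request(batch);      // demand is expressed in messages, not bytes
                    }

                    @Override
                    protected void hookOnNext(Integer message) {
                        process(message);
                        if (++processed % batch == 0) {
                            request(batch);  // only ask for more once this batch is done
                        }
                    }

                    private void process(Integer message) { /* application work */ }
                });
        }
    }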

As mentioned above, a huge drawback of RESTful web services is that they are, de facto, text-based. To send any binary data you must Base64-encode it — and serialize everything twice. What you really want is something that is binary, because binary can represent anything, including text. It is also significantly more efficient for your application to process binary data than text, especially numbers, and binary formats are naturally more compact — they don’t carry extra braces, quotation marks, or angle brackets. Finally, if your data is binary, there is also the possibility of zero-copy serialization and deserialization, depending on the format. That is a little outside the scope of this article, but check out things like Simple Binary Encoding (SBE) and FlatBuffers. They are significantly faster than using JSON.
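
To give a feel for the zero-copy style, here is a sketch in the spirit of SBE and FlatBuffers, not their actual APIs; the field names and offsets are invented. Fields are read directly from the received buffer at fixed offsets, with no text parsing and no intermediate objects.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Flyweight-style reader over a received buffer: no parsing pass, no garbage,
    // just absolute reads at known offsets. Purely illustrative layout.
    public final class QuoteFlyweight {
        private static final int SYMBOL_ID_OFFSET = 0;   // int32
        private static final int PRICE_OFFSET     = 4;   // int64, price in micros
        private static final int QUANTITY_OFFSET  = 12;  // int32

        private final ByteBuffer buffer;

        public QuoteFlyweight(ByteBuffer buffer) {
            this.buffer = buffer.order(ByteOrder.LITTLE_ENDIAN);
        }

        public int symbolId()  { return buffer.getInt(SYMBOL_ID_OFFSET); }
        public long price()    { return buffer.getLong(PRICE_OFFSET); }
        public int quantity()  { return buffer.getInt(QUANTITY_OFFSET); }
    }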

Finally, you want to be able to send your requests over different transports. RESTful web services typically use HTTP, which runs only over TCP. What you really want is a way to abstract the networking away, so that you program to a specification and don’t have to worry about the transport. At the same time, if your application is talking to browsers, it should be able to run over WebSocket. You should not have to switch to a new networking toolkit every time you change where your application is deployed; it should be easy to swap out transports without any application changes.
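
As a sketch of what that transport abstraction can look like (using rsocket-java's pluggable transports; the host, port, and WebSocket URI are placeholders), the application code is written once against a ClientTransport, and the wire it rides on becomes a deployment detail:

    import io.rsocket.RSocket;
    import io.rsocket.core.RSocketConnector;
    import io.rsocket.transport.ClientTransport;
    import io.rsocket.transport.netty.client.TcpClientTransport;
    import io.rsocket.transport.netty.client.WebsocketClientTransport;

    import java.net.URI;

    public class TransportSwap {
        // The application only sees an RSocket; the transport is injected.
        static RSocket connect(ClientTransport transport) {
            return RSocketConnector.connectWith(transport).block();
        }

        public static void main(String[] args) {
            // Same application code over raw TCP between services...
            RSocket overTcp = connect(TcpClientTransport.create("localhost", 7000));

            // ...or over WebSocket when a browser or an HTTP-only path is involved.
            RSocket overWs = connect(
                    WebsocketClientTransport.create(URI.create("ws://localhost:8080/rsocket")));
        }
    }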

Which Protocol Fits the Bill?

Some would suggest that REST over HTTP/2 is a better fit. HTTP/2 is better than HTTP/1, but if you read the specs, its sole purpose is to be a better web-browser protocol, and that is what it should be used for — serving HTML to web browsers. It was never designed or intended for microservice communication. Furthermore, you still must deal with URLs and with mapping the different HTTP methods onto your application — methods that were never really intended for server-to-server communication.

HTTP/2 does provide streaming, but only in the form of server push. So, using REST over HTTP/2 means initiating a request on the client and then having the server push data to it. HTTP/2 flow control is byte-based flow control, which is good for a web browser but not good for an application. There is still no way to control the flow based on the work the application is actually doing.

There has been a lot of noise lately about using gRPC. gRPC is very similar in concept to SOAP. Instead of using XML to define services, it uses Protobuf. Like SOAP, it’s a hodge-podge of URL and Header magic — this time using HTTP/2. This means gRPC is explicitly tied to HTTP/2, a protocol designed for web browsers. And what is worse, it isn’t supported in a web browser.

Instead you must use a proxy to turn your gRPC calls into REST calls, thus defeating the purpose of using it. This highlights how poorly designed gRPC is. Why would you build a protocol on HTTP/2 and not make sure it works in a browser? You are forever limited by its original purpose, yet unable to use it where that purpose applies. This leads to my next point: the biggest limitation of REST is the fact that it’s tied to HTTP.

What you want is a protocol that is designed for service-to-service communication. Using a protocol specifically designed for services to talk to each other will create significantly simpler and more reliable applications, with no hacks, workarounds, or impedance mismatches.

Construction materials are a good analogy. Wood is great for building small bridges: you can use it to span a small stream or creek without a problem. When engineers started using it to span wider distances, things got complicated. Wooden bridges could be made to work, but they had a very high failure rate compared to modern bridges made of better materials, and they were very complicated and took much, much longer to build. This is why we now use steel and concrete: they are easier to maintain, cheaper to build, last longer, and can span far greater distances.

We need a modern material to replace HTTP for creating modern services. The open-source RSocket protocol is designed for services. It is a connection-oriented, message-driven protocol with built-in flow control at the application level. It works equally well in a browser and on a server; in fact, a web browser can even serve traffic to backend microservices. It is binary, works equally well with text and binary data, and its payloads can be fragmented. It models all the interactions that you do in your application as network primitives, which means you can stream data or do Pub/Sub without having to set up an application queue.
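
To show the connection-oriented, two-way nature in code, here is a minimal sketch against rsocket-java (the port and messages are arbitrary): the client installs a responder when it connects, and the server turns around and makes a request back to that client over the very same connection.

    import io.rsocket.RSocket;
    import io.rsocket.SocketAcceptor;
    import io.rsocket.core.RSocketConnector;
    import io.rsocket.core.RSocketServer;
    import io.rsocket.transport.netty.client.TcpClientTransport;
    import io.rsocket.transport.netty.server.TcpServerTransport;
    import io.rsocket.util.DefaultPayload;
    import reactor.core.publisher.Mono;

    public class TwoWayExample {
        public static void main(String[] args) throws InterruptedException {
            // Server: when a peer connects, immediately call *it* over the same connection.
            RSocketServer.create((setup, peer) -> {
                        peer.requestResponse(DefaultPayload.create("hello from server"))
                            .doOnNext(p -> System.out.println("server received: " + p.getDataUtf8()))
                            .subscribe();
                        return Mono.just(new RSocket() {});  // the server's own responder (empty here)
                    })
                    .bind(TcpServerTransport.create(7000))
                    .block();

            // Client: connect and install a responder so it can answer server-initiated requests.
            RSocketConnector.create()
                    .acceptor(SocketAcceptor.forRequestResponse(request ->
                            Mono.just(DefaultPayload.create("hello from client"))))
                    .connect(TcpClientTransport.create("localhost", 7000))
                    .block();

            Thread.sleep(1000);  // give the asynchronous exchange a moment to complete in this demo
        }
    }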

REST is a decent solution where it makes sense. One place it doesn’t make sense is microservices. Distributed systems are difficult enough on their own. The last thing that we need is to make them more complex by using something not designed for them.

About the Author

Robert Roeser is co-founder and CEO of Netifi. He is a 10-year veteran of distributed real-time systems who has led large-scale technical projects at Netflix and Nike.

Community comments

  • Rest makes sense to me

    by David Valdivieso,

    REST makes sense to me when using microservices. I don't need bidirectional communication, at least not yet. All my systems based on microservices work like a directed acyclic graph. But maybe the most critical issue would be autoscaling: I want to create and destroy nodes easily, and having dedicated sockets between services sounds like a pain to me. When the payload is huge I do need to stream data, but that is already managed over HTTP. I also agree about sending just binary, but you know, everyone is on REST.

    I hate WhatsApp, but all my family and non-technical friends are using it. So I keep using it.

  • But we already did corba

    by Eric Link,

    Corba, other binary bidirectional rpc have been done. Mostly not worth it except in the most constrained situations.

  • Re: But we already did corba

    by William Smith,

    I guess it depends, but I’ve always thought that using verbose text formats - XML/REST/SOAP, whatever - for your wire protocol is kind of dumb. Why would you do that? It is supposedly easier to debug, but honestly a Wireshark plug-in for a binary format is not a difficult thing to write, and you pay a performance penalty every single time for having human-readable messages.

    Sigh

    Anyway RSocket looks really interesting to me - and the fact that it is being used in production at Facebook is really interesting. Being non-blocking on the network layer is really a big deal. It’s great to see luminaries like Martin Thompson and Todd Montgomery working to try and change this.

  • What benefits this bring compared to stomp over websocket

    by Giovanni Candido,

    Looks similar to STOMP, except that RSocket is intentionally not backwards compatible. STOMP can fall back to other low-level transports when WebSocket is not available, and I think this is where limitations may come in. But currently I don't see the benefit of RSocket: what advantages does it bring today, and why not other messaging protocols and tools (like socket.io and others)?

  • Re: What benefits this bring compared to stomp over websocket

    by Robert Roeser,

    Hi-

    RSocket's transport is pluggable - so you can run it over WebSocket if you'd like. There is the Netty WebSocket implementation, and we have experimented with running RSocket over Tomcat websocket.

    Motivations for RSocket can be found here:
    github.com/rsocket/rsocket/blob/master/Motivati...

  • It's not REST, its the implementation of REST

    by Alan Schultz,

    REST is an architectural style, not a protocol. REST principles can be applied to building services using any protocol. Your problem is with the currently popular ways of implementing RESTful services: HTTP, JSON, XML, etc.

  • Re: What benefits this bring compared to stomp over websocket

    by Rossen Stoyanchev,

    STOMP is a protocol for brokering messages to clients. Its main purpose is to define how clients subscribe to and receive messages in a way that is simple and general enough so that those pub-sub semantics can translate to a wide range of backend brokers such as RabbitMQ, ActiveMQ, and others.

    RSocket is at a slightly lower level but addresses more core infrastructure concerns important in messaging applications that run over two-way, streaming protocols like TCP, WebSocket, etc. For example RSocket defines specific message interaction models (fire and forget, request reply, request stream, two-way channel) which can work symmetrically between client and server. STOMP is strict about client vs server roles, it's pub-sub after all, and a couple of the above interaction models are left as a further exercise. RSocket defines backpressure across interacting peers so that a sender does not overwhelm its receiver. There is nothing in STOMP that deals with the broker producing too many messages. RSocket deals with multiplexing, fragmentation, resuming after a lost connection, among others. Those are all examples of lower level, core infrastructure concerns.

  • Re: It's not REST, its the implementation of REST

    by Robert Roeser,

    You are correct about REST the implementation - the colloquial way REST is referred to. REST the architectural style isn't appropriate either. It's an architectural style for distributed hypermedia systems (www.ics.uci.edu/~fielding/pubs/dissertation/fie...). I feel that last part, "for hypermedia systems", gets overlooked and is important. I think in the context of microservices, using something designed for service-to-service communication is better.

  • Re: It's not REST, its the implementation of REST

    by Jesse Cary,

    Agreed, a 10-year veteran of distributed real-time systems leading large scale technical projects at Netflix and Nike should understand this. Search www.ics.uci.edu/~fielding/pubs/dissertation/fie... for "JSON"... 0 results

  • 10 years late

    by John Zabroski,

    Reminds me of Argot. I guess you're about 10 years late and missed out on the 6LowPan working group discussions.

    blog.argot-sdk.org/2009/10/argot-meets-contiki....

    Given this prior art, I'm not sure the "no better alternative" argument is true.

    People like complicated digital systems with continuous states to be easy to debug. We can still do REST with MessagePack.

  • Re: What benefits this bring compared to stomp over websocket

    by John Zabroski,

    > RSocket defines backpressure across interacting peers so that a sender does not overwhelm its receiver.

    You should have written this article, not Robert. This is a pretty key piece of information.

  • Re: What benefits this bring compared to stomp over websocket

    by Rossen Stoyanchev,

    Thanks but IMO the article provides a lot of very valuable insight and context. It's hard to mention everything. For that I recommend reviewing the RSocket protocol. It's highly readable and it's quick to get an idea.

  • Re: What benefits this bring compared to stomp over websocket

    by Robert Roeser,

    Thanks for the feedback - RSocket is Reactive Streams over a binary boundary, if that helps. As Rossen suggested, you can check out the spec - here's the link: rsocket.io/docs/Protocol.html

  • Re: What benefits this bring compared to stomp over websocket

    by 胡 豪,

    I agree with your idea.

    Not all applications are suited to REST now that DevOps is more and more popular; SOAP is a very important idea to kill business things.

  • RSocket vs gRPC / Java standards integration?

    by Ant hony,

    I have a couple questions:

    Why would I invest in RSocket over gRPC, given that:
    1) gRPC is a CNCF-backed project
    2) with the release of gRPC-Web ( grpc.io/blog/grpc-web-ga ), the "it isn't supported in a web browser" statement in the article is no longer valid
    ?

    From what I understand, RSocket is an alternative to Servlets, so what are the plans for RSocket integration with MicroProfile (e.g. github.com/eclipse/microprofile-reactive-streams and github.com/eclipse/microprofile-reactive-messaging ) and/or Jakarta EE?

    Thanks in advance for any answers.

  • Re: RSocket vs gRPC / Java standards integration?

    by Robert Roeser,

    1) gRPC is a CNCF-backed project
    RSocket is developed by Facebook, Pivotal, and Netifi, and Alibaba is looking into RSocket as well.

    Spring gets downloaded millions of times a month… and there are plans to add RSocket support:
    jira.spring.io/browse/SPR-16751

    More announcements around users, and community in the future.

    2) with the release of gRPC-Web ( grpc.io/blog/grpc-web-ga ), the "it isn't supported in a web browser" statement in the article is no longer valid
    ?

    Not only is it still valid - it highlights the problem. There needs to be a special gateway to translate between a browser and a gRPC server. From their docs: “gRPC-Web clients connect to gRPC services via a special gateway proxy.” There is also no bidirectional stream support. It appears that the gRPC-Web bridge supports only HTTP/2, which could be a non-starter for some environments.

    RSocket literally works in browser to the point where a browser can accept RSocket calls from a service. Each side of an RSocket connection has a requester and responder - the only difference in RSocket between a client and server is who initiated the connection. RSocket is transport agnostic so it can go over existing HTTP/1.1 connections using WebSockets.

    3) From what I understand, RSocket is an alternative to Servlets, so what are the plans for RSocket integration with MicroProfile
    RSocket is not an alternative to Servlets - it is a binary protocol that models application interactions over a binary boundary with Reactive Streams semantics. Jakarta EE supports WebSockets - if you're interested, we could create a Jakarta EE transport with minimal effort, and RSocket would work there. We experimented with doing this with Tomcat in the past; it should be similar.

  • Re: RSocket vs gRPC / Java standards integration?

    by Ant hony,

    Thanks for your reply Robert.

    Yes, I'd be very interested to see a Jakarta EE transport. So if I understand it correctly, RSocket would be an alternative to using javaee.github.io/javaee-spec/javadocs/javax/web... then? And would things like CDI "just work" then? It would be really useful to see an example of how to integrate RSocket into a Jakarta EE application.
