
Google Works on a Protocol Intended to Replace HTTP


Google has proposed SPDY, a new application-level protocol that runs on top of SSL and is intended to replace HTTP, which they consider a source of latency. They have already built a prototype consisting of a web server and an enhanced Chrome browser that reportedly loads web pages twice as fast.

Google's initiative "Let's make the web faster", launched some time ago, aims to improve speed on the Internet. The initiative covers several domains, from building a faster web server to building a faster browser. For example, Page Speed is a tool for optimizing web pages so they download faster. Google has open sourced a number of tools and published tutorials to help developers around the world improve the speed of their web sites.

But Google has not stopped there. They are experimenting with a new application protocol named SPDY, pronounced "SPeeDY", which, if successful and widely adopted, would replace HTTP in what would amount to a major overhaul of the web's infrastructure. The SPDY whitepaper mentions the idea of going further down the stack and replacing the transport protocol (TCP) with a better one, but the authors recognize that would be very hard to deploy, so they focus on improving the application protocol, HTTP, instead.

The SPDY whitepaper mentions a number of limitations in the HTTP protocol that introduce latency in page delivery:

  • Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests.  Browsers work around this problem by using multiple connections.  Since 2008, most browsers have finally moved from 2 connections per domain to 6. 
  • Exclusively client-initiated requests. In HTTP, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client and must instead wait to receive a request for the resource from the client.
  • Uncompressed request and response headers. Request headers today vary in size from ~200 bytes to over 2KB. As applications use more cookies and user agents expand features, typical header sizes of 700-800 bytes are common. For modem or ADSL connections, in which the uplink bandwidth is fairly low, this latency can be significant. Reducing the data in headers could directly improve the serialization latency of sending requests (a rough illustration follows this list).
  • Redundant headers. In addition, several headers are repeatedly sent across requests on the same channel. However, headers such as the User-Agent, Host, and Accept* are generally static and do not need to be resent.
  • Optional data compression. HTTP uses optional compression encodings for data. Content should always be sent in a compressed format. 
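
To get a feel for the header overhead described above, the short Python sketch below zlib-compresses a typical request header block. This is illustrative only and is not the SPDY header encoding; the header values are made up, and the actual savings depend on the compression scheme and on how much compression state is shared across requests.

    # Illustrative only: zlib-compress a typical request header block to show
    # how much of the header text is compressible. This is NOT the actual
    # SPDY header encoding; the header values below are made up.
    import zlib

    headers = (
        "GET /index.html HTTP/1.1\r\n"
        "Host: www.example.com\r\n"
        "User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/532.0 "
        "(KHTML, like Gecko) Chrome/4.0 Safari/532.0\r\n"
        "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
        "Accept-Encoding: gzip,deflate\r\n"
        "Accept-Language: en-US,en;q=0.8\r\n"
        "Cookie: sessionid=38afes7a8; csrftoken=xyz123; theme=light\r\n"
        "\r\n"
    ).encode("ascii")

    compressed = zlib.compress(headers)
    print("raw headers:     %d bytes" % len(headers))
    print("zlib compressed: %d bytes" % len(compressed))

Because most of these headers repeat verbatim on every request to the same host, a compressor that keeps its state across requests would save even more than this one-shot example suggests.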

One of SPDY’s goals is to reduce page load time by 50%, making browsing twice as fast. While a few hundred milliseconds may not mean much to an individual user, every millisecond will count for the highly interconnected web applications of the near future. Existing web content is not affected by the introduction of the new protocol; only web servers and their clients need to be enhanced to make use of it.

What exactly does Google want to achieve with SPDY?

  • To allow many concurrent HTTP requests to run across a single TCP session.

  • To reduce the bandwidth currently used by HTTP by compressing headers and eliminating unnecessary headers.

  • To define a protocol that is easy to implement and server-efficient. We hope to reduce the complexity of HTTP by cutting down on edge cases and defining easily parsed message formats.

  • To make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.

  • To enable the server to initiate communications with the client and push data to the client whenever possible.

To achieve these goals, SPDY adds a session layer on top of SSL that allows for “multiple concurrent, interleaved streams over a single TCP connection”. The HTTP GET and POST message formats remain unchanged, but SPDY proposes a new “framing format for encoding and transmitting the data over the wire”. The streams are bi-directional and can be initiated by either the client or the server.
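
The sketch below uses an invented frame layout, not SPDY's actual framing format, to show the multiplexing idea: each frame carries a stream id and a chunk of payload, so frames belonging to different logical streams can be interleaved on one TCP connection and reassembled by the receiver.

    # Hypothetical framing sketch (not SPDY's real wire format): a frame is a
    # 4-byte stream id, a 1-byte flags field and a 4-byte payload length,
    # followed by the payload bytes. Several streams share one connection.
    import struct
    from collections import defaultdict

    def encode_frame(stream_id, payload, fin=False):
        flags = 0x01 if fin else 0x00
        return struct.pack("!IBI", stream_id, flags, len(payload)) + payload

    def decode_frames(wire):
        # Demultiplex interleaved frames back into per-stream buffers.
        streams = defaultdict(bytes)
        offset = 0
        while offset < len(wire):
            stream_id, flags, length = struct.unpack_from("!IBI", wire, offset)
            offset += 9  # fixed header size of this made-up format
            streams[stream_id] += wire[offset:offset + length]
            offset += length
        return dict(streams)

    # Two responses interleaved over the same connection:
    wire = (encode_frame(1, b"<html>...") + encode_frame(3, b"body{color:#333}") +
            encode_frame(1, b"</html>", fin=True) + encode_frame(3, b"/*css*/", fin=True))
    print(decode_frames(wire))  # {1: b'<html>...</html>', 3: b'body{color:#333}/*css*/'}

Because the receiver keys frames by stream id, a slow response on one stream no longer blocks the others, which is the head-of-line blocking problem the whitepaper attributes to HTTP's single-request-per-connection model.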

So far, Google has built an in-memory server that handles both the HTTP and SPDY protocols, with the code to be open sourced soon. There is also a modified Chrome version, internally (and, they say, temporarily) called "flip" (source code available), that works with both HTTP and SPDY. They have also created a suite of testing tools to verify that pages are downloaded correctly over SPDY; these tools are also to be released soon.

Based on the prototype, the results are looking promising so far:

We downloaded 25 of the "top 100" websites over simulated home network connections, with 1% packet loss. We ran the downloads 10 times for each site, and calculated the average page load time for each site, and across all sites. The results show a speedup over HTTP of 27% - 60% in page load time over plain TCP (without SSL), and 39% - 55% over SSL.

It is obvious that Google cannot succeed alone in this endeavor, even if they create the best protocol possible. The community needs to embrace the effort and join in creating a new application protocol. It will be interesting to see how other major players react.

Resources: SPDY draft protocol specification, SPDY enabled Chrome.
