BigPipe at Facebook: Optimizing Page Load Time
Changhao Jiang, Research Scientist at Facebook, describes a technique called BigPipe that contributed to making the Facebook site "twice as fast." BigPipe is one of several innovations, a "secret weapon," used to achieve the reported performance gains: "[BigPipe] reduces user perceived latency by half in most browsers." The exception was Firefox 3.6, where latency was reduced by approximately 50 ms, about a 22% reduction.
The motivation for BigPipe and the associated innovations:
Modern websites have become dramatically more dynamic and interactive than 10 years ago, and the traditional page serving model has not kept up with the speed requirements of today's Internet.
The increasing load-time latency of sophisticated Web pages is not a new issue, nor is the use of some form of pipelining to effect performance gains. Aaron Hopkins discusses "Optimizing Page Load Time" on Die.net, covering numerous factors, beyond the traditional page request life cycle, that can affect page load latency. One interesting point in Aaron's post:
IE, Firefox, and Safari ship with HTTP pipelining disabled by default; Opera is the only browser I know of that enables it. No pipelining means each request has to be answered and its connection freed up before the next request can be sent. This incurs average extra latency of the round-trip (ping) time to the user divided by the number of connections allowed. Or if your server has HTTP keepalives disabled, doing another TCP three-way handshake adds another round trip, doubling this latency.
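The latency model in the quote is simple enough to sketch numerically. The function and figures below are illustrative assumptions for the sake of the arithmetic (a 100 ms ping and two parallel connections, the old HTTP/1.1 default), not measurements from the post:

```python
def extra_latency_ms(rtt_ms, connections, keepalive=True):
    """Average extra latency per request without HTTP pipelining:
    one round trip, amortized over the browser's parallel connections.
    With keepalives disabled, each request also pays a fresh TCP
    three-way handshake, doubling the cost."""
    latency = rtt_ms / connections
    if not keepalive:
        latency *= 2
    return latency

# Hypothetical inputs: a 100 ms ping, 2 connections per host.
print(extra_latency_ms(100, 2))                   # 50.0 ms per request
print(extra_latency_ms(100, 2, keepalive=False))  # 100.0 ms per request
```

With keepalives on, each request on a given connection still waits out one full round trip before the next can be sent, which is exactly the serialization that pipelining (or BigPipe's server-side approach) tries to hide.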
Jiang did not indicate that BigPipe takes advantage of a browser's innate pipelining functionality, and implied that it does not when saying that no changes to existing servers or browsers were required. It would be interesting to know whether the BigPipe innovations will continue to be useful when browsers do change, e.g., with the widespread implementation of HTML5.
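Although the mechanics are not spelled out here, BigPipe is publicly described as splitting a page into "pagelets" that are flushed progressively over a single ordinary chunked HTTP response, which is why no browser or server changes are needed. The sketch below is a hypothetical illustration of that idea only; the function name, markup, and the client-side renderPagelet hook are invented:

```python
import json

def render_page(pagelets):
    """Yield response chunks: an HTML skeleton first, then one chunk per
    pagelet. Each chunk would be flushed to the browser as soon as it is
    ready, over a single chunked HTTP response."""
    yield "<html><body>"
    # Empty placeholder divs let the browser paint the layout right away.
    for name in pagelets:
        yield f'<div id="{name}"></div>'
    # As each pagelet's content is generated, flush a small script that
    # fills in the matching placeholder client-side.
    for name, html in pagelets.items():
        payload = json.dumps({"id": name, "content": html})
        yield f"<script>renderPagelet({payload});</script>"
    yield "</body></html>"

chunks = list(render_page({"navbar": "<nav>...</nav>",
                           "feed": "<ul>...</ul>"}))
print(chunks[0])  # the skeleton is on the wire before any pagelet is built
```

The key point for the pipelining question: all of this rides on one conventional HTTP response, so nothing about the browser's request pipeline has to change.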
... compares data flow in HTML5 Web Sockets on the one side versus XML HTTP Request on the other. When I ran it, the results were astounding: 565 milliseconds against 31444 milliseconds–wow! The Web Sockets experience is 55 times faster, in part, because there is so much less unnecessary header traffic going over the wire.
The demo appears to use HTTP pipelining, something that is generally considered to be "dangerous,"
but it is not HTTP pipelining. The network traffic is made up of WebSocket frames and not HTTP requests and responses. It is explicitly controlled by the application author and is not subject to the problems of HTTP/1.1 pipelining. Because WebSockets can send and receive at any time, are directly controlled by the programmer, and are not subject to proxy interference, the pipelining ability is safe and should not be disabled.
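The "unnecessary header traffic" point can be made concrete with a back-of-the-envelope comparison. The HTTP request below is made up but typical of a small polling request; the WebSocket figure is the minimal client-to-server frame header defined by the protocol (2 bytes plus a 4-byte masking key):

```python
# A hypothetical, typical polling request; real header sets vary.
http_request = (
    "GET /poll HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0\r\n"
    "Accept: */*\r\n"
    "Cookie: session=abc123\r\n"
    "\r\n"
)
http_overhead = len(http_request.encode("ascii"))  # bytes repeated per poll
ws_overhead = 2 + 4  # minimal client WebSocket frame: 2-byte header + mask

print(http_overhead)                            # 103
print(round(http_overhead / ws_overhead, 1))    # 17.2
```

So even this small request carries roughly 17 times the per-message framing of a minimal WebSocket frame, before counting the response headers, which helps explain the gap the demo measures.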
Komatsu's demo ties together Facebook's innovations, the questions surrounding HTTP pipelining, and the coming capabilities of HTML5, especially WebSockets, and how they may eventually interact to increase Web site performance and minimize user-perceived latency.