SPDY: An experimental protocol for a faster web

Executive summary

As part of the "Let's make the web faster" initiative, we are experimenting with alternative protocols to help reduce the latency of web pages. One of these experiments is SPDY (pronounced "SPeeDY"), an application-layer protocol for transporting content over the web, designed specifically for minimal latency. In addition to a specification of the protocol, we have developed a SPDY-enabled Google Chrome browser and open-source web server. In lab tests, we have compared the performance of these applications over HTTP and SPDY, and have observed up to 64% reductions in page load times in SPDY. We hope to engage the open source community to contribute ideas, feedback, code, and test results, to make SPDY the next-generation application protocol for a faster web.

Background: web protocols and web latency

Today, HTTP and TCP are the protocols of the web. TCP is the generic, reliable transport protocol, providing guaranteed delivery, duplicate suppression, in-order delivery, flow control, congestion avoidance and other transport features. HTTP is the application-level protocol providing basic request/response semantics. While we believe that there may be opportunities to improve latency at the transport layer, our initial investigations have focused on the application layer, HTTP.

Unfortunately, HTTP was not particularly designed for latency. Furthermore, the web pages transmitted today are significantly different from web pages 10 years ago and demand improvements to HTTP that could not have been anticipated when HTTP was developed. The following are some of the features of HTTP that inhibit optimal performance:

  • Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests. Browsers work around this problem by using multiple connections. Since 2008, most browsers have finally moved from 2 connections per domain to 6.
  • Exclusively client-initiated requests. In HTTP, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client and must instead wait to receive a request for the resource from the client.
  • Uncompressed request and response headers. Request headers today vary in size from ~200 bytes to over 2 KB. As applications use more cookies and user agents expand features, typical header sizes of 700-800 bytes are common. For modems or ADSL connections, in which the uplink bandwidth is fairly low, this latency can be significant. Reducing the data in headers could directly improve the serialization latency to send requests (a rough arithmetic sketch follows this list).
  • Redundant headers. In addition, several headers are repeatedly sent across requests on the same channel. However, headers such as the User-Agent, Host, and Accept* are generally static and do not need to be resent.
  • Optional data compression. HTTP uses optional compression encodings for data. Content should always be sent in a compressed format.
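
To make the header-overhead point concrete, here is a rough, back-of-the-envelope sketch in Python. The 800-byte header size and 375 kbps uplink come from the list above; the per-page request count and the compression ratio are illustrative assumptions only.

```python
# Back-of-the-envelope estimate of uplink time spent on request headers.
# 800-byte headers and the 375 kbps uplink come from the discussion above;
# the request count and compression ratio are illustrative assumptions.

UPLINK_BPS = 375_000        # 375 kbps ADSL-style uplink
HEADER_BYTES = 800          # typical uncompressed request header size
REQUESTS_PER_PAGE = 40      # hypothetical number of requests for one page

def serialization_ms(header_bytes: int, requests: int, uplink_bps: int) -> float:
    """Milliseconds spent just pushing header bytes up the link."""
    return header_bytes * 8 * requests / uplink_bps * 1000

uncompressed = serialization_ms(HEADER_BYTES, REQUESTS_PER_PAGE, UPLINK_BPS)
compressed = serialization_ms(int(HEADER_BYTES * 0.12), REQUESTS_PER_PAGE, UPLINK_BPS)
print(f"uncompressed headers: ~{uncompressed:.0f} ms of uplink time")  # ~683 ms
print(f"compressed headers:   ~{compressed:.0f} ms of uplink time")   # ~82 ms
```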

Previous approaches

SPDY is not the only effort to make HTTP faster. There have been other proposed solutions to web latency, mostly at the level of the transport or session layer:

  • Stream Control Transmission Protocol (SCTP) -- a transport-layer protocol to replace TCP, which provides multiplexed streams and stream-aware congestion control.
  • HTTP over SCTP -- a proposal for running HTTP over SCTP. Comparison of HTTP Over SCTP and TCP in High Delay Networks describes a research study comparing the performance over both transport protocols.
  • Structured Stream Transport (SST) -- a protocol which invents "structured streams": lightweight, independent streams to be carried over a common transport. It replaces TCP or runs on top of UDP.
  • MUX and SMUX -- intermediate-layer protocols (in between the transport and application layers) that provide multiplexing of streams. They were proposed years ago at the same time as HTTP/1.1.

These proposals offer solutions to some of the web's latency problems, but not all. The problems inherent in HTTP (compression, prioritization, etc.) should still be fixed, regardless of the underlying transport protocol. In any case, in practical terms, changing the transport is very difficult to deploy. Instead, we believe that there is much low-hanging fruit to be had by addressing the shortcomings at the application layer. Such an approach requires minimal changes to existing infrastructure, and (we think) can yield significant performance gains.

Goals for SPDY

The SPDY project defines and implements an application-layer protocol for the web which greatly reduces latency. The high-level goals for SPDY are:

  • To target a 50% reduction in page load time. Our preliminary results have come close to this target (see below).
  • To minimize deployment complexity. SPDY uses TCP as the underlying transport layer, so requires no changes to existing networking infrastructure.
  • To avoid the need for any changes to content by website authors. The only changes required to support SPDY are in the client user agent and web server applications.
  • To bring together like-minded parties interested in exploring protocols as a way of solving the latency problem. We hope to develop this new protocol in partnership with the open-source community and industry specialists.

Some specific technical goals are:

  • To allow many concurrent HTTP requests to run across a single TCP session.
  • To reduce the bandwidth currently used by HTTP by compressing headers and eliminating unnecessary headers.
  • To define a protocol that is easy to implement and server-efficient. We hope to reduce the complexity of HTTP by cutting down on edge cases and defining easily parsed message formats.
  • To make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.
  • To enable the server to initiate communications with the client and push data to the client whenever possible.

SPDY design and features

SPDY adds a session layer atop SSL that allows for multiple concurrent, interleaved streams over a single TCP connection.

The usual HTTP GET and POST message formats remain the same; however, SPDY specifies a new framing format for encoding and transmitting the data over the wire.

Streams are bi-directional, i.e. they can be initiated by either the client or the server.
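
For illustration only, the sketch below shows how interleaved streams over one connection might be framed. The field layout is hypothetical and deliberately simplified; the actual SPDY framing is defined in the draft specification referenced below.

```python
import struct

def encode_frame(stream_id: int, payload: bytes) -> bytes:
    """Hypothetical frame: 4-byte stream ID, 4-byte length, then payload.
    This is NOT the real SPDY wire format; it only illustrates interleaving."""
    return struct.pack("!II", stream_id, len(payload)) + payload

def decode_frames(data: bytes):
    """Yield (stream_id, payload) pairs from a concatenation of frames."""
    offset = 0
    while offset < len(data):
        stream_id, length = struct.unpack_from("!II", data, offset)
        offset += 8
        yield stream_id, data[offset:offset + length]
        offset += length

# Frames from different logical streams can be mixed freely on a single
# TCP connection; the receiver demultiplexes them by stream ID.
wire = (encode_frame(1, b"GET /index.html ...") +
        encode_frame(3, b"GET /style.css ...") +
        encode_frame(1, b"...more of stream 1..."))

for stream_id, payload in decode_frames(wire):
    print(stream_id, payload)
```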

SPDY aims to achieve lower latency through basic (always enabled) and advanced (optionally enabled) features.

Basic features

  • Multiplexed streams

SPDY allows for unlimited concurrent streams over a single TCP connection. Because requests are interleaved on a single channel, the efficiency of TCP is much higher: fewer network connections need to be made, and fewer, but more densely packed, packets are issued.

  • Request prioritization

Although unlimited parallel streams solve the serialization problem, they introduce another one: if bandwidth on the channel is constrained, the client may block requests for fear of clogging the channel. To overcome this problem, SPDY implements request priorities: the client can request as many items as it wants from the server, and assign a priority to each request. This prevents the network channel from being congested with non-critical resources when a high priority request is pending.
  • HTTP header compression

SPDY compresses request and response HTTP headers, resulting in fewer packets and fewer bytes transmitted.
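
A minimal sketch of why this helps, using zlib purely for illustration: the header text below is made up, and the exact compression scheme SPDY uses is not detailed in this overview.

```python
import zlib

# Representative (made-up) request headers for one subresource fetch.
headers = (
    "GET /images/logo.png HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 ...\r\n"
    "Accept: image/webp,image/*,*/*;q=0.8\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Accept-Language: en-US,en;q=0.9\r\n"
    "Cookie: session=abc123; prefs=compact; region=us-west\r\n"
    "\r\n"
).encode()

compressed = zlib.compress(headers, 9)
print(f"{len(headers)} bytes raw -> {len(compressed)} bytes compressed")
# Across many requests on one connection, a shared compression context makes
# the repeated, mostly static headers (Host, User-Agent, Accept*, Cookie)
# nearly free to resend, which is where the large reductions reported in the
# results section come from.
```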

Advanced features

In addition, SPDY provides an advanced feature, server-initiated streams. Server-initiated streams can be used to deliver content to the client without the client needing to ask for it. This option is configurable by the web developer in two ways:

  • Server push.

SPDY experiments with an option for servers to push data to clients via the X-Associated-Content header. This header informs the client that the server is pushing a resource to the client before the client has asked for it. For initial-page downloads (e.g. the first time a user visits a site), this can vastly enhance the user experience. (An illustrative sketch of both options follows below.)

  • Server hint.

Rather than automatically pushing resources to the client, the server uses the X-Subresources header to suggest to the client that it should ask for specific resources, in cases where the server knows in advance of the client that those resources will be needed. However, the server will still wait for the client request before sending the content. Over slow links, this option can reduce the time it takes for a client to discover it needs a resource by hundreds of milliseconds, and may be better for non-initial page loads.
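
A hypothetical sketch of both options: only the header names X-Associated-Content and X-Subresources come from the text above; the value syntax, URLs, and the fetch_early callback are illustrative assumptions, not documented formats.

```python
# Server push (illustrative): the server announces resources it will push on
# server-initiated streams, without waiting for the client to request them.
push_response_headers = {
    "Status": "200 OK",
    "X-Associated-Content": "https://example.com/style.css, https://example.com/logo.png",
}
for url in (u.strip() for u in push_response_headers["X-Associated-Content"].split(",")):
    print("server intends to push:", url)

# Server hint (illustrative): the client sees hinted URLs and requests them
# earlier than it otherwise would, but the server still waits for those
# requests before sending any content.
def handle_hints(response_headers: dict, fetch_early) -> None:
    hinted = response_headers.get("X-Subresources", "")
    for url in (u.strip() for u in hinted.split(",") if u.strip()):
        fetch_early(url)

handle_hints(
    {"X-Subresources": "/static/app.js, /static/app.css"},
    fetch_early=lambda url: print("prefetching", url),
)
```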

For technical details, see the SPDY draft protocol specification.

SPDY implementation: what we've built

This is what we have built:

  • A high-speed, in-memory server which can serve both HTTP and SPDY responses efficiently, over TCP and SSL. We will be releasing this code as open source in the near future.
  • A modified Google Chrome client which can use HTTP or SPDY, over TCP and SSL. The source code is at <http://src.chromium.org/viewvc/chrome/trunk/src/net/spdy/>. (Note that the code currently uses the internal code name of "flip"; this will change in the near future.)
  • A testing and benchmarking infrastructure that verifies pages are replicated with high fidelity. In particular, we ensure that SPDY preserves origin server headers, content encodings, URLs, etc. We will be releasing our testing tools, and instructions for reproducing our results, in the near future.

Preliminary results

With the prototype Google Chrome client and web server that we developed, we ran a number of lab tests to benchmark SPDY performance against that of HTTP. We downloaded 25 of the "top 100" websites over simulated home network connections, with 1% packet loss. We ran the downloads 10 times for each site, and calculated the average page load time for each site, and across all sites. The results show a speedup over HTTP of 27% - 60% in page load time over plain TCP (without SSL), and 39% - 55% over SSL.

Table 1: Average page load times for top 25 websites

DSL: 2 Mbps downlink, 375 kbps uplink. Cable: 4 Mbps downlink, 1 Mbps uplink.

|                                            | DSL average ms | DSL speedup | Cable average ms | Cable speedup |
|--------------------------------------------|---------------:|------------:|-----------------:|--------------:|
| HTTP                                       | 3111.916       |             | 2348.188         |               |
| SPDY basic multi-domain* connection / TCP  | 2242.756       | 27.93%      | 1325.46          | 43.55%        |
| SPDY basic single-domain* connection / TCP | 1695.72        | 45.51%      | 933.836          | 60.23%        |
| SPDY single-domain + server push / TCP     | 1671.28        | 46.29%      | 950.764          | 59.51%        |
| SPDY single-domain + server hint / TCP     | 1608.928       | 48.30%      | 856.356          | 63.53%        |
| SPDY basic single-domain / SSL             | 1899.744       | 38.95%      | 1099.444         | 53.18%        |
| SPDY single-domain + client prefetch / SSL | 1781.864       | 42.74%      | 1047.308         | 55.40%        |

* In many cases, SPDY can stream all requests over a single connection, regardless of the number of different domains from which requested resources originate. This allows for full parallelization of all downloads. However, in some cases, it is not possible to collapse all domains into a single domain. In this case, SPDY must still open a connection for each domain, incurring some initial RTT overhead for each new connection setup. We ran the tests in both modes: collapsing all domains into a single domain (i.e. one TCP connection); and respecting the actual partitioning of the resources according to the original multiple domains (= one TCP connection per domain). We include the results for both the strict "single-domain" and "multi-domain" tests; we expect real-world results to lie somewhere in the middle.
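
The speedup columns in Table 1 are consistent with computing (HTTP time − SPDY time) / HTTP time; a quick check against the first SPDY row:

```python
def speedup(http_ms: float, spdy_ms: float) -> float:
    """Relative page-load-time reduction versus plain HTTP, in percent."""
    return (http_ms - spdy_ms) / http_ms * 100

# SPDY basic multi-domain / TCP row of Table 1, DSL and cable columns:
print(f"{speedup(3111.916, 2242.756):.2f}%")  # 27.93%
print(f"{speedup(2348.188, 1325.46):.2f}%")   # 43.55%
```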

The role of header compression

Header compression resulted in an ~88% reduction in the size of request headers and an ~85% reduction in the size of response headers. On the lower-bandwidth DSL link, in which the upload link is only 375 kbps, request header compression in particular led to significant page load time improvements for certain sites (i.e. those that issued a large number of resource requests). We found a reduction of 45 - 1142 ms in page load time simply due to header compression.

The role of packet loss and round-trip time (RTT)

We did a second test run to determine if packet loss rates and round-trip times (RTTs) had an effect on the results. For these tests, we measured only the cable link, but simulated variances in packet loss and RTT.

We discovered that SPDY's latency savings increased proportionally with increases in packet loss rates, up to a 48% speedup at 2%. (The increase tapered off above the 2% loss rate, and completely disappeared above 2.5%. In the real world, packet loss rates are typically 1-2%, and RTTs average 50-100 ms in the U.S.) The reasons that SPDY does better as packet loss rates increase are several:

  • SPDY sends ~40% fewer packets than HTTP, which means fewer packets affected by loss.
  • SPDY uses fewer TCP connections, which means fewer chances to lose the SYN packet. In many TCP implementations, this delay is disproportionately expensive (up to 3 s).
  • SPDY's more efficient use of TCP usually triggers TCP's fast retransmit instead of using retransmit timers.

We discovered that SPDY's latency savings also increased proportionally with increases in RTTs, up to a 27% speedup at 200 ms. The reason that SPDY does better as RTT goes up is that SPDY fetches all requests in parallel. If an HTTP client has 4 connections per domain, and 20 resources to fetch, it would take roughly 5 round trips to fetch all 20 items, whereas SPDY fetches all 20 resources in one round trip.

Table 2: Average page load times for top 25 websites by packet loss rate

| Packet loss rate | HTTP average ms | SPDY basic (TCP) average ms | Speedup |
|-----------------:|----------------:|----------------------------:|--------:|
| 0%               | 1152            | 1016                         | 11.81%  |
| 0.5%             | 1638            | 1105                         | 32.54%  |
| 1%               | 2060            | 1200                         | 41.75%  |
| 1.5%             | 2372            | 1394                         | 41.23%  |
| 2%               | 2904            | 1537                         | 47.7%   |
| 2.5%             | 3028            | 1707                         | 43.63%  |

Table 3: Average page load times for top 25 websites by RTT

| RTT in ms | HTTP average ms | SPDY basic (TCP) average ms | Speedup |
|----------:|----------------:|----------------------------:|--------:|
| 20        | 1240            | 1087                         | 12.34%  |
| 40        | 1571            | 1279                         | 18.59%  |
| 60        | 1909            | 1526                         | 20.06%  |
| 80        | 2268            | 1727                         | 23.85%  |
| 120       | 2927            | 2240                         | 23.47%  |
| 160       | 3650            | 2772                         | 24.05%  |
| 200       | 4498            | 3293                         | 26.79%  |

SPDY next steps: how you can help

Our initial results are promising, but we don't know how well they represent the real world. In addition, there are still areas in which SPDY could improve. In particular:

  • Bandwidth efficiency is still low. Although the dialup bandwidth efficiency rate is close to 90%, for high-speed connections efficiency is only ~32%.
  • SSL poses other latency and deployment challenges. Among these are: the additional RTTs for the SSL handshake; encryption; difficulty of caching for some proxies. We need to do more SSL tuning.
  • Our packet loss results are not conclusive. Although much research on packet loss has been done, we don't have enough data to build a realistic model for packet loss on the Web. We need to gather this data to be able to provide more accurate packet loss simulations.
  • SPDY single-connection loss recovery sometimes underperforms multiple connections. That is, opening multiple connections is still faster than losing a single connection when the RTT is very high. We need to figure out when it is appropriate for the SPDY client to make a new connection or close an old connection, and what effect this may have on servers.
  • The server can implement more intelligence than we have built in so far. We need more research in the areas of server-initiated streams, obtaining client network information for prefetching suggestions, and so on.

To help with these challenges, we encourage you to get involved:

  • Send feedback, comments, suggestions, and ideas to the chromium-discuss discussion group.
  • Download, build, run, and test the Google Chrome client code.
  • Contribute improvements to the code base.

SPDY frequently asked questions

Q: Doesn't HTTP pipelining already solve the latency problem?

A: No. While pipelining does allow for multiple requests to be sent in parallel over a single TCP stream, it is still a single stream. Any delay in the processing of anything in the stream (either a long request at the head of the line or packet loss) will delay the entire stream. Pipelining has proven difficult to deploy, and because of this it remains disabled by default in all of the major browsers.
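
A toy model of the difference (the service times below are made up and shared bandwidth is ignored; this only illustrates the in-order delivery constraint):

```python
# With pipelining, responses must be returned in request order, so a slow
# response at the head of the line delays every response queued behind it.
# With multiplexed streams, each response completes on its own schedule.
service_ms = [500, 20, 20, 20]  # made-up per-response processing times

pipelined_finish, elapsed = [], 0
for t in service_ms:
    elapsed += t                 # each response waits for all earlier ones
    pipelined_finish.append(elapsed)

multiplexed_finish = list(service_ms)  # independent streams (idealized)

print("pipelined:  ", pipelined_finish)    # [500, 520, 540, 560]
print("multiplexed:", multiplexed_finish)  # [500, 20, 20, 20]
```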

Q: Is SPDY a replacement for HTTP?

A: No. SPDY replaces some parts of HTTP, but mostly augments it. At the highest level of the application layer, the request-response protocol remains the same. SPDY still uses HTTP methods, headers, and other semantics. But SPDY overrides other parts of the protocol, such as connection management and data transfer formats.

Q: Why did you choose this name?

A: We wanted a name that captures speed. SPDY, pronounced "SPeeDY", captures this and also shows how compression can help improve speed.

Q: Should SPDY change the transport layer?

A: More research should be done to determine if an alternate transport could reduce latency. However, replacing the transport is a complicated endeavor, and if we can overcome the inefficiencies of TCP and HTTP at the application layer, it is simpler to deploy.

Q: TCP has been time-tested to avoid congestion and network collapse. Will SPDY break the Internet?

A: No. SPDY runs atop TCP, and benefits from all of TCP's congestion control algorithms. Further, HTTP has already changed the way congestion control works on the Internet. For example, HTTP clients today open up to 6 concurrent connections to a single server; at the same time, some HTTP servers have increased the initial congestion window to 4 packets. Because TCP independently throttles each connection, servers are effectively sending up to 24 packets in an initial burst. The multiple connections side-step TCP's slow-start. SPDY, by contrast, implements multiple streams over a single connection.
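
A quick arithmetic restatement of those burst sizes (the connection count and the 4-packet initial congestion window come from the answer above):

```python
# Initial burst of packets toward one server with many HTTP connections.
connections_per_server = 6      # concurrent HTTP connections per server
initial_cwnd_packets = 4        # initial congestion window per connection
print(connections_per_server * initial_cwnd_packets, "packets")  # 24

# With SPDY's single connection, the initial burst stays at one window.
print(1 * initial_cwnd_packets, "packets")  # 4
```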

Q: What about SCTP?

A: SCTP is an interesting potential alternate transport, which offers multiple streams over a single connection. However, again, it requires changing the transport stack, which will make it very difficult to deploy across existing home routers. Also, SCTP alone isn't the silver bullet; application-layer changes still need to be made to efficiently use the channel between the server and client.

Q: What about BEEP?

A: While BEEP is an interesting protocol which offers a similar grab-bag of features, it doesn't focus on reducing page load time, and it is missing a few features that make that possible. Additionally, it uses text-based framing for parts of the protocol instead of binary framing. This is wonderful for a protocol which strives to be as extensible as possible, but it introduces some interesting security problems, as text is more difficult to parse correctly.
