TCP: Without Local Content, UFB won’t be Ultra Fast

New Zealanders lucky enough to live near a new Telecom ADSL2+ cabinet may see download speeds as fast as 20mbps. Those special few on VDSL trials or fibre today could have speeds as fast as 100mbps! Even with these fast connection speeds, users find performance to overseas websites is often very slow.

A lot of the blame for slow international speeds has been directed at New Zealand’s sole direct trans-Pacific link, the Southern Cross Cable. New cables to the US and Australia, planned by Pacific Fibre and Kordia, would add capacity and lower transit costs. But these new cables won’t necessarily help as much as expected.

The initial design of Transmission Control Protocol (the TCP in TCP/IP), a core protocol of the Internet, means that for standard Internet traffic, distance dramatically reduces potential top speeds. “TCP performance depends not upon the transfer rate itself, but rather upon the product of the transfer rate and the round-trip delay.” (RFC 1323) This is because servers using TCP break files into chunks and send a limited amount of data to a subscriber at a time. The server then waits for an acknowledgement of received data from the subscriber before sending more – to ensure the subscriber is not overwhelmed by the amount of data sent. This limit is called the “TCP Receive Window”. Even with data moving at the speed of light, this behavior means that subscribers requesting data from far-away servers will have worse performance than subscribers nearer to those servers.
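The ceiling this imposes is easy to calculate: a single TCP stream can never move more than one receive window of data per round trip, no matter how fast the link underneath is. A minimal sketch (the 200ms round-trip figure is an illustrative assumption for an NZ–US path, not a measured value):

```python
# Maximum single-stream TCP throughput is capped at one receive
# window per round trip, regardless of the underlying link speed.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound for a single TCP stream, in megabits per second."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A classic 64KB window over an assumed 200ms round trip (roughly
# New Zealand to the US west coast) tops out around 2.6mbps:
print(round(max_tcp_throughput_mbps(65535, 200), 1))

# The same window over a 10ms local round trip does far better:
print(round(max_tcp_throughput_mbps(65535, 10), 1))
```

Note that nothing in the calculation involves the link’s rated speed – halving the round-trip time doubles the ceiling, which is exactly why distance matters so much.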

The largest TCP windows originally allowed were 64KB, and for decades systems were designed with this limit in mind. A solution to the latency problem first emerged almost twenty years ago, with the release of TCP Extensions for High Performance in IETF RFC 1323, but implementation was slow. Linux didn’t support these extensions by default until mid-2004. Windows XP, run by more than half of all web users as of May 2011, doesn’t enable them by default.

Many, but not all, Internet applications use TCP to transfer data, and thus suffer when confronted with high latency. Web pages (HTTP – the hypertext transfer protocol), email, file transfer (FTP – the file transfer protocol), Bittorrent, and most streaming media services like YouTube use TCP. Standard web pages and Bittorrent programs can get around the problem to an extent by using multiple concurrent TCP streams – each running in parallel with its own window limit. Single-stream applications, however, like YouTube and Skype, are hit hard – and although more robust protocols like RTP exist for streaming media, use of these protocols is rare. Academic studies have shown that up to 80% of all multimedia streaming across the Internet is HTTP (TCP) based.
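The parallel-stream workaround is just multiplication: each connection gets its own receive window, so N connections can reach roughly N times the single-stream ceiling, until the access link itself becomes the bottleneck. A rough sketch, using an assumed 200ms trans-Pacific round trip and a browser-style six parallel connections:

```python
# Each parallel TCP stream has its own receive window, so aggregate
# throughput scales with the stream count until the access link caps it.

def aggregate_mbps(streams: int, window_bytes: int,
                   rtt_ms: float, link_mbps: float) -> float:
    per_stream = (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000
    return min(streams * per_stream, link_mbps)

# One stream from California (assumed 200ms RTT, 64KB window) on a
# 100mbps UFB line, versus six parallel HTTP connections:
print(round(aggregate_mbps(1, 65535, 200, 100), 1))
print(round(aggregate_mbps(6, 65535, 200, 100), 1))
```

This is why a busy web page can feel tolerable over a long link while a single-stream video from the same distance stalls.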

Just how bad is the problem for Windows XP? The graph below demonstrates. I’ve taken latencies supplied by a local carrier and NANOG, and a formula demonstrated by Brad Hedlund for calculating TCP throughput, to create the graph. On a typical ADSL connection with interleaving turned off, a user in Wellington can download at 17.5mbps from a local server, 14mbps from an Auckland-based server, and 2.5mbps from a server in California. By upgrading to 100mbps UFB, the same user could download at 52mbps from a local server, 29mbps from an Auckland server, but only 2.8mbps from a server in California! That’s barely faster than ADSL.
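You can reproduce the shape of these numbers with the same ceiling of one window per round trip. The round-trip times below are illustrative assumptions on my part, not the carrier figures behind the graph (I’ve assumed ADSL with interleaving off adds roughly 20ms over fibre on the same path):

```python
# Reproducing the ADSL-vs-UFB comparison with throughput = window / RTT,
# capped by the line speed. RTT figures here are assumptions, not the
# measured latencies used for the graph.

WINDOW = 65535  # bytes: the fixed Windows XP receive window

def ceiling_mbps(rtt_ms: float, link_mbps: float) -> float:
    return min((WINDOW * 8) / (rtt_ms / 1000) / 1_000_000, link_mbps)

# Assumed fibre RTTs from Wellington; ADSL adds ~20ms on top.
for path, fibre_rtt in [("Wellington", 10), ("Auckland", 18),
                        ("California", 190)]:
    adsl = ceiling_mbps(fibre_rtt + 20, 20)   # ~20mbps ADSL2+ line
    ufb = ceiling_mbps(fibre_rtt, 100)        # 100mbps UFB line
    print(f"{path}: ADSL {adsl:.1f}mbps, UFB {ufb:.1f}mbps")
```

The striking result falls straight out of the arithmetic: upgrading the access line multiplies local speeds several times over, but barely moves the California figure, because the window-per-round-trip ceiling is already far below even ADSL speeds at that distance.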

It’s not just Windows XP users that can experience this issue. Modern computers behind routers and firewalls that perform NAT, stateful inspection or deep packet inspection sometimes suffer as their firewalls hard-limit traffic to a fixed 64KB TCP window, rewriting larger window sizes as packets pass through.

Users of modern operating systems are also harmed by TCP performance over long-distance links, due to issues that RFC 1323 does not mitigate. Although newer Microsoft and Apple products support larger (in fact dynamically adjusted) TCP window sizes, other robust features of TCP can cause severe performance degradation over long distances. “TCP Slow Start”, a congestion control and avoidance mechanism, requires that TCP windows start very small (typically less than 4KB) and double with each successful round-trip transmission. Any packet lost along the way – a frequent occurrence when traversing heavily contended international links – causes congestion control algorithms to halve the TCP window size and start the ramp-up process again.
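The cost of slow start is paid in round trips, so the same ramp-up that is invisible locally becomes painful across the Pacific. A toy model (a simplification of real congestion control, with assumed round-trip times):

```python
# Toy slow-start model: the window doubles once per round trip until
# it reaches the target size, so ramp-up time scales with the RTT.

def seconds_to_reach(target_kb: int, rtt_ms: float,
                     start_kb: int = 4) -> float:
    """Seconds of doubling needed to grow the window to target_kb."""
    window, rounds = start_kb, 0
    while window < target_kb:
        window *= 2
        rounds += 1
    return rounds * rtt_ms / 1000

# Growing a 4KB window to 1MB takes the same number of round trips
# everywhere, but each round trip costs far more at distance:
print(seconds_to_reach(1024, 10))    # assumed local/Auckland RTT
print(seconds_to_reach(1024, 200))   # assumed trans-Pacific RTT
```

And every loss event sends the connection partway back down this curve, so a lossy 200ms path spends much of its life ramping rather than transferring.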

Opening up new undersea fibre capacity to the rest of the world will help lower the cost of commodity Internet traffic, but given the design of TCP and the exceptional distances traffic to New Zealand has to transit, it is not a magic bullet for increasing home broadband speeds – certainly not to ultra-fast levels. The real answer to enabling UFB connections that take advantage of local fibre speeds lies in local peering, aggressive caching, and locating Content Delivery Network (CDN) nodes as close to end users as possible. Without these solutions, UFB might not deliver Ultra Fast Broadband.