TCP: Without Local Content, UFB won’t be Ultra Fast

New Zealanders lucky enough to live near a new Telecom ADSL2+ cabinet may have download speeds as fast as 20mbps. Those special few on VDSL trials or fibre today could have speeds as fast as 100mbps! Even with these fast connection speeds, users find performance to overseas websites is often very slow.

A lot of the blame for slow international speeds has been directed at New Zealand’s sole direct trans-Pacific link, the Southern Cross Cable. Pacific Fibre and Kordia plan to enter the market with new cables to the US and Australia, adding capacity and lowering transit costs. But these new cables won’t necessarily help as much as expected.

The initial design of the Transmission Control Protocol (the TCP in TCP/IP), a core protocol of the Internet, means that for standard Internet traffic, distance dramatically reduces potential top speeds. “TCP performance depends not upon the transfer rate itself, but rather upon the product of the transfer rate and the round-trip delay.” (RFC 1323) This is because servers using TCP break files into chunks and send a limited amount of data to a subscriber at a time. Servers then wait for an acknowledgement of received data from the subscriber before sending more, to ensure the subscriber is not overwhelmed by the amount of data sent. This limit is called the “TCP Receive Window”. Even with data moving at the speed of light, this behavior means that subscribers requesting data from servers far away will see worse performance than subscribers nearer to the servers.
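To see why, here is a minimal sketch of the relationship (my own simplified model, ignoring slow start, packet loss and protocol overheads): a single TCP stream can never move more than one window of data per round trip, so its throughput is bounded by the window size divided by the round-trip time.

```python
# Rough upper bound on a single TCP stream: one window per round trip.
# Simplified model only - ignores slow start, packet loss and overheads.
def max_single_stream_mbps(window_bytes, rtt_ms):
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

# A classic 64KB window over an assumed ~185ms round trip to the US west coast:
print(round(max_single_stream_mbps(65535, 185), 1))  # ~2.8 Mbps, regardless of line speed
```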

The largest TCP windows originally allowed were 64KB, and for decades systems were designed with this limit in mind. A solution to the latency problem first emerged almost twenty years ago, with the release of TCP Extensions for High Performance in IETF RFC 1323, but implementation was slow. Linux didn’t support these extensions by default until mid-2004. Windows XP, run by more than half of all web users as of May 2011, doesn’t enable them by default.
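On a modern Linux machine you can check whether the relevant pieces are switched on. A quick sketch (the sysctl names are standard Linux ones; window scaling is the RFC 1323 option itself, and receive-buffer auto-tuning is what lets the advertised window actually grow past 64KB):

```python
# Check the Linux sysctls relevant to large TCP windows:
#  - tcp_window_scaling: the RFC 1323 window scale option
#  - tcp_moderate_rcvbuf: receive-buffer auto-tuning, which lets the
#    advertised window grow beyond the old 64KB ceiling
def read_sysctl(name):
    with open("/proc/sys/net/ipv4/" + name) as f:
        return f.read().strip()

for opt in ("tcp_window_scaling", "tcp_moderate_rcvbuf"):
    print(opt, "=", read_sysctl(opt))  # 1 means enabled
```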

Many, but not all, Internet applications use TCP to transfer data, and thus suffer when confronted with high latency. Web pages (HTTP, the hypertext transfer protocol), email, file transfer (FTP, the file transfer protocol), BitTorrent, and most streaming media services like YouTube use TCP. Standard web pages and BitTorrent clients can get around the problem to an extent by using multiple concurrent TCP streams, each running in parallel with its own bandwidth limitation. Single-stream applications like YouTube and Skype, however, are hit hard, and although protocols better suited to streaming media, like RTP, do exist, their use is rare. Academic studies have shown that up to 80% of all multimedia streaming across the Internet is HTTP (TCP) based.
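A rough sketch of why parallel streams help (assumed numbers only): if each stream is capped at roughly one window per round trip, n streams together can approach n times that cap, provided the access line itself isn’t the bottleneck.

```python
# Illustrative only: a 64KB window over an assumed ~185ms trans-Pacific
# round trip gives each stream roughly 2.8 Mbps; parallel streams add up.
per_stream_mbps = (65535 * 8) / 0.185 / 1e6
for streams in (1, 2, 6):  # 6 is a common browser per-host connection limit
    print(f"{streams} stream(s): ~{streams * per_stream_mbps:.1f} Mbps")
```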

Just how bad is the problem for Windows XP? The graph below demonstrates. To create it, I’ve taken latencies supplied by a local carrier and NANOG and applied a formula demonstrated by Brad Hedlund for calculating TCP throughput. On a typical ADSL connection with interleaving turned off, a user in Wellington can download at 17.5mbps from a local server, 14mbps from an Auckland-based server, and 2.5mbps from a server in California. By upgrading to 100mbps UFB, the same user could download at 52mbps from a local server and 29mbps from an Auckland server, but only 2.8mbps from a server in California! That’s barely faster than ADSL.
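For anyone who wants to reproduce the shape of that graph, here is a minimal sketch of the calculation. It is a simplified version rather than Brad Hedlund’s exact formula, and the latencies are my own illustrative guesses chosen to land near the figures above, not the carrier/NANOG measurements behind the graph: with a fixed 64KB window, a single stream tops out at roughly the smaller of the line rate and the window divided by the round-trip time.

```python
# Minimal sketch: single-stream throughput with a fixed 64KB window is
# roughly min(line rate, window / RTT). The RTTs below are illustrative
# guesses for a Wellington user, not the measured figures used for the graph.
WINDOW_BITS = 65535 * 8  # classic 64KB receive window

def single_stream_mbps(line_rate_mbps, rtt_ms):
    window_limit_mbps = WINDOW_BITS / (rtt_ms / 1000.0) / 1e6
    return min(line_rate_mbps, window_limit_mbps)

for access, line_rate, last_mile_rtt in (("ADSL", 17.5, 22), ("100mbps UFB", 100.0, 2)):
    for server, path_rtt in (("Wellington", 8), ("Auckland", 16), ("California", 185)):
        rtt = last_mile_rtt + path_rtt  # assumed access latency + assumed path latency
        print(f"{access} -> {server}: {single_stream_mbps(line_rate, rtt):.1f} Mbps")
```

With those assumed numbers the sketch lands close to the figures above, and it makes the key point plain: on the California path the round trip, not the access line, sets the ceiling.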

It’s not just Windows XP users that can experience this issue. Modern computers behind routers and firewalls that perform NAT, Stateful Inspection or Deep Packet Inspection sometimes suffer, as those devices hard-limit traffic to a fixed 64KB TCP window, rewriting larger window sizes as packets pass through.

Users of modern operating systems are also harmed by TCP performance over long-distance links, due to issues that are not mitigated by the techniques in RFC 1323. Although newer Microsoft and Apple products support larger (in fact dynamically adjusted) TCP window sizes, other built-in features of TCP can cause severe performance degradation over long distances. “TCP Slow Start”, a congestion control and avoidance mechanism, requires that TCP windows start very small (typically less than 4KB) and double with each successful round-trip transmission. Any packet lost along the way (a frequent occurrence when traversing long, heavily contended international links) causes congestion control algorithms to halve the TCP window size and start the ramp-up process again.
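A toy model of that ramp-up (illustrative parameters only; real TCP stacks differ in the details of slow start, congestion avoidance and recovery) shows why loss hurts so much on a long path: every round trip of growth costs a full round-trip time, and a single loss throws much of that progress away.

```python
# Toy model of TCP window growth per round trip: the window doubles each
# round trip (slow start) until a loss, which halves it. Parameters are
# illustrative and do not match any particular TCP implementation.
def simulate(round_trips, losses, start_kb=4, cap_kb=256):
    window_kb = start_kb
    for rt in range(1, round_trips + 1):
        if rt in losses:
            window_kb = max(start_kb, window_kb / 2)  # multiplicative decrease on loss
        else:
            window_kb = min(cap_kb, window_kb * 2)    # exponential growth in slow start
        print(f"round trip {rt:2d}: window ~{window_kb:.0f}KB")

# On an ~185ms path, ten round trips already take close to two seconds.
simulate(round_trips=10, losses={7})
```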

Opening up new undersea fibre capacity to the rest of the world will help lower the cost of commodity Internet traffic, but given the design of TCP and the exceptional distances traffic to New Zealand has to transit, it is not a magic bullet for increasing home user broadband speeds – certainly not to ultra fast broadband speeds. The real answer to enabling UFB connections that take advantage of local fibre speeds lies in local peering, aggressive caching, and locating Content Delivery Networks (CDNs) as close to end users as possible. Without these solutions, UFB might not lead to Ultra Fast Broadband.

4 thoughts on “TCP: Without Local Content, UFB won’t be Ultra Fast”

  1. Hi Jon,

    Some very valid comments made here. One other possible solution to deal with the limits you describe is multithreaded applications. And let’s not forget that protocols like HTTP/1.1 are multithreaded. From memory Google Maps opens up around 300 TCP threads (or streams). Normal web applications today overcome the limits you describe by opening more than one TCP flow at the same time.

    Now while this is a somewhat acceptable approach, goodness me, TCP thread-hungry apps rapidly kill IPv6 workarounds such as carrier grade NAT.

    So I guess that’s another reason to start supporting IPv6. A faster UFB 😉

    • The real hang-up is that any media streamed over HTTP comes down as a single TCP stream. Netflix, YouTube, Flash Video, iTunes, and Hulu all fall into this category – and they make up half of the content traversing the Internet in 2011.

      Half the content that people want is coming down as a single TCP stream. Half the users out there are on Windows XP. This spells trouble for users in New Zealand – unless those single streams start originating a lot closer to the end users.

  2. I understand what you are saying but am not competent to add anything technical to your discussion. My measure of speed is, using Youtube as an example, simply whether or not a video playback stutters.

    When the streaming video downloads faster than I can watch it, all is fine. That is all that matters to me (when watching Youtube). When I have to wait for the buffering, then there is something inadequate in my connection.

    When Youtube stutters, I have been known to fire up another computer, connect on a different ISP and compare the download times of the same clip over the two networks. It is interesting to note that one service can stutter (often taking 3 times longer to view the clip than the actual play time) and the other not.

    I can see only one reason for this difference (I have tried to eliminate other variables): One of the two ISPs controls data throughput from this service.

    If I use one of the ISPs’ broadband speedtest services, over a number of runs, I get consistent speeds, speeds that do not reflect the stuttering Youtube playback.

    When I complain to the ISP that appears to be throttling data from Youtube, they of course deny any manipulation of the connection speed.

    The two ISPs are Telecom Xtra and Vodafone. For each, I use their mobile service.

    I believe that this is a bigger issue for our experience of ultra fast broadband than the TCP issues discussed above.

    Does anyone else experience apparent throttling from an ISP?

    • Hi John, thanks for your post.

      Sometimes it’s not the ISP throttling your connection. If you’re using the mobile service on both, you’ve got as much (if not more) contention in the last mile as you do on international circuits. You’ve also got the potential for immense latency compared to fixed lines. OFCOM in the UK have just this year found that the average last-mile latency for mobile broadband services in the UK is 192ms.

      For streaming media over TCP – like YouTube – this is the equivalent of moving an NZ local server to LA. A Windows XP user with average mobile broadband latency shouldn’t expect a single TCP stream of greater than 2.7mbps.
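      (For the arithmetic behind that figure: a 64KB window is 65,535 bytes × 8 ≈ 524 kilobits, and one window every 0.192 seconds works out to roughly 2.7mbps.)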

      Latency and TCP window size are, however, just one part of the problem with TCP over wireless. Packet loss and jitter (inter-packet delay variation) also give TCP a very hard time. Performance over wireless is enough of a problem that serious academic research is underway to find a fix. As demonstrated by the slow uptake of RFC 1323 though, once a fix is out there, it can take a decade to get into commercial operating systems.
