A pig in a poke no more: my students rate the ISPs (2)


As we discussed last time, shopping for an ISP is a fraught endeavor. The numbers you get, if you can get them, never sit still for long. And even if they do, comparing ISPs as you hunt for a deal is usually an apples-and-oranges exercise. That's ironic when you consider that this kind of competitive product research has become a way of life for North American shoppers, precisely because of how readily information can be obtained online.

The “up to…” gotcha

For their ISP reports, our student investigators had one other task after getting plan details: capturing actual speeds from their current ISP so they could be compared to advertised speeds. Like the other information gathering on this assignment, the speed tests have a dual purpose. One is to sharpen the students' grasp of technical concepts; the other is to sharpen their assessment of the ISP's performance.

Tests of the kind we’re interested in typically measure three variables: download speed; upload speed; and latency (see below). One of the tricky features of advertised broadband speeds is that ISPs always qualify them as “up to” – no guarantees. There are many reasons for this, legit and otherwise.

The main issue is that the Internet operates very differently from telephone and TV networks. Unlike these conventional networks, whose performance specs never vary, the Internet is designed on a “best effort” basis. Although that principle greatly enhances flexibility and lowers costs, the disadvantage is that performance is all over the place – as most people know from being online at home. What most people don’t know is why these fluctuations occur.

In the legit category, we have the built-in quirks of delivering data packets; too many other people online at the same time; and slow servers at the other end. On the other hand, ISPs can and do cheat their paying customers out of the bandwidth they should be getting. Here the main culprit is “under-provisioning” – being cheap about network resources so that capacity can’t keep up with customer demand.

_____________________________________

[Image: The FCC has several Internet initiatives designed to promote consumer welfare]

The discrepancies between advertised and actual speeds are important to policymakers, as well as consumers. For example, the FCC’s Consumer Advisory Committee recently announced it’s developing “nutritional labels” for Internet service shopping:

“A government-sanctioned committee last week unveiled a set of sample disclosure forms that Internet service providers, like Comcast or Verizon, would be encouraged to offer potential customers. These disclosure forms would outline prices for stand-alone Internet service, average speed measures, and any network management rules that apply.”

This laudable initiative faces several major stumbling blocks, including a lack of standards for how speeds are measured, and the highly technical nature of test-related information, which might be helpful to consumers if only they could understand it. A perfect illustration of the challenge is a network characteristic known as latency, which makes download and upload speeds much more meaningful.


LATENCY is the time in milliseconds (ms) it takes for a data packet to travel from one machine to another and back again, typically an end-user computer and a test server. Confusingly, latency really tells us what the “speed” of a connection is, whereas almost everyone means “bandwidth” when referring to speed (the capacity of a channel, e.g. 50 Mbps or megabits per second). Moreover, you can have lots of bandwidth (a very “fast” connection) on your home service, but still get poor performance from, say, streaming video because of high latency (often caused by servers that are slow to respond). 
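
To put the two notions in perspective, here's a rough back-of-the-envelope calculation (a sketch with made-up numbers, not a measurement): the time to fetch a small web object is roughly one round trip plus the transfer time, so for short transfers latency dominates no matter how big the pipe is.

```python
# Rough illustration with made-up numbers: fetch time is about one round trip
# plus the time to push the bits through the line at the advertised rate.
def fetch_time_ms(size_kilobytes, bandwidth_mbps, latency_ms):
    transfer_ms = (size_kilobytes * 8) / (bandwidth_mbps * 1000) * 1000  # kilobits / kilobits-per-second
    return latency_ms + transfer_ms

# A 100 KB web object on a 50 Mbps line:
print(fetch_time_ms(100, 50, 5))    # ~21 ms with 5 ms latency
print(fetch_time_ms(100, 50, 100))  # ~116 ms with 100 ms latency; same bandwidth, roughly five times slower
```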

Now, to measure latency we use something called a "ping test," a standard feature of popular test pages like speedtest.net. That site, operated by networking firm Ookla, returns measures for download and upload speeds as well as latency. Despite the millions of tests run every day on this one platform, the tests are not exactly inviting for end-users, who have little or no idea what these measures represent. And getting reasonably valid results takes some effort.
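
For readers who like to peek under the hood, the round trip can be approximated with a few lines of code. The sketch below (my own illustration, not part of the assignment) times a plain TCP handshake, which takes roughly one round trip, and averages a handful of samples; the target host is arbitrary.

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=2.0):
    """Approximate round-trip latency by timing a TCP handshake to host:port."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the connection itself is discarded; only the elapsed time matters
    return (time.perf_counter() - start) * 1000

# Average a few samples, since any single ping can hit a momentary hiccup
samples = [tcp_rtt_ms("www.speedtest.net") for _ in range(5)]
print(f"approximate latency: {sum(samples) / len(samples):.1f} ms")
```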

_____________________________________

[Image: Test result for my home connection, TekSavvy DSL at 50 Mbps down / 10 Mbps up. Ookla's speedtest shows very low latency (5 ms) and speeds close to advertised.]

To get a good outcome (and a teachable moment), I asked everyone to do two things. One was to run not one but three separate tests, then average them. That’s important since, like our streets and highways, Internet traffic runs faster at off-peak times and much slower during the evening rush. The other was to make sure there was as little interference as possible on the access line, by clearing the browser cache; rebooting the modem; turning off any other online devices; and hardwiring the connection from modem to computer using Ethernet, since Wi-Fi can greatly reduce the amount of available in-home bandwidth.
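
For anyone who would rather script that routine than click through the web page three times, a minimal sketch along these lines is possible. It assumes the third-party speedtest-cli package (pip install speedtest-cli), which uses speedtest.net's own test servers, and simply runs three tests and averages them.

```python
import speedtest  # third-party package: pip install speedtest-cli

def run_once():
    st = speedtest.Speedtest()
    st.get_best_server()                  # pick the nearest/lowest-latency test server
    down = st.download() / 1_000_000      # reported in bits per second; convert to Mbps
    up = st.upload() / 1_000_000
    return down, up, st.results.ping      # ping is reported in milliseconds

runs = [run_once() for _ in range(3)]     # three separate tests, as in the assignment
avg_down, avg_up, avg_ping = (sum(col) / len(runs) for col in zip(*runs))
print(f"avg download: {avg_down:.1f} Mbps, avg upload: {avg_up:.1f} Mbps, avg latency: {avg_ping:.1f} ms")
```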

Positive pings

The download and upload numbers from Ookla's speedtest came out close to advertised speeds in most of the student reports, a flattering result compared to every other factor, like ISP pricing. (Some of the measured speeds on the Rogers network came out higher than advertised.) While latency times have no advertised numbers to compare against, they provide a good indication of overall network performance (close to zero ms is good; over 20 or 30 ms is usually bad; 100-plus ms means trouble for real-time applications like Skype calls and video games).

The minimal discrepancies between advertised and actual speeds in these tests weren't entirely unexpected. There are a number of different ways to measure network speed (i.e. bandwidth), and different techniques produce different results. In a nutshell, Ookla designs its testing platform to get the highest download and upload numbers possible, by measuring multiple simultaneous connections (over the TCP layer, FYI), as well as by cleaning up errors that make the Internet a "best effort" platform (like dropped packets). While this is legitimate, Ookla's approach doesn't give end-users insight into the quality of more typical connections, which are often impaired by factors like traffic congestion.
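
To see why several simultaneous connections produce bigger numbers than one, consider a toy sketch like the one below (illustrative only; the URL is a placeholder, not a real test endpoint). It fetches the same file over one stream and then over four parallel streams and reports the combined throughput, which on most real links comes out higher for the parallel case.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: TEST_URL stands in for a large file on a well-connected server.
TEST_URL = "https://example.com/testfile.bin"

def download_bytes(url):
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

def throughput_mbps(url, streams):
    """Fetch the same file over `streams` parallel connections; report aggregate Mbps."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=streams) as pool:
        total_bytes = sum(pool.map(download_bytes, [url] * streams))
    elapsed = time.perf_counter() - start
    return total_bytes * 8 / elapsed / 1_000_000

# A single TCP stream is often held back by its own window size and loss recovery;
# several streams in parallel come closer to filling the access line.
print("1 stream: ", throughput_mbps(TEST_URL, 1))
print("4 streams:", throughput_mbps(TEST_URL, 4))
```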

_____________________________________

[Image: Second test of my 50/10 home connection, using CIRA's test platform. Results are almost identical to the Ookla test above.]

Given these limitations, I decided to up the ante for the ISP reports my two classes are currently doing. Aside from a few presentation details, the assessment formula remains the same, with one important addition: a second test platform, the .CA Internet Performance Test, which operates under the auspices of CIRA, the Canadian Internet Registration Authority. Roughly speaking, the CIRA platform has been designed to measure the "average effort" of your ISP connection, whereas speedtest is designed to show it at its best.

With comparisons now possible between two major testing methods, our new batch of 40-odd reports promises further insights into Canada’s murky ISP marketplace. Topping it off, the assignment is a big hit with students.

D.E.