Years back, I operated a small ISP to serve a need until cable broadband arrived in the community. I had to monitor overall usage and frequently throttle clients with high torrent/video usage. It was also an issue with hotspots in RV parks/campgrounds where there was a higher ratio of video usage. There are technical limitations to spectrum/bandwidth; I'm curious why this isn't mentioned more in discussions like these.
The article addresses that. The throttling was not done to ease congested network traffic. It is not a response to hotspots of high traffic, nor is it even a classification of data types such as torrent and video usage.
If you use Boost as your ISP, watching sports on NBCSports will work fine, but watching the same match later on YouTube will not. If you're on C Spire and watch a paid movie on YouTube Red, you will get a worse experience than if you paid for and watched the exact same movie on Netflix. If you're on GCI and have both Netflix and Amazon Prime, go ahead and unsubscribe from Netflix, as it will be a much worse experience than Amazon thanks to your ISP's throttling.
This is not a small ISP monitoring overall usage and throttling clients based on high usage and hotspots of traffic. The article is not about congestion control but rather marketing choices where one streaming service is favored over another within an ISP's captive market of customers.
This kind of throttling is done to squeeze more money out of anything crossing their network. All ISPs want a piece of the pie, so they will charge at both ends: anything coming in and anything going out to you.
That is just how the Internet works. Everyone is selling and buying access to networks. Whether directly or in the form of agreements. And most people are of course trying to get more for less. One could certainly wish that things would be different, but it isn't like they are doing something especially nefarious.
You pay your local delivery company $X per month and they promise that they'll deliver you 1 package a day from their local depot.
But it turns out that although you're ordering a package a day from Amazon, you're not receiving a package a day. You get your stuff, but it's delayed and backed-up somewhere.
And so as an experiment you sign up for a service that will receive a package for you, re-box it in their own packaging and then send it on to you. You then start to receive a package a day again.
At that point you start wondering, are they looking at the boxes, seeing they are from Amazon and then just deciding to leave them sitting around their warehouse for a while?
And the answer is yes, even though you're paying for the service, they also want whoever is sending the package to pay them to ensure that they get their package on time.
So while there may well be all sorts of agreements between all the different parcel companies about who will do what when, it's frustrating as an end-user that there are all these weird agreements that impact me. After all, I'm paying for packages to be delivered, not 'packages that are offered by our partners'.
That’s just not how the Internet works and it’s never how it worked. This old article from Cisco (based on a 1999 book) explains that there can be various kinds of business arrangements in an interconnected network: https://www.cisco.com/c/en/us/about/press/internet-protocol-.... The article likens the situation to airlines, where multiple transit providers might be involved and where there may be complex fee-sharing arrangements between all the providers.
Note that this book predated the rise of streaming video providers by more than a decade. Back then, nobody thought that delivery of content over the Internet was this simple one-sided transaction. It’s a new view of the Internet that has arisen because it suits the interests of these very lucrative streaming businesses.
It's still the responsibility of the ISP to deliver connectivity to other hosts on the internet at their promised speed, regardless of what is necessary to do so or how it operates internally. That's the service they are advertising to the consumer.
If UPS decided to change the way they internally route their packages, it doesn't mean that they can just brush off promised delivery dates because "it doesn't work how you think it does." I think that's the point the OP was making.
Internet "access" is, and always has been, a link to an "access network." ISPs promise that that they'll route packets from that access network to the Internet, but nobody promises you end-to-end connectivity at a particular speed (unless you pay for a dedicated line). (Verizon, for example, caveats it's product as being a "gigabit connection to your home.")
People like to pretend like internet access service is a promise to get your packets from point A to point B anywhere on the internet at a given speed. That is based on the fiction indulged by the software folks, who just think about getting bytes between a pair of sockets and don’t care how the internet actually works. But as explained in that Cisco article, interconnection and transit has always been distinct from access, and has been the subject of separate commercial negotiations between network operators. That reflects the technical reality that the internet is not a single network, but an agglomeration of private access and transit networks. The idealized software abstraction of the internet doesn’t define what it actually is. If you read the contract that defines what you're buying, it's not promising you that idealized abstraction.
This is disingenuous. Yes, an ISP can’t do anything if you pay for 1Gbps but can’t download that fast from a server in Turkey. But the only reason a customer can’t get sufficient bandwidth from Netflix, whose servers are often 5 hops away in the same city, is because the ISP is not doing their job. Being an ISP implies also doing a proper job of setting up agreements such that the bandwidth you pay for is usable.
> It's still the responsibility of the ISP to deliver connectivity to other hosts on the internet at their promised speed, regardless what is necessary to do so or how it operates internally
So they don’t have to deliver 1 gbps to every end point, they just need to “do a proper job” to make the bandwidth “usable.” But what does “proper job” mean? Historically, it has meant making reasonable efforts to reach interconnection agreements with other network operators. It has not meant that you’re required to upgrade your peering so that 50%+ of all your traffic can come from a single peer, at no cost to the other peer.
I’m not backing off my argument; I’m arguing from a stance of reasonableness rather than perfection. And while we can argue about exactly what defines “reasonableness”, I think it’s unambiguously clear that a large number of people do in fact need 50% of their traffic to come from a single peer, and I don’t think it’s unreasonable to expect an ISP to address that rather than throttling customers. I know plenty of people whose internet usage consists of email, Facebook and hours upon hours of streaming Netflix.
It just doesn't make any sense. It is like saying that UPS should deliver to everywhere on earth in one day for the same price, despite large differences in cost between local delivery and e.g. air freight to a war zone.
The Internet is a bunch of interconnected networks. Any of those networks can largely have whatever conditions they want. That is what the Internet is. Do I think it should be different, yes, but a lot of people don't. Especially those arguing against things like throttling.
> It is like saying that UPS should deliver to everywhere on earth in one day for the same price
No, it's not like that at all. UPS makes it clear to the customer in advance what the cost is, how long it will take, and the customer agrees. Also, UPS is not held responsible for delays that they couldn't reasonably prevent (bad weather, etc).
It's also an exaggeration of the situation; it's essentially saying "If we can't achieve perfection, then we might as well not have any standards and we can't hold anyone accountable for anything." The fact is that those speeds are obtainable in most of the common situations that people actually need them, and the throttling only occurs out of unwillingness on the part of ISPs to negotiate agreements that would allow those speeds; it's greed and/or laziness, not physical or technical limitations.
You can have whatever opinion you want on what the Internet should be, just like I can. But the Internet today is different networks setting their own policies. Google can say "connect to us and we won't charge you for YouTube traffic", but that doesn't stop someone else from saying "no, pay us for access to our networks instead". It doesn't really matter whether something is available or not as such, because the whole model is based on exchanging access to infrastructure.
At the end of the day, what you are describing is some sort of regulated, nationalized Internet backbone, which would to a larger extent support such features. As it is now, if you don't like the "mix" of access you are getting, you should change providers (which could of course be a problem, but that is another issue).
It doesn’t have to be nationalized, it could alternatively be honestly priced and advertised. Most other businesses work this way: Starbucks doesn’t say “sorry, we only can give you half the coffee you paid for because our suppliers want more money and it would cut into our profits.” Instead, they work behind the scenes to make sure they can serve their product as advertised, and raise the price if necessary. I don’t know why you think the internet is any different; it’s a set of negotiations like any other business.
I was only trying to make the point that, from the point of view of the average consumer, it's hard to know what it is that you're agreeing to pay for and what you're going to get. I understand the complexities involved.
You're sold a "XXXMb/s connection to 'the internet'", which people take to mean "XXXMb/s of the thing that I want, whatever that is" and not "XXXMb/s connection to the edge of the network, and then you get what you get given."
Your scenario isn't particularly outlandish. That is how a lot of economy services work. And also why you can usually get other deliveries while many other packages, e.g. from China, are waiting to be processed. Not that deliveries are much like the Internet.
Right... until your grandma can’t Skype with you because Microsoft didn’t pay the “decongestion” tax to that neighborhood IP she’s using, or to every ISP along the way.
Or when the firefighters get throttled because their unlimited data is consumed and your house goes up in smoke.
I really hope firefighters and other emergency services don't depend on the unreliable mobile network. Those will be some of the first things to go down in case of an actual emergency, along with the power grid.
They rely on it because there's no real good alternative for data. Europe uses TETRA [0] (the equivalent of the North American P25, but with better data support among other things) for critical communications, but still needs a TETRA + 3G/LTE hybrid for large data transfers (above the hundreds-of-kbps speeds TETRA offers).
Most fires that require an emergency response are not at a scale that could possibly impact cellular service. Even a whole neighbourhood going up in smoke isn't going to take down cell towers unless a tower happens to actually be in the middle of that neighbourhood.
Throttling is generally defined as purposefully slowing some (or all) traffic, in the absence of network congestion.
That you may have a faster connection to one provider than another doesn't indicate throttling, unless you can show that the network paths are essentially the same and the slower provider isn't the one restricting the data rate.
Certainly, almost any last-mile congestion should affect all providers; but most content providers are going to try to deliver content from as close to the user as possible, either through CDN boxes deployed inside ISPs' networks if possible, or at a nearby place where the ISP or its transit partners peer. I'm aware that Netflix, YouTube, and Amazon all run their own CDN programs, so it's very likely that they're interconnecting to the ISP on separate connections, and without insider knowledge from the ISPs or the CDNs, it's impossible to know what the congestion looks like there. CDN installations also have capacity limitations, and take time to upgrade. It's not unreasonable to assume that some content providers are running their networks hotter than others.
In the years since the Internet has become mainstream, people seem to have forgotten the bad old days of downloading a large file, and trying to figure out which mirror would serve the file the fastest; which usually meant figuring out which mirror was both physically close, and well connected to the downloader's network.
"The article is not about congestion control but rather marketing choices where one streaming service is favored over the other within the captured market of an ISP customers."
I wonder if that's true, or this is simply a result of the peering agreements that content providers have engaged in since forever.
For those who don't know: if Netflix is colocated in the same datacenter(s) as C Spire, it's possible to "throw an ethernet cable over the cage wall" and make a direct connection to C Spire. This results in dramatically better bandwidth and latency, but requires significant up-front costs (it's also not possible for every provider to peer with every distributor, for logistical reasons).
As much as I believe in the concept of net neutrality, I feel like the conversation misses many of the technical nuances of the way the internet actually works. Peering is one example: you can have an entirely "neutral" internet, but still observe dramatic differences between one provider and another, based on your network location.
(Edit: now that I've read TFA, it's pretty clear that they're not talking about favoring providers, but rather, throttling entire classes of traffic. Which is a different discussion.)
Could there be other causes than differentiation that explain the data collected? That would be an excellent question to ask the team at the University of Massachusetts if they produce a dissertation or research paper, since all the article says is that they used an established peer-reviewed technique that presumably has gone through such questioning. It would be interesting to know the statistical probability that colocated servers alone would explain the data.
> now that I've read TFA, it's pretty clear that they're not talking about favoring providers, but rather, throttling entire classes of traffic.
I must strongly disagree there. What kind of traffic classification would give different throttling to different on-demand video streaming services? NBCSports, Netflix, YouTube Premium and Amazon Prime. I don't see it, so please name the four different classes of traffic those sites represent that a congestion system would use as identifiers.
> What kind of traffic classification would give different throttling to different on-demand video streaming services? NBCSports, Netflix, YouTube Premium and Amazon Prime.
There may be a small amount of packet loss at the time due to the ISP shaping their traffic to prevent congestion.
The higher the latency, the more the packet loss will affect the throughput.
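To put rough numbers on that, here's a toy calculation using the classic Mathis et al. TCP model (throughput ≈ MSS/RTT × 1.22/√loss). Purely illustrative, since modern stacks (CUBIC, BBR) deviate from it, but it captures the latency/loss interaction described above:

```python
# Mathis et al. steady-state TCP estimate: rate = (MSS/RTT) * 1.22/sqrt(p).
# Illustrative only; real congestion controls (CUBIC, BBR) behave differently.
from math import sqrt

def mathis_mbps(rtt_s, loss, mss_bytes=1460):
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss)) / 1e6

# Same 0.01% loss, three path lengths: on-net CDN cache, nearby IX, distant origin.
for rtt_ms in (5, 30, 100):
    print(f"RTT {rtt_ms:>3} ms: ~{mathis_mbps(rtt_ms / 1000, 1e-4):.0f} Mbit/s")
# ~285, ~48, ~14 Mbit/s: the identical trickle of shaping-induced loss costs
# the high-latency path an order of magnitude more throughput.
```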
The services you mention all use different CDNs, which may all have different latencies at the point of observation:
> NBCSports, Netflix, YouTube Premium and Amazon Prime
NBCSports - I would guess Akamai, who deploys at IX and ISP
Netflix - Usually Openconnect CDN devices at IX or ISP
YouTube Premium - Google Global Cache deployed at IX and/or ISP (GCCs are deployed in most consumer ISPs, probably the best-distributed CDN at the moment).
Amazon Prime - CloudFront, deployed mainly at IX; I am not sure if they have deployments in ISPs yet.
(I haven't tried to find more information on their approach, but https://dd.meddle.mobi/index.html seems to indicate that CDN issues shouldn't affect the way they measure. Still, until I do some checking on whether popular DPI platforms could be fooled by this, and without them providing timestamped data, I'll have to test myself against DPI policies I wrote, and I hope my ex-colleagues haven't changed too much ...)
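For anyone curious, the core record/replay idea behind this kind of differentiation test is easy to sketch. This is my own minimal reconstruction of the approach, not their code: the host, port, framing, and payload are placeholders, and it assumes you run a replay server outside the ISP under test:

```python
# Sketch of a record/replay differentiation test (my reconstruction, not the
# actual tool). Replays a video-classifiable byte stream and a random control
# of equal size; a consistent gap suggests content-based throttling.
import os
import socket
import statistics
import time

REPLAY_HOST = "replay.example.org"   # placeholder: a server you control
REPLAY_PORT = 55555                  # placeholder port
CHUNK = 64 * 1024

def download_mbps(payload: bytes) -> float:
    """Ask the server to stream `payload` back to us; return goodput in Mbit/s."""
    with socket.create_connection((REPLAY_HOST, REPLAY_PORT)) as s:
        # Hypothetical framing: 8-byte length prefix, then the bytes to replay.
        s.sendall(len(payload).to_bytes(8, "big") + payload)
        s.shutdown(socket.SHUT_WR)
        start, received = time.monotonic(), 0
        while buf := s.recv(CHUNK):
            received += len(buf)
        return received * 8 / (time.monotonic() - start) / 1e6

# A real test replays a captured Netflix/YouTube flow byte-for-byte so DPI
# classifies it as video; this stand-in just has a video-looking header.
video_like = (b"GET /movie.mp4 HTTP/1.1\r\nHost: cdn.example.com\r\n\r\n"
              + b"\x00" * 5_000_000)
control = os.urandom(len(video_like))     # same size, unclassifiable bytes

video_rates = [download_mbps(video_like) for _ in range(5)]
control_rates = [download_mbps(control) for _ in range(5)]
print("video-classified:", round(statistics.median(video_rates), 1), "Mbit/s")
print("random control:  ", round(statistics.median(control_rates), 1), "Mbit/s")
```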
The article makes it clear what they did, and what conclusions they drew:
"Using a previously established, peer-reviewed technique, the team conducted more than half a million data traffic tests across 161 countries. From this data, the team found that internet service providers are “giving a fixed amount of bandwidth—typically something in the range of one and a half megabits per second to four megabits per second—to video traffic, but they don’t impose these limits on other network traffic.”
Moreover, the data shown directly below that paragraph reveals little pattern in the provider/app throttling rates. The data shows mostly that video is being throttled across the board.
Could there be other explanations for this? Absolutely. Peering is just one of many plausible explanations.
I think the issue is that most ISPs are not transparent, and their advertising comes off as doublespeak. Watching their promotional materials, they attempt to mislead consumers into thinking they are getting unlimited connections they are free to do whatever they want with, when in reality they are getting a very specific bandwidth limit with a lot of strings attached.
> The problem is that no consumer will buy internet advertised as conditionally rate limited. ISPs have no reasons to stop what works.
And consumers have proved that they believed the first liars^H^H^H^H^Hmarketers who took this approach, so the rest, having tried the approach of being truthful and having lost market share, had to adopt the same approach ...
You should either guarantee a minimum speed with unlimited data, or cap the maximum amount of data users can use. Throttling based on content means you were overselling. As a user I prefer to be in control of which applications get to use the Internet that I'm paying for.
But from a physics/technical standpoint, a radio only has so much capacity. How would you handle the situation, if you managed a provider, and 10 people at a bar started streaming an NBA game in HD? Something's got to give.
The argument would go that if the radio has total capacity x and each user is allocated capacity y, then only x / y users should be allowed to join the service, otherwise it is "over selling".
It is a super straight forward model, but obviously much more expensive to provide. I imagine most users would be happy to pay 1/10 of the cost to get a service that might drop to 10% performance in the worst case, as long as it doesn't happen too frequently.
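Back-of-envelope version of that trade-off (all numbers made up: one shared 1 Gbit/s link, 100 Mbit/s plans, 10:1 oversubscription):

```python
# Strict allocation vs 10:1 oversubscription on one shared 1 Gbit/s link.
capacity, per_user = 1000, 100            # Mbit/s

strict_users = capacity // per_user       # 10 users, full rate guaranteed
oversold_users = strict_users * 10        # 100 users at roughly 1/10 the price

worst_case = capacity / oversold_users    # if literally everyone maxes out at once
print(f"strict:   {strict_users} users, always {per_user} Mbit/s")
print(f"oversold: {oversold_users} users, {per_user} Mbit/s when quiet, "
      f"{worst_case:.0f} Mbit/s ({worst_case / per_user:.0%}) in the worst case")
# Simultaneous peak demand is rare in practice, so the worst case almost
# never materializes -- which is the whole bet oversubscription makes.
```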
What you're asking for already exists. They are dedicated business circuits. I get where you're coming from, but there is a reason that the consumer advertisements always say "up to X mbps". If ISPs didn't oversell, consumer Internet would either be unbearably slow or obscenely expensive.
A guaranteed 1 gig circuit is on the order of thousands USD month. An oversubscribed consumer fiber provider like Google Fiber is an order of magnitude cheaper because they can oversubscribe based on consumer traffic models that cover 99% of users.
Also keep in mind that your model isn't really straightforward at all. The ISP can only provide you guarantees within the lines they control. There could be any number of choke points at IXPs that mean the user can't reach site X at their guaranteed rate. So ISPs never even offer hard guarantees on connections to the wider Internet. The most you will be able to buy is dedicated circuits to their core, an IXP, some branch office, etc.
Are you okay with gyms selling more unlimited memberships than the number of machines they have?
The monopoly is a “natural” one in that there is only so much spectrum to go around and two companies can’t share it.
Oversubscribing is a vastly preferable economic solution in this case. Everyone who can pay a small amount gets access to the network, and most of the time they get the bandwidth they need.
Selling a limited number of tickets at a massively increased price, or selling a massive number of tickets each with a dedicated data rate that was almost always idle are both strictly inferior solutions by very wide margins.
That throttling is essential to network management is undisputed. There were even carve-outs in the old NN regs for it. The problem is if the “throttling” is not just limiting video streams in general during peak times, but limiting only certain services’ streams.
First, that’s not relevant to this particular thread, since there are several major wireless ISPs. Second, it’s beside the point. All gyms oversubscribe because that’s the only sensible model for consumer access. If you didn’t allow oversubscription, Google could sell only a 75/35 service (each GPON port has 2.4G down / 1.2G up shared by 16-32 users). Would that be better?
If cell providers advertised their services as "Always 100 Mbps guaranteed" that would be a problem, but do they really?
Because that is not how wireless communication works. E.g. in LTE the user is scheduled both in frequency and time. The operator has a certain frequency spectrum that gives a certain maximum transmission rate (one user getting all frequencies and time).
The scheduling happens based on (amongst other things) signal to noise ratio and priority of each user and their traffic types (voice, data, etc). The goal of the operator is to maximize the overall capacity of the network. This is beneficial for the user as well.
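A toy version of the kind of scheduler involved, for the curious. This is a generic proportional-fair sketch (user count, rates, and the averaging constant are all made up), not any vendor's implementation:

```python
# Toy proportional-fair scheduler: each TTI, serve the user with the highest
# instantaneous-rate / average-rate ratio. Opportunistic (rides good SNR)
# yet long-run fair (a starved user's ratio grows until it wins).
import random

avg = {"near": 1.0, "mid": 1.0, "edge": 1.0}      # running average throughput
peak = {"near": 50.0, "mid": 20.0, "edge": 5.0}   # channel ceiling, Mbit/s
ALPHA = 0.05                                       # averaging constant

for _ in range(10_000):
    inst = {u: random.uniform(0.2, 1.0) * peak[u] for u in avg}  # fading
    winner = max(avg, key=lambda u: inst[u] / avg[u])
    for u in avg:
        served = inst[u] if u == winner else 0.0
        avg[u] = (1 - ALPHA) * avg[u] + ALPHA * served

print({u: round(r, 1) for u, r in avg.items()})
# Users near the tower end up faster than cell-edge users, but nobody is
# starved -- and total cell throughput beats what round-robin would give.
```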
I think the problem in the US is the lack of competition. Not net neutrality issues. In places where this works well you just change operator if you are not happy with your current one. This way the provider is free to experiment with how to run their network and everyone benefits.
This is precisely correct, and well put. Advocates of Net Neutrality are trying to solve what is an antitrust issue, but not using antitrust law to do it.
I think it's because most NN advocates have a bias that regulation is a good thing, so of course regulating ISPs will be the right way to make the internet better.
However keep in mind that it is regulators who have granted the last mile monopolies to their crony firms and who have helped prevent last mile competition.
Notably one thing the FCC has done of late is to remove restrictions on reconfiguring wires on utility poles that benefitted incumbents. Now under the new rules, an ISP who is adding lines to existing utility poles is authorized to move/modify the equipment already on the pole when doing the installation.
Previously the rules allowed incumbent firms months to schedule a technician to go out to move the equipment around, which caused many delays for new firms trying to compete.
My internet service is pretty bad (XFinity) and Net Neutrality is not the solution, the solution is more ISPs to choose from. I'd much rather have three or four fiber providers to choose from than the (bad) choice between XFinity and AT&T. There are only two choices because of regulators, not in spite of them.
Also, video absolutely obliterates a wireless radio. Just to keep with the example you mentioned, a wireless radio doesn’t necessarily break down into 1/10 per user for 10 users. 2-3 users could flood the connection, building interference can eat 20-30% off the top, different cell phone models may have stronger radios, etc. Then throw in a bunch of 4K ultra HD streams and those contract terms stop making any sense.
And when you call up to get your internet connected and they tell you "sorry, we're at capacity. Try back next month"? What then? Would you rather have no connection or a variable speed connection?
The latter is a much better solution. You don't seem to appreciate the realities at play here.
I agree, definitely. It was an interesting experiment, and the realities of the then current wireless tech were so limiting, it was eye-opening to be on the provider side for once.
Then we shouldn't have done things like spend millions pushing 4G only to bait and switch with 4G LTE once everyone had upgraded. We shouldn't claim certain capabilities that simply aren't there. The worst thing you can do for a bad product is good marketing, and that's exactly what these ISPs are doing.
Curious, regarding monthly download caps, has anyone found a solution to using a 4G LTE connection for frequent Netflix consumption? It currently burns through too many gigs to be a cable internet replacement. I was looking for a solution for doing longer term domestic (US) travel.
Seconding T-Mobile. Pretty much every major media provider is zero-rated and they welcome suggestions for new additions, so it's actually pretty fair as far as zero-rating goes. Just about the only thing you're out of luck on is if you A) use a VPN or B) stream from a private server.
Some providers, T-Mobile I know, offer to bypass the data cap for certain services. They drop the quality to 480p, but that's not a huge deal on my 5" screen.
All ISPs, as a rule, oversell. They oversell at some ratio of expected usage, in part due to how networks are structured (8-10 times oversubscription of bottleneck bandwidth is seen as "good" service; source: I am in the industry). There are bottlenecks in almost every network. The problem is when they oversell past an acceptable limit for regular usage. Bottlenecks can be mitigated, but that costs money, and like almost every rent-seeking business there is no incentive to spend any money or invest until there is a risk of losing enough users to be unprofitable.
The economics of ISPs suck which means ISPs will suck pretty much as a rule.
And add that, if you are in a sparsely used area, you can achieve even better rates. In the end it's just easier to say that your video might be throttled.
You don't have to guarantee a minimum speed, but you have to advertise the relevant parameters that you can reach at least a certain percentage of the time. Saying just "up to" should only be allowed if you also use it for the pricing and let clients pay what they can. Saying "unlimited" should be allowed only with the dictionary definition. And all "limiting" conditions should be advertised the same way as the main offer, not in small font on a buried page.
So the offer could look like this: "Up to 50Mbps, with 20Mbps offered at least 50% of the time between 06:00 and 22:00. Limited to 200GB/mo after which the speed is dropped to 1Mbps."
Also, throttling and prioritizing types of traffic makes sense when you reach capacity. But if you suddenly find more capacity for someone paying extra by throttling others more, or you throttle indiscriminately and lift the limits for a price, then you are not doing network management, you are squeezing for money. And this should also be clearly stated when advertised and while doing the throttling.
As a customer paying the requested price, I want some SLAs. I want to know what to expect from the service and be compensated for not getting it. I want to know in real time whether my traffic or service is being throttled when I use it.
> You don't have to guarantee a minimum speed but you have to advertise the relevant parameters that you can reach at least a certain percentage of the time.
But this still suffers from exactly the same problem, just in a statistical way. i.e., what if someone's frequently in a place that has poor signal? How are you going to predict what percentage of the time they'll be in such a situation?
Then you compensate that person for not being able to provide the service. Just as you'd expect in any other case when a service is not delivered. And if we're being honest, that's not actually the reason most of us get throttled. Most of the time it's not even network management, it's just a money grab.
I don't intend to go into the technicalities because they're irrelevant. The point is they should offer a quality level at least "statistically".
Wireless ISPs don’t advertise any particular speed level. They sell “best effort” service and advertise it that way. What you’re asking for is an SLA (presumably for consumer level prices).
Unlimited - not limited or restricted in terms of number, quantity, or extent.
Best effort is when your taxi driver tries to take you to the airport as fast as possible but traffic gets in the way and you're late. When he stops in a tunnel and requests more money because he has to drive through his lunch break, or else he'll just inch forward that's not best effort. That's what I said before: squeezing for money.
Reading the article helps: this is not network congestion, this is artificially limiting the speed.
Clearly a connection cannot be “unlimited” in reality, because there are many physical limits. That’s why you introduce the idea of “artificial limits,” but what are those? The number of cell towers in an area is also an “artificial” limit—the provider could build more. So even by your reasoning, you can’t take “unlimited” literally, and have to apply context. (And that is, in fact, the relevant legal consideration. Advertising must not be misleading, but that does not mean you resort to dictionary definitions and ignore context and history.)
And what is that context? Well, “unlimited” has always meant the opposite of time or usage limited service. I.e. you don’t pay extra for exceeding a certain usage. If you asked me in 1997 what "unlimited" means, I wouldn't have pulled out the dictionary and said "unlimited means 'not limited or restricted in terms of number, quantity, or extent.'" I would've said "that means you don't have to pay extra for AOL after your 20 hours is up." All this supposed confusion seems entirely contrived to me.
And I'm perfectly OK with being allowed to hit physical limits. Unless you're arguing that 100GB of traffic is a physical limit of the network. Or that a software limiting the speed to something decided by management is also a physical limit. I'd love to see someone try to make that case.
As per the links posted above:
> Santa Clara Fire paid Verizon for "unlimited" data but suffered from heavy throttling until the department paid Verizon more.
I'm an engineer, so maybe I have a different definition of what an artificial limitation means. If you can pay for it and instantly get it, then maybe it was there all along, just artificially limited. Physical limitations don't get removed when your payment is cleared. And I'm paying for access to a network which has some intrinsic parameters. One of them is the number of towers, not the potential number of towers. I care about the actual physical speed and capacity describing the actual network.
We also seem to have different definitions of what "misleading" means, since you obviously believe saying nothing about the limitations or their true nature, while selling an untouchable maximum and even a patently false claim of "unlimited", is in line with "not resorting to dictionary definitions and ignoring context and history". Your explanation above is nonsensical. Nothing else gets judged by 1997 standards. VW should get a free pass for pollution in the context of "this was clean in 1997". EVs can claim they have unlimited range. Can't ignore that history. Maybe you'd also accept being paid in 1997 dollars, so when you charge a customer $80,000 they can just give you $50,000 and call it even. And some day you might even buy a 50TB HDD that only has 50GB, and some lawyer on the internet will tell you that in '97 people were happy with 5GB and you can remove that "physical limitation" with a paid FW upgrade.
The reason "unlimited" takes different meanings in these cases is that some countries are actually lead by lobbyists and when they say "jump" you ask "how high?". Not because you can ever reasonably argue "unlimited" means 20 hours because 1997. The proof? In many civilized countries such claims are illegal.
And I will say it again: actually reading the article would help you make some important distinctions.
I'm curious what the win condition for this argument is. Obviously, no appeal to semantics is going to alter the business model of a wireless provider. Are you really just after them changing the word "unlimited" to something else?
The guidelines specifically ask you to respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith - I can assure you what I'm saying is in good faith. "Unlimited" was just one of the misleading things that you can notice without peeking under the hood of the ISP. The throttling of data is another, more pervasive one, but harder to spot with the naked eye. And the way all this is presented when advertised is misleading at best, a flat-out lie more realistically. I made several points that were ignored in favor of the weaker interpretation of "semantics".
> saying nothing about the limitations or their true nature while selling an untouchable maximum and even a patently false claim of "unlimited"
> this is not network congestion, this is artificially limiting the speed
> they should offer a quality level at least "statistically"
> I want to know what to expect for the service and be compensated for not getting it. I want to know realtime if my traffic or service is being throttled or not when I use it.
> Informing the users properly is the least they could do. Today they willingly mislead consumers
Changing the word "unlimited" is just one thing that would help clear up the impression that you are getting unlimited data. That's one of many things they could be required to at least disclose (if not discontinue completely) in a clearer way, especially since these usually involve multi-year contracts based on that misdirection. Basically exactly what's expected or required from most other companies' advertisements.
I honestly thought this would feel like common sense to anybody...
P.S. AT&T or Verizon have the following lines in their offers:
AT&T may temporarily slow data speeds when the network is congested
During times of congestion, your data may be temporarily slower than other traffic
This is disingenuous as it implies that they never do it unless the network is congested and that they don't carefully throttle only specific types of traffic from specific sources. That has been proven a false claim repeatedly.
I'm not responding to any particular interpretation of your argument. I'm asking an orthogonal question. I tried to fish an answer out of this comment, but couldn't find one. It's a simple question: do you expect a behavior change from the carriers, or a marketing change?
I'm not trying to justify what they're currently doing either. What I'm trying to explain is that making something that legally works the way we all want it to is a different and much tougher problem than merely being able to imagine a world in which the current practices don't exist.
Just a side note while I was reminiscing about the pain of being an ISP. Here's a couple photos of the tower for this project. I got frustrated with satellite internet, and bought a 140' tower from the scrap yard before it was chopped up for about $2k. Got a good deal on some Tranzeo 5.8ghz equipment from a cable company and the learning experience commenced. I think I was 23 in that photo.
Funny thing, it was located on the same road where Tabasco sauce is made. I'm sure a tour bus passenger or two randomly caught me standing on top doing the (Brazilian) Christ the Redeemer (statue) pose.
>Got a good deal on some Tranzeo 5.8ghz equipment from a cable company and the learning experience commenced.
Did you have to get a license to operate on 5.8ghz in your country? If so, what was the process like to get one with permission to offer commercial service?
I didn't have to (USA). You were able to offer 900mhz, 2.4ghz, 5.8ghz without a license. There are regulations on total output power of a device, which comes from the antenna's dB gain and the output power of the wireless radio inside. I believe there are other higher bands that are unlicensed, but I didn't use those. I built a lot of the equipment using m0n0wall/soekris/pcengines gear.
>I didn't have to (USA). You were able to offer 900mhz, 2.4ghz, 5.8ghz without a license.
Ah, sorry brainfart on my part, I didn't realize you were doing standard wifi. I always think of 802.11A(C) space as just 5GHz when it covers basically all of 5-6GHz.
(For anyone interested: this has a nice list of channels https://en.wikipedia.org/wiki/List_of_WLAN_channels )
>If I did it again, in a very rural area, I would look into licensed 4G/5G vs traditional wifi.
I was looking into this same thing (rural with low subscriber numbers) a few years ago. The costs of 4G and the equipment for it were obscene compared to putting up Ubiquiti equipment on the 5 GHz band.
Have you checked out LimeSDR? There are a few software defined radio platforms that also have an open source community. I was thinking along those lines.
I don't know either. The significant thing shouldn't be that video vs pics vs text is throttled (although that's an important thing that should be part of purchasing information), but whether Hulu vs Netflix are throttled.
Yep, there's a reason that's legal under net neutrality laws:
- If you need to block streaming video because that's the only way your public WiFi on trains is going to work, iirc you can do that without any legal issues.
- If you zero-rate music services as a marketing stunt, go ahead, but then you shall zero-rate all music services. (Still seems annoying, as a music service, to go around the world and apply to every ISP offering this (i.e. it still makes it slightly harder for newcomers), but I guess it's not unreasonable.)
Not sure. The old Dutch ones certainly didn't, but after t-mobile didn't care and went to the ECJ, Dutch laws were brought in line with European ones and this is what we have now.
I’ve been looking to install WiFi internet for some tenanted accommodation (think 8 flats or so). As it’s an “included with rent” perk, I need a way to dynamically throttle based on number of clients and protocol.
Is there a simple prosumer solution for this? My current path is leading me towards openwrt on a beefy router with some custom firewall rules.
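For concreteness, this is roughly what I had in mind: a minimal per-client HTB shaping sketch driven from Python. The interface name, rates, and client IPs are placeholders for my setup, it's untested, and a real deployment would want SQM/cake and a hook to recompute rates as clients come and go:

```python
# Minimal per-client shaping sketch using Linux tc HTB (placeholder values;
# untested). Each flat gets a guaranteed share and may borrow idle capacity.
import subprocess

IFACE = "br-lan"                              # LAN bridge on an OpenWrt router
TOTAL_MBIT = 100                              # uplink to share between flats
CLIENTS = ["192.168.1.10", "192.168.1.11"]    # one IP per flat (placeholder)

def tc(*args: str) -> None:
    subprocess.run(["tc", *args], check=True)

# Root HTB qdisc; traffic matching no filter below is left unshaped here.
tc("qdisc", "replace", "dev", IFACE, "root", "handle", "1:", "htb")
tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:1",
   "htb", "rate", f"{TOTAL_MBIT}mbit")

share = TOTAL_MBIT // len(CLIENTS)            # naive equal split
for i, ip in enumerate(CLIENTS, start=10):
    # Guaranteed `share`, can borrow up to the full link when others are idle.
    tc("class", "add", "dev", IFACE, "parent", "1:1", "classid", f"1:{i}",
       "htb", "rate", f"{share}mbit", "ceil", f"{TOTAL_MBIT}mbit")
    tc("filter", "add", "dev", IFACE, "protocol", "ip", "parent", "1:",
       "prio", "1", "u32", "match", "ip", "dst", ip, "flowid", f"1:{i}")
```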
You can install a separate box running pfSense to act as a gateway, configuring different rules/load balancing etc. It also has a plugin ecosystem to enable additional features. That would allow you to size the box and grow as you need. I usually virtualize it with VMware on Dell server hardware. That way you get good remote management via Dell iDRAC, and can make config changes/reboot without losing access.
On the access point side, if you want more granular control to handle congestion, Ubiquiti Unifi and their various mgmt platforms work well. I can't recommend their Security Gateway Appliance (frequent reboots), but the backhauls, APs and CPEs are solid. That will allow you to visualize utilization using a gui, and adjust limits per device/group easily. If you already have the APs, then a gateway device may be better.
I've found over the years, and in the datacenter, that running pfSense on server hardware is very reliable. You can run it on a single-board computer kit, but the CPU doesn't hold up to heavier traffic/VPNs etc., and remote management is worse.
Thanks! That seems quite expensive in terms of the hardware needed and the license for VMware. Any thoughts on this (this is an investment decision for low-cost housing in Africa)?
Also, is there a way to charge for traffic beyond a certain quota? There’s a product called Sputnik with some firmware you can download for various router brands that lets you run a hotspot/captive portal with billing.
The regular VMware vSphere Hypervisor is free, and works on most hardware. You could go used, or check delloutlet.com (sub-$1k). It doesn't need high specs at all. The reason I mention it is that you usually end up needing other small services; virtualizing allows you to have a single box and add VMs as needed. It's also easier to set up CARP/failover later if the system grows and you need redundancy. Running an ISP is 24/7, so being able to have full console access and confidently reboot services saves late-night trips. It's been a while, so I'm not familiar with the current hotspot products.
I'm not entirely sure I understand what these graphs are saying, because they're somewhat confusingly presented.
Specifically, I see no evidence of differentiation. The data looks so sparse that I'm not even sure I understand what the signal is supposed to be. In the first chart, does an absence of a number mean that no throttling was detected, or does it mean that the combination was simply not tested?
I assume the latter, and in that case I don't really see anything interesting, especially since we're not given N or a standard deviation.
ATT, for example, shows 1.5, 1.4, 1.4, NA, NA for its data points, which means that it doesn't appear to differentiate between NBCSports/Netflix/Youtube. The other data points show similar patterns.
Also, is there significance in the separation into two 'blocs' of carriers? I thought it was 'US (ATT-ViaSatInc)' and then 'global (du-iWireless)', but O2UK was in the former set and cricket seems to be without a home.
“Broadband Internet access service” continues to include services provided over any technology platform, including but not limited to wire, terrestrial wireless (including fixed and mobile wireless services using licensed or unlicensed spectrum), and satellite.
In contrast to the Commission's 2010 Open Internet Order, here we are applying the same regulations to both fixed and mobile broadband Internet access services.
Netflix blocks PIA, a VPN, from accessing its servers. Rights holders’ unchecked power is the problem. Abstracting their power through net neutrality terminology distracts from their damage and promotes their cause.
If you want a shortcut to the solution, it includes “how could I make Netflix indifferent as to how I access their content?”
I’ve been using a VPN on an EC2 server for over a year with Netflix with no issues. I think the trick is to be the only one on the IP address and to use the same IP address frequently.
It’s an interesting problem because really, Netflix just needs to pretend to care, enough to satisfy the rightsholders they are buying licenses from. At one point, there were millions of users in Australia even though Netflix didn’t even have a presence there.
> Netflix blocks virtually all VPS provider IP ranges
Netflix has done a terrific job of deflecting the issue from themselves onto ISPs. Yet Netflix et al support rights holders through their policies. It’s a complicated battlefield.
True, Netflix first got popular in my country by people using VPN to access it. Now that they operate locally and are a part of the establishment they've been diligent in squashing VPNs.
I'm really interested to see how they accomplished this. Considering PIA puts up new nodes pretty quickly, and doesn't have one congruent address space, how does Netflix detect PIA or other vpn users?
No. Right holders want this. Netflix could just say no. Of course some stuff might not be available then, but at least their integrity would be intact.
Why do they have to enforce the VPN check for Netflix originals? Doesn't Netflix own the rights to Netflix originals? Wouldn't it be easier to simply say some videos have restrictions but these videos do not?
A lot of "Netflix Originals" are just shows they've brought over from overseas. They will be rights restricted in their home countries still. I don't even know what the Netflix Original logo means anymore.
"doing throttling" is not the problem. If I'm using a lot of bandwidth and I get slowed down in fairness to other users or for network stability, I can accept that.
Throttling based on my remote network peer rather than on how much data I'm moving is anticompetitive towards that party and unfair to me as a paying customer. That's the issue. If I'm downloading Linux ISOs, streaming Netflix, grabbing all the latest podcast episodes, archiving my Google Drive locally, or grabbing the whole Simtel DOS archive for my DOSBox installation, my ISP should treat my traffic the same.
That's pretty much what happened in France a few years back. SFR (I think it was SFR, could have been one of the others, sorry) wanted more money from Google, to which Google said nope. I don't even know how that one resolved, but for quite a while many folk in the office had no YouTube at home.
The problem is not with throttling per se (it’s the laws of physics, after all!) but with not disclosing and/or misleading customers into thinking there is unlimited stable bandwidth, and/or not disclosing in clear words when, how, and how much throttling will be triggered.
Do that - and the grief from both sides will be avoided.
Without proper and clear disclosure, both sides think the other side are aholes.
> The problem is not with throttling per se (it’s a laws of physics after all!)
Are you using "throttling" to mean the same thing other people do? When people say "throttling" they refer to the deliberate/artificial action to that effect, not the natural throughput limitation.
Available bandwidth vs. the natural inclination of humans to pay a minimal fee yet engage in non-stop downloading of free hi-def movies and streaming ...
Something like that
For a provider to survive and keep providing service to all members, throttling of offenders needs to take place.
When I was in Thailand on Dtac, they had a bunch of packages where you could choose your throttling rate, e.g. unlimited data at 512kbps, 1Mbps, 4Mbps, etc. Or you could cap the data amount with the speed at max. It worked pretty well. 1Mbps unlimited was good for me - you could watch 360p video constantly if you wanted, maybe 480p if lucky. And the cost was like $10/mo. I kind of miss that in the UK, where I don't think any mobile networks do unlimited data.
This was the case in the early 2010 version of the rules, but I believe the Wheeler FCC rules from 2015 applied to both fixed and mobile broadband providers. The justification was that use of mobile internet had grown significantly since then, and could no longer be excluded without punishing people living in remote areas.
I can't read the massive PDF of the 2015 order on my phone though, so I could be wrong.
Is it still considered to be throttling when it's in the service agreement?
I have Cricket wireless, and they tell me I am limited to 3mbps speed, which is exactly what reflects in all speed tests I've tried. I am totally OK with that, in exchange for having an "unlimited" data usage plan.
All streaming video I've tried (Youtube, Netflix, DirecTV Now) seem to work and look just fine. They all look no different from when I'm streaming on my 100mbps Wi-Fi.
Generally "throttling" is when your connection is intentionally lowered beyond the base speed. So if you can use that 3mbps all month then it's not throttling.
But are you sure it looks "no different"? 3mbps will probably get you 720p at 30fps. It probably won't get you 1080 or higher, or 60fps.
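Ballpark arithmetic, with rough H.264-era bitrate figures (real services' ABR ladders vary, so treat the numbers as illustrative):

```python
# Rough streaming bitrate ladder vs a 3 Mbit/s cap. Numbers are ballpark
# H.264-era figures, not any specific service's actual ABR ladder.
ladder_mbps = {"480p": 1.5, "720p30": 2.5, "1080p30": 5.0, "4K": 16.0}
link_mbps = 3.0

for res, need in ladder_mbps.items():
    fits = "fits" if need <= link_mbps else "buffers / steps down"
    print(f"{res:>7}: needs ~{need} Mbit/s -> {fits}")
# 3 Mbit/s tops out around 720p30, which looks fine on a 5" phone screen
# but is visibly softer than 1080p+ on a big display.
```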
There is one bearer that is shared between all the wireless connections in a cell - the EM spectrum. Cells interfere with each other, so if you drop the power and up the frequencies of the cell you can have more cells in the same area but as you go up the frequencies to do that you have problems with things like walls and trees and also you have to have a wired network to provide the backhaul.
Research shows you should use the proper name: QoS
The name "Throttling" is an attempt to frame this pejoratively. All ISPs of every type engage in bandwidth shaping / QoS. Its the laws of nature that enforce this (demand always is greater than supply of every resource).
“There’s no evidence that any of these policies are only happening during network overload,” said Choffnes. “They’re throttling video traffic even when the network doesn’t need to. It happens 24/7, and in every region where we have tests.”
Yes, it isn't particularly hard to figure out why they are throttling. Video services especially will take up all the bandwidth they can, and then annoy people when it inevitably gets downgraded again. For most people "speed" is "does my video load" and "does this file complete fast", not "4K everywhere". This should be fairly obvious to anyone who has ever shared a connection with a couple of people. And unfortunately the Internet is lacking in support for adaptive anything.
4K video is about 25 Mbit/s. If I have a faster-than-25 Mbit/s internet connection, I should be able to watch 4K video. This should be fairly obvious. Oversubscription is the responsibility of the ISP, and it is not an excuse.
Wireless is pretty cheap compared to wired, because you don't need the wires, and laying the wires is the expensive part.
If you can't give people 25 Mbit/s connections with your wireless technology, don't sell them 25 Mbit/s connections.
But coincidentally typically the stuff where the ISPs have all these terribly difficult problems are the services that refuse to pay their extra fees. Kind of like what happens to restaurants that don’t pay the protection fee.
The physics limit is a misleading talking point that always comes up when justifying throttling and not providing what was promised. If physics were truly the limiting factor, dense cities would have the worst data quality and rural areas the best. Instead, cities have the best, short of overwhelming events like tech conventions and disasters.
The cell footprints are adjustable. A localized network of smaller area cells can handle more people and data than one big tower which covers a broad area.
Reminds me of a Vice President of Verizon who visited and tried to bullshit classes of electrical and computer engineers about the end of unlimited data as a technical necessity. Competition intervened, via T-Mobile I believe, and suddenly it wasn't a problem, although Wall Street got pissy about the decreased margins.
I meant it more that they disrupted the pricing model shift they wanted to pull off than that they were saints or anything. Capitalism works ideally when being selfish benefits others, sort of thing. Per-data pricing doesn't seem too viable given users' lack of control over usage and inability to know it ahead of time.
This article inadvertently supports why stopping Net Neutrality was a good idea.
If it takes a deep investigation to discover "throttling", and there has been no outcry from consumers, then the throttling did not cause any harm.
Net Neutrality bans many sensible forms of traffic shaping. What we are seeing is network capacity being greatly increased through sensible traffic shaping, with no downside for consumers. That's why Net Neutrality is a bad idea in the first place!
I'm not sure I follow your logic - it doesn't take a deep investigation to discover throttling, it takes a deep investigation to prove that poor performance is caused by throttling. It doesn't take a deep investigation to see that your cash register is short, but it might take one to prove that an employee is taking money from the drawer.
And we're not seeing network capacity greatly increased through sensible traffic shaping - that's an assertion made without evidence. If that were the case, we'd expect different video providers to be shaped the same way, and we'd expect the throttling to change based on time of day and peak traffic, which the researchers did not observe.
There could be many causes of what appears to be differential throttling.
For a small ISP there might be a much higher quality backbone link to one video provider than to another. This is probably a coincidence.
Also, popular services such as Netflix result in the build out of more dedicated bandwidth that is shaped to match peak demand.
Broadly, traffic shaping happens from QoS rules, ISP upstream provider choices, backbone infrastructure build-outs, etc. It's not a static system.
I guess my point is that this is too complex a thing for regulators to understand or to regulate in a sensible way. I prefer simple antitrust enforcement against last-mile providers, even if it entails the government "confiscating" the transmission lines installed by ISPs and letting any of several ISPs use them to serve a customer, based on the customer's chosen ISP.
All of those are not supported by the evidence in the linked article. They observed specific fixed speeds per provider, regardless of location, time, or load. You're correct that there are emergent effects from peering, routing, etc, but that is not what the evidence suggests is the cause of the issue.
And the specific regulation of "Do not throttle or shape legal traffic based on origin or contents" doesn't require large complicated regulatory intervention. A simple complaint system "we observe that traffic to origin X exhibits dramatically different performance than traffic to origin Y" and response would work basically fine. Of course it would be better if ISPs would behave reasonably without regulation, but we can see clearly that it's not the case.
> So, if something can be done and no one notices it can't be a bad thing?
Not only that, the article clearly indicates that people do notice the impacts, they just don't know that the cause is deliberate throttling by their provider.
It's like if people frequently complained about loose coins missing from their homes, but it took some legwork to find the crooks who trained cockroaches to steal the coins and deliver them—would you say that the thefts shouldn't be illegal because it wasn't immediately obvious that the losses were thefts?
And it’s just isolated people noticing against a monolithic corporation. It’s hard for any small-time person who isn’t absurdly rich to get them to do anything but waste hours of their time on the phone or internet discussing the issue.
Traffic shaping does exactly that. It prioritizes traffic in a way that makes sense for each protocol. So yes, the point is that being aware of the protocols lets the shaping create more perceived bandwidth out of thin air.
This is the reality as long as we have "residential quality" broadband, which is more like a finance product arbitraging bandwidth, intended to lower cost and minimize (but not eliminate) the impact of congestion. If you want dedicated bandwidth, the price is significantly higher.
I wish I could afford a dedicated high bandwidth circuit as my residential connection. I can't, but I think that ISPs are better able to choose what QoS makes sense than regulators are.
The ISPs created this "arbitrage" style of overselling bandwidth, and we shouldn't let their use of loose definitions serve as an excuse to further degrade their products against their marketing claims.
My residential broadband from AT&T is sold to me as Gigabit. Do you know how often I've ever gotten what they sold to me? Never. I top out at about 300Mbps, and even that is rare.
There are other ways to address that such as false advertising or rules defining what kind of service can be called "gigabit" when sold.
I've been trying to find the lowest latency connection possible and was excited to call AT&T and find out they had fiber available. I asked the rep multiple times if it was a 100% fiber circuit, and was told it was.
Then on the day of installation, the installer called up and mentioned copper. I said, "Uh, what do you mean copper?". Turns out it was fiber to the pole and copper the rest of the way. The rep simply lied.
I would like to see firms that lie to customers in that kind of way removed from the market due to competition. So in my view regulation of ISPs should focus on allowing competition and breaking up the last mile market power. Why can't the municipality own the last mile transmission line and rent it to whichever ISP wins the customer's business?
Dictatorships are fine when the dictator is wise and benevolent and acts in the best interest of his/her people.
When they start acting in their own best interest is when it becomes a problem.
We're just at the beginning of the non-net neutrality era and carriers are testing the waters.
The market favors commercial interests, so we will begin to see some of the more negative predicted outcomes -- e.g. Amazon paying Boost Mobile not to throttle their traffic (while throttling their competitors).
The core issue Net Neutrality intends to solve is the excessive market power of last mile providers. Of all of the possible ways to address that, having lawmakers regulate QoS is silly.
People act as though NN means that everyone deserves a dedicated bandwidth circuit. That's simply not what everyone gets even under NN.
Regulators should be doing things like figuring out ways to get more last mile transmission lines installed so there can be real market competition.
I also don't have a problem with an ISP selling a "Netflix and email only" internet package to someone for a big discount. Some people would rather settle for that and spend the savings on something else.
> Regulators should be doing things like figuring out ways to get more last mile transmission lines installed so there can be real market competition.
It's not clear to me why we can't or shouldn't do both. Keep NN in place until we have followed through with making the broadband market competitive.
What exactly are these perceived downsides? Netflix is operational and seems to be doing just fine. And if an ISP outright stopped supporting it, the majority of people would immediately find an alternative ISP that did not.
Throttling video speeds may very well be a good thing for many (most?) users who have a fixed data cap. Throttling means Netflix/Youtube/etc react by sending lower quality video to your tiny screen, where you won’t be able to appreciate higher quality anyway.
> Throttling video speeds may very well be a good thing for many (most?) users who have a fixed data cap.
I would assume that most users with recent contracts are going to have an "unlimited" amount of included data, given that most (all?) major US providers sell unlimited data by default.
By default? Don't fall for the marketing bullshit, "Unlimited" is like 20GB and then you're sent down to the penalty box and EDGE data speeds. Real unlimited is very expensive if it exists, which it usually does not.