> To clarify, IP blocking now produces collateral damage because there are far fewer IP addresses in use than there is demand and so different users must share, right?
I'd say not right, for the following reasons: 1) the article never states that demand for IP addresses exceeds the supply of them, and 2) the article never claims such a lack of IP supply is the reason for shared addresses.
These may be your reasons for using a shared IP, but that doesn't mean they're everyone's reasons, or the only reasons, and it certainly doesn't mean that's what the article is written about.
> So in a hypothetical world where we are using IPv6 no one would need to share hence there would no longer need to be any collateral damage.
Being in this hypothetical world does not actually imply there are no more shared addresses; it just implies address demand isn't a reason for shared addresses. Things like convenience and scaling still drive sharing many services on a single IPv6 address in the real world.
> Except they deny this by comparing the size of the DNS namespace to the IPv6 address space and stating because it isn’t 1-1 that it doesn’t work.
They never really deny this, because they never considered the possibility that they were supposed to be confronting that argument in the article. The section is purely an answer to the question they pose immediately prior: "Here’s an interesting question: could we, or any content service provider, ensure that every IP address matches to one and only one name? The answer is an unequivocal no, and here too, because of a protocol design -- in this case, DNS." If you read the article as making the argument for why shared addresses are used, I can see where you're coming from, but the article never talks about the why; it just says they are used. The section is actually arguing that the design of the internet never intended names and addresses to have a 1:1 mapping, not that the lack of such a mapping is why Cloudflare (and others) use shared IPs.
That may seem an extraordinarily pedantic difference from what you said, but once you realize the article spends most of its time showing the internet was never designed with the intent for names to have dedicated addresses, rather than explaining why shared addresses are in use, it makes more sense and completely changes what that section is talking about.
> I’m not sure what you are getting at with your example of RAM - if I had 64 bit addresses then I would expect that all of them should be treated as valid, which they are. It is the same here. Not every address in a 128 but space would resolve to something valid, but any of them could.
Well, it's all a bit moot to bring up if you're not familiar with it, but the whole point of the analogy is that many people initially expect all 64 bits of virtual address space to be usable, then are surprised to learn that half of the added 32 bits aren't even valid as virtual addresses, and that sometimes even fewer than that are valid physical memory addresses.
Similar things happen in IPv6, but to an even greater extreme. The intent of having such a large 128-bit address space was not to have 2^128 usable addresses or anything like it; it was to make things simpler by spending vast swaths of the space on convenience or on getting around real-world scaling limitations. That is to say, the goal in expanding the address space so much wasn't to forgo things like shared addresses in favor of always-unique ones, even if one of the goals was to reduce address scarcity.
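To make the RAM half of the analogy concrete, here's a tiny sketch of the x86-64 rule that only 48 of the 64 virtual address bits are usable (the "canonical form" rule is real architecture behavior; the helper function itself is just mine for illustration):

```python
def is_canonical_x86_64(addr: int) -> bool:
    """On typical x86-64 CPUs only 48 of the 64 virtual address bits are
    usable: bits 63..47 must all match (sign extension), so half of the
    32 bits added over a 32-bit space aren't valid virtual addresses."""
    top = addr >> 47  # bits 63..47, seventeen bits total
    return top == 0 or top == 0x1FFFF  # all zeros or all ones

# The top of the low half and bottom of the high half are valid...
assert is_canonical_x86_64(0x0000_7FFF_FFFF_FFFF)
assert is_canonical_x86_64(0xFFFF_8000_0000_0000)
# ...but the enormous hole between them is not addressable at all.
assert not is_canonical_x86_64(0x0001_0000_0000_0000)
```

So the 64-bit label on the address bus oversells what's actually usable, which is the surprise the analogy is pointing at.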
Thank you for the thoughtful response. I'm having a hard time writing a thorough reply due to being on mobile, but I would like to ask for examples of when you would want multiple services (that could be considered collateral damage, so completely discrete entities) to share the same address. I think that would most clearly address my question of why IPv6 would not be sufficient on its own to bring the internet back to a state where IP blocking as a practice would not result in widespread collateral damage.
If that is too much of an ask, what are some examples of why we do it now? I ask purely out of curiosity.
As for the RAM thing, I believe I understand you, but I do not see how it relates. TFA says that IP blocking won't work even with IPv6 because the IPv6 space doesn't map 1:1 with the DNS space. That appears to be the wrong analysis; who cares if you aren't able to exhaust the IPv6 space anyway? The way I understand what you wrote is that the space isn't as big as labeled, but the bar I am measuring against is "functionally limitless".
After writing that, I think I can sum it up as: if the goal is limiting or preventing collateral damage from IP blocking, then IPv6 would fulfill that goal by providing functionally endless address space, because the natural inclination would be to use as many addresses as you pleased, so blocking one IP address or a set of them would typically only impact access to one web site.
P.S. I appreciate the attention to detail; caring about nuance is never pedantic, imo.
Any shared hosting is fair game for wanting it, really. Managing separate IPs for separate entities makes sense when the entities are managing themselves, but as soon as that becomes a shared task, suddenly 100,000,000 IPs' worth of websites can be done in 100 (per the article), and it's just damn convenient not to have to deal with them individually. Not to mention it's more flexible - you can route/load balance/fail over in arbitrary application logic instead of network reachability logic. An example of this: anycast gets you to the nearest data center (i.e. IP is reachability), and then the load balancer logic can dynamically spin up/down backend servers, with different addresses actually handling the traffic for those millions of sites throughout the day (i.e. name is identity), based on load or other factors. This takes the strengths of each layer and pairs them. Try to do that type of thing at scale with just unique individual IP addresses at the reachability layer and you'll end up with not only a giant network infrastructure mess but routing tables so big that putting a carrier-grade internet router at the top of each rack still wouldn't scale to the IP churn and table size. In all, it's just plain more work and more costly to do dedicated addresses, even if the addresses themselves are free.
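A toy sketch of that "IP is reachability, name is identity" split, using just the Python standard library (the site names are hypothetical, and a real load balancer does this far more elaborately):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# One shared IP serves every site; the Host header (the name) decides
# which content is served. Hypothetical tenant names for illustration.
SITES = {
    "alice.example": b"Hello from Alice's site",
    "bob.example":   b"Hello from Bob's site",
}

def route(host_header: str):
    """Map the requested name to its content; None means unknown site."""
    return SITES.get(host_header.split(":")[0])

class SharedHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = route(self.headers.get("Host") or "")
        if body is None:
            self.send_response(421)  # Misdirected Request
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To try it: HTTPServer(("", 8080), SharedHostHandler).serve_forever()
```

Both tenants are reachable at the same address and port; blocking that IP takes out everyone behind it, which is exactly the collateral damage being discussed.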
The RAM thing (and the "IPv6 doesn't map 1:1 with DNS" thing) isn't about whether or not there are enough addresses; it's about explaining why we use a large address space to enable something other than having the most addresses possible. That is to say, the intent in adding more addresses wasn't to turn IP into something that's supposed to be good at providing unique identity; it was to do other things. E.g. arguably IPv6 is really a 64-bit protocol: the lower 64 bits are really there for the convenience of the end subnet always being the same size (/64) and of easily encoding existing client info (e.g. a MAC address, via SLAAC) into the address. Even the largest network gear isn't designed to handle much more than ~16 bits' worth of unique IPv6 client endpoints across all subnets combined, yet a single subnet has 64 bits of client address space. A similar thing happens on the internet itself: when we advertise networks, it's never smaller than a /48, because we need to be conscious of how the internet routing table scales and of fitting it into hardware over time. Again, that's not to say there isn't some way to encode 10 billion services into IPv6 and have it work; it's just further proof the IP layer was never the one meant to provide this type of functionality, so we shouldn't try to shoehorn it in and should instead let it focus on being the reachability layer.
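The scale of that split is easy to see with Python's `ipaddress` module (the prefix below is the IPv6 documentation prefix, 2001:db8::/32, standing in for a real allocation):

```python
import ipaddress

# A single /48 - the smallest block normally advertised on the
# internet - carved from the IPv6 documentation prefix.
site = ipaddress.ip_network("2001:db8::/48")

# It contains 2^(64-48) = 65,536 /64 subnets...
subnet_count = 2 ** (64 - site.prefixlen)
print(subnet_count)  # 65536

# ...and each single /64 subnet holds 2^64 addresses of client space,
# far more endpoints than any one network will ever actually host.
one_subnet = next(site.subnets(new_prefix=64))
print(one_subnet.num_addresses)  # 18446744073709551616 == 2**64
```

In other words, the space is deliberately spent on fixed-size subnets and routing-table sanity rather than on densely packing unique endpoints.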
And that really boils down to the reasons for shared hosting: it's convenient, it scales better, and it's the better way to do things. One could choose to do things inconveniently, in a poorly scaling fashion, and with more limitations, and in exchange gain the ability to block by IP instead of by name, but that seems a horrible trade-off.
And to clarify, I'm not one of those anti-IPv6 nuts - I actually run a lot of IPv6-only infrastructure directly on the net through AS400503. Even though I have a /40 all to myself, enough for over 16 million /64 subnets, I still do shared web hosting with the v6 addresses because it's less work to do so.