What happened to her is tragic. However, I don’t think warnings or age verification would change anything. Kids are going to do things regardless of whether there is a warning or an age verification system.
I think the best thing we can do for our children is talk to them, and to start talking to them early.
You can do both. Not everyone will talk to their kids (lots of both useless and under-resourced parents out there), and guardrails are possible, so best not to throw up our hands and say "welp, the world is just a terrible place."
"There is a cost" or "I don't want to" are not reasonable excuses, depending on use case and regulatory regime you're operating under. It sucks, but there are many terrible people out there. Hopefully the EFF and ACLU can work to balance out regulation from government in this space.
(what sites access is gated by age is a distinct conversation)
It's not "the world is just a terrible place", but rather "the world inevitably has things that kids cannot handle". If you want digital entertainment for your kids, then seek out products which explicitly offer this. The unfettered Internet is a less appropriate babysitter than a red light district.
And talking about "age verification" as if it's some straightforward addition is an utterly dishonest framing. The core idea of the distributed Internet is the barest form of communication, on top of which further complexity/policy can be layered. "Age verification" actually implies the much more draconian and chilling meatspace identity verification.
Nobody has a problem with a DigitalKidsPlayLand which performs identity verification, strictly curates/moderates content, and escrows all activity for later review. It's this push to legally require such things for everyone, based on some idea that everything needs to be made kid-safe, that is horribly authoritarian and needs to be soundly rejected.
Your own link talks about the many downsides, not least of which entrenching the idea that website owners regularly demand government id from their users. No possible downsides to that...
There are always tradeoffs. There is no law that says website owners cannot demand ID already. We might have different belief systems and perspectives on the topic of safety and privacy as it relates to non adults and Internet accessibility, in which case we won't find middle ground. It happens. Democracy is messy. I encourage engagement regardless of your position on the topic. That is how we find (or at least attempt to) the least worst policy.
> There is no law that says they have to, thankfully.
Eight states as of this comment have legislation that has passed requiring age verification. Ten other states have introduced legislation that has not yet passed. (US centric)
> In 2022, Louisiana passed a law requiring the use of age verification on websites that contain a “substantial portion” (33.33%) of adult content. Websites must utilize commercial age verification systems that check a user’s government identification or “public or private transactional data” to confirm that a user is at least 18 years old. Louisiana’s law has sparked a flurry of copycat legislation to be introduced in state houses around the country.
There is at least the GDPR: if you have users in the EU, it requires a legal basis for collecting ID when it is mandatory in your registration process.
Basic age verification is pretty easy, no? I’m not sure about the details but this seems like a pretty low bar for a site like this. Not that I’m advocating it be required but just that if it were me I would not make something like this without at least making the best possible attempt at age verification.
Why wouldn't something based on unlinkable blind signatures work? Basically site issues a token to user, user gets token unlinkably blindly signed by some recognized age verification entity (government agency, bank) that already has their personal information, user returns signed token to site, site verifies it was signed by the recognized age verification entity.
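To make the idea concrete, here is a toy sketch of the RSA blind-signature flow described above (not any deployed scheme; the key sizes, the `token` value, and the party names are all illustrative assumptions, and a real system would use 2048-bit keys with a padded hash of the token):

```python
import secrets

# Toy RSA key pair held by the hypothetical age-verification entity
# (government agency, bank). Real keys would be 2048+ bits.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private signing exponent

# 1. Site issues a token to the user.
token = 42

# 2. User picks a random blinding factor r (coprime to n) and blinds
#    the token, so the verifier never sees the raw token value.
while True:
    r = secrets.randbelow(n - 2) + 2
    try:
        r_inv = pow(r, -1, n)       # fails if gcd(r, n) != 1
        break
    except ValueError:
        continue
blinded = (token * pow(r, e, n)) % n

# 3. Verifier, having checked the user's age out of band, signs the
#    blinded value: (token * r^e)^d = token^d * r  (mod n).
blind_sig = pow(blinded, d, n)

# 4. User strips the blinding factor, recovering a plain signature
#    on the token, and hands it back to the site.
sig = (blind_sig * r_inv) % n

# 5. Site verifies against the verifier's public key (n, e). The
#    verifier cannot link this token to the signing request.
assert pow(sig, e, n) == token
```

The unlinkability comes from step 2: since `r` is random, the blinded value the verifier signs is statistically independent of the token the site later sees.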
What is the "best possible attempt"? There was a checkbox added (possibly after this suit was filed) that said "I'm over 18 and understand I'm meeting random people". That's something every teen already clicks past constantly to see increasingly large swathes of the internet. Any actual "verification" seems quite difficult beyond just relying on self-attestation.
> What is the "best possible attempt"? There was a checkbox added (possibly after this suit was filed)
It was after the suit was filed (prior to the suit, AIUI, Omegle had an over-18 warning (with no confirmation) on the Unmoderated chat option, and a stated policy that users had to be 18+, or 13+ with parental permission).
Also, it may not have been because of this suit, there is at least one other suit that was found not to be barred by Section 230 (this one avoided S230 immunity because it is a product liability suit, not one contingent on their role as a publisher; the other one I've seen, IIRC, was found to raise a triable question of fact regarding whether Omegle's behavior was within the category of knowing involvement in trafficking that brought it out of S230 protection.)
It’s because the cops can show up and demand ID from everyone inside, so they have to make sure everyone has one.
In this case, they have no obligation to ensure everyone has ID on their person.
Can you sue a bar you used fake ID to get into?
My real question wasn’t whether there are kids on the system, but why they are allowed to sue when it was they themselves, and nobody else, who lied on the age verification question.
Yes, kids can and have sued because they got served alcohol while underage, even if they asked for it. The whole premise is that as minors they couldn’t understand the consequences, and weren’t fully responsible for their actions.
And establishments get shut down all the time for it.
Your link from an ID verification company says “it depends” with respect to fake-ID liability. I suppose there are sane places and crazy places in the world, for a limited time at least.
Only if they ask for ID, check it, and it looks so good no one could tell it was fake. That’s about as far from checkbox in a random website pop up as we can get though, right?
In your new example:
- is there a regulatory reason that it is illegal for them to serve someone named Bob? Or is there a real risk/harm that people named Bob would suffer, that they know about and that is predictable?
- did they do any of the checks they are legally required to do to prevent someone named Bob from accessing the service and thereby suffering that injury? Or, at a minimum, make a good-faith effort not to injure any Bobs?
If they didn’t, then yes, a Bob could sue if he managed to get through and get injured.