I thought it was interesting when Twitch partners started talking about a Twitch policy that seems to hold the partner responsible for moderating their own chat. That is, if you are a partner and community members post prohibited content into your Twitch chat, then you stand to pay the penalty through a ban or the loss of your partnership. You are forced to moderate your own chat, thereby relieving Twitch of having to do so (and presumably giving them a plausible argument that they enforce some level of site-wide moderation).
It was interesting to see the reactions of these streamers, since they aren't typical business people or legal experts. There was quite a debate among them about the fairness of holding the streamer responsible for random trolls who enter their chat. Considering the viewpoint of individuals instead of corporations did expand my view of responsibility and accountability.
Indeed. The problem with imposing liability on moderators is that they're already doing about as well as they reasonably can at a job that isn't easy. Nobody wants a platform full of spam and disinformation.
But it's an inherently difficult trade-off between heavy-handed censorship that catches too many dolphins in the shark net and not catching strictly 100% of the bad stuff. If you start imposing liability on the moderators, it forces the trade-off into all-or-nothing: either they give up and stop moderating altogether, or they have to murder all the dolphins, because now a single shark sighting puts them out of business and you can't always tell the difference.
It also eliminates the possibility for different platforms to experiment with making the trade off in different ways. Maybe The New York Times wants to have an editor read every user comment before posting it but Reddit has a stronger commitment to free speech. Shouldn't we have both and let the different readers make their choices? Isn't that better than locking in the same compromised criteria for everybody?
I think it's simple. The US postal service uses postal inspectors to try and identify packages containing narcotics[0], and yet we don't make them liable for the packages they miss. Any attempt to moderate undesirable content on a website should not then make you liable for the content you miss.
> I think it's simple. The US postal service uses postal inspectors to try and identify packages containing narcotics[0], and yet we don't make them liable for the packages they miss. Any attempt to moderate undesirable content on a website should not then make you liable for the content you miss.
The same thing that we do with every kind of organization. We demand adequate controls, incentives, whistleblower protection, a paper trail (auditability and accountability for the higher-ups), and documented internal policies that are actually followed. (And we demand that they pay for external audits, mock cases, red-team tests, and so on.)
This sort of works. And it can be made very effective. (Basically by criminalizing grossly negligent auditing and spending public money on enforcing it, the whole structure snaps into place eventually.) Currently financial organizations are allowed to be lax, because audits are lax, because the banking license authority doesn't give much of a shit.
If you're ignorant or malicious the same thing happens: the government kindly informs you that someone is using your website to break the law, and then you can either do something or become clearly guilty of knowingly supporting them. (The government knows you know because they know they told you.)
Law enforcement, typically tasked with enforcing laws, would not ignore the new cases. Individual citizens enforcing laws themselves is nice but overall we don't depend on vigilantes.
Parent presumably means if the person in the moderator position ignores new cases.
To which, it seems, a fair response would rely more on free-speech perspectives.
Essentially, can it be said that the majority of the platform is used for illegal activities? Or are the illegal activities a minority of some other, lawful activities?
I'd want a system where Reddit is in the clear, but CreditCardSkimmersChat is not.
> Parent presumably means if the person in the moderator position ignores new cases.
Which is what they're supposed to do. They're not the police, the police are the police.
> Essentially, can it be said that the majority of the platform is used for illegal activities? Or are the illegal activities a minority of some other, lawful activities?
This doesn't work because it ends up prohibiting the lawful things anyway. You have a platform that promises witch hunts and the innocent victims of witch hunts get evicted from there, and you have a platform that promises no witch hunts and those people go there but so do all the witches and then you get calls to shut it down because there are many witches. You're left with nowhere for the innocent victims of witch hunts to go.
> I'd want a system where Reddit is in the clear, but CreditCardSkimmersChat is not.
They tried this with SESTA and it was a monumental failure. It turns out when you do this, CreditCardSkimmersChat.com goes away and is replaced with CreditCardSkimmersChat.ru and all that does is make law enforcement's job harder.
You want CreditCardSkimmersChat.com to carry on existing, because that's the place you send your agents to camp out in the chat with a logger and execute a warrant to have their ISP capture all their traffic, and investigate anybody who shows up from your jurisdiction, and collect stolen credit card numbers to report to the credit card companies before they can be used for fraudulent purchases.
You don't want to shut it down because once you know it exists it's a honeypot with a previous reputation for not being a honeypot, and all shutting it down will do is cause it to reappear in Russia or on Tor or at a new site you haven't rediscovered yet meanwhile lots more credit card fraud is happening, which only makes law enforcement's job harder and less effective.
It's like finally discovering the phone number of the crime boss and calling the phone company and ordering them to disconnect their phone.
> That's supposing law enforcement has viable ways to leverage knowledge of its existence into investigation and prosecution.
Which they do, because that's their job and they do it all day long.
We're somehow talking about both end to end encryption and moderated public forums, but those are two different things. If you have a public forum where anybody on the forum can read the messages then anybody on the forum can read the messages -- including law enforcement. So they join the forum and start investigating in all the usual ways, and get a warrant to have the ISP upstream from the site start logging its traffic so they can start locating the users and getting warrants to bug their homes etc.
When you have end to end encrypted communications, the only people who have the message are the sender and the receiver. Then there is no dedicated CreditCardSkimmersChat site, they can use any generic secure messaging software for that. But this isn't any more difficult for law enforcement than criminals who communicate in person -- you still have to come by some reasonable suspicion of them to begin with somehow, and once you have you get a warrant to install bugs, which overcomes end to end encryption because then you're collecting data at one of the endpoints.
> Which they do, because that's their job and they do it all day long.
With varying degrees of success. Let's not pretend an encrypted-everywhere world (which includes e2e, onion routing, and other options) opens up like an oyster at the first sign of a warrant.
What would your opinion be if CreditCardSkimmersChat advertised itself as a place to meet other people involved in credit card skimming, with all conversation taken to e2e chats as soon as two people were introduced?
I'm not constructing a pathological case here. I'm really interested in how folks feel about the social responsibilities of serious encryption.
Not if it's end-to-end encrypted, which is exactly the topic here. With this bill it's highly likely that policing will be required of services providing end-to-end encryption too, which then makes that encryption impossible while complying with the "best practices".
Also, even if the content itself is 100% public, there's no way site owners can be expected to assign someone to monitor every discussion in real-time. Even just investigating content flagged by other users can easily turn into a full-time job for a moderate-size service.
Up until very recently, Twitch's moderation tools have been total garbage. I can only assume that they put the chat moderation stuff into their partner contract so they can get rid of undesirable partners easily. "We hate you, and here is a message that your moderators missed so buh bye." I also think that partners are well aware of the possibility of Twitch getting rid of them at any time with no recourse; this is why they heavily advertise their social media, Discord, and YouTube channels. You don't ever want to put all your eggs in one basket. If they have to move, it is likely that some of their audience will follow them.
Overall, I don't think Twitch is using chat moderation as much of a bargaining chip in practice. There are cases of streamers rallying their viewers to abuse other streamers. But in chat, really all you can do is have your bots spam racist messages, ASCII art, or "CUTE BOTS AYAYA CUTE BOTS". Links are mostly banned in practice (not by Twitch but with common extensions), Twitch moderates emotes (aggressively now, even for partners), and whatever hurtful things people say... most people opt out of by watching someone else. All of the stuff that is going to result in legal action tends to happen outside of Twitch (Twitter and Discord are cesspools of drama; read something like /r/OverwatchTMZ for examples).
I guess my point is, I don't think Twitch chat is going to be affected much by any regulations. I suppose if your chat consistently harasses minors, you could be banned. But that seems very, very uncommon even in the extremely toxic gaming community. When it does happen, it's incited by the streamer, and so the specifics of chat don't matter.
> I also think that partners are well aware of the possibility of Twitch getting rid of them at any time with no recourse; this is why they heavily advertise their social media, Discord, and YouTube channels.
I believe this is just good self-marketing, not paranoia. If you're a content creator in today's age, you're going to be trying to get traction on every major platform. It's like in SEO, make sure all your site pages are linked to each other, aka "Make sure people can access all relevant content". It's a way to increase retention and traction, not a fail-safe for being banned.
Plenty of people have complex opinions about 230, but it's a law that says: if you see a comment that defames you, sue the person who made it; it's got nothing to do with, e.g., whoever runs the skateboarding forum. Who opposes this? It's just codifying the common-sense understanding of the internet.
If I'm reading it right, the article agrees with you. It's saying that they want 230's protection, but the EARN IT Act will take it away from people doing end-to-end encryption, like Signal.
In legal terms, the Internet, especially the commercial Internet, is exceptional because it is young and has unprecedented reach. Previously you could sue a paper for what it printed, since it was thought obvious that its editors read and agreed with whatever they printed. And nothing before had reach comparable to the Internet today.
A better analogy is a community bulletin board, like at a library.
Just thinking aloud, what would I expect if my library’s bb was always covered in hate speech? Probably that the librarian would put it behind locked glass and moderate posts. Or take it down altogether. I’d hope for the former.
If you put it behind locked glass, people are going to stop posting much of anything on that bulletin board. And dealing with the rest is going to take up too much of the librarian's time, distracting from other duties. It's a reasonable approach for official communications from the library, but not for a community forum.
Sue John Doe, and ask the court to issue a subpoena to the forum for identifying information, and then to the ISP, and once you have that, add the account holder as a defendant to the suit.
It's not fast, and it's not easy, but such is life.
I'm really coming more from the perspective of valuing anonymity. I'd prefer a world where you simply can't sue the guy, and have to suck it up that people say things you don't like.
You may not have thought this through. Defamation isn't people saying things you don't like. It's people saying things which are untrue, and cost you your livelihood. You could invest a lifetime in building the respect of your peers only to have it snuffed out by a competitor running a smear campaign.
Most of that is only a problem because of the underlying assumption that if it weren't true you'd sue the person saying it for defamation. Which really makes the problem worse in the cases where you can't sue, because the poster is outside your jurisdiction, you can't identify the poster, lawyers cost too much, etc.
If anyone can post anything without repercussions, who would believe it without solid evidence?
So like, for example, the pernicious doxxing of anyone who dares to question left-liberal doctrine, who then loses their job through exposure and social shame? I'm glad to know you're on side with preventing that kind of defamation and smear campaign.
Whoever is running the proxy can also get a subpoena. If you run it yourself, the ISP will know who you are. Someone is paying to access the internet, so they probably have records.
> Someone is paying to access the internet, so they probably have records.
I doubt their complete customer list is going to help narrow down the search very much. They need to know who is paying to use the service; they don't need to know which user was responsible for a particular connection.
I mean, 230 doesn't entitle them to immunity from subpoenas for records about which IP address made a comment, and internet providers routinely translate those into real identities in response to valid legal requests.
But that might work. Maybe they don't keep records. How do you sue over defamatory information scrawled on a bathroom wall? Sometimes we don't have records for things. That's life.
I did the same a few weeks ago. Here's the automated response from Feinstein's office:
> Dear [Name]:
> Thank you for writing to me to share your concerns about law enforcement access to encrypted communications. I appreciate the time you took to write, and I welcome the opportunity to respond.
> I understand you are opposed to the “Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act of 2020” (S. 3398), which I introduced with Senators Lindsey Graham (R-SC), Richard Blumenthal (D-CT), and Josh Hawley (R-MO) on March 5, 2020. You may be interested to know that the Senate Judiciary Committee—of which I am Ranking Member—held a hearing on the “EARN IT Act” on March 11, 2020. If you would like to watch the full hearing or read the testimonies given by the hearing witnesses, I encourage you to visit the following website: https://sen.gov/53RV.
> The “EARN IT Act” would establish a National Commission on Online Sexual Exploitation Prevention to recommend best practices for companies to identify and report child sexual abuse material. Companies that implement these, or substantially similar, best practices would not be liable for any child sexual abuse materials that may still be found on their platforms. Companies that fail to meet these requirements, or fail to take other reasonable measures, would lose their liability protection.
> Child abuse is one of the most heinous crimes, which is why I was deeply disturbed by recent reporting by The New York Times about the nearly 70 million online photos and videos of child sexual abuse that were reported by technology companies last year. It is a federal crime to possess, distribute, or produce pictures of sexually explicit conduct with minors, and technology companies are required to report and remove these images on their platforms. Media reports, however, make it clear that current federal enforcement measures are insufficient and that we must do more to protect children from sexual exploitation.
> Please know that I believe we must strike an appropriate balance between personal privacy and public safety. It is helpful for me to hear your perspective on this issue, and I will be mindful of your opposition to the “EARN IT Act” as the Senate continues to debate proposals to address child sexual exploitation.
> Once again, thank you for writing. Should you have any other questions or comments, please call my Washington, D.C. office at (202) 224-3841 or visit my website at feinstein.senate.gov. You can also follow me online at YouTube, Facebook and Twitter, and you can sign up for my email newsletter at feinstein.senate.gov/newsletter.
She should have been voted out of office in the last election. Her performance as a representative for CA has been incredibly abysmal. The way she handled the Kavanaugh allegations really changed my mind about her.
She seems like an old time Senator, steeped in politics but out of touch with the reality of the state she represents.
Why is legislation like this being presented? It's simple: fewer people are looking. Of course, this bill is supposedly intended to do what we all want. The writing is on the wall.
Too many people sit around in their passive lifestyle and pass the caring off to someone else... doesn't matter who, just as long as they don't need to get off the couch and stop scrolling Fakebook.
If this passes, you'll see 1984 real soon. I wish more people would start taking this shit seriously. The government has been hijacked. We're all screwed, period. Stop kidding yourself.
The premise of 1984 was that the government COULD watch what you were doing at any given time. In certain respects we're already past that and technologies like Signal are trying to move us out of that Orwellian world.
> At a high level, what the bill proposes is a system where companies have to earn Section 230 protection by following a set of designed-by-committee “best practices” that are extraordinarily unlikely to allow end-to-end encryption.
As Signal diligently notes, EARN IT makes end-to-end encryption difficult, but not impossible. All relevant companies would like to avoid having to transition their current architecture to a design that fits the specification laid out by EARN IT. That's quite understandable, as it would bring a heavy cost, but if push comes to shove, that's what they're going to have to do. I expect we'll be hearing a lot more about this issue over the coming months. If this bill is passed, it will quickly be challenged in the Supreme Court.
Sorry, I could have stated this clearer. The point is that it basically gives law enforcement an extremely broad hammer for forcing service providers to design their systems however they want to help law enforcement, over their users. It would in practice make end-to-end encryption impossible to implement, not just difficult.
I see what you mean. That made me wonder what type of approach they would take for something that can vary so much and here’s what I found:
“EARN IT works by revoking a type of liability called Section 230 that makes it possible for providers to operate on the Internet, by preventing the provider from being held responsible for what their customers do on a platform like Facebook. The new bill would make it financially impossible for providers like WhatsApp and Apple to operate services unless they conduct “best practices” for scanning their systems for CSAM.
Since there are no “best practices” in existence, and the techniques for doing this while preserving privacy are completely unknown, the bill creates a government-appointed committee that will tell technology providers what technology they have to use. The specific nature of the committee is byzantine and described within the bill itself. Needless to say, the makeup of the committee, which can include as few as zero data security experts, ensures that end-to-end encryption will almost certainly not be considered a best practice.”
It seems that it would be in the best financial interest of large tech companies to try to overturn the bill if it's passed. This is why I believe it will quickly be brought to the Supreme Court.
Telegram does the same thing. In fact, so does Instagram, which I find most egregious, since it asks for your number for 2FA purposes then notifies anyone who has your number saved that you’ve joined.
Every coach, recruiter, drug dealer, or one-night stand I've had in my life doesn't need to know when I sign up for Instagram. Some of them might not even have had my real name until they got that notification.
IMO this should be illegal, now that I think about it.
I think the issue here is that Instagram has a different idea of the importance of a phone number than you do. They consider a phone number to be more personal information than your full name. Your phone number is also trivially tied to your full name anyway, unless you paid for a burner phone in cash wearing a ski mask.
For a government perhaps, but not your average person. This is similar to claiming that my ISP provided IP address is trivially tied to my full name because someone could subpoena it.
This is false. You are thinking of lookups via a carrier; but it turns out that your phone number is mapped to your name and address in hundreds of private databases that are sold to data brokers.
Getting name and email and address from phone number is widely available and cheap for anyone.
Hashing phone numbers isn't useful for privacy. You can test the entire space of 10^10 phone numbers against a list of hashes in hours, and you only have to do that step once.
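To make that concrete, here's a minimal sketch of the attack, assuming unsalted SHA-1 over E.164-formatted numbers (the scheme discussed downthread); the function name and area code are illustrative:

    # Why hashing phone numbers is weak: the keyspace is small enough to
    # enumerate. Assumes unsalted SHA-1 over E.164 strings.
    import hashlib

    def build_lookup(area_code):
        # Precompute hash -> number for one US area code (10^7 line numbers).
        table = {}
        for n in range(10_000_000):
            number = f"+1{area_code}{n:07d}"
            table[hashlib.sha1(number.encode()).digest()] = number
        return table

    # Reversing a "hashed" contact list is then just a dictionary lookup:
    lookup = build_lookup("234")
    leaked = hashlib.sha1(b"+12345553215").digest()
    print(lookup.get(leaked))  # -> +12345553215

Scaling this to all 10^10 numbers is embarrassingly parallel, and the table only has to be built once.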
This misses the point, just because someone sometimes added me to their phone contacts and uses signal does not mean I want them notified when I start using signal too.
I agree with your point, and I suspect that part of this comes from the Signal android app, which can be used as a replacement for all messaging (including SMS). Like Apple's Messages, it automatically upgrades from SMS to Signal when the recipient supports receiving messages on that platform.
A phone number does not proactively provide information. Should I have to use a professional suit and tie profile picture because my boss' boss might use Signal? What about LGBT people in potentially hostile environments? Etc
> This misses the point, just because someone sometimes added me to their phone contacts and uses signal does not mean I want them notified when I start using signal too.
It sounds like Signal is trying to solve a different problem than the one you have, so you should probably look for a different solution.
IIRC, Signal's goal is an easy-to-use, mass-market, E2E-encrypted replacement for SMS messaging. If they didn't automatically notify people's contacts, then most of them would probably continue to use SMS or FB Messenger, etc.
With Telegram at least, you do not have to share your contacts with the app. You can build up a Telegram-specific list of contacts based on who you message on the platform.
This is the inverse of the issue. I don't want everyone that has my phone number to be able to see/add me on telegram. That would require those users not to upload their contacts, which is out of my control.
Everyone can't. You can adjust Telegram's privacy settings so that you have to add someone to your Telegram contacts before they can discover you are on Telegram. This is a primary reason I prefer it to Signal, despite the screeching from armchair crypto experts on HN anytime Telegram is mentioned.
Specifically, they use the first ten bytes of SHA1(phoneNumber), where the phone number looks like +12345553215 for the US phone number 1-234-555-3215, or say +4424061184 for the number I had as a child in a village in England.
This form of number is also the one your (mobile) phone actually uses, although they let you type in any sloppy human attempt at a phone number and translate.
Signal apps periodically reach out to Signal's servers to do two things: Confirm that this specific user does still have Signal (and so messages to them should be accepted) and optionally upload a set of these hashes for their contacts to see if any of those have Signal and so messages to those numbers can go securely via Signal instead.
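For illustration, the hash step described above amounts to the following sketch (Signal's actual wire format and endpoints are not reproduced here):

    # First ten bytes of SHA-1 over the E.164 form of the number, as
    # described above. Illustrative sketch, not Signal's actual client code.
    import hashlib

    def truncated_hash(e164_number):
        # e.g. truncated_hash("+12345553215") -> 10-byte prefix of the SHA-1.
        return hashlib.sha1(e164_number.encode("ascii")).digest()[:10]

    print(truncated_hash("+12345553215").hex())

Note that truncation does nothing to stop the brute-force enumeration described earlier; it only introduces a small chance of collisions.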
How would a messaging app work without contact discovery? You try a friend's number, and see if the message goes through? Well if that's what you want, then you can do this for all your phonebook numbers, and all the ones that go through are on Signal, and all the ones that error are not. Oops, you've reinvented contact discovery.
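Put differently, any delivery API that behaves differently for registered and unregistered numbers is already a discovery oracle. A hypothetical sketch, where try_send stands in for whatever message-delivery call the service exposes:

    # Hypothetical probe: "try_send" is a stand-in for a service's delivery
    # call, assumed here to raise an error for unregistered numbers.
    def enumerate_users(phonebook, try_send):
        registered = []
        for number in phonebook:
            try:
                try_send(number, "hi")     # succeeds only for registered users
                registered.append(number)
            except Exception:              # delivery error -> not on the service
                pass
        return registered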
> How would a messaging app work without contact discovery?
"Hey, add me on telegram, my username is @andrewzah". This isn't a hard problem.
I don't know why we decided apps hoovering up our contact lists in exchange for convenience was so important. For an app that touts itself as private and secure, I still had to explain to my brother why giving it his contact list wasn't a good idea.
This is a hard problem. The evidence for this is the decades of failed attempts to get people to use PGP and other systems where I need to have a freaking party in order to figure out whom I can message, and how, before I actually start communicating.
I would argue that's a failure of PGP, not of sharing in general. People have less resistance to easy-to-use apps like WhatsApp, Riot, etc. versus something like PGP.
I guess I just don't value "contact discovery" as a feature?
I just don't see it as a casual thing the way the target users of these apps apparently do.
I want to explicitly control, per any form of communication, each person who is to be made to know that I operate that form of communication and whether or not that form of communication with me is open to them.
I do not want to open up a new app and have a large populated list of past acquaintances appear, I absolutely do not want to appear in such a list, I would rather not use a given app than risk having someone show up messaging me uninvited.
In order to do this, the Signal server must maintain a list of who is allowed to talk to whom; otherwise the list of people available on Signal can be obtained through enumeration.
This is a privacy trade off. Some services chose to keep this list. Signal chose to use phone numbers.
At least that requires the other person to have my contact saved, and actively try to reach me. I don't clear out my contact list frequently, so I don't want old contacts to be PROACTIVELY messaged about me joining Signal... if they are looking for me, fine.
Because it requires the customer to try to send a message to the contact. They would have to continuously do that if they wanted to be notified when I joined.
This is very different than Signal doing it automatically when I join.
A. You can design it in such a way so that sending a message to a non-user is indistinguishable from having a user see it and not reply/acknowledge it.
B. You can exchange identifiers with people with whom you want to communicate, just like with any other non-phone-based system: "Hey I'm @username on signal", "Cool, I'm @username2" -- composes well with method A.
Solution A does not compose well with how Signal does encryption. In order to make this indistinguishable, Signal would basically have to man-in-the-middle all non-existent users. And if one of those users signed up for Signal it would have to stop man-in-the-middling them, causing all the people who were talking with their ghost to observe a key change. It's complicated at best, and sketchy at worst.
Solution B ties into a bigger argument I won't address.
There would be no key change because there would be no initial key. Signal facilitates contact anyway; the only difference is that today the two sides have no ability to control with whom it takes place.
Messages to non-contacts would not be sent because there would be no one in your contacts to send them to; hence, indistinguishable.
I don't follow. I'm sending (or trying to send) messages to my contacts. If I know their phone number, I'm going to try to initiate a Signal conversation with them. So I ask the Signal server for a signed prekey. Your argument is that Signal should not respond with "I don't know this person" and should instead respond with something indistinguishable from a "real" response. So they must send me something that looks like a signed prekey, right? Well then I would use that in order to do a key exchange and now we're in the situation I described above.
The Signal server should respond with "I either don't know this person or they have not approved being contacted by you". You should only get a prekey if they are in fact a Signal user and have opted in to be discoverable by everyone, or only by select people including you.
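As a rough sketch of that uniform-response idea (names are hypothetical, not Signal's actual API), the server collapses "no such account" and "no permission" into a single opaque error:

    # Hypothetical server-side check: the requester cannot distinguish
    # "number not registered" from "registered but not discoverable by me",
    # so probing reveals nothing.
    class DiscoveryError(Exception):
        pass  # one opaque error covers both branches

    def fetch_prekey(accounts, requester, target):
        record = accounts.get(target)
        # Same exception whether the account is missing or merely private.
        if record is None or requester not in record["allowed"]:
            raise DiscoveryError("not available")
        return record["prekey"]

    accounts = {"+12025550143": {"allowed": {"+12025550199"}, "prekey": b"\x01" * 32}}
    fetch_prekey(accounts, "+12025550199", "+12025550143")    # returns the prekey
    # fetch_prekey(accounts, "+12025550100", "+12025550143")  # DiscoveryError
    # fetch_prekey(accounts, "+12025550199", "+12025559999")  # same DiscoveryError

One wrinkle: response timing and message queueing would also have to be made uniform, or they become side channels.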
Ah, ok. So in that case, people would probably have to be uncontactable by default (otherwise we'd be back to the current universe). Now you'll have to explicitly opt in to being contacted by every new person you meet. This is fine for some, but it's a massive usability tradeoff. Think about all the non-technical people that Signal is intended to be used by, and imagine trying to teach them about Safety Numbers and a necessarily opt-in contacting scheme.
Nothing necessary about it - those people (also all people) would be presented with a screen upon installation asking for their preferred discoverability policy - do they want to be contactable by: everyone in their phone contacts, only specific people from their contacts, only people they approve ad-hoc.
Personally, I have admitted defeat and accepted that most people will not use my favorite messaging app. So I just default to SMS and ask people I talk to often if they have (or would consider installing) my favorite app. At this point, it's only slightly less convenient to exchange usernames.
The contact prepopulation isn't even that useful. Telegram in particular seems to drop notifications if you don't exclude it from power management on Android. If you don't know the user has notifications turned on and has jumped through the hoops to make them actually work, you may as well send messages into a black hole. This is part of why I never assume it's a good way to contact someone without explicitly asking.
Perhaps the parent is downvoted not because of the underlying content of the message, but because of the black/white framing.
Every tool has a barrier to use, and even with a strong commitment to security/privacy, Signal seems to have decided that letting users know which of their friends are on Signal is worth more than the potential for abuse. Further, there may come a time when this calculus changes, for example if spammers arrive on the Signal platform and start decreasing usability.
The creators of Signal surely believe that promoting the more widespread use of Signal itself benefits their users. They're not a for-profit company that exists for some other purpose.
Contact discovery is not seen as privacy-invasive, I guess. It says X has Signal, but that's it. So I see how it is, strictly speaking, broadcasting 'private' information, but it is hard to care terribly. I'm far more concerned with the privacy of my conversations than with the fact that I at one point installed Signal.
"Using this service, Signal clients will be able to efficiently and scalably determine whether the contacts in their address book are Signal users without revealing the contacts in their address book to the Signal service."
This is why Signal gets so much benefit of the doubt from the cryptography/security/privacy community. Their default approach to these problems is conservative in favor of the user until they can invent the technology needed to support a feature with security/privacy.
> They invented a way to do contact discovery in a secure way:
Their solution is to run contact discovery on a DRM Secure Enclave system. Ironic that their privacy solution is to use a technology that privacy advocates say is the spawn of Satan because it hands over control of your machine to Intel.
The Signal App, which is open source, periodically sends truncated cryptographically hashed phone numbers to the Signal server, which is also open source. The server does not store the truncated hashes that the app sends to the server. So, they only temporarily have partial hashes which they do not store or share with anyone.
See, there's an apparently archaic concept in software called user preference - they could ask people upon joining if they want to be contact-discoverable or not.
And that's fair! It's not perfect. And, ignorantly, I would assume it'd be easy to add, so they probably should.
But I can understand why this is a trade-off they'd make in terms of your comfort level (it's relatively rare to care about this) vs. massive usability and onboarding gains.
What you call "comfort level" is in fact the primary value proposition of a tool like signal and the fact that they have chosen to compromise on it to drive adoption is symptomatic of the many things wrong with the setting in which this decision was made.
Not unlike calling out the EARNIT senators on using child abuse justifications in bad faith to promote the agenda of censorship and surveillance.
(Four hours after your comment and it wasn't just downvoted: it has now been flagged, marked dead, and is unavailable for anyone not logged in and you can no longer reply to it.)
EARN IT is pretty disingenuous in how it is designed, of course, but I am all for making it harder and harder to retain Section 230 immunity: It's a mistake that we allow it in the first place.
We should indeed continue to erode the eligibility for Section 230 to the point that either the limitations of remaining eligible for immunity makes it easy for competitors to produce better offerings without immunity, or that these companies accept legal responsibility for their actions as a cost to doing business the way they want to. Perhaps this is a vehicle upon which we gradually sunset immunity-reliant platforms.
Section 230's supporters constantly push hilariously insane narratives about its importance, suggesting that without it companies would be inherently violating the law any time one of their users violated the law, or that taking reasonable measures to prevent platform abuse is "impossible" at the scale Big Tech operates at.
It's more than past time that we regulate tech companies and hold them responsible for massive abuses permitted by their platforms just as we regulate every other sector of business.
Regulation of this sort generally just helps the incumbent players create a better moat around themselves. They can pay for the AI and humans to moderate things while newcomers can't. So it's question of trading of user benefit against giving even more power to Big Tech.
This is the standard scream of incumbent players when they want to discourage regulation. It ignores both that what's "reasonable" for an incumbent monopoly and for a small startup are different, and that the law generally accounts for scale.
Not in a useful way I've noticed, because a small company can serve hundreds of thousands of users easily thanks to the power of the internet. CCPA, for example, essentially sets the cutoff at 50,000 users, which you can reach pretty quickly with a consumer startup. The cutoff helps the local pizzeria, I guess, but not any actual competitor to incumbents.
This is an interesting counter-counter-argument I've not seen before. Does discussion of this sort of derivative behavior exist elsewhere? I.e., is there an established narrative of incumbents pushing against regulatory capture, or examples of this behavior?
It's just a general behavioral trend I (and plenty of others) have noticed in arguments against regulation coming from monopolies. When a big tech company claims a regulation it dislikes would hurt newer players from competing with it, you have to ask... why are they so opposed then?
Is it out of the goodness of their hearts that large companies complain about regulation hurting small businesses? Or is it because the regulation will cost them a ton of money they'd rather keep in the bank, and they know they already have enough market capture to continue to obliterate small businesses either way?
When someone says that regulations on large companies will actually hurt small businesses, the first thing you should do, is look who is claiming that, and see where they get their funding from. It's almost always a think tank funded by the biggest player in the market being discussed.
> When a big tech company claims a regulation it dislikes would hurt newer players from competing with it, you have to ask... why are they so opposed then?
First off, I haven't seen big companies making this argument. Can you point us to a high quality source where one does?
Secondly, suppose the size of a market is X and BigCo has a 0.9X slice of it. Suppose the cost of complying with new regulation is 0.5X. Now it is impossible for anyone else to compete with BigCo, since there is no way for second place to be profitable in the market. However, BigCo is still worse off. Their revenues are now X, but their profits are now -0.4X compared to what they were before, since they gained 0.1X revenue and lost 0.5X in compliance costs.
So, if a BigCo were to make such an argument, that's at least one case where they could deploy it honestly while at the same time having their own best interests at heart.
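Plugging in the numbers from that scenario (market size normalized to 1.0) confirms the arithmetic:

    # Worked check of the hypothetical above, with market size X = 1.0 and
    # revenue standing in for profit (other costs held constant).
    X = 1.0
    before = 0.9 * X               # 0.9X market share, no compliance cost
    after = 1.0 * X - 0.5 * X      # whole market, minus 0.5X compliance cost
    print(after - before)          # -0.4 -> BigCo is 0.4X worse off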
There is a difference between short-term and long-term impact. Short term, regulation costs money, and a large player doesn't want that since it'd hurt their stock price. Long term, they probably benefit from it, but Wall Street doesn't care as much about that.
Does it "generally account for scale"? Citation needed. The GDPR has a fine structure of up to 4% of world-wide turnover or €20 million. Whichever is HIGHER. That means for any company doing less than say, €20 million in revenue and found to be non-compliant, GDPR gives the legal authority to fine them out of existence. I only mention GDPR as a specific example because of the familiarity here, but the general pattern of non-scaling regulation that results in regulatory capture and monopolization is the norm, not the exception.
The fines must be effective, proportionate and dissuasive for each individual case. For the decision of whether and what level of penalty can be assessed, the authorities have a statutory catalogue of criteria which it must consider for their decision. Among other things, intentional infringement, a failure to take measures to mitigate the damage which occurred, or lack of collaboration with authorities can increase the penalties. For especially severe violations, listed in Art. 83(5) GDPR, the fine framework can be up to 20 million euros, or in the case of an undertaking, up to 4 % of their total global turnover of the preceding fiscal year, whichever is higher. But even the catalogue of less severe violations in Art. 83(4) GDPR sets forth fines of up to 10 million euros, or, in the case of an undertaking, up to 2% of its entire global turnover of the preceding fiscal year, whichever is higher.
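For reference, the Art. 83 "whichever is higher" cap described above reduces to a one-liner (amounts in euros; actual fines are set case by case below this cap):

    # Maximum-fine structure of Art. 83 GDPR as quoted above:
    # severe tier: max(20M, 4% of global turnover);
    # less-severe tier: max(10M, 2% of global turnover).
    def gdpr_fine_cap(global_turnover_eur, severe=True):
        floor, pct = (20_000_000, 0.04) if severe else (10_000_000, 0.02)
        return max(floor, pct * global_turnover_eur)

    print(gdpr_fine_cap(5_000_000))       # small firm: the 20M floor dominates
    print(gdpr_fine_cap(10_000_000_000))  # large firm: 4% of turnover = 400M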
You’re conveniently ignoring that the percentage-based fine structure in itself is almost literally “accounting for scale”.
The minimum (of the maximum) set by the “whichever is higher” clause is needed to remain effective with non- and low-revenue entities. Something like Clearview (universal face recognition but startup with little revenue) would otherwise be free to ignore the law.
If your small company does enough damage to warrant a 20 million fine, it probably deserves to die. These fines also aren’t assessed arbitrarily: there’s a specific list of factors to take into account, and all decisions are subject to judicial review under the established principles of proportionality.
> These fines also aren’t assessed arbitrarily: there’s a specific list of factors to take into account, and all decisions are subject to judicial review under the established principles of proportionality.
You hope bureaucrats will not act mechanistically and will actually apply proportionality, but over and over again in recent history you see the opposite behavior. Which is why no business trusts a statement of "they might be nice, but nothing effectively stops them from not being nice other than some platitudes"!
The GDPR does not account for scale in practice; it pretends you're a $300 million business with the resources to do proper GDPR compliance, which is why the response of many small businesses is to stop serving Europeans. And these businesses had nothing to do with privacy invasion: classes you pay for, paid note-taking apps, and so on.
While in theory you are right, there is a distinction.
GDPR makes business more difficult, but it codifies a human right to privacy that, much like worker safety, should have been implemented anyway.
The present regulation does the opposite. It ultimately implies that any company that does not comply with the insane US post-9/11 laws on large-scale spying, extrajudicial sanctioning, and, arguably, racial and ethnic discrimination will be sued into the ground.
Scary, especially for non-US citizens, who in practice are not afforded judicial or constitutional rights in the US.
Whataboutism regarding the GDPR doesn't address any of the criticism of it; I wasn't talking about the EARN IT Act in my comment.
I bet many here agree that those US laws and actions are bad too. I really do care about privacy myself, more than the typical tech worker, judging by my coworkers' actions compared to mine.
If there were a GDPR-casual or GDPR-lite for small businesses, kind of like small-business taxes, where hiring an accountant once a year for a few thousand euros was enough, then I don't think people would be up in arms about the GDPR.
If GDPR-casual were a good-faith level of action, where all you did was keep a log of privacy-data deletion requests (like a copy of the email and a copy of your reply saying you did it) and then performed a best-effort cleanup, I don't think anyone would care. It's the very large compliance burden that GDPR imposes, with liability gotchas everywhere if you don't implement bureaucratic detail #2929, that puts people up in arms. And if you say it isn't like that, it shows you haven't really looked at what the GDPR requires or tried to implement it in your company.
> Section 230's supporters constantly push hilariously insane narratives about it's importance, suggesting that without it companies would be inherently violating the law any time one of their users violated the law,
Can you explain in your own words what you think Section 230 actually does? Because yes, without it, that was very much the case (see Stratton Oakmont, Inc. v. Prodigy Services Co.) unless the company decides not to moderate at all, which is not an Internet that most of us want.
It would certainly change the face of the modern web. In the early 2000's, many blogs and news sites did not have comment sections.
Maybe a service like a third-party Disqus would come out that would split the user-generated content from the actual sites (and source the content via P2P networking).
The fact that your proposed scenario is scary / unpleasant / difficult does not automatically make the alternatives better, as we have learned over the last two decades. We need to seriously consider that perhaps these things are not actually simple unless you ignore the consequences.
No, and they wouldn't be by any informed understanding of the law. That's not how the law has ever worked in any developed society.
Generally, law has both the concept of intent and the concept of reasonableness. As such, when a company inadequately polices malicious and abusive content because that content is wildly profitable (hi Google and Facebook), we should have the legal ability to fine it into oblivion, because its behavior is not reasonable and the intent behind it can be divined from its records.
Meanwhile, if you, an individual with a blog, see someone making a bad comment on your blog and you ban that person, the law would recognize that as a pretty reasonable moderation practice.
> No, and they wouldn't be by any informed understanding of the law.
You are misinformed about the history of 230. 230 was proposed exactly because the law was interpreted the way you're saying it wouldn't be.
From Wikipedia below, added emphasis mine:
> This concern was raised by legal challenges against CompuServe and Prodigy, early service providers at this time. CompuServe stated they would not attempt to regulate what users posted on their services, while Prodigy had employed a team of moderators to validate content. Both faced legal challenges related to content posted by their users. In Cubby, Inc. v. CompuServe Inc., CompuServe was found not to be at fault as, by its stance of allowing all content to go unmoderated, it was a distributor and thus not liable for libelous content posted by users. However, Stratton Oakmont, Inc. v. Prodigy Services Co. found that as Prodigy had taken an editorial role with regard to customer content, it was a publisher and legally responsible for libel committed by customers.
> [...]
> United States Representative Christopher Cox (R-CA) had read an article about the two cases and felt the decisions were backwards. "It struck me that if that rule was going to take hold then the internet would become the Wild West and nobody would have any incentive to keep the internet civil", Cox stated.
---
It's become increasingly popular for people to say that Section 230 was a mistake. Usually they support that with claims that concerns about its repeal are purely theoretical fearmongering, despite the fact that we literally have case precedent on the books right now about what the Internet would look like without Section 230, and how the existing laws were being interpreted.
When people raise concerns that without Section 230 the Internet would be divided up into completely unmoderated platforms and aggressively curated gatekeepers, that's not fearmongering. It's history.
Ironically, the only websites that wouldn't be affected by a repeal of Section 230 are the completely unmoderated hellholes we want to discourage online, because they have CompuServe's precedent and the First Amendment to hide behind.
So this is the thing that is really confusing me: isn't Signal like CompuServe? Signal doesn't moderate my content and in fact can't; so why would a repeal of Section 230 matter to Signal? And like, yes: maybe the people at Signal personally care... but that's not how this article is written. I feel like most of the people who are super knee-jerk pro-230 are ignoring this precedent you have pointed to of CompuServe: if you build something that really and truly is a distribution platform, shouldn't that be OK?
I think so, at least in theory. In practice, I suspect that would eventually get challenged in court. But (IANAL), I also suspect that you're right, and a platform like Signal would fall under the same category as CompuServe and could make a strong argument for itself using that case.
Here's where it gets tricky though -- Signal is kind of an anomaly, and there are a lot of platforms being built that both moderate content and incorporate E2E encryption. Matrix is the prime example, but even non-obvious platforms like Mastodon are talking about e2e encryption for DMs. To get a really good fediverse rolling, or even just to encourage platforms like Facebook to start using more zero-knowledge encryption, we need the ability to use E2E encryption alongside moderated content.
Pure distribution platforms are rarer than people think. I'm not particularly worried that ending Section 230 will be a disaster for private, closed, encrypted channels. But most of the best parts of the Internet happen in public channels and semi-open communities, and getting rid of 230 would have a really big negative impact on the general discourse within those communities and the freedom of like-minded people to get together and form communities online without a fear of lawsuits.
That being said, I think Signal does itself something of a disservice by not strongly asserting it's a pure distribution channel. They could talk about how this is dangerous for encryption overall while still advocating that the law wouldn't apply to someone in their position. We can simultaneously say that repealing Section 230 would be really bad for online communities, but not existentially bad for closed communication channels like Signal.
And purely from a strategic point of view, we should be interested in saying things like that, because if Section 230 does get repealed it would be very nice to have a fallback position that's already been articulated and made clear to Congress and general audiences, and that preserves at least some encryption.
But, Signal has their own set of real lawyers, so it may be that they disagree that CompuServe would apply, or it may be that they think that Congress would just keep challenging them until it found some attack that worked, or it may just be that they think aligning themselves alongside Open platforms like Matrix is more valuable than making a case that they would be exempt. I'm not going to pretend to know what's going through their minds.
I personally think you would be surprised at how much of what we currently have could continue to work in a world without Section 230. Right now, people are just taking a cheap shortcut of "let's just hire some moderators to moderate it", and enjoying it as it gives them control over narrative (letting them choose when to apply a firm hand in moderating and when to be lazy about it: there are just so many examples of companies abusing their moderation power in ways that have nothing to do with politics, along with issues of both subtle and not-so-subtle racism and misogyny--such as bans on photos of women breastfeeding--being perpetuated by the current system).
I bet most of what we have right now could continue to work, albeit with pretty major architectural changes to the web... ones which admittedly might not still be conducive to large players extracting rent for hosting and organizing everything (maybe with more decentralized client-side mechanisms as opposed to centralized server-side mechanisms for helping people navigate content). And what doesn't translate was maybe not worth preserving in the first place.
Either way, it seems to me like we should be having an honest conversation about the details of what we have, what we like, and what we need to keep pulling it off, so we can figure out what the tradeoffs are. This article from Signal, equating a loss of Section 230 with somehow not being able to have end-to-end encryption, is the exact opposite of that: it is more misinformation being thrown at an already giant mess of misunderstanding.
> I personally think you would be surprised at how much of what we currently have could continue to work in a world without Section 230
Hacker News wouldn't.
I advocate for digital rights online; particularly the Right to Communicate[0]. But the Right to Communicate goes hand in hand with the Right to Filter[1]. Human moderation isn't a shortcut, it's the backbone of small, cozy forums and independent sites. Human moderation on a personalized scale is what makes smaller communities so much nicer than giant algorithmically curated platforms like Twitter or Youtube.
The way we marry the Right to Filter and the Right to Communicate is with systems like the Fediverse that make it easy for people to form new communities on the fly, to join and leave existing communities without any pain or fuss, and to copy their content around or download it out of data silos whenever they'd like to. While we give users that convenience, we also recognize that communities have an inalienable right to organize themselves and filter the content that they host and see. In this way, the Right to Communicate and the Right to Filter reinforce each other, filling in the problematic gaps and abuses that either right would have in isolation.
Section 230 is what makes that possible. Decentralization isn't magic. The law and the DOJ will attack community organizers and label them as publishers regardless of whether or not they are personally hosting the content in their communities. It doesn't matter what architecture you use; if you're going to have an open community someplace, that community needs to be able to enforce its own rules and norms. And without Section 230, it will be liable if it attempts to do so.
And even outside of the Fediverse, so much of the Internet depends on this.
To hear you very lightly say something like:
> and, what doesn't translate, was maybe not worth preserving in the first place
I'm almost not sure how to respond to a claim like that. HN isn't worth preserving? IRC channels aren't worth preserving? Matrix isn't worth preserving? Self-publishing storefronts, independent forums, and comment sections on blogs aren't worth preserving? Email isn't worth preserving?
> Signal equating a loss of Section 230 with somehow not being able to have end-to-end encryption
For Signal, no, maybe not. For a lot of other services, including the vast majority of the Fediverse, yes. I think your reading of Signal's status as a distributor is pretty reasonable. But don't jump from that reading to saying that this won't have an impact on encryption.
Signal is a zero-knowledge, closed communication platform. It's not decentralized, it has essentially no moderation of any kind, and it has no communities of any kind. An open community with its own norms, memes, and content standards is not zero-knowledge about the content it's hosting. A law that meant only closed, blind systems like Signal could make use of E2E encryption wouldn't eliminate all encryption, but it would prevent a large number of platforms from using encryption to make themselves more private and more secure.
But in a world where we feel it was backwards that moderators were punished and unmoderated platforms weren't... Congress decided "let's just make everyone immune" was the right way to go?
And again, I think the examples here are missing the same concept that Section 230 fails to recognize: Profit, as I discussed here: https://news.ycombinator.com/item?id=22816016 It seems like the author of Section 230 failed to recognize we're in a capitalist society when this regulation was drafted.
When platforms take a cut of illegal activity, as Big Tech platforms do when they operate ad networks, courts would have to agree that any platform party, regardless of whether or not they currently moderate, should be held to some manner of responsibility.
Right now, when an old lady clicks a Google search result for "mapquest", clicks the top link for "Maps Quest"[0] because Google ads aren't distinguishable from real search results to the untrained eye, is pushed to install a browser extension (from the Chrome Web Store) that hijacks her browser's new tab and search, injects malicious ads, and scrapes her private info to relay to an attacker, Google makes money. And is wholly protected by Section 230 for that activity and unable to be held responsible for refusing to delist the malicious ad.
In what world is that the right legal position?
[0] (This is a very real world example, I've done a lot of senior citizen tech support, and this is how 90% of them get owned.)
I don't like this malware example. Yes Section 230 protects Google from that and yes google is in a position of trust for the content they serve up but there's something wrong with your stance.
The point in your old lady's chain of actions where a law was and should be considered broken was when the malware ads were injected, not before. You can't go that far up the chain, there are too many proxies, too many people with intents that are not obviously malicious. People should be given the benefit of the doubt in most cases.
In addition, in the profit explanation you linked to, you stated that if a service can't scale up human interaction to match complaints, then that service shouldn't exist. That's laughable. Doing so would make service owners so vulnerable to automated complaints that legitimate ones would never make it through, and that goes for businesses up and down the scale. What your proposal ends up doing is creating a non-anonymous internet by necessity.
That example has absolutely nothing to do with Sec 230. Google’s ad design is all on Google. If it were illegal, Sec 230 wouldn’t protect them. And while Google might be protected against liability for Mapquest’s business practices, Mapquest isn’t. If their behavior is harmful and illegal, they are liable.
MapQuest did nothing wrong in this example. The problem is the fake sites that are taking the top spot in search results above the legitimate MapQuest link when you search Google for MapQuest, and Google refuses to delist them. And of course, Google lets people buy ads for other companies' trademarks, which is a whole different ball of issues.
(MapQuest is a popular one for malicious sites to pretend to be because most of the people searching for it are seniors... they heard about it twenty years ago and then never moved on from searching for it when they want directions somewhere.)
The moment someone points out that Google makes a huge amount of money on scams and malware and that, due to Section 230, it can't really be held responsible for any of it.
Fining Google doesn't solve the problem there. You would want to work with Google to find out who made the deceptive ad and deal with them, so they can't go on to hurt more people.
To follow up, we've also tried going in the opposite direction from 230 more recently with SESTA/FOSTA.
From that Wikipedia page, some of the current effects (again, emphasis mine):
> Craigslist ceased offering its "Personals" section within all US domains in response to the bill's passing, stating "Any tool or service can be misused. We can’t take such risk without jeopardizing all our other services." Furry personals website Pounced.org voluntarily shut down, citing increased liability under the bill, and the difficulty of monitoring all the listings on the site for a small organization.
> The effectiveness of the bill has come into question as it has purportedly endangered sex workers and has been ineffective in catching and stopping sex traffickers. The sex worker community has claimed the law doesn't directly address issues that contribute to sex trafficking, but instead has drastically limited the tools available for law enforcement to seek surviving victims of sex trade. Similar consequences of the law's enactment have been reported internationally.
> A number of policy changes enacted by the popular social networks Facebook and Tumblr (the latter having been well known for having liberal policies regarding adult content) to restrict the posting of sexual content on their respective platforms have also been cited as examples of proactive censorship in the wake of the law, and a wider pattern of increased targeted censorship towards LGBT communities.
----
Now, this kind of effect doesn't get as much mainstream attention because people are primed not to think of sex censorship as "real" censorship. But again, we have examples on the books of what happens to legitimate services (both large and small) when laws like this get passed. It's not fearmongering; it's history.
People have these assumptions that laws are going to be reasonably applied -- that's not a safe assumption to make if you pay attention to the history of these laws.
I'm largely unsympathetic to those arguments for the same reason that I'm unsympathetic to all of the lawmakers saying, "well, this time when we regulate encryption it will be different." We have a number of examples of how this can go wrong (and has gone wrong). If somebody wants to propose that it'll be different the next time we weaken 230 or add exceptions, then the onus is on them to provide compelling evidence as to why it's going to be different this time.
What makes you certain that the policies you propose won't have the same effect as FOSTA/SESTA?
----
As to why these laws primarily affect platforms that are already trying to moderate and not free-for-all hellholes, that's in part because of existing case law around the difference between a publisher and a distributor.
From Wikipedia's entry on Compuserve's case (once again, emphasis mine):
> The court held that "CompuServe has no more editorial control over such a publication [as Rumorville] than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so."
Bills like SESTA/FOSTA have managed to pass without a lot of opposition because, again, people are primed to think that sex censorship isn't real censorship. But where more mainstream content is concerned, you should understand that proposing punishments for distributors is a pretty big change to existing libel/speech laws. Big enough that I don't even feel comfortable speculating on what the legal challenges or possible effects would be. That's a radical departure from how we currently think about speech in the US, not just on the Internet but in physical/print spaces as well.
"Unreasonable removal" isn't actually much of a concern here under our current legal doctrine: As these companies are private entities, they can decide that they simply don't want this or that on their platform, and that can be as unreasonable as they like.
Presumably, platforms that profit off user content already have a financial incentive to allow as much user content as they can; Section 230 only removes the financial incentive to remove bad content. Removing Section 230 will restore the balance: companies will still be motivated to keep as much non-abusive content as they can, but will face legal challenges if they fail to remove abusive content.
(There's an argument to be made that Facebook and Google represent "public spaces" in the modern Internet era, but we currently have no legal precedent for applying first amendment rights to privately owned properties. Either we'd need a huge legal shift to apply the first amendment to private spaces or we'd need to nationalize online platforms.)
> Section 230 only removes the financial incentive to remove bad content.
Please go read the actual law. It’s neither long nor complicated.
Section 230 corrected a problem in other law that made it dangerous to even attempt to moderate content. Before it became law, websites basically had to choose between not moderating at all, or assuming liability for all content.
Hell, I’ll just quote the relevant part in full:
> (2) Civil liability
> No provider or user of an interactive computer service shall be held liable on account of—
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;
I can understand companies not being protected when they profit off the ads that run before viral lies (Facebook, YouTube), especially when the companies have a hand in spreading those lies with algorithms that promote addiction.
But if they're not promoting the content, and aren't profiting from it in a different way than other content, we can't hold them responsible. These providers create platforms. Would you hold CVS responsible for selling me the tape/sharpie/poster board to make a racist sign?
If I create a twitter clone, post it online, and it somehow blows up overnight with child porn and terrorism, why do I deserve to be punished?
"Would you hold CVS responsible for selling me the tape/sharpie/poster board to make a racist sign?"
No, but I'd hold CVS responsible for displaying the sign in their stores.
"But if they're not promoting the content, and aren't profiting from it in a different way than other content, we can't hold them responsible."
That's the biggest issue Section 230 fails to account for: these companies are profiting off it. When Google or Facebook take down content, they still keep the profits they made from advertising it. Some of the longest-running ads on high-traffic search terms on Google distribute malware, and Google refuses to delist them because of the amount of money they bring in. Facebook refuses to restrict blatant lies in political ads because those ads make it a huge amount of money.
Section 230 is a failure because it removes any financial incentive for platforms to moderate responsibly. If we were to replace Section 230, rather than removing it entirely, we would need a solution that makes it inherently expensive to host bad content, such that platforms are strongly incentivized to hire qualified staff to moderate and manage content.
If I report harmful content on Twitter or Facebook or Google, we need a system that ensures I receive a non-automated, competent response, and that the company is legally responsible for the decision they just made, such that they can't pawn it off on an algorithm or someone making 5 cents an hour.
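To make that concrete, here's a minimal sketch of what an auditable decision record could look like. This is just my own illustration in Python; every name and field is invented for the example, not any platform's actual API:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ModerationDecision:
        """One human-accountable ruling on a reported piece of content."""
        report_id: str    # the user complaint being answered
        content_id: str   # the content that was reported
        reviewer_id: str  # a named employee, not "automated system"
        action: str       # e.g. "removed", "kept", "restricted"
        rationale: str    # written justification, sent back to the reporter
        decided_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    def close_report(report_id: str, content_id: str, reviewer_id: str,
                     action: str, rationale: str) -> ModerationDecision:
        # A report can only be closed by a named reviewer with a written
        # rationale; the resulting record is what a court or auditor sees.
        if not reviewer_id or not rationale.strip():
            raise ValueError("a named reviewer and written rationale are required")
        return ModerationDecision(report_id, content_id, reviewer_id,
                                  action, rationale)

The point isn't the code; it's that the record binds a named human to each decision, which is the thing an appeals process or a court could actually act on.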
> Section 230 is a failure because it removes any financial incentive for platforms to moderate responsibly.
What in the world does "moderate responsibly" mean? It's their site; they get to decide what goes on it as long as it's legal. If it's not legal, it has to be removed anyway!
> If I report harmful content on Twitter or Facebook or Google, we need a system that ensures I receive a non-automated, competent response, and that the company is legally responsible for the decision they just made, such that they can't pawn it off on an algorithm or someone making 5 cents an hour.
Yeah okay, fight for that then. This legislation isn't that.
Isn't that the original intent of Section 230? Because these websites couldn't possibly moderate every user submission for illegal content, when illegal content is discovered, the liability lies with the user who posted it and not with the website hosting it?
Yes, that's the point of 230. It doesn't make anything legal that wasn't before, or illegal that was legal before. It simply assigns the responsibility of illegal content to the party that created it. Which is just a reasonable application of common sense.
I simply do not understand the motives of people who want to abolish 230. They would turn the internet into a stark split between heavily moderated websites (looking out only for their own liability, because should they lay a finger on anything, they become culpable for everything) and unmoderated hellholes. Maybe they enjoy the hellholes and want more sites like that? Misery loves company.
I suspect most of the posters arguing against 230 are:
* Uninformed about what the law actually does
* Purposefully antagonistic and contrarian, or part of a coordinated troll campaign to sow discord
* Folks who have a bone to pick with big tech and will support any law, no matter how ridiculous, thinking it would cause big companies grief
* Spiteful that their post got moderated off a popular platform, and want websites to be forced to broadcast their content (despite this being a clear 1A violation of the company's rights)
* Really, truly, think that sites on the Internet should be either a wasteland or approval-only-posting, and you have to pick one
In any case, this kind of discussion around 230 is burying the lede of the EARN IT act, which is a desperate attempt not only to further erode 230 protections after the monstrosities of FOSTA/SESTA, but to let the government take away these common-sense protections from any site that won't capitulate to government spying.
Which really should be the focus here, but somehow we're all distracted in the comments dismantling the faulty "platform or publisher, pick one!" argument again.
If a platform can't scale to handle content moderation requests, it shouldn't exist at scale. Presumably a company shouldn't be obligated to respond to bot submissions, and could ban complainants who abuse the system. (Although doing so could open them to legal recourse if, for example, they banned someone for filing legitimate reports they just didn't want to deal with.)
There are reasonable controls that can be put in place, but ultimately, Big Tech companies' responsibility needs to be seated in the legal system, and there needs to be a way to escalate to the legal system when these companies operate in a societally harmful fashion.
"We're just a platform, it's not our fault" should never be a conclusive answer to conversations about these companies' operations.
No human review system can scale to automated reporting; the number of attackers you have is not bounded by your legitimate user base.
The system you describe would basically mean every online service that allows human interaction would always run under risk of any trolls being able to permanently take them down by abusing content moderation requests.
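Some napkin math shows the scaling problem (every number here is an assumption picked purely for illustration):

    # Why mandatory human review of every report can't survive automated
    # abuse. All numbers are illustrative assumptions, not measurements.
    minutes_per_review = 5            # assumed time for one competent review
    reviewer_day_minutes = 8 * 60     # one reviewer's working day

    # A single small botnet files reports far faster than humans read them.
    bot_reports_per_day = 1_000_000   # assumed hostile report volume

    reviews_per_reviewer = reviewer_day_minutes / minutes_per_review  # 96/day
    reviewers_needed = bot_reports_per_day / reviews_per_reviewer

    print(f"{reviewers_needed:,.0f} full-time reviewers")  # ~10,417

And the attacker's cost of doubling that report volume is close to zero, while the defender's cost is another ten thousand salaries.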
> If a platform can't scale to handle content moderation requests, it shouldn't exist at scale.
Agreed. This whole "we got so big chasing crazy growth that making us responsible for cleaning up our own mess would make us lose money" argument is very tiresome and one that I can't see holding any water outside of tech.
Exactly. There's a mindset peculiar to tech that it's okay to automate human problems and then just say there's nothing to be done when the automation isn't adequate. Other businesses have huge percentages of their workforce tackling exactly the problems that tech companies say they're not responsible for, like content moderation and customer service.
> suggesting that without it companies would be inherently violating the law any time one of their users violated the law,
Well, that’s almost literally what immunity means in this case. I’ve read that post of yours you linked downthread, and you’re basically just saying “courts will be wise enough and make reasonable decisions”.
I’m somewhat sympathetic to some expansion of liability. Revenge porn, for example, shouldn’t exist. That’s real harm being done every day to real people. And the tube sites are not just unwilling to spend money on moderating uploads. They obviously know that a large percentage of uploads are made without full consent, and that content represents a significant chunk of their revenue.
BUT Sec 230 is specifically aimed at indemnifying websites that do try to moderate content. Before Sec 230 there was a brief period when the theory everyone on the internet still believes in (even though it is completely wrong) was actually true: that the act of moderating some content creates liability for all content. That was Stratton Oakmont v. Prodigy in 1995, and it's exactly the problem Sec 230 was written to fix.
Uhh... disagree? Even a 5-person startup should be responsible for every single thing its users post? Or do you want some arbitrary line of employee count, above which it's illegal and below which it's fine?
But no, there shouldn't be an arbitrary line. Judges can make fair determinations about whether a company is doing a reasonable job of controlling abuse on its platform, and about the profit motivations behind its decisions.
The fact that you think this is going to get decided by a judge is at the root of your misunderstanding.
Nobody can afford to defend the thousands of cases that would be brought against platforms over user-generated content if there weren't a simple knock-out rule to get them dismissed. It doesn't matter if you're doing something reasonable; somebody will argue that you're not, because there's still enough of a chance that they might win and get awarded millions of dollars. And even if you win, you still had to spend a million dollars on lawyers to prove it to a judge, and then someone else files a new lawsuit next week.
So the platforms can't just be right; they have to be so far away from the line of wrongness that they can get claims against them dismissed out of hand. But since moderation is inherently a trade-off between false positives and false negatives, getting right up next to the line without crossing it is what you really want (that's the middle ground), and that's exactly what gets taken off the table. Which leaves avoiding liability by not moderating at all, or avoiding liability by moderating hyper-aggressively beyond all reason, since those are the two options that keep it from needing to be decided by a judge (which is the thing that bankrupts you).
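Here's a toy model of that squeeze (the cost curves are invented for illustration; only the shape of the outcome matters):

    # Toy model: a classifier scores content from 0 (clearly fine) to 1
    # (clearly bad) and the platform picks a removal threshold. The rate
    # curves below are made-up stand-ins, not real data.
    def expected_cost(threshold: float, fp_cost: float, fn_cost: float) -> float:
        false_positive_rate = 0.5 * (1.0 - threshold) ** 2  # over-removal
        false_negative_rate = 0.5 * threshold ** 2          # under-removal
        return false_positive_rate * fp_cost + false_negative_rate * fn_cost

    thresholds = [t / 100 for t in range(101)]

    # Balanced costs: the optimum is the middle ground.
    print(min(thresholds, key=lambda t: expected_cost(t, 1, 1)))     # 0.5

    # Make one miss ruinous (liability) and the optimum collapses to an
    # extreme: remove nearly everything rather than risk a false negative.
    print(min(thresholds, key=lambda t: expected_cost(t, 1, 1000)))  # 0.0

Under liability, the middle ground stops being the cheapest option, which is exactly the all-or-nothing dynamic described above.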
While we're at it, we should also get the phone companies. I'm sure that the people who want to say bad things about me online are also telling people over the phone. How can they allow this!?