EARN IT is pretty disingenuous in how it is designed, of course, but I am all for making it harder and harder to retain Section 230 immunity: it was a mistake to allow it in the first place.
We should indeed continue to erode eligibility for Section 230 to the point that either the limitations of remaining eligible for immunity make it easy for competitors to produce better offerings without immunity, or these companies accept legal responsibility for their actions as a cost of doing business the way they want to. Perhaps this is a vehicle by which we gradually sunset immunity-reliant platforms.
Section 230's supporters constantly push hilariously insane narratives about its importance, suggesting that without it companies would be inherently violating the law any time one of their users violated the law, or that taking reasonable measures to prevent platform abuse is "impossible" at the scale Big Tech operates.
It's well past time that we regulate tech companies and hold them responsible for the massive abuses permitted by their platforms, just as we regulate every other sector of business.
Regulation of this sort generally just helps the incumbent players build a better moat around themselves. They can pay for the AI and humans to moderate things while newcomers can't. So it's a question of trading off user benefit against giving even more power to Big Tech.
This is the standard scream of incumbent players when they want to discourage regulation. It ignores both the fact that what's "reasonable" for an incumbent monopoly and a small startup are different, and the fact that the law generally accounts for scale.
Not in any useful way I've noticed, because a small company can serve hundreds of thousands of users easily thanks to the power of the internet. The CCPA, for example, essentially sets the cutoff at 50,000 users, which you can reach pretty quickly with a consumer startup. The cutoff helps the local pizzeria, I guess, but not any actual competitor to the incumbents.
This is an interesting counter-counter-argument I've not seen before. Does discussion of this sort of derivative behavior exist elsewhere? I.e., is there an established narrative of incumbents pushing against regulatory capture, or examples of this behavior?
It's just a general behavioral trend I (and plenty of others) have noticed in arguments against regulation coming from monopolies. When a big tech company claims a regulation it dislikes would keep newer players from competing with it, you have to ask... why are they so opposed then?
Is it out of the goodness of their hearts that large companies complain about regulation hurting small businesses? Or is it because the regulation will cost them a ton of money they'd rather keep in the bank, and they know they already have enough market capture to continue to obliterate small businesses either way?
When someone says that regulations on large companies will actually hurt small businesses, the first thing you should do is look at who is making the claim and where they get their funding. It's almost always a think tank funded by the biggest player in the market being discussed.
> When a big tech company claims a regulation it dislikes would keep newer players from competing with it, you have to ask... why are they so opposed then?
First off, I haven't seen big companies making this argument. Can you point us to a high-quality source where one does?
Secondly, suppose the size of a market is X and BigCo has a 0.9X slice of it. Suppose the cost of complying with the new regulation is 0.5X. Now it is impossible for anyone else to compete with BigCo, since there is no way for second place to be profitable in the market. However, BigCo is still worse off: its revenue is now X, but its profit is down 0.4X from before, since it gained 0.1X in revenue and lost 0.5X to compliance costs.
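To spell out that toy arithmetic, here's a minimal sketch (it uses only the hypothetical numbers above, nothing empirical):

    X = 1.0                    # total market size (normalized)
    share_before = 0.9         # BigCo's slice before the regulation
    compliance_cost = 0.5      # cost of complying with the new regulation

    profit_before = share_before * X             # 0.9
    # Second place's best case (0.1X of revenue) can't cover 0.5X of compliance,
    # so they exit and BigCo captures the whole market:
    profit_after = 1.0 * X - compliance_cost     # 0.5
    print(profit_after - profit_before)          # -0.4: BigCo wins the market but loses money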
So, if a BigCo were to make such an argument, that's at least one case where they could deploy it honestly while at the same time having their own best interests at heart.
There is a difference between short-term and long-term impact. Short term, regulation costs money, and a large player doesn't want that since it'd hurt their stock price. Long term, they probably benefit from it, but Wall Street doesn't care as much about that.
Does it "generally account for scale"? Citation needed. The GDPR has a fine structure of up to 4% of world-wide turnover or €20 million. Whichever is HIGHER. That means for any company doing less than say, €20 million in revenue and found to be non-compliant, GDPR gives the legal authority to fine them out of existence. I only mention GDPR as a specific example because of the familiarity here, but the general pattern of non-scaling regulation that results in regulatory capture and monopolization is the norm, not the exception.
The fines must be effective, proportionate and dissuasive for each individual case. For the decision of whether and what level of penalty can be assessed, the authorities have a statutory catalogue of criteria which they must consider in their decision. Among other things, intentional infringement, a failure to take measures to mitigate the damage which occurred, or lack of collaboration with authorities can increase the penalties. For especially severe violations, listed in Art. 83(5) GDPR, the fine framework can be up to 20 million euros, or in the case of an undertaking, up to 4% of their total global turnover of the preceding fiscal year, whichever is higher. But even the catalogue of less severe violations in Art. 83(4) GDPR sets forth fines of up to 10 million euros, or, in the case of an undertaking, up to 2% of its entire global turnover of the preceding fiscal year, whichever is higher.
You’re conveniently ignoring that the percentage-based fine structure in itself is almost literally “accounting for scale”.
The minimum (of the maximum) set by the "whichever is higher" clause is needed to remain effective against non- and low-revenue entities. Something like Clearview (universal face recognition, but a startup with little revenue) would otherwise be free to ignore the law.
If your small company does enough damage to warrant a 20 million fine, it probably deserves to die. These fines also aren’t assessed arbitrarily: there’s a specific list of factors to take into account, and all decisions are subject to judicial review under the established principles of proportionality.
> These fines also aren’t assessed arbitrarily: there’s a specific list of factors to take into account, and all decisions are subject to judicial review under the established principles of proportionality.
You hope bureaucrats will not act mechanistically and will apply proportionality, but over and over again in recent history you see exactly the opposite behavior. Which is why no business trusts "they'll probably be nice" when nothing is effectively stopping them from not being nice other than some platitudes!
The GDPR does not account for scale in practice; it pretends you're a $300 million business with the resources to do proper GDPR compliance, which is why the response of many small businesses is to stop serving Europeans. And these businesses had nothing to do with privacy invasion: classes you pay for, paid note-taking apps, and so on.
While in theory you are right, there is a distinction.
GDPR makes business more difficult, but it codifies a human right to privacy that, much like worker safety, should have been implemented anyway.
The present regulation does the opposite. It ultimately implies that any company that does not comply with the insane US post-9/11 laws on large-scale spying, extrajudicial sanctioning and, arguably, racial and ethnic discrimination will be sued into the ground.
Scary, especially for non-US citizens, who are not in practice afforded judicial or constitutional rights in the US.
Whataboutism about the GDPR doesn't address any of the criticism of it; I wasn't talking about the EARN IT Act in my comment.
I bet many here agree that those US laws and actions are bad too. I really care about privacy myself, more than the typical tech worker, judging by my coworkers' actions compared to mine.
If there were a GDPR-casual or GDPR-lite for small businesses, kind of like small-business taxes, where hiring an accountant once a year for a few thousand euros was enough, then I don't think people would be up in arms about the GDPR.
If GDPR-casual were a good-faith level of action, where all you did was keep a log of data-deletion requests (a copy of the incoming email and a copy of your reply saying it was done) and then did a best-effort cleanup, I don't think anyone would care. It's the very large compliance burden the GDPR imposes, with liability gotchas everywhere if you don't implement bureaucratic detail #2929, that puts people up in arms. And if you say it isn't like that, it shows you haven't really looked at what the GDPR requires or tried to implement it in your company.
> Section 230's supporters constantly push hilariously insane narratives about its importance, suggesting that without it companies would be inherently violating the law any time one of their users violated the law,
Can you explain in your own words what you think Section 230 actually does? Because yes, without it, that was very much the case (see Stratton Oakmont, Inc. v. Prodigy Services Co.), unless the company decided not to moderate at all, which is not an Internet that most of us want.
It would certainly change the face of the modern web. In the early 2000s, many blogs and news sites did not have comment sections.
Maybe a third-party service like Disqus would emerge to split the user-generated content from the actual sites (and source the content via P2P networking).
The fact that your proposed scenario is scary / unpleasant / difficult does not automatically make the alternatives better, as we have learned over the last two decades. We need to seriously consider that perhaps these things are not actually simple unless you ignore the consequences.
No, and they wouldn't be by any informed understanding of the law. That's not how the law has ever worked in any developed society.
Generally, the law has concepts of both intent and reasonableness. As such, when a company inadequately polices malicious and abusive content because that content is wildly profitable (hi, Google and Facebook), we should have the legal ability to fine it into oblivion: its behavior is not reasonable, and the intent behind it can be divined from its records.
Meanwhile, if you, an individual with a blog, see someone making a bad comment on your blog and you ban the person, the law would recognize that as a pretty reasonable moderation practice.
> No, and they wouldn't be by any informed understanding of the law.
You are misinformed about the history of 230. 230 was proposed exactly because the law was interpreted the way you're saying it wouldn't be.
From Wikipedia below, added emphasis mine:
> This concern was raised by legal challenges against CompuServe and Prodigy, early service providers at this time. CompuServe stated they would not attempt to regulate what users posted on their services, while Prodigy had employed a team of moderators to validate content. Both faced legal challenges related to content posted by their users. In Cubby, Inc. v. CompuServe Inc., CompuServe was found not to be at fault as, by its stance of allowing all content to go unmoderated, it was a distributor and thus not liable for libelous content posted by users. However, Stratton Oakmont, Inc. v. Prodigy Services Co. found that as Prodigy had taken an editorial role with regard to customer content, it was a publisher and legally responsible for libel committed by customers.
> [...]
> United States Representative Christopher Cox (R-CA) had read an article about the two cases and felt the decisions were backwards. "It struck me that if that rule was going to take hold then the internet would become the Wild West and nobody would have any incentive to keep the internet civil", Cox stated.
---
It's become increasingly popular for people to say that Section 230 was a mistake. Usually they support that with claims that concerns about its repeal are purely theoretical fearmongering, despite the fact that we literally have case precedent on the books right now about what the Internet would look like without Section 230, and how the existing laws were being interpreted.
When people raise concerns that without Section 230 the Internet would be divided up into completely unmoderated platforms and aggressively curated gatekeepers, that's not fearmongering. It's history.
Ironically, the only websites that wouldn't be affected by a repeal of Section 230 are the completely unmoderated hellholes we want to discourage online, because they have CompuServe's precedent and the First Amendment to hide behind.
So this is the thing that is really confusing me: isn't Signal like CompuServe? Signal doesn't moderate my content and in fact can't; so why would a repeal of Section 230 matter to Signal? And like, yes: maybe the people at Signal personally care... but that's not how this article is written. I feel like most of the people who are super knee-jerk pro-230 are ignoring this precedent you have pointed to of CompuServe: if you build something that really and truly is a distribution platform, shouldn't that be OK?
I think so, at least in theory. In practice, I suspect that would eventually get challenged in court. But (IANAL), I also suspect that you're right, and a platform like Signal would fall under the same category as CompuServe and could make a strong argument for itself using that case.
Here's where it gets tricky though -- Signal is kind of an anomaly, and there are a lot of platforms being built that both moderate content and incorporate E2E encryption. Matrix is the prime example, but even non-obvious platforms like Mastodon are talking about e2e encryption for DMs. To get a really good fediverse rolling, or even just to encourage platforms like Facebook to start using more zero-knowledge encryption, we need the ability to use E2E encryption alongside moderated content.
Pure distribution platforms are rarer than people think. I'm not particularly worried that ending Section 230 will be a disaster for private, closed, encrypted channels. But most of the best parts of the Internet happen in public channels and semi-open communities, and getting rid of 230 would have a really big negative impact on the general discourse within those communities and the freedom of like-minded people to get together and form communities online without fear of lawsuits.
That being said, I think Signal does itself something of a disservice by not strongly asserting it's a pure distribution channel. They could talk about how this is dangerous for encryption overall while still advocating that the law wouldn't apply to someone in their position. We can simultaneously say that repealing Section 230 would be really bad for online communities, but not existentially bad for closed communication channels like Signal.
And purely from a strategic point of view, we should be interested in saying things like that, because if Section 230 does get repealed it would be very nice to have a fallback position that's already been articulated and made clear to Congress and general audiences, and that preserves at least some encryption.
But, Signal has their own set of real lawyers, so it may be that they disagree that CompuServe would apply, or it may be that they think Congress would just keep challenging them until it found some attack that worked, or it may just be that they think aligning themselves with open platforms like Matrix is more valuable than making a case that they would be exempt. I'm not going to pretend to know what's going through their minds.
I personally think you would be surprised at how much of what we currently have could continue to work in a world without Section 230. Right now, people are just taking the cheap shortcut of "let's just hire some moderators to moderate it", and enjoying it because it gives them control over the narrative, letting them choose when to apply a firm hand in moderating and when to be lazy about it. There are plenty of examples of companies abusing their moderation power in ways that have nothing to do with politics, along with both subtle and not-so-subtle racism and misogyny--such as bans on photos of women breastfeeding--being perpetuated by the current system.

I bet most of what we have right now could continue to work, albeit with pretty major architectural changes to the web... ones which admittedly might not still be conducive to large players extracting rent for hosting and organizing everything (maybe with more decentralized, client-side mechanisms for helping people navigate content, as opposed to centralized server-side ones); and, what doesn't translate, was maybe not worth preserving in the first place.

Either way, it seems to me like we should be having an honest conversation about the details of what we have, what we like, and what we need to keep pulling it off, so we can figure out what the tradeoffs are. This article from Signal equating a loss of Section 230 with somehow not being able to have end-to-end encryption is the exact opposite of that: it is more misinformation being thrown at an already giant mess of misunderstanding.
> I personally think you would be surprised at how much of what we currently have could continue to work in a world without Section 230
Hacker News wouldn't.
I advocate for digital rights online, particularly the Right to Communicate[0]. But the Right to Communicate goes hand in hand with the Right to Filter[1]. Human moderation isn't a shortcut; it's the backbone of small, cozy forums and independent sites. Human moderation on a personalized scale is what makes smaller communities so much nicer than giant algorithmically curated platforms like Twitter or YouTube.
The way we marry the Right to Filter and the Right to Communicate is with systems like the Fediverse that make it easy for people to form new communities on the fly, to join and leave existing communities without any pain or fuss, and to copy their content around or download it out of data silos whenever they'd like to. While we give users that convenience, we also recognize that communities have an inalienable right to organize themselves and filter the content that they host and see. In this way, the Right to Communicate and the Right to Filter reinforce each other, filling in the problematic gaps and abuses that either right would have in isolation.
Section 230 is what makes that possible. Decentralization isn't magic. The law and the DOJ will attack community organizers and label them as publishers regardless of whether or not they are personally hosting the content in their communities. It doesn't matter what architecture you use; if you're going to have an open community someplace, that community needs to be able to enforce its own rules and norms. And without Section 230, doing so will make them liable.
And even outside of the Fediverse, so much of the Internet matters.
To hear you very lightly say something like:
> and, what doesn't translate, was maybe not worth preserving in the first place
I'm almost not sure how to respond to a claim like that. HN isn't worth preserving? IRC channels aren't worth preserving? Matrix isn't worth preserving? Self-publishing storefronts, independent forums, and comment sections on blogs aren't worth preserving? Email isn't worth preserving?
> Signal equating a loss of Section 230 with somehow not being able to have end-to-end encryption
For Signal, no, maybe not. For a lot of other services, including the vast majority of the Fediverse, yes. I think your reading of Signal's status as a distributor is pretty reasonable. But don't jump from that reading to saying that this won't have an impact on encryption.
Signal is a zero-knowledge, closed communication platform. It's not decentralized, it has essentially no moderation of any kind, and it has no communities of any kind. An open community with its own norms, memes, and content standards is not zero-knowledge about the content it's hosting. A law that meant that only closed, blind systems like Signal could make use of E2E encryption wouldn't eliminate all encryption, but it would restrict a large number of platforms from using encryption to make themselves more private and more secure.
But in a world where we feel it was backwards that moderators were punished and unmoderated platforms weren't... Congress decided "let's just make everyone immune" was the right way to go?
And again, I think the examples here are missing the same concept that Section 230 fails to recognize: profit, as I discussed here: https://news.ycombinator.com/item?id=22816016 It seems like the author of Section 230 failed to recognize that we're in a capitalist society when the law was drafted.
When platforms take a cut of illegal activity, as Big Tech platforms do when they operate ad networks, courts would have to agree that any platform, regardless of whether or not it currently moderates, should be held to some measure of responsibility.
Right now, when an old lady clicks a Google search result for "mapquest", clicks the top link for "Maps Quest"[0] because Google ads aren't distinguishable from real search results to the untrained eye, is pushed to install a browser extension (from the Chrome Web Store) that hijacks her browser's new tab and search, injects malicious ads, and scrapes her private info to relay to an attacker, Google makes money. And Google is wholly protected by Section 230 for that activity, unable to be held responsible even for refusing to delist the malicious ad.
In what world is that the right legal position?
[0] (This is a very real-world example; I've done a lot of senior-citizen tech support, and this is how 90% of them get owned.)
I don't like this malware example. Yes, Section 230 protects Google from that, and yes, Google is in a position of trust for the content they serve up, but there's something wrong with your stance.
The point in your old lady's chain of actions where a law was, and should be, considered broken was when the malware ads were injected, not before. You can't go that far up the chain; there are too many proxies, too many people with intents that are not obviously malicious. People should be given the benefit of the doubt in most cases.
In addition, in the profit explanation you linked to, you stated that if a service can't scale up human interactions to match complaints, then that service shouldn't exist. That's laughable. To do so would make service owners so vulnerable to automated complaints that legitimate ones would never make it through, and that goes for businesses up and down the scale. What your proposal ends up doing is creating a non-anonymous internet by necessity.
That example has absolutely nothing to do with Sec 230. Google’s ad design is all on Google. If it were illegal, Sec 230 wouldn’t protect them. And while Google might be protected against liability for Mapquest’s business practices, Mapquest isn’t. If their behavior is harmful and illegal, they are liable.
MapQuest did nothing wrong in this example. The problem is the fake sites that are taking the top spot in search results above the legitimate MapQuest link when you search Google for MapQuest, and Google refuses to delist them. And of course, Google lets people buy ads for other companies' trademarks, which is a whole different ball of issues.
(MapQuest is a popular one for malicious sites to pretend to be because most of the people searching for it are seniors... they heard about it twenty years ago and then never moved on from searching for it when they want directions somewhere.)
The point is that Google makes a huge amount of money on scams and malware and, due to Section 230, can't really be held responsible for it.
Fining Google doesn't help the problem there. You would want to work with Google to find out who made the deceptive ad and deal with them, so they can't continue on to hurt more people.
To follow up, we've also tried going in the opposite direction from 230 more recently with SESTA/FOSTA.
From that Wikipedia page, some of the current effects (again, emphasis mine):
> Craigslist ceased offering its "Personals" section within all US domains in response to the bill's passing, stating "Any tool or service can be misused. We can’t take such risk without jeopardizing all our other services." Furry personals website Pounced.org voluntarily shut down, citing increased liability under the bill, and the difficulty of monitoring all the listings on the site for a small organization.
> The effectiveness of the bill has come into question as it has purportedly endangered sex workers and has been ineffective in catching and stopping sex traffickers. The sex worker community has claimed the law doesn't directly address issues that contribute to sex trafficking, but instead has drastically limited the tools available for law enforcement to seek surviving victims of sex trade. Similar consequences of the law's enactment have been reported internationally.
> A number of policy changes enacted by the popular social networks Facebook and Tumblr (the latter having been well known for having liberal policies regarding adult content) to restrict the posting of sexual content on their respective platforms have also been cited as examples of proactive censorship in the wake of the law, and a wider pattern of increased targeted censorship towards LGBT communities.
----
Now, this kind of effect doesn't get as much mainstream attention because people are primed not to think of sex censorship as "real" censorship. But again, we have examples on the books of what happens to legitimate services (both large and small) when laws like this get passed. It's not fearmongering, it's history.
People have these assumptions that laws are going to be reasonably applied -- that's not a safe assumption to make if you pay attention to the history of these laws.
I'm largely unsympathetic to those arguments for the same reason that I'm unsympathetic to all of the lawmakers saying, "well, this time when we regulate encryption it will be different." We have a number of examples of how this can go wrong (and has gone wrong). If somebody wants to propose that it'll be different the next time we weaken 230 or add exceptions, then I think the onus is on them to provide some kind of compelling evidence as to why it's going to be different this time.
What makes you certain that the policies you propose won't have the same effect as FOSTA/SESTA?
----
As to why these laws primarily affect platforms that are already trying to moderate and not free-for-all hellholes, that's in part because of existing case law around the difference between a publisher and a distributor.
From Wikipedia's entry on CompuServe's case (once again, emphasis mine):
> The court held that "CompuServe has no more editorial control over such a publication [as Rumorville] than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so."
Bills like SESTA/FOSTA have managed to pass without a lot of opposition because, again, people are primed to think that sex censorship isn't real censorship. But where more mainstream content is concerned, you should understand that proposing punishments for distributors is a pretty big change to existing libel/speech laws. Big enough that I don't even feel comfortable speculating on what the legal challenges or possible effects would be. That's a radical departure from how we currently think about speech in the US, not just on the Internet but in physical/print spaces as well.
"Unreasonable removal" isn't actually much of a concern here under our current legal doctrine: As these companies are private entities, they can decide that they simply don't want this or that on their platform, and that can be as unreasonable as they like.
Presumably, platforms which profit off user content already have a financial incentive to allow as much user content as they can; Section 230 only removes the financial incentive to remove bad content. Removing Section 230 will restore balance: companies will still be motivated to keep as much non-abusive content as they can, but will face legal challenges if they fail to remove abusive content.
(There's an argument to be made that Facebook and Google represent "public spaces" in the modern Internet era, but we currently have no legal precedent for applying First Amendment rights to privately owned properties. Either we'd need a huge legal shift to apply the First Amendment to private spaces, or we'd need to nationalize online platforms.)
> Section 230 only removes the financial incentive to remove bad content.
Please go read the actual law. It’s neither long nor complicated.
Section 230 corrected a problem in other law that made it dangerous to even attempt to moderate content. Before it became law, websites basically had to choose between not moderating at all, or assuming liability for all content.
Hell, I’ll just quote the relevant part in full:
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;
I can understand companies not being protected from profiting off of ads that come before viral lies (Facebook, YouTube), especially when the companies have a hand in spreading them with algorithms promoting addiction.
But if they're not promoting the content, and aren't profiting from it in a different way than other content, we can't hold them responsible. These providers create platforms. Would you hold CVS responsible for selling me the tape/sharpie/poster board to make a racist sign?
If I create a twitter clone, post it online, and it somehow blows up overnight with child porn and terrorism, why do I deserve to be punished?
"Would you hold CVS responsible for selling me the tape/sharpie/poster board to make a racist sign?"
No, but I'd hold CVS responsible for displaying the sign in their stores.
"But if they're not promoting the content, and aren't profiting from it in a different way than other content, we can't hold them responsible."
That's the biggest issue Section 230 fails to account for: these companies are profiting off it. When Google or Facebook take down content, they still keep the profits they got from advertising against it. Some of the longest-running ads on high-traffic search terms on Google distribute malware, and Google refuses to delist them because of the amount of money they make. Facebook refuses to restrict blatant lies in political ads because those ads make it a huge amount of money.
Section 230 is a failure because Section 230 removes any financial incentive for platforms to moderate responsibly. If we were to replace Section 230, rather than removing it entirely, we would need a solution that makes it inherently expensive to host bad content, such that platforms are strongly incentivized to hire qualified staff to moderate and manage content.
If I report harmful content on Twitter or Facebook or Google, we need a system that ensures I receive a non-automated, competent response, and that the company is legally responsible for the decision they just made, such that they can't pawn it off on an algorithm or someone making 5 cents an hour.
> Section 230 is a failure because Section 230 removes any financial incentive for platforms to moderate responsibly.
What in the world does "moderate responsibly" mean? It's their site; they get to decide what goes on it as long as it's legal. If it's not legal, it has to be removed anyway!
> If I report harmful content on Twitter or Facebook or Google, we need a system that ensures I receive a non-automated, competent response, and that the company is legally responsible for the decision they just made, such that they can't pawn it off on an algorithm or someone making 5 cents an hour.
Yeah okay, fight for that then. This legislation isn't that.
Isn't that the original intent of Section 230? Because these websites couldn't possibly moderate all user submissions for illegal content, when illegal content is discovered, liability is held by the user and not the website hosting it?
Yes, that's the point of 230. It doesn't make anything legal that wasn't before, or illegal that was legal before. It simply assigns the responsibility of illegal content to the party that created it. Which is just a reasonable application of common sense.
I simply do not understand the motives of people who want to abolish 230 - they would turn the internet into a stark split between heavily moderated websites (looking out only for their own liability, because should they lay a finger on anything, they are culpable for everything) and unmoderated hellholes. Maybe they enjoy the hellholes and want more sites like that? Misery loves company.
I suspect most of the posters arguing against 230 are:
* Uninformed about what the law actually does
* Purposefully antagonistic and contrarian, or part of a coordinated troll campaign to sow discord
* Folks who have a bone to pick with big tech and will support any law, no matter how ridiculous, thinking it would cause big companies grief
* Spiteful that their post got moderated off a popular platform, and want websites to be forced to broadcast their content (despite this being a clear 1A violation of the company's rights)
* Really, truly, think that sites on the Internet should be either a wasteland or approval-only-posting, and you have to pick one
In any case, this kind of discussion around 230 is kind of burying the lede of the EARN IT Act, which is a desperate attempt not only to further erode 230 protections after the monstrosities of FOSTA/SESTA, but to let the government take away these common-sense protections from any site that won't capitulate to government spying.
Which really should be the focus here, but somehow we're all distracted in the comments dismantling the faulty "platform or publisher, pick one!" argument again.
If a platform can't scale to handle content moderation requests, it shouldn't exist at scale. Presumably a company shouldn't be responsible for responding to bot submissions, and could potentially ban complainants who abuse the system. (Although doing so would potentially open it to legal recourse if it were banning someone for filing legitimate reports it just didn't want to deal with, for example.)
There are reasonable controls that can be put in place, but ultimately, Big Tech companies' responsibility needs to be seated in the legal system, and there needs to be a way to escalate to the legal system when these companies operate in a societally harmful fashion.
"We're just a platform, it's not our fault" should never be a conclusive answer to conversations about these companies' operations.
No human review system can scale to match automated reporting; the number of attackers you have is not bounded by your legitimate user base.
The system you describe would basically mean every online service that allows human interaction would always run the risk of trolls being able to permanently take it down by abusing content moderation requests.
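A back-of-the-envelope sketch of why (every number here is a made-up assumption, purely to show the shape of the problem):

    reviewers = 20
    reviews_per_reviewer_per_day = 200                           # ~2.4 minutes per report over an 8h shift
    daily_capacity = reviewers * reviews_per_reviewer_per_day    # 4,000 reports/day

    attackers = 50
    reports_per_attacker_per_day = 5_000                         # trivial for a script
    daily_volume = attackers * reports_per_attacker_per_day      # 250,000 reports/day

    print(daily_volume - daily_capacity)                         # 246,000 reports/day of pure backlog

If every report is entitled to a "non-automated, competent response", the attackers set your staffing bill, not your users.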
> If a platform can't scale to handle content moderation requests, it shouldn't exist at scale.
Agreed. This whole "we got so big chasing crazy growth that making us responsible for cleaning up our own mess would make us lose money" argument is very tiresome, and one that I can't see holding any water outside of tech.
Exactly. There's a mindset exclusive to tech that it's okay to automate human problems and then just say there's nothing to be done when the automation isn't adequate. Other businesses have huge percentages of their workforce tackling problems that tech companies simply say they're not responsible for, like content moderation, customer service, etc.
> suggesting that without it companies would be inherently violating the law any time one of their users violated the law,
Well, that’s almost literally what immunity means in this case. I’ve read that post of yours you linked downthread, and you’re basically just saying “courts will be wise enough and make reasonable decisions”.
I’m somewhat sympathetic to some expansion of liability. Revenge porn, for example, shouldn’t exist. That’s real harm being done every day to real people. And the tube sites are not just unwilling to spend money on moderating uploads. They obviously know that a large percentage of uploads are made without full consent, and that content represents a significant chunk of their revenue.
BUT Sec 230 is specifically aimed at indemnifying websites that do try to moderate content. Before Sec 230 there was a brief period when that theory everyone on the internet believes, even though it is completely stupid, was actually true: namely, that the act of moderating some content somehow creates an obligation to moderate all content.
Uhh... disagree? Even a 5-person startup should be responsible for every single thing its users post? Or do you want some arbitrary headcount line above which it's illegal and below which it's fine?
But no, there shouldn't be an arbitrary line. Judges can make fair determinations about when a company is or is not doing a reasonable job of controlling abuse on its platform, and about the profit motivations behind those decisions.
The fact that you think this is going to get decided by a judge is at the root of your misunderstanding.
Nobody can afford to defend the thousands of cases that would be brought against platforms over user-generated content if there weren't a simple knock-out rule to get them dismissed. It doesn't matter if you're doing something reasonable; somebody will argue that you're not, because there's still enough of a chance they might win and get awarded millions of dollars. And even if you win, you still had to spend a million dollars on lawyers to prove it to a judge, and then someone else files a new lawsuit next week.
So the platforms can't just be right; they have to be so far from the line of wrongness that they can get claims against them dismissed out of hand. But since moderation is inherently a trade-off between false positives and false negatives, getting right up next to the line without crossing it is what you really want (that's the middle ground), and that's now off the table. Which leaves avoiding liability by not moderating at all, or avoiding liability by moderating hyper-aggressively beyond all reason, since those are the two options that keep it from being decided by a judge (which is the thing that bankrupts you).
While we're at it, we should also get the phone companies. I'm sure that the people who want to say bad things about me online are also telling people over the phone. How can they allow this!?