
I'm a bit annoyed by it. Censoring "lawful but awful" speech is the thin edge of the wedge. An existing precedent of censoring legal websites reduces my confidence that Wikipedia will be able to stand up to censorship pressure (including from its own editors) in the future.


So-called "lawful and fine" speech doesn't need free speech protections, nor any protections for that matter. It's precisely the so-called "lawful but awful" speech that does.


The category of speech most in need of free speech protection is "unlawful but fine".


Let me put it this way: Nobody is going to censor fine speech, FSVO fine.


I think the argument in this case is that it may cross the line into unlawful behavior. Kiwifarms has been linked to suicides, and encouraging suicide is a crime. 8chan has similarly been linked to violent crimes.

There are cases where speech is illegal, even in the USA, which probably has the strictest standards for protecting speech in the world.


If you think a website is doing something illegal, you can report it to the police or the FBI, depending on what type of crime it is. Kiwifarms has a US corporate entity controlled by a US citizen; it isn't like this is some Tor darknet market hosted in Moldova or something.

Generally though sites aren't responsible for their users' speech, so if someone does cross the line, that would be on the user, not the site. As long as the site responds to any lawful subpoenas, they would stay in the clear.


So if a person advocates (for example) murder on an American site, this is fine until the police say it isn't? That is not a standard that 99% of the internet follows, and for good reasons.

The US legal system is wholly incapable of keeping up with the pace of internet content for this sort of thing, so embracing the spirit of the laws on speech and applying them within user-content-based sites is an appropriate minimum.

Even Musk, who wanted to turn Twitter into a site dedicated to free speech, specifically said he wanted to focus primarily on moderating content based on US laws (something he has apparently since walked back, given that Twitter still aggressively moderates legal content).


That's the whole point of Section 230. Service providers generally have immunity with respect to third-party content posted by their users. If a user posts something, it's their speech, and the user is therefore held responsible for it, not the website. Section 230 is what makes an internet of user-generated content possible.

https://www.eff.org/issues/cda230



