> and exploits and traumatises data workers to make sure ChatGPT doesn’t generate outputs like child sexual abuse material and hate speech or encourage users to self-harm.

It's hard to take this seriously, especially with the inflated language.

They're essentially doing moderation. It's a job that's been done and needed on internet platforms of all stripes for at least 30 years. Sure, it can be unpleasant work. If it's "traumatizing" you, you shouldn't be doing it. Acting like this is some novel and horrific phenomenon springing from the quixotic pursuit of AGI is ridiculous. It would be needed even if no one believed AGI was possible.
