
A friend of mine came up with a good analogy: the AI doomers who went off to start Anthropic are like the extremist Puritans who were driven out of an increasingly secular Europe.

The thing is that when two groups superficially share common traits but hold them for different reasons, they aren't actually compatible. You see this with groups of friends who are all "conspiracy theorists", for example: they're all vaguely the same and band together, but get into enormous arguments.

The "AI Puritans" and the "Secular Humanists" are both concerned about AI safety, but for different reasons. The former group would rather the AIs behave in a prim and proper manner, banned from doing anything that isn't explicitly sanctioned. The latter group is concerned about people's jobs, unfairness, bias, and inappropriate overuse of early, low-quality models.

Both say they want "AI safety", but they understand that term to mean different things and arrived at their positions from fundamentally different axioms.

One group wants the AI to never say "naughty" things and is happy to dumb down the models to the point of being lobotomised... but well-behaved.

The other group thinks that newer, smarter, better models are critical to ensure that the AI doesn't make mistakes, doesn't get confused, and follows instructions correctly.

One group wants to shove the genie back into the bottle, the other group wants a genie that can grant our wishes.


