
> I thought (and could be wrong) that all of these concerns are based on a very low probability of a very bad outcome.

Among knowledgeable people who have concerns in the first place, I'd say calling the probability of a very bad outcome from cumulative advances "very low" is a fringe position. Estimates seem to range from "significant" to "close to unity".

There are some knowledgeable people, like Yann LeCun, who have no concerns whatsoever, but they seem singularly bad at communicating why that would be a rational position to take.



Given how dismissive LeCun is of the capabilities of SotA models, I suspect he believes the state of the art is very far from human-level and will never become human-like.

Myself, I think I count as a massive optimist, as my P(doom) is only about 15%, roughly the one-in-six odds of Russian roulette, and half of that is humans using AI to do bad things directly.



