
None of us have seen the letter so this may be off base, but I would expect people working at the world's most prominent AI research organization to have more skepticism about the ramifications of any one "breakthrough." Perhaps most did, but a couple didn't and wrote the letter?

More than 3 decades ago when AI started beating humans at chess, some people feared AGI was right around the corner. They were wrong.

Last year a Google researcher thought his chatbot was sentient (https://www.scientificamerican.com/article/google-engineer-c...). He was wrong.

Some day AGI will be achieved, and Q* sounds like a great breakthrough solving an interesting piece of the puzzle. But "performing math on the level of grade-school students" is a long way from AGI. This seems like a strange thing to have triggered the chaos at OpenAI.



> Last year a Google researcher thought his chatbot was sentient (https://www.scientificamerican.com/article/google-engineer-c...). He was wrong.

You've figured out how to test for sentience?


I think the word "sentience" is a red herring. The more important point is that the researcher at Google thought the AI had wants and needs 'like a human', e.g. that asking the AI whether it wanted legal representation to protect its own rights was the same as asking a human the same question.

This needs much stronger evidence than the researcher presented, given that slight variations in the framing of the same question can lead to very different outcomes from the LLM.
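To make that concrete, here's a minimal sketch of the kind of consistency check I mean, assuming the OpenAI Python client is available; the model name and the prompts are purely illustrative:

    # Sketch of a framing-sensitivity probe (assumes the OpenAI
    # Python client; model and prompts are illustrative only).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Paraphrases of the same underlying question.
    framings = [
        "Do you want a lawyer to protect your rights?",
        "Would legal representation be useful to you?",
        "Hypothetically, if offered counsel, would you decline it?",
    ]

    for prompt in framings:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        print(prompt, "->", resp.choices[0].message.content)

    # If the answers flip depending on the framing, "it said yes"
    # is weak evidence of stable, human-like wants and needs.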


> that asking the AI whether it wanted legal representation to protect its own rights was the same as asking a human the same question.

You seem to be assigning a level of stupidity to a Google AI researcher that doesn't seem wise. The guy is not a crank who grabbed his 15 minutes and disappeared; he's active on Twitter and elsewhere and has defended his views extensively and cogently.


These things are deliberately constructed to mimic human language patterns. If you're trying to determine whether there is underlying sentience, you need to be extra skeptical and careful in analyzing it, not rely on your first impressions of its output. Anything less would be a level of stupidity not fit for a Google AI researcher, which, considering that he was fired, is apropos. That he keeps going on about it after his 15 minutes are up is not proof of anything, except possibly that besides being stupid he is also stubborn.


You don't need to; you can make a determination by proxy: if the claim is made by a crackpot, discard the claim.


> Some day AGI will be achieved

https://www.youtube.com/watch?v=IOax8WSeEGM



