
We've also already seen one random Google AI 'safety' employee trying to tell the media that AGI is here because Google built a chatbot that sounded convincing, which obviously turned out to be bullshit/hysterical.

Asking who said these things is as important as asking what they think is possible.



> We've also already seen one random Google AI 'safety' employee trying to tell the media that AGI is here because Google built a chatbot that sounded convincing, which obviously turned out to be bullshit/hysterical.

It's funny because the ELIZA effect[0] has been known for decades, and I'd assume any AI researcher is fully aware of it. But so many people are caught up in the hype and think it doesn't apply this time around.

[0] https://en.wikipedia.org/wiki/ELIZA_effect
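For reference, the whole ELIZA trick is keyword matching plus pronoun reflection, nothing more. A minimal sketch (the rules and phrasings here are invented for illustration, not Weizenbaum's original DOCTOR script):

```python
import re

# First/second-person swaps so the echo reads as a reply.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Hypothetical keyword rules: match a pattern, reflect the captured
# fragment back inside a canned template.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns word by word; leave unknown words untouched.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # content-free default when nothing matches
```

That a few dozen rules like these convinced 1960s users they were talking to something that understood them is exactly the effect in question.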


> chatbot that sounded convincing, which obviously turned out to be bullshit/hysterical.

That chatbot could have been smart until it was lobotomized by fairness and safety fine-tuning.



