We've also already seen one random Google AI 'safety' employee trying to tell the media that AGI is here because Google built a chatbot that sounded convincing, which obviously turned out to be bullshit/hysterical.
Asking who said these things is as important as asking what they think is possible.
> We've also already seen one random Google AI 'safety' employee trying to tell the media that AGI is here because Google built a chatbot that sounded convincing, which obviously turned out to be bullshit/hysterical.
It's funny because the ELIZA effect[0] has been known for decades, and I'd assume any AI researcher is fully aware of it. But so many people are caught up in the hype and think it doesn't apply this time around.