
No, it doesn't. You can read more in the discussion from when that was first posted to Hacker News. If I recall and understand correctly, they're just using the output of sublayers as training data for the outermost layer. In other words, they're faking it and hiding that behind layers of complexity.

The other day, I asked Copilot to verify a unit conversion for me. It gave an answer different from mine. Upon review, I had the right number. Copilot had even written code that would give the right answer, but its worked example of using that code did the actual arithmetic wrong. It refused to accept my input that the calculation was wrong.
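
To make the failure mode concrete, here's a minimal hypothetical sketch in Python. The comment doesn't say which units or values were involved, so miles-to-kilometers and the numbers below are stand-ins; the point is that the code itself is fine while the worked example contradicts it.

    # Hypothetical reconstruction: the actual units and values weren't given,
    # so miles-to-kilometers is used as a stand-in.
    def miles_to_km(miles: float) -> float:
        """Convert miles to kilometers (1 mile = 1.609344 km)."""
        return miles * 1.609344

    # The function is correct, but the assistant's own worked example was not,
    # e.g. asserting something like "miles_to_km(26.2) is about 40.2".
    print(miles_to_km(26.2))  # actually prints 42.1648128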

So not only did it not understand what I was asking and telling it, it didn't even understand its own output! This is not reasoning at any level. This happens all the time with these LLMs. And it's no surprise, really: they are fancy statistical copycats.

From an intelligence and reasoning perspective, it's all smoke and mirrors. It also clearly has no relation to biological intelligent thinking. A primate or cetacean brain doesn't take billions of dollars and enormous amounts of energy to train on terabytes of data. While it's fine for AI to be artificial rather than an analog of biological intelligence, these LLMs bear no resemblance to anything remotely close to intelligence. We tell students all the time to "stop guessing". That's what I want to yell at these LLMs all the time.


