
> question that requires logical reasoning

This is the hard part to pin down - are there any such questions that haven't already been asked somewhere?

The reason ChatGPT works is its scale, and to me that makes me question how "smart" it is. Even the most idiotic idiot could be pretty decent if he had access to the entire works of mankind and infinite memory. It doesn't matter if his IQ is 50, because you ask him something and he's probably seen it before.

How confident are we this is not just the case with LLMs?



I'm highly confident that we haven't learnt everything that can be learnt about the world, and that human intelligence, curiosity and creativity are still being used to make new scientific discoveries, create things that have never been seen before, and master new skills.

I'm highly confident that the "adjacent possible" of what is achievable/discoverable today, leveraging what we already know, is constantly changing.

I'm highly confident that AGI will never reach superhuman levels of creativity and discovery if we model it only on artifacts representing what humans have done in the past, rather than modelling it on human brains and what we'll be capable of achieving in the future.


Of course there are such questions. When it comes to even simple puzzles, there are infinitely many permutations possible with respect to how the pieces are arranged, for example - hell, you could generate such puzzles with a script. No amount of precanned training data can possibly cover all such combinations, meaning that the model has to learn how to apply the concepts that make a solution possible (which includes things such as causality or spatial reasoning).
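As a rough sketch of the sort of script I mean (the sliding-tile puzzle here is just an illustrative choice, and all the names are made up for the example):

    # Sketch: generate endless fresh instances of a simple puzzle.
    # Starting from the solved board and applying random legal moves
    # guarantees every generated instance is solvable.
    import random

    def random_sliding_puzzle(size=3, moves=50):
        board = list(range(size * size))   # 0 represents the blank tile
        blank = 0                          # index of the blank tile
        for _ in range(moves):
            r, c = divmod(blank, size)
            neighbors = []
            if r > 0:        neighbors.append(blank - size)
            if r < size - 1: neighbors.append(blank + size)
            if c > 0:        neighbors.append(blank - 1)
            if c < size - 1: neighbors.append(blank + 1)
            swap = random.choice(neighbors)
            board[blank], board[swap] = board[swap], board[blank]
            blank = swap
        return board

    puzzle = random_sliding_puzzle()
    for i in range(0, len(puzzle), 3):
        print(puzzle[i:i + 3])

Every run produces a new, guaranteed-solvable instance, so no finite training set can contain them all - solving one requires actually applying the underlying concepts.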


Right, but typically LLMs are really poor at this. I can come up with an arbitrary system of equations for one to solve, and odds are it will get it wrong. Maybe even very wrong.
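To make that concrete, here's a minimal sketch (plain Python, all names illustrative) of how you could generate such a system with a known answer and mechanically check whatever the model replies:

    # Sketch: build a random linear system with a known integer solution,
    # then verify a proposed answer against it. The system may happen to
    # be underdetermined; the checker only verifies the equations hold.
    import random

    def make_system(n=3, lo=-5, hi=5):
        solution = [random.randint(lo, hi) for _ in range(n)]
        rows = []
        for _ in range(n):
            coeffs = [random.randint(lo, hi) for _ in range(n)]
            rhs = sum(c * x for c, x in zip(coeffs, solution))
            rows.append((coeffs, rhs))
        return rows, solution

    def check_answer(rows, answer):
        # True iff the proposed answer satisfies every equation.
        return all(sum(c * x for c, x in zip(coeffs, answer)) == rhs
                   for coeffs, rhs in rows)

    rows, solution = make_system()
    for coeffs, rhs in rows:
        print(" + ".join(f"{c}*x{i+1}" for i, c in enumerate(coeffs)), "=", rhs)
    print(check_answer(rows, solution))   # sanity check: prints True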


That is more indicative of the quality of their reasoning than of their ability to reason in principle, though - and maybe even of the quality of their reasoning specifically in this domain. E.g. it's no secret that most major models are notoriously bad at tasks involving things like counting letters, but we also know that if you specifically train a model to do that, its performance does in fact improve drastically.

On the whole I think it shouldn't be surprising that even top-of-the-line LLMs today can't reason as well as a human - they aren't anywhere near as complex as our brains. But if it is a question of quality rather than a fundamental inability, then larger models and better NN designs should be able to gradually push the envelope.



