
On backtracking, I thought tree-of-thought enabled that?

"considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices"

https://arxiv.org/abs/2305.10601
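
Roughly, that's an explicit search loop wrapped around the model. A minimal DFS-with-backtracking sketch of the idea in Python (propose() and evaluate() stand in for the actual LLM prompts; the names, threshold, and depth are my own illustration, not from the paper):

    import random
    random.seed(0)  # deterministic toy run

    def propose(path, k=3):
        # Stand-in for an LLM prompt: suggest k candidate next reasoning steps.
        return [f"step{len(path)}.{i}" for i in range(k)]

    def evaluate(path):
        # Stand-in for an LLM self-evaluation: score a partial path, 0.0-1.0.
        return random.random()

    def tot_dfs(path, depth=0, max_depth=4, threshold=0.5):
        if depth == max_depth:
            return path if evaluate(path) > threshold else None
        # Try the most promising candidates first ("looking ahead").
        steps = sorted(propose(path), key=lambda s: evaluate(path + [s]), reverse=True)
        for step in steps:
            candidate = path + [step]
            if evaluate(candidate) < threshold:
                continue                  # prune an unpromising branch
            result = tot_dfs(candidate, depth + 1, max_depth, threshold)
            if result is not None:
                return result             # a leaf passed evaluation
        return None                       # dead end: caller backtracks to a sibling

    print(tot_dfs([]))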

Generally I'm with you, though: this approach is not leading to real smarts, and that's accepted by many. Yes, it'll fill in a few gaps with exponentially more compute, but it's more likely that an algorithmic change will be required once we've maxed out LLMs.



Yes, there are various approaches like tree-of-thought. They don't fundamentally solve the problem: there are simply too many paths to explore, and inference is too slow and too expensive to explore 10,000 or 100,000 paths for basic problems that nobody wanted solved in the first place.
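
Back-of-envelope on why the numbers don't work (branching factor and depth here are assumptions, not measurements):

    # Illustrative only: exhaustive exploration scales as branching ** depth.
    branching = 10   # candidate thoughts per step (assumed)
    depth = 5        # reasoning steps per path (assumed)
    paths = branching ** depth
    # One LLM call per tree node, with prefixes shared between paths:
    calls = sum(branching ** i for i in range(1, depth + 1))
    print(f"{paths:,} paths, ~{calls:,} LLM calls")   # 100,000 paths, ~111,110 calls

Pruning helps, but the per-call cost is what makes broad exploration impractical.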

The difficulty with solving such problems with LLMs is that if the solution is unlike anything seen in training, the LLM will almost always take the wrong path, and very likely won't even consider the right path at all.

The AI really does need to understand why the paths it tried failed in order to gain insight into what might work. That's how humans work (well, it's one of many techniques we use). And despite what people think, LLMs really don't understand what they're doing. That's relatively easy to demonstrate if you get an LLM off distribution: it will double down on obviously flawed logic rather than learn from the entirely new situation.


Thank you for the thoughtful response.



