I don't think the missing piece is conceptual understanding. Good LLMs seem to 'understand' most concepts about as well as most humans do, even if they're a little less multimodal about it (for now). The common factor here seems to me to be that they're not good at problems that involve hidden intermediate steps. You can trip ChatGPT up pretty easily by telling it not to show its working, while on the same problem, if you tell it to explain its reasoning in steps, it'll do fine.
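
For a rough sketch of what I mean (the model name, prompts, and sample problem below are my own placeholders, not anything official), you can send the same question twice with the OpenAI Python client and only change the instruction about showing work:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A toy problem with a hidden intermediate step (placeholder example).
    problem = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
               "more than the ball. How much does the ball cost?")

    def ask(instruction: str) -> str:
        # Same problem, different instruction about whether to show working.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": problem},
            ],
        )
        return resp.choices[0].message.content

    # Answer-only: the intermediate step has to happen "in its head".
    print(ask("Answer with the final number only. Do not show any working."))

    # Step-by-step: the intermediate step gets written out, and accuracy tends to improve.
    print(ask("Explain your reasoning step by step, then give the final answer."))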

