
Yeah, humans are very similar. We have intuitive immediate-next-step suggestions, and we keep applying these intuitive next steps until we find that they lead to a dead end, and then we backtrack.

I always say, the way we use LLMs (so far) is basically like having a human write text purely on gut reactions, and without a backspace key.
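A toy sketch of that contrast (everything here is made up for illustration, including the step set and function names): a heuristic "gut reaction" that proposes next steps, a solver that can undo them when a branch dead-ends, and a "no backspace" version that must commit to every suggestion.

    def suggestions(partial, target):
        """Heuristic 'gut reaction': propose next steps, most promising first."""
        remaining = target - sum(partial)
        # Prefer the largest step that still fits; a stand-in for intuition.
        return [s for s in (5, 3, 2) if s <= remaining]

    def solve_with_backtracking(partial, target, depth_limit):
        """Try intuitive steps; if a branch dead-ends, undo and try the next one."""
        if sum(partial) == target:
            return partial
        if depth_limit == 0:
            return None  # dead end: too deep
        for step in suggestions(partial, target):
            result = solve_with_backtracking(partial + [step], target, depth_limit - 1)
            if result is not None:
                return result
        return None  # every intuitive step failed; the caller backtracks further

    def solve_greedy(target, depth_limit):
        """'No backspace': always commit to the first gut suggestion."""
        partial = []
        for _ in range(depth_limit):
            if sum(partial) == target:
                return partial
            options = suggestions(partial, target)
            if not options:
                return None  # stuck, with no way to revise earlier choices
            partial.append(options[0])
        return partial if sum(partial) == target else None

    if __name__ == "__main__":
        # Target 4 with steps {5, 3, 2}: the gut reaction says 3 first, which
        # dead-ends (only 1 remains). Backtracking recovers with [2, 2];
        # the greedy, commit-to-everything version just gets stuck.
        print(solve_with_backtracking([], 4, 4))  # [2, 2]
        print(solve_greedy(4, 4))                 # None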


