Even in cognitive tasks expressed via language, something like memory feels necessary. At that point it's not an LLM in the sense of a generic language model; it becomes a language model conditioned on a memory state.


More than a memory.

Needs to be a closed loop, running on its own.

Right now, we get its attention and it responds. But frankly, if we did manage any sort of sentience, or even a simulation of it, then it might not respond.

To me, that is the real test.
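
One way to picture the closed-loop idea: below is a minimal sketch (an assumption-laden toy, not anyone's actual system) of a model that runs on its own schedule, is conditioned on an explicit memory state, and decides for itself whether to answer. The generate function is a hypothetical stand-in for a real text-generation call.

    import time
    from collections import deque

    def generate(prompt):
        # Hypothetical placeholder for any real text-generation call;
        # it just echoes a trivial "thought" so the loop can run.
        return f"(thought about: {prompt[-60:]!r})"

    class ClosedLoopAgent:
        def __init__(self, memory_size=32):
            # The memory state the model is conditioned on at every step.
            self.memory = deque(maxlen=memory_size)
            self.inbox = deque()  # messages from outside; may go unanswered

        def step(self):
            had_input = bool(self.inbox)
            # Fold any new input into the memory state.
            while self.inbox:
                self.memory.append(f"heard: {self.inbox.popleft()}")

            # Condition the next generation on the memory state, then
            # fold the output back in, closing the loop.
            prompt = "\n".join(self.memory) or "(empty memory)"
            thought = generate(prompt)
            self.memory.append(thought)

            # The loop itself decides whether to speak; here it answers
            # only when something new arrived, but a fuller sketch could
            # decline even then.
            return thought if had_input else None

    agent = ClosedLoopAgent()
    agent.inbox.append("hello?")
    for _ in range(3):
        print(agent.step())  # may print None: a response is not guaranteed
        time.sleep(0.1)      # the loop keeps running with or without input

The point of the sketch is that the loop, not the caller, owns the decision to respond: input only perturbs the memory state it runs on.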



