
>> the human brain expresses intentionality originating from inner states and future expectations

How does this differ from, or overlap with, the concept of "attention" as used in transformers?
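
(For concreteness: the "attention" in a transformer is just a learned, similarity-weighted averaging over the token representations already in the context window. A rough single-head sketch in NumPy, all names illustrative:)

    import numpy as np

    def attention(Q, K, V):
        # Q, K, V: (seq_len, d) arrays of query, key, and value vectors
        scores = Q @ K.T / np.sqrt(Q.shape[-1])    # pairwise token similarities
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w = w / w.sum(axis=-1, keepdims=True)      # softmax over the context
        return w @ V                               # weighted mix of the values

    x = np.random.default_rng(0).standard_normal((5, 8))  # 5 tokens, dim 8
    print(attention(x, x, x).shape)                        # (5, 8)

In this sketch, at least, the weights come entirely from the prompt's own contents, which is what makes the comparison to inner states and future expectations interesting.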



I believe we are contextual language models as well: we rely 99% on chaining existing ideas and words and 1% on our own inspiration. Coming up with a truly original, useful idea can be a once-in-a-lifetime event. Everything else has been said and done before.


In a sense, yes, but the things you do and say are not prompted by already-expressed statements or commands. You interpret your environment to infer needs, plan for future contingencies, identify objectives, and plan actions to achieve them. Those actions are not randomly picked from a library; they are generated and tailored to your actual circumstances.

It’s when LLMs start asking the questions rather than answering them that things will get interesting.

