
These semantic conundrums would go away if, when we refer to a given model, we thought of it more holistically as the whole entity that produced and manages the software system. The intention behind, and responsibility for, the behavior of that system ultimately traces back to the people behind that entity. In that sense, LLMs can have intentions, think, know, be straightforward, deceptive, sycophantic, etc.

In that sense every corporation would be intentional, deceptive, exploitative, motivated, etc. Moreover, it does not address the underlying issue: no one knows what computation, if any, is actually performed by a single neuron.

> In that sense every corporation would be intentional, deceptive, exploitative, motivated, etc.

...and so they are, because the people making up those corporations are themselves, to various degrees, intentional, deceptive, etc.

> Moreover, it does not address the underlying issue: no one knows what computation, if any, is actually performed by a single neuron.

It sidesteps this issue completely: to me, the buck stops with the humans; there's no need to look inside their brains and reduce further than that.


I see. In that case we don't really have any disagreement. Your position seems coherent to me.