Hacker News

Fascinating point, and one I think can definitely apply here.

Though there is a key difference: Galileo could see through his telescope the same way, every time. He also understood how the telescope worked to deliver his increased knowledge.

Compare this with LLMs, which provide different answers every time, and whose internal mechanisms are poorly understood. This adds another level of uncertainty, which further reduces our agency.



> Though there is a key difference – Galileo could see through his telescope the same way, every time.

Actually this is a really critical error: a core point of contention at the time was that he didn't see the same thing every time. Small variations in lens quality, weather conditions, and user error all contributed to the discovery of what we now call "instrument noise" (not to mention natural variation in the astronomical system itself, which we simply couldn't detect with the naked eye, for example the rings of Saturn). Indeed this point was so critical that it led to the invention of least-squares curve fitting (which, ironically, is how we got to where we are today). OLS allowed us to "tame" the parts of the system that we couldn't comprehend, but it was emphatically not a given that telescopes had inter-measurement reliability when they first debuted.
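To make the "taming" concrete, here's a minimal sketch of ordinary least squares with NumPy (modern tooling, obviously not the period methods): noisy measurements of a drifting position still yield stable estimates of the underlying motion.

```python
import numpy as np

# Simulated noisy "telescope" measurements of a linearly drifting position.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)                    # observation times
true_pos = 2.0 * t + 1.0                      # underlying "true" motion
obs = true_pos + rng.normal(0, 0.5, t.size)   # instrument noise on each reading

# Ordinary least squares: choose the slope and intercept that minimize
# the sum of squared residuals between the model and the observations.
slope, intercept = np.polyfit(t, obs, deg=1)
print(slope, intercept)  # close to the true 2.0 and 1.0 despite the noise
```

The point is that no single measurement is trusted; the fit averages the noise away, which is exactly the move that rescued early instrument readings.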


LLMs can be deterministic machines: you just need to control the random seeds and run them on the same hardware to avoid numerical differences.
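A toy sketch of the seeding point, using Python's standard library (a real LLM stack would additionally need fixed framework seeds, deterministic kernels, and identical hardware):

```python
import random

def sample_tokens(seed, vocab, n=5):
    """Toy 'generation': draw n tokens from a vocabulary.

    With the same seed, the draws are bit-for-bit identical,
    illustrating that sampling is deterministic given the seed."""
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "cat", "sat", "on", "mat"]
a = sample_tokens(42, vocab)
b = sample_tokens(42, vocab)  # same seed: identical sequence
c = sample_tokens(7, vocab)   # different seed: (almost certainly) different
print(a == b)  # True
```

The apparent nondeterminism of hosted LLMs comes from sampling temperature plus uncontrolled seeds and hardware, not from anything inherently random in the model.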

Gradient descent is not a total black box, although it works so well as to be unintuitive. There is ongoing "interpretability" research too, with several key results already.


Deterministic doesn't necessarily mean understandable by a human mind. A process can be entirely deterministic yet so complex, with so many moving parts (and probably chaotic), that a humble human cannot comprehend it.
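The classic illustration of this is deterministic chaos. A sketch with the logistic map: every step follows an exact rule, yet two nearly identical starting points quickly become unrecognizably different, so long-range prediction is hopeless in practice.

```python
def logistic(x, r=4.0):
    # One step of the logistic map: a fully deterministic rule.
    return r * x * (1 - x)

def trajectory(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)  # nearly identical starting point

# Early on the two runs agree to many digits, but the tiny initial
# difference roughly doubles each step and eventually swamps everything.
print(abs(a[1] - b[1]))    # still minuscule
print(abs(a[-1] - b[-1]))  # divergence after 50 deterministic steps
```

Determinism guarantees reproducibility given exact inputs; it does not guarantee that the behavior is comprehensible or predictable.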



