Antibabelic's comments

If you don't actually want to write code there's no reason to learn anything. The question is, if LLMs can write much better code than you, what does your employer need you for?


Your employer needs you because writing code was never the hardest part of programming, or of software engineering in general. The hardest parts are managing expectations, responsibilities, cross-team communication, multi-domain expertise, and corporate bureaucracy, and pushing back against unnecessary requirements and constraints. LLMs can solve none of these, and they are especially terrible at pushing back.


I agree. I remember once the full specification I got was

> Enough

After talking for 4 hours and 3 coffee cups, I had enough corner cases and the main case to understand what they wanted. A week later I had a list of criteria that could be programmed. Five years later, most of the unusual but annoying rough corners were fixed. We still had a button to approve the weird cases manually.


> Put us in computers

I feel like this is a modern version of believing in souls. You are matter, not data. If you find a way to simulate yourself on a computer, this will not prevent you from experiencing death. And if that's the case, what's the point? Stroking your ego with the knowledge that a simulation of you will stick around for some time after you give up the ghost?


Maybe we’re already the simulations, and this part of our memories is just the back-propagation used to figure out where the saved copy came from.


I think it's the opposite. Believing that something special exists in brains that can't be replicated in a (sufficiently complex) computer is spirituality, a belief in the supernatural.


Rather, confusing models (including computational ones) for the things they model is the very definition of magical thinking. Matter matters. There is nothing specific about "brains" that prevents them from "existing" digitally: the fact is that nothing material can "exist" in a computational substrate. A computer can only simulate: replicate the structure of material things, in a manner useful to us, using symbols.


Would you like to share any details?


I would love to. Unfortunately, when you’re talking about physics, the community doesn’t welcome casual conversations; that’s been my experience. If you start with anything less than a formal proof, all the hackstchuallys come out. So I will start with the proof, soon. I’m working with some academics on peer review now.


I remember seeing some studies that experimentally show this to be true for Hebrew (another de/ascender-poor writing system), but can't find them at the moment.


The author treats the Copenhagen interpretation as if it shows that there are no observer-independent things, when in reality it simply states that quantum theory is not about them.

"Bohr (1937), Heisenberg (1947), Frank (1936) and others explained carefully -- but did not prove -- that the theory makes no assertions concerning autonomous, i.e. observer-independent, things: that all its statements are about experimental situations. (This is why Bohr, and initially also Rosenfeld, stated that no special theory of measurement was necessary: they believed that quantum mechanics was already a theory of measurement.)" From Mario Bunge (1979) "The Einstein-Bohr debate over quantum mechanics: Who was right about what?"


You can believe the references the Wikipedia article is based on, such as BBC Scotland:

"They surveyed hundreds of fish and chip shops in Scotland to find out if "the delicacy" was available and if people were actually buying them. It found 66 shops which sold them, 22% of those who answered the survey. [...] Annie Anderson, from the Centre for Public Health Nutrition Research at the University of Dundee, used to send her medical students out into the city to see if they could find somewhere that sold deep-fried Mars bar. "It was not much of a challenge in Dundee," she says."


I lived in Dundee during the 1990s and they were available. I'm told that they were popular at the Victor (https://maps.app.goo.gl/9g3je56Gt7spifo26) though I don't see them on the menu today.


I'm yet to see a convincing example of LLMs producing anything substantially insightful.


Depends on how you define "insight" really.

Is doing meta-analysis and discovering a commonality "insightful" for example?

Or is insight only something new you discover without basing your discovery on anything?


They are also the people who can see most clearly how subpar generative-AI output is. When you can't find a single spot free of AI slop to rest your eyes on, and you see that output get so much praise, it's natural to take it as a direct insult to your work.


Yes, the general acceptance of generally mediocre AI output is quite frustrating.

Cool, you "made" that image that looks like ass. Great, you "wrote" that blog post with terrible phrasing and far too many words. Congrats, I guess.


I'm not sure I understand your distinction between understanding of language and general mental capacity. As I (and the other person responding, I believe) understand, the two are inseparably connected in humans.


> The non sequitur is in assuming that somehow the mechanism of operation dictates the output, which isn't necessarily true.

Where does the output come from if not the mechanism?


So you agree humans can't really think because it's all just electrical impulses?


Human "thought" is the way it is because "electrical impulses" (wildly inaccurate description of how the brain works, but I'll let it pass for the sake of the argument) implement it. They are its mechanism. LLMs are not implemented like a human brain, so if they do have anything similar to "thought", it's a qualitatively different thing, since the mechanism is different.


Mature sunflowers reliably point due east; needles on a compass point north. They implement different things using different mechanisms, yet are really the same.


You can get the same output from different mechanisms, as in your example. Another would be that it's equally possible to quickly do addition on a modern pocket calculator and on an arithmometer, despite the two being fundamentally different. However:

1. You can infer the output from the mechanism. (Because it is implemented by it).

2. You can't infer the mechanism from the output. (Because different mechanisms can easily produce the same output).

My point here is 1, in response to the parent commenter's "the mechanism of operation dictates the output, which isn't necessarily true". The mechanism of operation (whether of LLMs or sunflowers) absolutely dictates their output, and we can make valid inferences about that output based on how we understand that mechanism operates.
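
As a toy illustration of that asymmetry (my own sketch, not anything from the thread; the function names are made up), here are two Python routines that add numbers by different mechanisms. The outputs alone can't tell you which mechanism was used, but knowing the mechanism lets you predict the outputs and their properties:

    # Two different mechanisms that produce the same output.
    def add_native(a: int, b: int) -> int:
        # Mechanism 1: the language's built-in addition.
        return a + b

    def add_by_stepping(a: int, b: int) -> int:
        # Mechanism 2: repeated increments, roughly how an arithmometer's
        # stepped drum adds (assumes b is non-negative).
        total = a
        for _ in range(b):
            total += 1
        return total

    # Point 2: the outputs alone don't reveal which mechanism produced them.
    assert all(add_native(a, b) == add_by_stepping(a, b)
               for a in range(50) for b in range(50))

    # Point 1: knowing the mechanism lets you infer things about the output,
    # e.g. add_by_stepping's running time grows with b while add_native's does not.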


> yet are really the same.

This phrase is meaningless. The definition of magical thinking is saying that if birds fly and planes fly, birds are planes.

Would you complain if someone said that sunflowers are not magnetic?

