Hacker News

> you don’t learn by figuring out a new concept

This.

If LLMs were actually some magical thing that could write my code for me, I wouldn't use them for exactly this reason. Using them would prevent me from learning new skills and would actively encourage my existing skillset to degrade.

The thing that keeps me valuable in this industry is that I am always improving, always learning new skills. Anything that discourages that smells like career (and personal) poison to me.



This all depends on your mental relationship with the LLM. As somebody else pointed out, this is an issue of delegation. If you had one or more junior programmers working for you writing code according to what you specify, would you have the same worry?

I treat LLMs as junior programmers. They can make my life easier and occasionally make it harder. With that mindset, you start out knowing that they're going to make stupid mistakes, and that builds your skill of detecting mistakes in other people's code. Also, like with biological junior programmers, nonbiological junior programmers quickly show how bad you are at giving direction and force you to improve that skill.

I don't write code by hand because my hands are broken, and I can't use the keyboard long enough to write any significant amount of code. I've developed a relationship with nonbiological junior programmers such that I now tell them, via speech recognition, what to write and what information they need to create code that looks like code I used to create by hand.

Does this keep me from learning new skills? No. I'm always making new mistakes and learning how to correct them. One of those corrections was realizing that you don't learn anything significant from writing the code itself. Career-sustaining knowledge comes at a much higher level.


> If you had one or more junior programmers working for you writing code according to what you specify, would you have the same worry?

I try hard to avoid the scenario where junior programmers write things they don’t understand. That’s actually my biggest frustration and the difference between juniors I enjoy vs loathe working with. There are only so many ways to remain supportive and gently say “no, for real, learn what you’re working on instead of hacking something brittle and incomplete together.”


Thank you for explaining. I totally agree with you about being supportive and the level of capability.

For me, both Copilot and ChatGPT are reasonably skilled junior programmers. They understand about 70% of what I'm looking for. I can correct them with one revised prompt and get them to the 80 to 90% range. After three prompts, I assume GPT/Copilot is an idiot.

In both cases, non-bio and bio junior programmers, I always ask myself the question: how am I explaining it wrong? More often with bio junior programmers, I am explaining it wrong. With nonbio junior programmers, I'm still giving the wrong explanation, but in a different way.

The closest analogy to this experience is learning how to use speech recognition. You get to about the 95% recognition level by the system learning how you speak. You get to the 98 to 99% recognition level by speaking the way the system hears.


> If you had one or more junior programmers working for you writing code according to what you specify, would you have the same worry?

It's a great question. My answer generally is yes, I would (and do), but I'm willing to sacrifice a bit in order to ensure that the junior developer gets enough practical experience that they can succeed in their career. I'm not willing to make such a sacrifice for a machine.


> I don't write code by hand because my hands are broken, and I can't use the keyboard long enough to write any significant amount of code. I've developed a relationship with nonbiological junior programmers such that I now tell them, via speech recognition, what to write and what information they need to create code that looks like code I used to create by hand.

Can you please write a blog post on what tools you use for that?

My hands are fine, still, I'd love to just verbally explain what I want and have someone else type the code for me.


Sure. I've been meaning to do a comparison of Aqua and Dragon. I'll do one of Copilot and GPT-whatever. Give me 6 months or so, I've got marketing to do. :-)


In other words, your objection isn't to LLMs, it's to delegation, since the exact same argument would apply to having "some magical thing that could write my code for me" be your co-worker or a contractor.

It's fair for the type of code you want to write for your own growth. But even with that, there's more than enough bullshit boilerplate and trivial cross-language difference work that contributes zero (or negatively) to your growth, and it's worth having someone else, or something else, write it for you. LLMs are affordable for this, where people usually are not.


If that's the only thing LLMs are good for, my money for improving software productivity is in good old fashioned developer tools.

A better language reduces boilerplate. A better compiler helps you reason about errors. Better language features help you be more expressive. If I need to spool up a jet turbine feeding H100s just to decipher my error messages, the solution is a better compiler, not a larger jet engine.
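To make the "a better language reduces boilerplate" point concrete, here's a small illustration using Python's dataclasses (chosen purely as an example; the thread isn't about any particular language):

```python
from dataclasses import dataclass

# Without @dataclass, __init__, __repr__, and __eq__ would each be
# hand-written boilerplate; the language feature generates them for you.
@dataclass
class Point:
    x: float
    y: float

assert Point(1, 2) == Point(1, 2)          # structural equality, for free
assert repr(Point(1, 2)) == "Point(x=1, y=2)"  # readable repr, for free
```

The boilerplate didn't need an LLM to write it; the language absorbed it entirely.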

I myself have noticed this: a wild heterogeneity in the types of tasks for which LLMs are helpful. Its appearance as a silver bullet withers the closer you get to essential complexity.


One of my fears with the growing use of LLMs for writing software is that people are using them as a catch-all that prevents them from feeling the pain that indicates to us that there is a better way to do something.

For example, nobody would ever have developed RAII if manual memory management didn’t peeve them. Nobody would have come up with C if they didn’t feel the pain of assembly, or Rust without C, or Typescript without JavaScript, etc. Nobody would have come up with dozens of the genius tools that allow us to understand and reason about our software and enable us to debug it or write it better, had they not personally and acutely felt the pain.

At my job, the people most enthusiastic about LLMs for coding are the mobile and web devs. They say it saves them a lot of time spent writing silly boilerplate code. Shouldn’t the presence of that boilerplate code be the impetus that drives someone to create a better system? The entire firmware team has no interest in the technology, because there isn’t much boilerplate in C. Every line means something.

I worry LLMs will lead to terrible or nonexistent abstractions in code, making it opaque, or inefficient, or incorrect, or all of the above.


It's an interesting observation for sure, but those mobile and web developers sit at the tippy top of all the other abstraction layers. At that position a certain amount of boilerplate is needed, because not all controls and code-behind are the same, and there is a mighty collection of hacks on hacks to get a lot of things done. I think this is more of a "horses for courses" thing, where developers higher in the abstraction stack will always benefit from LLMs more, and developers lower down the stack have more agency for improvement. At the end of the day, I think everyone gets more productive, which is a net positive. It's just that not all developers are after the same goal (application devs vs library devs vs system devs).


To add to this, LLMs write pretty trite poetry, for example. If we think of code from the creative side, it’s hard to imagine that we’d want to simply hand all coding over to these systems. Even if we got working solutions (which is a major undertaking for large systems), it seems we’d be sacrificing elegance, novelty, and I’d argue more interesting explorations.


> If I need to spool up a jet turbine feeding H100s just to decipher my error messages, the solution is a better compiler, not a larger jet engine.

You don't need a jet turbine and H100s for that, you need it once for the whole world to get that ability; exercising it costs comparatively little in GPU time. Like, can't say how much GPT-4o takes in inference, but Llama-3 8B works perfectly fine and very fast on my RTX 4070 Ti, and it has a significant enough fraction of the same capabilities.

Speaking of:

> A better compiler helps you reason about errors.

There's only so much it can do. And yes, I've actually set up an "agent" (predefined system prompt) so I can just paste the output of build tooling verbatim, and get it to explain error messages in it, which GPT-4 does with 90%+ accuracy. Yes, I can read and understand them on my own. But also no, at this point, parsing multiple screens of C++ template errors or GCC linker failures is not a good use of my life.
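A minimal sketch of what such an "agent" setup might look like, assuming the OpenAI Python client; the model name, prompt wording, and script structure are illustrative, not the commenter's actual setup:

```python
"""Paste raw build output on stdin; get a plain-language explanation back."""
import sys

# A fixed system prompt is the whole "agent": no tools, no memory,
# just a role definition that build output gets pasted underneath.
SYSTEM_PROMPT = (
    "You are a build-tooling assistant. The user pastes raw compiler "
    "or linker output. Identify each distinct error, explain its "
    "likely cause in plain language, and suggest a fix."
)

def build_messages(tool_output: str) -> list[dict]:
    """Package the pasted build output into a chat-completion payload."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": tool_output},
    ]

def main() -> None:
    # Requires `pip install openai` and OPENAI_API_KEY in the environment.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # any capable model works here
        messages=build_messages(sys.stdin.read()),
    )
    print(resp.choices[0].message.content)

if __name__ == "__main__":
    main()
```

Usage would be something like `make 2>&1 | python explain_errors.py`, so the multi-screen template error never has to be read by a human at all.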

(Environment-wise, I'm still net ahead of a typical dev anyway, by staying away from Electron-powered tooling and ridiculously wasteful modern webdev stacks.)

> A better language reduces boilerplate.

Yes, that's why everyone is writing Lisp, and not C++ or Java or Rust or JS.

Oh wait, wrong reality.

> Better language features help you be more expressive.

That's another can of worms. I'm not holding out much hope here, because as long as we insist on working directly on a plaintext codebase treated as the single source of truth, we're already at the Pareto frontier of language expressiveness. Cross-cutting concerns are actually cross-cutting; you can't express them all simultaneously in a readable way, so all the modern language design advances do is shift focus and complexity around.

LLMs don't really help or hurt this either, though they could paper over some of the problem by raising the abstraction level at which programmers edit their code, in lieu of the tooling actually being designed to support such operations. I don't think this would be good - I'd rather we stopped with the plaintext single-source-of-truth addiction in the first place.

> Its appearance as a silver bullet withers the closer you get to essential complexity.

100% agreed on that. My point is, dealing with essential complexity is usually a small fraction of our work. LLMs are helpful in dealing with incidental complexity, which leaves us more time to focus on the essential parts.


> Yes, that's why everyone is writing Lisp, and not C++ or Java or Rust or JS.

> Oh wait, wrong reality.

You are a bit too cynical. The tools (compilers, interpreters, linters, etc.) that people actually use have gotten a lot better. Partly by moving to better languages: more Rust and less C, or TypeScript instead of JavaScript. But also from compilers for existing languages getting better; see especially the arms race between C compilers kicked off by Clang throwing down the gauntlet in front of GCC. They both got a lot better in the process.

(Common) Lisp was a good language for its time. But I wouldn't hold it up as a pinnacle of language evolution. (I like Lisps, and especially Racket. And I've programmed about half of my career in Haskell and OCaml. So you can rest assured about my obscure and elitist language cred. I even did a year of Erlang professionally.)

---

Btw, just to be clear: I actually agree with most of what you are writing! LLMs are already great for some tasks, and are still rapidly getting better.

You are also right that despite better languages being available, there are many reasons why people still have to use eg C++ here or there, and some people are even stuck on ancient versions of C++, or ancient compilers for it, with even worse error messages. LLMs can help.


Copilot (and so on) are simultaneously incredible and not nearly enough.

You cannot ask it to build a complex system and then use the output as-is. It's not enough to replace developer knowledge, but it also inhibits acquiring developer knowledge.


It sounds like you've discouraged yourself from learning the skill of using an LLM to help you code.


That only matters if the assumption is that any skill is worth learning simply because it's a skill.

You could learn the skill of running yourself over with a car, but it's either a skill you'll never use or the last skill you'll use. Either way, you're probably just as well off not bothering to learn that one.


"running yourself over with a car" feels very different from "learning to use LLMs to your advantage".


The GP was pointing out that learning to use an LLM, in their opinion, would stop them from learning other new skills and erode their existing ones.

In that context I think the analogy holds. Using an LLM halts your learning, as does running yourself over with a car. It's an exaggerated point for sure, but I think it points to the fact that you don't have to learn to use LLMs simply because it's a skill you could learn, especially if you think it will harm you long term.


What's better: I stare at my code for 3 hours to find a missing semicolon, or ChatGPT explains it to me in minutes? What lesson am I learning by staring at my code?


The skill of finding errors is actually a very useful one. I can count on one hand the number of times I've made similar mistakes in code; after 3 hours of staring at it, that lesson really sticks and you don't make it again.

When an algorithm fixes it for you, all you learn is how to send the code to the algorithm. That's convenient in the moment for sure, but you haven't learned how to fix your own code, and you won't know how to fix it when the algorithm is the cause of the error.

That also ignores secondary risks, like handing your codebase to a nonprofit / very-much-for-profit company. That isn't always possible depending on the codebase, and in general, why give them access to it when you could instead learn how to find your own missing semicolons? Why spin up a pile of GPUs and burn all that power rather than learning to do it yourself?


I would disagree with that take, actually. Perhaps I haven't yet figured out how to leverage LLMs for that (and don't get me wrong, I have certainly experimented as has most of my team), but I'm not discouraged from it.

I'm just trying to be clear-eyed about the risks. As an example, code completion tools in IDEs will cause me to get rusty in important baseline skills. LLMs present a similar sort of risk.


Are you preparing for some sort of cataclysmic world event where IDEs with code completion no longer exist, and we're tested on whether we can code without them? Letting those skills get rusty because other skills are being sharpened is not a bad thing. If they're getting rusty because you're not doing anything else, that's a bad thing, but you can get yourself out of lazy mental traps, whatever the skill, if you're proactive and diligent.


Eh, your argument could also be used against compilers. Or against language features like strong typing in something like Rust, instead of avoiding our bugs through very careful analysis when writing C code like God intended.

Using an LLM _is_ a skill, too.


I agree it's a skill, but I hear this analogy a lot and I don't think it's a great one.

A feedback loop with an LLM is useful for refining ideas and speeding up common tasks. I really do think it can be a massive productivity boost for one of the most common professional dev tasks, with the right tooling. I work a lot of mercenary gigs and need to learn new languages all the time, and something like phind.com is great for giving me basic stuff that works in a language whose idioms I don't know. The fact that it cites its sources and gives me links means I can deal with it being wrong sometimes, and also drill down and learn more when appropriate.

However, LLMs are super not like compilers. They simply do not create reliable simplifications in the same way. A higher-level language creates a permanent, reliable, and transferable reduction in complexity for the programmer, and this only works because of that reliability. If I write a function in Scala, it probably has a more complicated equivalent in JVM bytecode, but it works the same every time; the higher-order abstraction is semantically equivalent, and I can compose it with other functions or decompose it into its constituent parts reliably without changing the meaning. Programming languages can be direct translations of each other in a way that the fuzziness of natural language makes basically impossible. An abstraction in a language can be used in place of the complex underlying reality, and even modified to fit new situations, predictably and reliably, without drastic risk of it not working the same way. This reliability also means that the simplification has compounding returns, as it's easier to reason about and expand on for some future maintainer, or even my future self.
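The reliable-composition point can be sketched with a toy example (Python rather than Scala, purely for brevity; the function names are illustrative):

```python
# A deterministic abstraction: compose() means exactly its definition,
# every time. It can be reused, recombined, and taken apart without
# re-verifying it -- unlike regenerating "equivalent" code from a
# natural-language prompt, which may come back subtly different.
def compose(f, g):
    """Return the function x -> f(g(x))."""
    return lambda x: f(g(x))

inc = lambda x: x + 1
double = lambda x: x * 2

assert compose(double, inc)(3) == 8  # double(inc(3)) = double(4)
assert compose(inc, double)(3) == 7  # inc(double(3)) = inc(6)
```

The guarantee that `compose(double, inc)` always denotes the same computation is exactly the property an LLM's fuzzy code generation lacks.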

LLMs for code generation, at least in their current form, lack all these important properties. The code they generate is a fuzzy guess rather than a one-to-one translation. Often it's a good guess! But even when it is, the generated code is no more abstract than what needed to be written before, so putting it into your codebase still gives you just as much additional complexity to take into account when expanding on it. Maybe the LLM can help with that, maybe not. Asking an LLM to solve a problem in one case can fail to transfer to another in unpredictable ways.

You also aren't able to use it to make permanent architectural simplifications recursively. We can't, for example, save a series of simple English instructions instead of the code that's generated, then treat that as a moving piece we can recombine by piping it into another instruction to write a program, and so on. (That would also increase the cost of computing your program significantly, but that's actually a place where, well, not a compiler but an interpreter is a decent analogy.) My main concern with LLMs being deployed by developers en masse is kind of already happening, and it predates LLMs: I notice that codebases where people have used certain IDEs or other code-generation tools accumulate a bunch of unnecessary and hard-to-maintain complexity, because the programmer using the tools got used to just "autogenerating a bunch of boilerplate." That's fine in a vacuum, but it piles up technical and maintainability debt really fast if you're not actively mindful of it and taking steps in your workflow to prevent it, like having a refinement and refactoring phase in your feedback loop.

I think LLMs are useful tools that can help programmers a lot, and they may even lead to "barefoot programmers" embedded in local community needs, which I love. But I hear the analogy to compilers a lot and I think it's a bad one: it manages to miss most of what's good about compilers while also misunderstanding the benefits and pitfalls of generative models.


I mostly agree about a certain layer of semantics in our 'normal' programming languages. And most of the time, that level is good enough. But whether eg certain compiler optimisations kick in or not is sometimes much harder to forecast.

Btw, currently I wouldn't even dare to compare LLMs to compilers. For me the relevant comparison would be to 'Googling StackOverflow': really important tools for a programmer, but nothing you can rely on to give you good code. Nevertheless they are tools whose mastery is an important skill.

Remember how in yesteryears we complained about people copy-and-pasting from StackOverflow? Just like today we complain about people committing the output of their LLM directly.

---

I do hope that mechanical assistance in programming keeps improving over time. I have quite a few tricky technical problems that I would like to see solved in my lifetime.



