
LLMs perform well on small, well-defined tasks. That description matches almost every task a student will work on in school, leading to an overestimation of LLM capability.

LLMs cannot decide what to work on, and they cannot easily manage large bodies of work or code. They do not understand the risk of making a change and deploying it to production, and they do not play nicely in autonomous settings. There is going to be a massive amount of work that goes into solving these problems, followed by a massive amount of work to solve the next set of problems. Software/ML engineers will have work to do for as long as these problems remain unsolved.



Careers are 30 years long

Can you confidently say that an LLM won’t be better than an average 22 year old coder within these 30 years?


Careers have failed to be 30 years long for a lot longer than 30 years now. That's one of the reasons 4-year colleges have drastically lost their ROI, the other blade of those scissors being stupendously rising tuition. AI is nothing but one more layer in the constantly growing substrate of computing technology a coder has to learn to integrate into their toolbelt, just like the layers that came before it: mobile, virtualization, networking, etc.


Careers are still longer than 30 years. How many people do you think are retiring at 48 or 51 years old these days? It’s a small minority. Most people work through 65: a career of about 45 years or more.


Right, but most people don't stick with a single career anymore. An individual career is <30 yrs, and the average person will have >1 of them.

It's not as out there as e.g. this article (https://www.wsj.com/articles/SB10001424052748704206804575468...) - seven careers is probably a wild overestimate. But it is >1.


> Can you confidently say that an LLM won’t be better than an average 22 year old coder within these 30 years?

No 22-year-old coder is better than the open-source library he's using, taken straight from GitHub, and yet he's the one who's getting paid for it.

People who claim AI will disrupt software development are just missing the big picture here: software jobs are already unrecognizable compared to what they were just 20 years ago. AI is just another tool, and as long as execs won't bother to use the tool themselves, they'll pay developers to do it instead.

Over the past decades, writing code has become more and more efficient (better programming languages, better tooling, then enormous open-source libraries), yet the number of developers kept increasing; it's Jevons paradox[1] in its purest form. So if the past tells us anything, it's that AI is going to create many new software developer jobs! (Because the number of people able to ship significant value to a customer is going to skyrocket, and customers' needs are a renewable resource.)

[1]: https://en.wikipedia.org/wiki/Jevons_paradox


A 22-year-old coder today, or a 22-year-old coder 30 years from now? How a 22-year-old codes 30 years from now may look like magic to you and me.


Huh, careers are 30 years long? I don't know where you live, but it's more like 45 years where I live. The retirement age is 67.


yes, because this is still glorified autocomplete


the average coder is worse than autocomplete

Too many people here have spent time in elite corporations and don't realize how mediocre the bottom 50th percentile of coding talent is


To be honest, if the bottom 50th percentile of coding talent is going to be obsolete, I wonder what happens to the rest of the "knowledge workers" in those companies. I mean people whose jobs consist of attending Teams meetings, making fancy PowerPoint slides and reports, perhaps even Excel if they are really competent. None of that is any more challenging for an LLM than writing code. In fact, replacing these jobs should be easier, since presentations and slides do not actually do anything, unlike a program that must perform a certain action correctly.


I've heard compelling arguments that we passed the "more people than jobs" threshold during the Green Revolution, and as a civilization have collectively retrofitted UBI in the form of "fake email jobs" and endless layers of management. This would also explain https://wtfhappenedin1971.com/ pretty well.

Either AI shatters this charade, or we make up some new laws to restrain it and continue to pretend all is well.


Exactly. There's some need, perhaps, to keep these tools "up to date" because someone in a non-free country is going to use them in a horrendous manner and we should maybe know more about them (maybe).

However, there is no good reason in a free society that this stuff should be widely accessible. Really, it should be illegal without a clearance, or need-to-know. We don't let just anyone handle the nukes...


This is true, and yet companies (both private and public sector) spend literal billions on Accenture/Deloitte slop, with budgets running well into the tens of millions.

Skills aren't even something that dictates software spend, it seems.


I tried it out and was able to put together a decent libevent server in C++ with smart pointers, etc., and a timer that prints out connection stats every 30s. It worked remarkably well.

I'm trying not to look at it as a potential career-ending event, but rather as another tool in my tool belt. I've been in the industry for 25 years now, and this is way more of an advancement than things like IntelliSense ever were.


Exactly. LLMs are nowhere near ready to fully replace software engineers or any other kind of knowledge worker. But they are increasingly useful tools, that much is true. https://www.lycee.ai/blog/ai-replace-software-engineer


Truth is, LLMs are going to make the coding part super easy, and the ceiling for shit coders like me has just gotten a lot higher, because I can just ask it to deliver clean code to me.

I feel like the software developer version of an investment banking Managing Director asking my analyst to build me a pitch deck an hour before the meeting.


You mentioned in another comment you’ve used AI to write clean code, but here you mention you’re a “shit coder”. How do you know it’s giving you clean code?


I know the fundamentals, but I'm a noob when it comes to coding with React or NextJS. Code that comes out of Claude is often separated and modularized properly, so that even I can follow the logic of the code, even if not the language and its syntax. If there's an issue with the code causing it to fail at runtime, I am still able to debug it appropriately with my minimal knowledge of JS. If a codebase lets me do that, then in my books that's a great codebase.

Compare that to GPT-4o, which gives me a massive chunk of unsorted gibberish that I have to pore through and organize myself.

Besides, most IBD MDs don't know if they're getting correct numbers either :).


Has the coding part ever been hard? When is the last time you faced a hard coding challenge?

What is hard is gathering requirements, dealing with unexpected production issues, scaling, security, fixing obscure bugs, and integrating with other systems.

The coding part is about 10% of my job and the easiest part by far.



