
I'm really sorry to hear this, because part of my goal here is to help push back against the idea that "programming skills are useless now, anyone can get an LLM to write code for them".

I think existing software development skills get a whole lot more valuable with the addition of coding agents. You can take everything you've learned up to this point and accelerate the impact you can have with this new family of tools.

I said a version of this in the post:

> AI tools amplify existing expertise. The more skills and experience you have as a software engineer the faster and better the results you can get from working with LLMs and coding agents.

A brand new vibe coder may be able to get a cool UI out of ChatGPT, but they're not going to be able to rig up a set of automated tests with continuous integration and continuous deployment to a Kubernetes cluster somewhere. They're also not going to be able to direct three different agents at once in different areas of a large project that they've designed the architecture for.



I'm not sure that having the patience to work with something with a very inconsistent performance and that frequently lies is an extension of existing development skills. It doesn't work like tools developers use and it doesn't work like people developers work with. Furthermore, techniques of working with agents today may be completely outdated a year from now. The acceleration is also inconsistent: sometimes there's an acceleration, sometimes a deceleration.

Generative AI is at the same time incredibly impressive and completely unreliable. This makes it interesting, but also very uncertain. Maybe it's worth my investment to learn how to master today's agents, and maybe I'd be better off waiting until these things become better.

You wrote:

> Getting good results out of a coding agent feels uncomfortably close to getting good results out of a human collaborator. You need to provide clear instructions, ensure they have the necessary context and provide actionable feedback on what they produce.

That is true (about people) but misses out the most important thing for me: it's not about the information I give them, but about the information they give me. For good results, regardless of their skill level, I need to absolutely trust that they tell me what challenges they've run into and what new knowledge they've gained that I may have missed in my own understanding of the problem. If that doesn't happen, I won't get good results. If that kind of communication only reliably happens through code I have to read, it becomes inefficient. If I can't trust an agent to tell me what I need to know (and what I trust when working with people) then the whole experience breaks down.


> I'm not sure that having the patience to work with something with a very inconsistent performance and that frequently lies is an extension of existing development skills.

If you’ve been tasked with leadership of an engineering effort involving multiple engineers and stakeholders, you know that this is in fact a crucial part of the role the more senior you get. It is much the same with people: know their limitations, show them a path to success, help them overcome their limitations by laying down the right abstractions and giving them the right coaching, make it easier to do the right thing. Most of the same approaches apply. When we do these things with people it’s called leadership or management. With agents, it’s context engineering.


Because I reached that position 15 years ago, I can tell you that this is untrue (in the sense that the experience is completely different from an LLM).

Training is one thing, but training doesn't increase the productivity of the trainer; it's meant to improve the capability of the trainee.

At any level of capability, though - whether we're talking about an intern after one year of university or a senior developer with 20 years of experience - effective management requires that you're able to trust that the person tells you when they've hit a snag or anything else you may need to know. We may not be talking about 100% trust, but not far from it, either. You can't continue working with someone who fails to tell you what you need to know even 10% of the time, regardless of their level. LLMs are not at that acceptable level yet, so the experience is not similar to technical leadership.

If you've ever been tasked with leading one or more significant projects you'd know that if you feel you have to review every line of code anyone on the team writes, at every step of the process, that's not the path to success (if you did that, not only would progress be slow, but your team wouldn't like you very much). Code review is a very important part of the process, but it's not an efficient mechanism for day-to-day communication.


> effective management requires that you're able to trust that the person tells you when they've hit a snag or anything else you may need to know

Nope, effective management is on YOU, not them. If everyone you’re managing is completely transparent and immediately tells you stuff, you’re playing in easy mode


So the role of a coding agent is to challenge me to play in hard mode?

And suppose getting developers to not lie or hide important information is on me, what should I do to get an LLM to not do that?


no, the point is LLMs will behave the same way humans you have to manage do (there's obviously differences - eg LLMs tend to forget context more often than most humans, but also they tend to know a lot more than the average human). So some of the same skills that'll help you manage humans will also help you get more consistency out of LLMs.


I don't know of anyone who would like to work with someone who lies to them over and over, and will never stop. LLMs do certain things better than people, but my point is that there's nothing you can trust them to do. That's fine for research (we don't trust, and don't need to trust, any human or tool to do a fully exhaustive research, anyway), but not for most other work tasks. That's not to say that LLMs can't be utilised usefully, but something that can never be trusted behaves like neither person nor tool.


Anthropomorphizing LLMs is not going to help anyone. They're not "lying" to you. There's no intent to deceive.

I really think that the people who have the hardest time adapting to AI tools are the ones that take everything personally.

It's just a text generator, not a colleague.


> It's just a text generator, not a colleague.

The person you are responding to is quite literally making the same point. This entire thread of conversation is in response to the post's author stating that using a coding agent is strongly akin to collaborating with a colleague.


Yes, I want to play in easy mode. Why would I want to play in hard mode?

You're trying to sell AI here, right? And the argument is that AI is like hard mode... which developers are already in, but might not be.

It's just not a very good sales pitch.


> Yes, I want to play in easy mode. Why would I want to play in hard mode?

Working alone can be much easier than managing others in a team. But also, working in a team can be far more effective if you can figure out how to pull it off.

It's much the same as working with agents. Working alone, without the agents, it's easier to make exactly what you want happen. But working with agents, you can get a lot more done a lot faster-- if you can figure out how to make it happen. This is why you might want hard mode.


The point you missed entirely, young padawan


Are you going to say the point or are you just going to dance around it?


> If everyone you’re managing is completely transparent and immediately tells you stuff, you’re playing in easy mode

So much this. There are many managers who are effective at managing people who do not need management.


The vast majority of managers, much like most engineers, only have to deal with “maintenance mode” throughout most of their career. Particularly common in people whose experience has been in large corporations - you simply don’t realize how much was built for you and “works” (even if badly)


> effective management requires that you're able to trust that the person tells you when they've hit a snag or anything else you may need to know

This is what we shoot for, yes, but many of the most interesting war stories involve times when people should have been telling you about snags but weren't-- either because they didn't realize they were spinning their wheels, or because they were hoping they'd somehow magically pull off the win before the due date, or innumerable other variations on the theme. People are most definitely not reliable about telling you things they should have told you.

> if you feel you have to review every line of code anyone on the team writes...

Somebody has to review the code, and step back and think about it. Not necessarily the manager, but someone does.


> the most interesting war stories involve times when people should have been telling you about snags but weren't-

This comes up a lot. A person sometimes does an undesirable thing that an AI also does. So you might as well use the AI.

But we don't apply this thinking to people. If a person does something undesirable sometimes then we accept that because they are human. If they do it very frequently then at some point, given a choice, you will stop working with that person.


1000% this. Today LLMs are like enthusiastic, energetic, over-confident, well-read junior engineers.

Does it take effort to work with them and get them to be effective in your code base? Yes. But is there a way to lead them in such a way that your "team" (you in this case) gets more done? Yes.

But it does take effort. That's why I love "vibe engineering" as a term because the engineering (or "senior" or "lead" engineering) is STILL what we are doing.


Inconsistent performance and frequent lies are a crucial part of the role, really? I've only met a couple of people like that in my career. Interviews go both ways: if I can't establish that the team I'll be working with is composed of and managed by honest and competent people, I don't accept their offer. Sometimes it has meant missing out on the highest compensation, but at least I don't deal with lies and inconsistent performance.


> incredibly impressive and completely unreliable.

There have been methods of protecting against this since before AI, and they still apply. LLMs work great with test driven development, for example.
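To make that concrete, here's a minimal, hypothetical sketch of that loop (the function and tests are invented for illustration, not taken from anyone's actual setup): the human writes the tests first, the agent only has to make them pass, and pytest, rather than the agent's own report, decides whether it succeeded.

    # Hypothetical illustration of TDD with a coding agent: the human pins down
    # the behaviour in tests, then asks the agent to write the implementation.
    # A trivial reference implementation is included so the example runs as-is.
    import re

    def slugify(text: str) -> str:
        # The part you would delegate to the agent.
        return "-".join(re.findall(r"[a-z0-9]+", text.lower()))

    def test_basic():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_whitespace_and_punctuation():
        assert slugify("  Vibe   Engineering?! ") == "vibe-engineering"

The point isn't the toy function; it's that the pass/fail signal comes from something you wrote and trust.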

I would say that high-level knowledge and good engineering practices are more important than ever, but they were always important.


Test-driven development helps protect against wrong code, but it's not code I'm interested in, and it's not wrong code that I'm afraid of (I mean, that's table stakes). What I need is something that would help me generate understanding and do so reliably (even if the performance is poor). I can't exercise high-level knowledge efficiently if my only reliable input is code. Once you have to work at the code level at every step, there's no raising of the level of thought. The problem for me isn't that the agent might generate code that doesn't pass the test suite, but that it cannot reliably tell me what I need to know about the code. There's nothing I can reliably offload to the machine other than typing. That could still be useful, but it's not necessarily a game-changer.

Writing code in Java or Python as opposed to Assembly also raises the level of abstract thought. Not as much as we hope AI will be able to do someday, but at least it does the job reliably enough. Imagine how useful Java or Python would be if 10% of the time they would emit the wrong machine instructions. If there's no trust on anything, then the offloading of effort is drastically diminished.


In my experience with Claude Code and Sonnet, it is absolutely possible to have architectural and design-oriented conversations about the work, at an entirely different and higher level than using a (formerly) high-level programming language. I have been able to learn new systems and frameworks far faster with Claude than with any previous system I have used. It definitely does require close attention to detect mistakes it does not realize it is making, but that is where the skill comes in. I find it being right 80% of the time and wrong 20% of the time to be a hugely acceptable tradeoff, when it allows me to go radically faster because it can do that 80% much quicker than I could. Especially when it comes to learning new code bases and exploring new repos I have cloned -- it can read code superhumanly quickly and explain it to me in depth.

It is certainly a hugely different style of interaction, but it helps to think of it as a conversation, or more precisely, a series of individual small targeted specific conversations, each aimed at researching a specific issue or solving a specific problem.


Indeed, I successfully use LLMs for research, and they're an improvement because old-school search isn't very reliable either.

But as to the 80-20 tradeoff on other tasks, the problem isn't that the tool is wrong 20% of the time, but that it's not trustworthy 100% of the time. I have to check the work. Maybe that's still valuable, but just how valuable that is depends on many factors, some of which are very domain-dependent and others are completely subjective. We're talking about replacing one style with another that is much better in some respects and much worse in others. If, on the whole, it was better in almost all cases, that would be one thing (and make the investment safer), but reports suggest it isn't.

I've yet to try an LLM to learn a new codebase, and I have no doubt it will help a lot, but while that is undoubtedly a very expensive task, it's also not a very frequent one. It could maybe save me a week per year, amortised. That's not nothing (and I will certainly give it a try next time I need to learn a new codebase), but it's also not a game-changer.


80-20 is also a generous ratio; in my experience it’s more like 65-35


Without meaning to sound flippant or dismissive, I think you're overthinking it. By the sounds of it, agents aren't offering what you say you need. What they _are_ offering is the boilerplate, the research, the planning etc. All the stuff that's ancillary. You could quite fairly say that it's in the pursuit of this stuff where details and ideas emerge and I would agree, but sometimes you don't need ideas. You need solutions which are run-of-the-mill and boring.


I'm well aware that LLMs are more than capable enough to successfully perform straightforward, boring tasks 90% of the time. The problem is that there's a small but significant enough portion of time where I think a problem is simple and straightforward, but it turns out not to be once you get into the weeds, and if I can't trust the tool to tell me if we're in the 90% problem or the 10% problem, then I have to carefully review everything.

I'm used to working with tools, such as SMT solvers, that may fail to perform a task, but they don't lie about their success or failure. Automation that doesn't either succeed or report a failure reliably is not really automation.

Again, I'm not saying that the work done by the LLM is useless, but the tradeoffs it requires make it dramatically different from how both tools and humans usually operate.


If you're writing your own tests, sure, AI is fast at writing code that passes the tests.

But if you write a comprehensive test suite for a problem, you've effectively done the hard development work to solve the problem in the first place. How did the AI help?

Oh have the AI write unit tests you say? Claude cheats constantly at the tests ime. It frequently tests the mock instead of the UUT and reports a pass. That's worse than useless! I'm sure a huge swath of slop unit tests that all pass is acceptable quality for a lot of businesses out there.
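To illustrate the failure mode being described (a hypothetical sketch, not an actual agent transcript): a "test" like the one below passes no matter how broken the real code is, because it only ever exercises the mock.

    # Hypothetical example of the anti-pattern: the unit under test is patched
    # out, so the assertion checks the mock's canned answer, never the real code.
    from unittest.mock import patch

    def apply_discount(price: float, code: str) -> float:
        # The real unit under test -- deliberately buggy to make the point.
        return price  # ignores the discount code entirely

    @patch(__name__ + ".apply_discount", return_value=90.0)
    def test_apply_discount(mock_apply):
        # The patched function is called, so the buggy implementation above is
        # never executed; we're asserting the value we told the mock to return.
        assert apply_discount(100.0, "SAVE10") == 90.0

A suite full of tests like this reports green while verifying nothing, which is exactly why it's worse than useless.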


> But if you write a comprehensive test suite for a problem, you've effectively done the hard development work to solve the problem in the first place. How did the AI help?

By making you not write the implementation?

Also, the AI writing anything bad isn’t an excuse. You’re the one piloting that ship, and if not, you’re probably the one reviewing the code. It’s your job to review your own and others’ code with a critical eye, and that goes double in the LLM age.


> doesn't work like people developers work with

I don't know.

This is true for people working in an environment that provides psychological safety, has room for mistakes and rewards hard work.

This might sound cynical, but in all other places I see the "lying to cover your ass" behavior present in one form or another.


> It doesn't work like tools developers use and it doesn't work like people developers work with. Furthermore, techniques of working with agents today may be completely outdated a year from now.

Sounds like big money to be made in improving UX


> I'm not sure that having the patience to work with something with a very inconsistent performance and that frequently lies is an extension of existing development skills.

that's a basic skill you gotta have if you're leading anything or anyone. There'll always be levels of that. So if you're planning to lead anyone in your career, it's a good skillset to develop


That's not the same skill at all https://news.ycombinator.com/item?id=45518204


While this is true, I definitely find that the style of the work changes a lot. It becomes much more managerial, and less technical. I feel much more like a mix of project and people manager, but without the people. I feel like the jury is still out on whether I’m overall more productive, but I do feel like I have less fun.


My lessons so far:

1. Less fun.

2. A lot of more "review fatigue".

3. Tons of excess code I'd never put in there in the first place.

4. Frustration with agents being too optimistic, which with time verges on the ludicrous ("Task #3 has been completed successfully with 98% tests failing. [:useless_emojis:]")

5. Frustration with agents routinely going down a rabbit hole or in circles, and the effort needed to get that straight (Anthropic plainly advises to start from scratch in such cases - which is sound advice, but makes me feel like I just lost the last 5 hours of my life without even learning anything new).

I stopped using agents and use LLMs very sparingly (e.g. for review - they sometimes find some details I missed and occasionally have an interesting solution) but I'm enjoying my work so much more without them.


I think one of the tricks is to just stop using the agent as soon as you see signs of funny business. If it starts BSing me with failing tests, I just turn it off immediately and git reset (maybe after taking a quick peek)


Yeah I make maybe two or three attempts at getting it to write a plan that it is able to follow coherently. But after that I pull the escape hatch and *gasp* program by hand.

I've made this mistake of doubling down after a few initial failures to solve an issue, by trying to make this super duper comprehensive and highly detailed and awesome plan that it will finally be able to implement correctly. But it just gets worse and worse the more I try, because it fundamentally does not understand what is going on, so it will inevitably find an opportunity to go massively off the rails, and the further down you lead it the more impressive the derailment will be.

My experience is that going around in endless circles with the model is just a waste of time when you could have just done it yourself in the time you've wasted.


One thing I don’t get - If you spend much of your time reviewing, you’re just reading - you’re not actually doing anything - you’re passive in the activity of code production. By extension you will become worse at knowing what a good standard of code is and become worse at reviewing code.

I’m not a SWE so I have no interests to protect by criticising what is going on.


In my DJing years I've learned that it is best to provide a hot signal and trim the volume down, rather than trying to amplify it later, because you end up amplifying noise. Max out the mixer volume and put a compressor (and a limiter after it to protect the speaker setup - it will make it sound awful if hit, but it won't damage your setup and it will flag clueless bozos loud and clear) later in the chain; don't try to raise it after it leaves the origin.

It seems to me that adding noise to the process and trying to cut it out later is a self defeating proposition. Or as Deming put it, (paraphrasing) you can't QC quality into a bad process.

I can see how it seems better to "move fast and break things" but I will live and die by the opposite "move slow and fix things". There's much, much more to life than maximizing short term returns over a one dimensional naïve utilitarian take on value.


Tell that to Linus Torvalds.

His whole job is just doing code review, and I'd argue he's better at coding now than he ever was before.


I'd be careful with extrapolating based on the creator of Linux and Git. His life and activities are not in line with those of more typical programmers.


> His life and activities are not in line with those of more typical programmers.

Okay sure.

I'll use myself as another example then. When I was a dev I used to write a lot of code. Now I'm a tech team lead, and I write less code, but review significantly more code than I used to previously.

I feel more confident, comfortable, and competent in my coding abilities now than ever before even though I'm coding less.

I feel like this is because I am exposed to a lot more code, and not in a passive way (reading legacy code) but an active way (making sure a patch set will correctly implement feature X, without breaking anything existing)

I feel like this principle applies to any programmer. Same thing with e.g. writers. Good writers read _a lot_ and it makes them better writers.

This is my opinion and not based on any kind of research. So if you disagree, that's fine with me. But so far I haven't seen anything to convince me of the opposite.


Yeah exactly… hardly comparable to the median or mean dev


Sure, but I’m not comparing myself with a typical programmer am I?


It's not only that Linus is atypical, it's also that he is reviewing other people's code, and those people are also highly competent, or they would not be kernel committers. And they all share large amounts of high-quality and hard-earned implicit context.

Reviewing well executed changesets by skilled developers on complicated and deliberate projects is not comparable to "fleet of agents" vibe engineering. One of these tasks will sharpen you regardless how lazily you approach it. The other requires extreme discipline to avoid atrophy.


Linus Torvalds is hardly typical.


I've never found code reviews degrade the reviewer's standards. Just the opposite.


I reset context probably every 5-10 minutes if not more frequently, and commit even more often than that. If you’re going 5 hours between commits or context resets, I’m not surprised you’re getting bad results. If you ever see “summarizing” in Copilot, for example, that means you went way too far in that context window. The LLMs get increasingly inaccurate and confused as the context window fills up.

Other things like having it pull webpages in, will totally blow away your context. It’s better to make a separate context just to pull a webpage down and summarize it in markdown and then reset context.


The 'best' trick I learned from someone over here when working with Claude Code is to very regularly go back a few steps in your context (esc esc -> pick something a few steps up) and say something like "yeah, I already did this myself, now continue and do Y"

It helps keep the context clean while still keeping the initial context I provided (usually with documentation and initial plan setup) at the core of the context.

Now that you say this, I did notice webpages blow context but didn't think too much of it just yet, maybe there's some improvement to be found here using a subagent? I'm not a big fan of subagents (didn't really get proper results out of them in my initial experiments anyway) but maybe adding a 'web researcher' sub agent that summarizes to a concise markdown file could help here.


Now that's dangerous to do, because going back in the conversation history in Claude Code now also reverts the code to that point. So while this technique may have worked in the past, it no longer works.


Works fine. Reverting the code too is an option you can choose.


Regarding #3. I feel it's related to this idea: We can build a wood frame house with 2x4's or toothpicks. AI directed and generated code today tends to build things overly complex with more pieces than necessary. I feel like an angry foreman yelling at AI to fix this, change that, etc. I feel I spend more time and energy supervising AI while getting a sloppier end result.


Thankfully, yelling like an angry foreman is more effective on LLMs than people.

> Get your fucking act together, and stop with the bullshit comments, shipping unfinished code, and towering mess of abstractions. I've seen you code properly before. You're an expert for God's sake. One more mistake, and you're fired. Fix it, now!


I wouldn't talk that way to an LLM for fear of its bleeding over into my interactions with people.

Back when computer performance was increasing faster than it is now and was more important to the user experience, a friend upgraded to a faster computer and suddenly became more impatient with me. He seemed to have expected my response time to have drastically decreased just like his computer's did.


Effects like these, those of our tools on ourselves which occur slowly/subtly enough that we hardly notice, underlie a great many of our greatest problems, I think


98% tests sounds really great though, give that LLM a raise!


Yeah exactly, it changes the job from programmer to (technical) project manager, which is both more proactive (writing specifications) and reactive (responding to an agent finishing). The 'sprinting' remark is apt, because if your agents are not working you need to act. And it's already established that a manager shouldn't micromanage, that'll lead to burnout and the like. But that's why software engineers will remain relevant, because managers need someone to rely on that can handle the nitty-gritty details of what they ask for.


I also think that managing a coding agent isn't like managing a person. A person is creative; they will come up with ways that challenge whatever idea you have, and that usually makes the project better. A coding agent never challenges you, mostly just does whatever you want, and you don't end up having any kind of intellectual person-to-person engagement, which is why working on teams can be fun. So it kind of isolates you.

And I think the primary reason all this happens is because marketing people have decided to call all of these coding agents "Artificial Intelligence" instead of "Dev Tools". And instead of calling it "Security" they now call it "AI Alignment". And instead of calling it "data schema" or "Spec sheet" they call it "managing the AI context". So now, we are all biased to see these things as some kind of entity that we can treat something like a colleague, and we all bought this idea because the tool can chat with you. But it isn't a colleague, it doesn't think and feel, it doesn't provide intellectual engagement; it simply is a lossy, noisy tool to try and translate human language into computer language, whether it's Python or machine code.


Have you used SOTA models to code in the last 2 months or so? This reads like someone who has given up a year ago and made their impressions based off GPT-3.

AI can absolutely have creativity. You just have to engage it like that. The article itself talks about that. You don’t just say “hey AI, go write this code.” You write a spec along with the AI. You tell it what parts are open to its interpretation. Tell it if you want it to be creative or to follow common practices. What level of abstraction is appropriate, etc.

If all you do is give it directions then it just follows the directions.

Also context doesn’t have much to do with a data schema. It’s more like a document database with no schema, if anything. It’s a collection of tokens that it refers back to. Schema implies some structured data with semantic meaning and hierarchies or relationships. That might exist as an emergent property, but for example if I just had a folder full of PDFs, I wouldn’t consider that a schema. That’s kinda what context is like.


> They're also not going to be able to direct three different agents at once in different areas of a large project that they've designed the architecture for.

I wonder what the practical limits are.

As a senior dev on a greenfield solo project, it's too exhausting for me to run two parallel agents (front/back); most of the time they're waiting for me to spec, review, or do acceptance testing. It feels like sprinting, not something I could do day in and day out.

Might be due to tasks being too fine grained, but assuming larger ones are proportionally longer to spec and review, I don't see more than two (or, okay, three, maybe I'm just slow) being a realistic scenario.

More than that, I think we're firmly in the vibe coding (or maybe spec-driven vibe coding) territory.


At least on a team, the limit is the team's time to review all the code. We've also found that vibe engineered (or "supervised vibing" as I call it) code tends to have more issues in code review because of a false sense of security creating blind spots when self reviewing. Even more burden on the team.

We're experimenting with code review prompts and sub agents. Seems local reviews are best, so the bulk of the burden is on the vibing engineer, rather than the team.


Do you have a sense for how much overhead this is all adding? Or, to put it another way, what I’m really asking is what productivity gain (or loss) are you seeing versus traditional engineering?


In our experience, it depends on the task and the language. In the case of trivial or boilerplate code, even if someone pushes 3k-4k lines of code in one day, it's manageable because you can just go through it. However, 3k lines of interconnected modules, complex interactions, and intricate logic require a lot of brainpower and time to review properly and in most cases, there are multiple bugs, edge cases that haven't been considered, and other issues scattered throughout the code.


And empirical studies of informal code review show that human reviewers have only a very small impact on error rates, and that impact disappears once they read more than roughly 200 SLOC in an hour.


Interesting, do you have a link to the study? Our experience is different, at least when reviewing LLM generated code, we find quite a few errors, especially beyond 200 LOC. It also depends on what you're reviewing, 200 LOC != 200 LOC. A boilerplate 200 LOC change? A security sensitive 200 LOC change? A purely algorithmic and complex 200 LOC change?



Isn't the current state of thing such that it's really hard to tell? I think the METR study showed that self-reported productivity boosts aren't necessarily reliable.

I have been messing with vibe engineering on a solo project and I have such a hard time telling if there's an improvement. It's this feeling of "what's faster, one lead engineer coding or one lead engineer guiding 3 energetic but naive interns"?


Very curious to hear responses about this too


The problem with this is that software engineering is a very unorganized and fashion/emotion driven domain.

We don't have reliable productivity numbers for basically... anything.

I <feel> that I'm more productive with statically typed languages but I haven't seen large scale, reliable studies. Same with unit tests, integration tests, etc.

And then there are all the types of software engineering: web frontend, web API, mobile frontend, command line frontend, Windows GUI, MacOS GUI, Linux backend (10 million different stacks), Windows backend (1 million different stacks), throwaway projects, WordPress webpages, etc, etc.


Yeah I agree.

A controlled experiment done with a representative sample would be lovely. In the long-run it comes down to the financial impact that occurs incrementally because of LLMs.

In the short-run, from what I see, firms are trying to play-up the operational efficiency gains they have achieved. Which then signals promise to investors in the stock market; investors then translate this promise into expectations about the future, which are then reflected in the present value of equity.

But in reality it seems to be reducing head-count because they over-hired before the hype and furore of LLMs.


> In the short-run, from what I see, firms are trying to play-up the operational efficiency gains they have achieved.

The thing is all of this is getting priced in, and will be table stakes for any business, so I don't see it as a big factor in future success.

As I've said here, LinkedIn, and one a few other places, the businesses that will succeed with AI will be those who can use it to add/create value. They will outcompete and out-succeed businesses that can't move beyond cost cutting with AI[0].

[0] Which might not last forever anyway. Granted there are a decent number of players in the market, thankfully, but this wouldn't be the first time tech companies had hooked large numbers of individuals and businesses on a service and then jacked up the prices once they'd captured enough of the market. It's still very much in the SV and PE playbook. SolarWinds is a recent example of the latter.


I wanted to point you at https://neverworkintheory.org/ which attempted to bridge the gap between academia and software engineering. Turns out the site shut down, because (quoting their retrospective)

> Twelve years after It Will Never Work in Theory launched, the real challenge in software engineering research is not what to do about ChatGPT or whatever else Silicon Valley is gushing about at the moment. Rather, it is how to get researchers to focus on problems that practitioners care about and practitioners to pay attention to what researchers discover. This was true when we started, it was true 10 years ago, and it remains true today.

The entire retrospective [1] is well worth a read, and unfortunately reinforcing your exact point about software development being fashion/emotion driven.

[1] https://www.computer.org/csdl/magazine/so/2024/03/10424425/1...


The other problem is the perennial, how much of what we do actually has value?

Churning out 5x (or whatever - I’m deliberately being a bit hyperbolic) as much code sounds great on the face of it but what does it matter if little to none of it is actually valuable?

You correctly identify that software development is often driven by fashion and emotion but the much much bigger problem is that product and portfolio management is driven by fashion and emotion. How much stuff is built based on the whims of CEOs or other senior stakeholders without any real evidence to back it up?

I suppose the big advantage of being more “productive” is that you can churn through more wrong ideas more quickly and thus perhaps improve your chances of stumbling across something that is valuable.

But, of course, as I’ve just said: if that’s to work it’s absolutely predicated on real (and very substantial) productivity gains.

Perhaps I’m thinking about this wrong though: it’s not about production where standards, and the need to be vigilant, are naturally high, but really the gains should be seen mostly in terms of prototyping and validating multiple/many solutions and ideas.


"I suppose the big advantage of being more “productive” is that you can churn through more wrong ideas more quickly and thus perhaps improve your chances of stumbling across something that is valuable."

But I think there is a very big danger here - you build in the action but completely neglect the deep thinking behind a vision, strategy etc.

So yes you produce more stuff. But that stuff means more money spent - which is generally a sunk cost too.

In a bizarre way, I predict we will see the failure rate of software firms rise. Despite the fact these 'productivity' tools exist.


Yeah, I mean, you might be right. As others have commented, I think it's simply very hard to say what gains we're really going to see from AI-assisted software development at present.

And then of course there's the question of how many businesses have their key value proposition rendered obsolete, and to what extent it's rendered obsolete, by AI: doesn't have to be completely nullified for them to fail (which obviously applies to some software companies, but goes far beyond that sector).


I resonate with the exhaustion point — actually, the context switching fatigue is why we built Sculptor for ourselves (https://imbue.com/sculptor). We usually see devs running 4-6 agents in parallel using Sculptor today. Personally I think much of the fatigue comes from: 1) friction in spawning agents, 2) friction in reviewing agent changes, and 3) context management annoyance when e.g. you start debugging part of the agent's work but then have to reload context to continue the original task

It's still super early, but we've felt a lot less fatigued using Sculptor so far. To make it easier to spawn agents without worrying, we run agents in containers so they can run in YOLO mode and don't interfere with each other. To make it easy to review changes, we made "Pairing Mode", which lets you instantly sync any agent's work from the container into your local IDE to test it, then switch to another.

For context management, we just shipped the ability to fork agents from any point in the convo history, so you can reuse an agent that you loaded with high-quality context and fork off to debug an agent's changes or try all the options it presented. It also lets you keep a few explorations going and check in when you have time.

Anyway, sorry, shilling the product a bit much but I just wanted to say that we've seen people successfully use more than 2 agents without feeling exhausted!


What gives you the fatigue?


Switching between the two parallel agents (frontend & backend, same project), requiring context switches.

I'm speccing out the task in detail for one agent, then reviewing code for the previous task on the other agent and testing the implementation, then speccing the next part for that one (or asking for fixes/tweaks), then back to the first agent.

They're way faster at producing code than I am at reviewing and spelling out in detail what I want, meaning I always have the other one ready.

When doing everything myself, there are periods where I need to think hard and periods where it's pretty straightforward and easy (typing out the stuff I envisioned, boilerplate, etc).

With two agents, I constantly need to be on full alert and totally focused (but switching contexts every few minutes), which is way more tiring for me.

With just one agent, the pauses in the workflow (while I'm waiting for it to finish) are long enough to get distracted but too short to do anything else (mostly).

Still figuring out the sweet spot for me personally.


I've been meaning to try out some text-to-speech to see if that makes it a bit easier. Part of the difficulty of "spelling out in detail what I want" is the need for precise written language, which is high cognitive load, which makes the context switching difficult.

Been wondering if just speaking naturally could speed things up compared to typing. Maybe have an embedded transform/compaction that strips out all the ummms and gets to the point of what you were trying to say. Might have lower cognitive load, which could make it easier.


This works really well already. You can fire up something like Wispr Flow and dump what you're saying directly into Claude Code or similar, it will ignore the ums and usually figure out what you mean.

I use ChatGPT voice mode in their iPhone app for this. I walk the dog for an hour and have a loose conversation with ChatGPT through my AirPods, then at the end I tell it to turn everything we discussed into a spec I can paste into Claude Code.


I really don't get the idea that LLMs somehow create value. They are burning value. We only get useful work out of them because they consume past work. They are wasteful and only useful in a very contrived context. They don't turn electricity and prompts into work, they turn electricity, prompts AND past work into lesser work.

How can anyone intellectually honest not see that? Same as burning fossil fuels is great and all except we're just burning past biomass and skewing the atmosphere contents dangerously in the process.


> How can anyone intellectually honest not see that?

The idea that they can only solve problems that they've seen before in their training data is one of these things that seems obviously true, but doesn't hold up once you consistently use them to solve new problems over time.

If you won't accept my anecdotal stories about this, consider the fact that both Gemini and OpenAI got gold medal level performance in two extremely well regarded academic competitions this year: the International Math Olympiad (IMO) and the International Collegiate Programming Contest (ICPC).

This is notable because both of those contests have brand new challenges created for them that have never been published before. They cannot be in the training data already!


> consider the fact that both Gemini and OpenAI got gold medal level performance

Yet ChatGPT 5 imagines API functions that are not there and cannot figure out basic solutions even when pointed to the original source code of libraries on GitHub.


Which is why you run it in a coding agent loop using something like Codex CLI - then it doesn't matter if it imagines a non-existent function because it will correct itself when it tries to run the code.
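The shape of that loop is simple to sketch (a deliberately stripped-down, hypothetical illustration; real tools like Codex CLI are far more elaborate, and ask_model here is just a stand-in for whatever LLM API you call): generate code, run it, and feed any error straight back into the next prompt.

    # Hypothetical sketch of a coding agent loop: generate, execute, and feed
    # failures back to the model so hallucinated APIs get caught by the runtime.
    import subprocess, tempfile

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("stand-in for an actual LLM call")

    def agent_loop(task: str, max_attempts: int = 5) -> str:
        prompt = task
        for _ in range(max_attempts):
            code = ask_model(prompt)
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            result = subprocess.run(["python", path], capture_output=True, text=True)
            if result.returncode == 0:
                return code  # it ran; an imagined function would have blown up here
            # Give the model the concrete traceback so it can correct itself.
            prompt = f"{task}\n\nYour previous attempt failed with:\n{result.stderr}\nFix it."
        raise RuntimeError("agent did not converge")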

Can you expand on "cannot figure out basic solutions even when pointed to the original source code of libraries on GitHub"? I have it do that all the time and it works really well for me (at least with modern "reasoning" models like GPT-5 and Claude 4.)


As a human, I sometimes write code that does not compile first try. This does not mean that I am stupid, only that I can make mistakes. And the development process has guardrails against me making mistakes, namely, running the compiler.


Agreed

Infallibility is an unrealistic bar to mark LLMs against


Yes. I don't see why these have to be mutually exclusive.


I feel they are mutually inclusive! I don’t think you can meaningfully create new things if you must always be 100% factually correct, because you might not know what correct is for the new thing.


> If you won't accept my anecdotal stories about this, consider the fact that both Gemini and OpenAI got gold medal level performance in two extremely well regarded academic competitions this year: the International Math Olympiad (IMO) and the International Collegiate Programming Contest (ICPC).

it's not a fair comparison

the competitions for humans are a display of ingenuity and intelligence because of the limited resources available to them

meanwhile for the "AI", all it does is demonstrate is that if you have a dozen billion dollar data-centres and a couple of hundred gigawatt hours, which can dedicate to brute-forcing a solution, then you can maybe match the level of one 18 year old, when you have a problem with a specific well known solution

(to be fair, a smart 18 year old)

and short of moores law lasting another 30 years, you won't be getting this from the dogshit LLMs on shatgpt.com


Google already released the Gemini 2.5 Deep Think model they used in ICPC as part of their $250/month "Ultra" plan.

The trend with all of these models is for the price for the same capabilities to drop rapidly - GPT-3 three years ago was over 1,000x the price of much better models today.

I'm not yet ready to bet against that trend holding for a while longer.


> GPT-3 three years ago was over 1,000x the price of much better models today.

right, so only another 27 years of moores law continuing left

> I'm not yet ready to bet against that trend holding for a while longer.

I wouldn't expect an industry evangelist to say otherwise


I'm a pretty bad "industry evangelist" considering I won't shut up about how prompt injection hasn't had any meaningful improvements in the last three years and I doubt that a robust solution is coming any time soon.

I expect this industry might prefer an "evangelist" who hasn't written 126 posts about that: https://simonwillison.net/tags/prompt-injection/

(And another 221 posts about ethical concerns with how this stuff works: https://simonwillison.net/tags/ai-ethics/)


you would be a lot more credible if you were honest about being an evangelist


Credibility is genuinely one of the things I care most about. What can I do to be more honest here?

(Also what do you mean here by an "evangelist"? Do you mean someone who is an unpaid fan of some of the products, or are you implying a financial relationship?)


I know this is something you care about, and I'm not your parent, but something I've often observed in conversations about technology on here, but especially around AI, is that if you say good things about something, you are an "evangelist." It's really that straightforward, and doesn't change even if you also say negative things sometimes.


In that case yeah, I'm an LLM "evangelist" (not so much other forms of generative AI - I play with image/video generation occasionally but I don't spend time telling people that they're genuinely worthwhile tools to learn). I'm also a Python evangelist, a SQLite evangelist, a vanilla JavaScript evangelist, etc etc etc.


yes, enough "concern" to provide plausible deniability


"they output strings that didn't exit before" is some hardcore, uncut cope


It's not about being honest. It's about Joe Bullshit from the Bullshit Department having it easier in his/her/their Bullshit Job. Because you see, Joe decided two decades ago to be an "office worker", to avoid the horrors of working honestly with your hands or mind in a real job, like electrician, plumber or surgeon. So his day consists of preparing powerpoints, putting together various Excel sheets, attending whatever bullshit meetings etc. Chances are you've met a lot of Joe Bullshits in your career, you may have even reported to some of them.

Now imagine the exhilaration Joe feels when he touches these magic tools. Joe does not really care about his job or about his company. But suddenly Joe can reduce his pain and suffering in a boring-to-death job while keeping those sweet paychecks. Of course Joe doesn't believe his bosses only need him until the magic machine is properly trained, so that he can be replaced and reduced to an Eloi, living off the UBI. Joe Bullshit is selfish. In the 1930s he blindly followed a maniacal dictator because the dictator gave him a sense of security (if you were in the majority population) and a job.

There are unfortunately a lot of Joe Bullshits in this world. Not all of them work with Excel. Some of them became self-made "developers" in the last 10 years. I don't mean the honest folks who were interested in technology but never had the means to go to a university. I mean all those ghouls who switched careers after they learnt there was money to be made in IT, and money was their main motivation. They don't really care about the meaning of it all, the beautiful abstractions your mind wanders through as you create entire universes in code. So they are happy to offload it too, well, because it's just another bullshit job, for the Joe Bullshit. And since Joe Bullshit is in the majority, you my friend, with your noble thoughts, are unfortunately preaching to the wind.


Jeez. Brutal but true.


I don't think OP thinks his skills are useless per se now, but that the way to apply those skills now feels less fun and enjoyable.

Which makes perfect sense - even putting aside the dopamine benefits of getting into a coding flow state.

Coding is craftsmanship - in some cases artistry.

You're describing Vibe Engineering as management. And sure, a great manager can make more of an impact increasing the productivity of an entire team than a great coder can make by themselves. And sure, some of the best managers are begrudging engineers who stepped up when needed to and never stepped down.

But most coders still don't want to be managers - and it's not from a lack of skill or interest in people - it's just not what they chose.

LLM-based vibe coding and engineering is turning the creative craftsmanship work of coding into technical middle management. Even if the result is more "productivity", it's a bit sad.


But does anybody really care about what you like? What about all those other professions that got replaced by technology: did anybody care what they liked? The big question is how software is going to be built most efficiently and most effectively in the future, and how you prepare yourself for this new world. Otherwise you’ll end up like all those other professions that got replaced, like the mineworkers, hoping that the good old days will someday return.


It's reasonable to stay away from something one considers dystopian, considering the industry is not even sure about the usefulness of coding agents in professional environments. When tractors replaced horses, everyone could agree they outperformed horses. The result was easily measurable. It's not that simple with LLM agents owned by big corporations.


Sure, it's not yet clear what impact LLMs will have on software development, but whatever impact they have will not depend on whether developers like using them or not. If they make software development 10x faster, companies will adopt them, whether devs like it or not.


Yup, absolutely, and it's a shame because it takes the joy out of it for a lot of people. I'll be taking a lot more paid leave if I don't like my job, is all I'm going to say to this.


Sadly true. Most companies don’t even care if the software is sloppy, slow, and ridden with errors that cause data loss or privacy breaches. They care about exploiting workers and extracting value.

Is it ethical? Probably not. It took a few bridges falling and buildings caving in before traditional engineering became a profession.

In this post-Reagan world I’m not sure software has the right context to make that happen. I’m pretty sure we’ll stay the course where the big tech companies like it: very little regulation, loose liability, and terrible software for everyone.


Everything is getting industrialized. We buy most products made in China (TVs, laptops, mobile phones, etc.), our furniture is mostly cheap IKEA furniture. Many craftsmen lost their profession to industrialized automation. If we don't care that our furniture is subpar and our products are cheap plastic China products, why do we expect anybody to care about software craftsmanship?


Because the cost of faults is much higher than getting a new bookshelf from IKEA.

When talking about craftsmanship I’m not talking about artisanal, hand crafted source code that is aesthetically pleasing. Nobody but programmers care.

I’m talking about CVEs that allow RCE on your phone so that authoritarian governments can exfiltrate your contact lists and arrest all of the people they suspect of participating in protests that you were involved in.

When companies don’t care about quality and they’re not forced to we end up with slow, surveillance bloatware that is full of security holes and useless features designed to keep us engaged and paying.


I don't see why industrialized software development with AI agents could not be better on quality. Safety-requirement validation for medical equipment or airplanes is also done in an industrialized manner. We don't really care whether the engineers working on these products like what they are doing or feel like craftsmen.


These have standards, software is the wild west?


This is the heart of it. Most "craft" industries that have not yet been disrupted by technology or been made "more efficient" tend to coincidentally be the ones that are in demand and pay well -> and that society generally wants "good X" of. e.g. Plumbers, Electricians, previously software engineers. Efficiency usually benefits the consumer or the employer, not the craftsmen in most industries. There's a reason people are saying right now to "get a trade" where I am.

If you look at what still pays well and/or is stable (e.g. where I live, trades are highly paid and stable work), it's usually the crafts industry. We still build houses, for example, mostly like we did way back when (i.e. much of the skill is still craft, not industrialized industry), and it shows in the price of them.


I'm getting really great results in a VERY old (very large) codebase by having discussions with the LLM (I'm using Claude Code) and making detailed roadmaps for new features or converting old features to new, more useable/modern code. This means FE and BE changes, usually at the same time.

I think a lot of the points you make are exactly what I'm trying to do.

- start with a detailed roadmap (created by the ai from a prompt and written to a file)

- discuss/adjust the roadmap and give more details where needed

- analyze existing features for coding style/patterns, reusable code, existing endpoints etc. (write this to a file as well)

- adjust that as needed for the new feature/converted feature - did it miss something? Is there some specific way this needs to be done it couldn't have known?

- step through the roadmap and give feedback at each step (I may need to step in and make changes - I may realize we missed a step, or that there's some funky thing we need to do specifically for this codebase that I forgot about - let the LLM know what the changes are and make sure it understands why those changes were made so it won't repeat bad patterns. i.e. write the change to the .md files to document the update)

- write tests to make sure everything was covered... etc etc

Basically all the things you would normally WANT to do but often aren't given enough time to do. Or the things you would need to do to get a new dev up to speed on a project and then give feedback on their code.

I know I've been accomplishing a lot more than I could do on my own. It really is like managing another dev or maybe like pair programming? Walk through the problem, decide on a solution, iterate over that solution until you're happy with the decided path - but all of that can take ~20 minutes as opposed to hours of meetings. And the end result takes a fraction of the time it would if I were doing it on my own.

I recently did a task that was allotted 40 hours in less than 2 working days - so probably close to 10-12 hours after adjusting for meetings and other workday blah blah blah. And the 40 hour allotment wasn't padded. It was a big task, but doing the roadmap > detailed structure including directory structure - what should be in each file etc etc cut the time down dramatically.

I would NOT be able to do this if I, the human, didn't understand the code extremely well and didn't make a detailed plan. We'd just end up with more bad code or bad & non-working code.


Thank you for this post. I don't write much code as I'm currently mostly managing people but I read it constantly. I also do product management. LLMs are very effective at locating and explaining things in complex code bases. I use Copilot to help me research the current implementation and check assumptions. I'm working to extend out in exactly the directions you describe.


"LLMs are very effective at locating and explaining things in complex code bases." YES. I do nothing BUT write code and tracking everything down in the code base is greatly simplified by using an LLM.

This is just a new tool. I think the farming example mentioned in another post is actually a great example. I love coding. I code in my free time. It's just fun. I've been doing it for ~20 years and I don't plan on stopping anytime soon!

But at work I'm really focused on results more than the fun I can have writing code. If a tractor makes the work easier/faster why would I not use a tractor? Breaking my back plowing isn't really my end goal at work. Having a plowed field is my end goal. If I can ride around in a tractor while doing it great! If I can monitor a fleet of tractors that are plowing multiple fields at once even better!

When I go home I can plant anything I want in any way I want and take all the time I want. Of course that's probably why in my free time I end up working on games I never finish...


This is what I've seen as well - in the past a large refactor for a codebase like that seemed nearly impossible. Now doing something like "add type hints" in python or "convert from js to ts" is possible in a few days instead of months to never.
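
As a tiny illustration of the kind of mechanical change 'add type hints' involves (hypothetical Python, not from any real codebase):

    # Before: untyped function typical of an older Python codebase
    def total_price(items, tax_rate):
        return sum(i["price"] for i in items) * (1 + tax_rate)

    # After: the same function with type hints added - the sort of
    # repetitive, codebase-wide edit an agent can grind through quickly
    def total_price(items: list[dict[str, float]], tax_rate: float) -> float:
        return sum(i["price"] for i in items) * (1 + tax_rate)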

Another HUGE one is terraforming our entire stack. It's gone from nearly impossible to achievable with AI.


I remember reading a sci-fi book, where time was.. sharded? And people from different times were thrust together. I think it was a Phoenician army, which had learned to ride and battle bareback.

And were introduced to the stability of stirrups and saddle.

They were like demons on those stirrup-equipped horses. They had all the agility of wielding weapons and engaging in battle while hanging onto the mane and gripping the body with their legs, yet now had (to them) a crazy easy and stable platform.

When the battle came, the Phoenicians just tore through those armies who had grown up with the stirrup. There was no comparison in skill or capability.

(Note: I'm positive some of the above may be wrong, but can't find the story and so am just stating it as best able)

My point is, are we in that age? Are we the last skilled, deeply knowledgeable coders?

I grew up learning to write EEPROMs on burners via the C64. Writing machine language because my machines were too slow otherwise. Needing to find information in massive paper manuals. I often had to work it all out myself, because there was no internet and no code examples, just me thinking of how things could be done. Another person who grew up with some of the same tools and computers once said we are the last generation to understand the true, full stack.

Now I wonder, is it the same with coding?

Are we it?

The end?


> they're not going to be able to rig up a set of automated tests with continuous integration and continuous deployment to a Kubernetes cluster somewhere.

Honestly, I have a ton of experience in system administration, and I'm super comfortable at a command line and using AWS tooling.

But, my new approach is to delegate almost all of that to Claude, which can access AWS via the command-line interface and generate configuration files for me and validate that they work correctly. It has dramatically reduced the amount of time that I spend fiddling with and understanding the syntax of infra config files.
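
For example, the kind of lightweight check I keep on my side of that loop for agent-generated CloudFormation templates - a rough sketch assuming boto3 is installed, with a made-up file path, not a description of Claude's own tooling:

    import boto3

    def validate_generated_template(path: str) -> None:
        """Check an agent-generated CloudFormation template before applying it."""
        with open(path) as f:
            body = f.read()
        client = boto3.client("cloudformation")
        # validate_template raises a botocore ClientError if the template is
        # malformed, so broken agent output fails fast instead of at deploy time
        response = client.validate_template(TemplateBody=body)
        params = [p["ParameterKey"] for p in response.get("Parameters", [])]
        print(f"{path} looks valid; parameters: {params}")

    validate_generated_template("generated/stack.yaml")  # hypothetical path

Claude can generate the config; this just makes sure I notice quickly if what it produced doesn't even parse.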


So it's automating away the fun parts, and leaving the humans to rig up automated tests and setup continuous integration...

And unfortunately people who get to architect anything are a small subset of developers.


I appreciate what you're trying to do, but for myself, I'm not depressed because my skills are less valuable. I enjoyed the money but it was never about that for me. I'm depressed because I don't like the way this new coding feels in my brain. My focus and attention are my most precious resources and vibe coding just shatters them. I want to be absorbed in a coding flow where I see all the levels of the system and can elegantly bend the system to my will. Instead I'm stuck reviewing someone/something else's code which is always a grind, never a flow. And I can feel something terrible happening in my brain, which at best can be described as demotivation, and at worst just utter disinterest.

It's like if I were a gardener and I enjoyed touching dirt and singing to plants, and you're here on Gardener News extolling the virtues of these newfangled tractors and saying they'll accelerate my impact as a gardener. But they're so loud and unpleasant and frankly grotesque and even if I refrain from using one myself, all my neighbors are using them and are producing all their own vegetables, so they don't even care to trade produce anymore--with me or anyone else. So I look out at my garden with sadness, when it gave me such joy for so many decades, and try to figure out where I should move so I can at least avoid the fumes from all the tractors.


Well said! Reading this I feel reminded of the early protests against industrialization and automation in other fields. Checks all the same boxes - insecurity and fear about the future, alienation towards the new tools, ...

Not saying AI is similar in impact to the loom or something, it just occurred to me how close this is to early Luddite texts.


Many Luddites were fine with using the new loom machines. They smashed them because the machines were precious to the capital holders, and because it was a time when there were no labour laws. The Luddites were protesting child labour, forced labour, and having no social safety net other than workhouses when they were discarded by their employers.


This has been the dream of the capital classes since time immemorial.

And unfortunately (for humanity) this has been the status quo for the whole of civilization: a small ruling elite class (you might designate them as masters, lords or employers) with all the wealth, minimal or no "middle class", and lots of poor people (you might designate them as peasants or slaves or workers).

The only exception to this has been the period since World War 2, when in most of the "western" countries the middle class demanded and took their share of the wealth. That's when modern welfare states were born, universal health care became a thing, workplace safety improved, education became accessible, etc.

All these were NOT given by the elite but TAKEN by the working class via social reforms, workers unions and social democracy.

The capital owning class wants to take all these away and they're succeeding.

So yes, in fact the Luddites were not against technology; they were against the unilateral and uneven distribution of the wealth produced by the technology.


There was another time of great social upheaval and progress.

It was after the Black Death.

Similar circumstances, if you think about it.


Industrialization actually helped the middle class insofar as you needed "skilled workers" to run the industrial equipment, make decisions, and manage processes. The fact that production had scaled in a world of scarcity meant that your worker had greater leverage. Workers often knew each other as well, went to the same schools, etc., which meant less information asymmetry about their worth/value. It moved value to something that takes time, skill and sometimes luck to acquire: skills and experience. This was hard to replicate (hence our school, college and university systems), required significant training, and created "pets not cattle" with hard-to-get skills, meaning the new skilled middle class could rise and exercise its newfound negotiating power.

Somewhat unprecedented in human history. All because intelligence was scarce. AI changes that.

AI is the real dream of the capital classes. It makes intelligence cheap, potentially undoing the very thing that gave birth to the last century's middle class. In the long term, given current trends, I wouldn't be surprised if these AI technologies revert us to the pattern of most of human history, where the world is very unequal, meritocracy dies, and most of us are just trying to exist/survive while the capital holders have abundance. It also explains the large valuations in AI/tech lately and the weird deals going on; this isn't a game of economics anymore, it's an arms race for power in the new world structure. I suspect that to these people no amount of money is enough; if you win, you win for the next era of humanity.


If you think about it, Luddites were the original victims of capitalist propaganda.


Exactly. Most people who haven’t studied the history think the Luddites smashed the machines because they were against progress and industrialization. Hence the modern interpretation of Luddite meaning, “against technology.”

Many Luddites were put up against the wall and shot, or jailed. History often lies because it is written by the winners.


The OG alienator was the Agricultural Revolution: settling down and toiling repetitively in predetermined ways, unlike the more adventurous lifestyle from before, with all the camping, hunting and gathering, where circumstances always brought novel challenges. You could be a man spearing a deer instead of just killing some docile domesticated cow, searching for pheasant eggs and being happy if you found some instead of going out every morning to the predictable presence of eggs in the chicken coop.


Although, gradually, all over the world people chose that lifestyle rather than take their chances with the seasons and the hunt.


Chose is a little strong. They were forced into it because agricultural societies could field armies orders of magnitude larger than hunter/gatherers.

It's telling that the nobles of agricultural societies generally still hunted, and often reserved that privilege.


Exactly, and similarly people may adopt AI too, whether they like the aesthetics or not.


Mostly because if you settled down and tilled a field of barley you had a reliable source of beer. Finding beer in the wild was, and still is, an almost certain failure.

The roots of global civilization are brown and frothy.


So what you're telling me is that I am genetically predisposed to brew and drink beer?

Explains a lot.

Explains it to my satisfaction!


Comparing the impact of LLMs on programming to the agricultural revolution is a pretty solid analogy!


I'd say the fear is justified. The economy should serve the people and the citizens, not the other way around. Yet our economies increasingly work the other way around: people have to fit into the shape of the economy.

It's not hard to see a future where the workers displaced by AI get pushed to the sidelines and fringes of the society while the capital holders hoard more wealth and get the benefits of the "value" created.

We already have half the population of this planet living in slums without access to economic means, and in the "developed" countries a larger and larger group of people is barely hanging on, either already displaced and unemployed or working jobs that pay below a living wage.

Frankly, it'd be stupid not to be concerned.


This is true, but with software development it started way earlier than AI. A lot of software developers' jobs amount to being ticket monkeys, adding small things or fixing bugs for a huge company that nobody cares about. The alienation is real.

This is, of course, an attribute of capitalism.

Like carpenters, gardeners and farmers, there are very few software developers who truly have the luxury to treat their work as a craft and not a factory output.


How beautifully put, and I couldn't agree more. I feel exactly the same way.

However, I am still unconvinced that software development will go down this way. But if working as a software developer will require managing multiple agents at the same time instead of crafting your own code, then you can count me out, too.


If it is not about the money, why do you have to use these tools? If you enjoy small farming why concern yourself with mass production, or expensive equipment? Remain in the lane you enjoy?


I enjoy programming and I enjoy being paid for programming. I'm being pressed to use AI for my paid work. And I don't enjoy AI-powered programming.

As of today, I've disabled Copilot. The only autocomplete I can accept is the absolutely mechanical kind, not any kind of smart one. I want to write the rest of the code myself. I like it.

I was weird in the StackOverflow era, because I never blindly applied snippets the way other programmers did. I went over them token by token, reading the underlying library sources and docs, always creating my own solution. It made me less productive, but I feel my code was more robust and maintainable, so it was a good trade-off for me.

Maybe it'll work out the same way with AI; time will tell.


I think it will; AI is not going away, but once the hype has settled, the first companies have gone bankrupt or been acquired, and employers are paying for the tools, they will become part of someone's daily toolkit, not unlike the existing autocomplete tools.


Fwiw I'm an AI proponent who loves that flow state you are describing. Programming was literally the first time I found it as a youth, and I've been addicted to it ever since.

But it’s such a small part of my professional life. Most of what I do is chores and answering simple questions and planning for small iterations on the original thing or setting up a slightly different variant.

LLMs have freed me from so much of that! Now I outsource most of that work to them and greedily keep the deep, flow-inducing work for myself.

And I have a new tool to explain to management why we are investing in all the tooling and processes that we know lead to quality, because LLMs are catnip for the managerial mind.


As time goes by I tend to agree more and more with your POV.


Very well said. I feel the exact same. :(


> They're also not going to be able to direct three different agents at once in different areas of a large project that they've designed the architecture for.

Neither can I, sadly. I have one brain cell and I can only really do one thing at a time. Doing more than one leads to a corrupted stack and I make exponentially more mistakes.


Have you tried SolveIt (method, tool) from Jeremy Howard yet?

I was in the first batch last year where they introduced it and going to do the second one too.

It's a very different kind of beast to what is currently being discussed.


> going to do the second one too.

I missed the first one, when will the second one be?


"Signups are open [1], and will remain so until October 20th."

Recently on HN [2].

[1]: https://solve.it.com/

[2]: https://news.ycombinator.com/item?id=45455719


>accelerate the impact you can have with this new family of tools.

Tech spent the last 10 years drilling into engineers' heads that scaling your impact is not about writing more or better code, but about influencing the work of other engineers through collaboration, process, documentation, etc. Even the non-managerial "senior IC" tracks are mostly about doing this with greater and greater numbers of people. I wonder if we will start to see recognition in career tracks for people who are actually just extraordinarily productive by themselves or in small groups, or if you'll pretty much just have to be a startup founder to get paid for that.


Software developers can 10x-100x productivity/effectiveness with LLMs.

Non-developers can go from 0x to 1x. And I'm happy for people finally being able to learn one way of building software.

And then learn why vibe coding often creates code that quickly becomes disposable.


This has been my experience as well. If there is a hard problem which needs to be addressed, generated code helps me break the inertia by giving me a first draft, and then I get really curious about poking holes in the generated code. I tend to procrastinate when I come across a gnarly issue or something I am not really familiar with, justifying it by saying I need a big block of time to work on it. I use generative code as a pushy "mom/boss/coworker/spouse" to get stuff done.


I really hope you are right here, and to be honest it does reflect my limited experience with where I've used AI so far.

But I'm also not ready to bet the farm on it. I'm seriously considering taking our savings and the equity out of our house in a London-adjacent area and moving to a lower cost of living area, so that we're practically debt free. At that point we can survive on a full-time minimum wage job; anything more than that is a bonus.


I still haven't seen any evidence to match these repeated claims of increased efficiency. What I have seen are reports, which make a lot of sense to me, claiming it's all in the user's head.


Maybe it's in my head, but I have completed coding projects in about ten working days - while juggling calls and living normal corporate life - that I believe would have taken a team of five offshore developers maybe 12 weeks in the past.

The win is that I don't have to share the vision of what needs to be done and how it should all work, and then constantly monitor and reframe that in the face of the team's missteps and real findings. I work with the agents directly, and provided I set the architecture and build up systematically, I can get really good results. The cycle time between me identifying an issue and the issue getting fixed by me and the agents is now minutes, rather than hours or days with an offshore team. Even better, the agents can provide bug-fixing expertise much quicker than Stack Overflow - so I can figure out what's wrong much faster and specify what needs fixing.

It is no good walking in and just requesting functionality: you need to know how the thing you want should work, what good looks like, what bad looks like, and how good is separated from bad. Then the normal process of discovery ("eep, that doesn't actually work like I thought") can take place and you can refactor and repair as required.

Sometimes I start something that just doesn't work; you have to recognise that you and the agents are lost and everything needs to be torn down. You then need to think properly about what's gone wrong and why, and then come back with a better idea. Again - just like with dev teams, but much more clearly and much faster.


I'm working in corporate and haven't seen it yet. The main thing I see is blogs and whatnot of people building new weekend projects with LLMs - that is, greenfield, non-critical software; the type of software that, if I were to write it, I wouldn't bother with CI, tests, that kind of thing. Sloppy projects, if you will.

But happy to be corrected - is someone using these agents in their paid / professional / enterprise / team job?


I think most of the code in our enterprise is now written by AI. It's all boring call-center CRUD apps, so nobody is really sad they're not writing any of that code any more. I'm not sure it makes me faster, but I think QA-testing what the AI made and occasionally adjusting it is more fun anyway.

The code is absolutely lower quality, but there were always so many people producing garbage faster than I could produce something nice that the code was always terrible anyway.

There’s an element of wanting to know how the thing works so at least I’ll know when it’s ready to replace me though.


>But happy to be corrected - is someone using these agents in their paid / professional / enterprise / team job?

Yes, and I find them quite useful

I don't see myself going back to the "Google + StackOverflow" approach I had used for 10 years prior (well, I can always fall back to it if necessary, but so far I haven't needed to)

My experience matches OP: my years of experience in manual coding complements the agent approach remarkably well


I’ve asked this many times on here - I never get a coherent answer


I am, but with a very narrow focus: mostly examining our existing codebase as a more powerful but fuzzier search, and as a way to generate a plan and implementation approach which I then tweak.

I sometimes use it to scaffold out some boilerplate for tests, but never the tests themselves: no matter what I try, it always ends up writing the useless, straitjacket "change alert" style tests that break on any change to the unit under test, which I despise.
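
Roughly the distinction I mean, as a hypothetical Python example (names invented, runnable under pytest):

    from dataclasses import dataclass

    @dataclass
    class Order:
        amount: float

    def build_report(orders):
        total = sum(o.amount for o in orders)
        return f"Report: {len(orders)} orders, total {total:.2f}"

    ORDERS = [Order(10.0), Order(5.5)]

    # Brittle "change alert" test: any wording tweak to build_report breaks it,
    # even when the numbers are still correct
    def test_report_matches_exact_string():
        assert build_report(ORDERS) == "Report: 2 orders, total 15.50"

    # Behavioural test: only fails if the total is actually wrong
    def test_report_contains_correct_total():
        assert "15.50" in build_report(ORDERS)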


There was an article on here not too long ago - I can't find it now - where the authors discussed how they went all in on it and were submitting 20k+ line PRs to open source projects in languages they were not very familiar with.

However, they mentioned you had to let go of reviewing every line of every PR. I read that and was fine with holding off on full vibe coding for now. Nobody intelligent would pay for that and no competent developer would operate like that.

I have a couple of coworkers big on it. The lesser-skilled ones are miserable to work with. I've kept my same code review process, but the number of comments left has at least 5x'd (not just from me, either). And I'm not catching everything - I get fatigued and call it done. Duplicated logic, missed edge cases, arbitrary patterns and conventions, etc. The higher-skilled ones less so, but I still don't look forward to reviewing their PRs anymore. Too much work on my end.

There are many devs who are more focused on results than on being correct. These are the ones I've seen most drawn to LLMs/agents. There's a place for these devs, but having worked on an aging startup's codebase, I hope there aren't too many.


Of course the devil is in the details. What you say and the skills needed make sense. Unfortunately those are also the easiest aspects to dismiss, either under pressure, because there is often little immediate payoff, or because they're simply the hard part.

My experience with LLMs in general is that, sadly, they're mostly good bullshitters. (Current Google search is the epitome of worthlessness: the AI summary tries so hard to make things balanced that it just dreams up and exaggerates pros and cons for most queries.) In the same way, platforms like Perplexity are worthless; they seem utterly unable to assign the proper value to the sources they gather.

Of course that doesn't stop me from using LLMs where they're useful; it's nicer to be able to give the architecture for a solution and let the LLM fill in the gaps than to code the entire thing by hand. And code completion in general is a beautiful thing (sadly not where much of the focus is these days - most of it is on getting the LLM to create complete solutions, while I would be delighted by even better code completion).

Still, all in all, the more I see LLMs used (or the more I see what I assume are well-meaning people copy/paste LLM-generated responses in place of handwritten ones) across so much of the internet - resulting in a huge decline in factualness and reproducibility (in the sense that original sources get obscured) but an increase in nice full sentences and proper grammar - the more I'm inclined to believe that in the foreseeable future LLMs aren't a net positive.

(In a way it's also a perfect storm: over the last decade, education deprioritised teaching the skills that would matter especially for dealing with AI, and started educating for the use of specific tools instead of teaching general principles. The product of education became labourers for a specific job instead of people capable of higher-level abstract reasoning in a general area of expertise.)


Google's "AI overviews" are one of the worst LLM-powered features on the market today, they're genuinely damaging the reputation of the whole industry.

Meanwhile I've started using ChatGPT GPT-5 search as my default search engine! A year ago I would have laughed at the idea: https://simonwillison.net/2025/Sep/6/research-goblin/

And Google themselves have an "AI mode" which is a different league of quality from "AI overviews", I wrote about that one here: https://simonwillison.net/2025/Sep/7/ai-mode/

This is new. AI search tools almost universally sucked until OpenAI's release of o3 in April this year.


It might actually be in Google's best interest to damage interest in LLMs by showing that crappy AI Mode stuff, because LLMs materially impact their business model.

The perception of LLMs in the gen pop is what matters, not in the eyes of techies.


What is the other part of your goal?


Sparking more conversations about practices that work for doing serious production-quality software development with LLMs, especially in larger teams and on larger projects.

Having a good counter to people who use "vibe coding" as a dismissive term for anything where an LLM is used to help produce software.


That sounds so familiar. Have you ever considered that LLMs are the new PHP?

- A reasonable technology that can be used to deliver great value.

- People hate it.

- Terrible first impression. The wrong way of using PHP is much more popular than the good stuff.

- People are very dismissive, they won't even listen to your argument.


What about the accessibility of software development? It's completely vanishing for people who cannot afford to pay for these agents. It used to be a field where you could grab a laptop from the scrapyard and learn from there. It feels pointless. Also, agents do not invent things; the creativity part is gone with them. They simply use what they've already seen and repeat the same mistakes a person made a few years ago. It's a dystopian way of working. Sure, it enables one to spew out slop that might make companies money, but there is no passion, sense of exploration, or personal growth. It's all just directing parrots with thumbs...


I feel your sentiment. However, anyone with an interest in computers now has access to an LLM, which to me feels like an upgrade over having access to a modem and a search engine. Knowledge is power, right?


Absolutely - if the knowledge is verifiable. Kurzgesagt recently uploaded a video about this: it's a lot harder to verify statements because of AI, not easier. Here is the video in case you are interested: https://youtu.be/_zfN9wnPvU0?si=_17KU8l2wDjGUYA5


> What about the accessibility of software development? Its completely vanishing for people that can not afford to pay for these agents.

what do you actually mean by this? it's clearly untrue - anyone can get a laptop and install linux on it and start bashing out code today, just as they could last week and last year and thirty years ago.

do you mean that you think at some point in the future tooling for humans to write code won't exist? or that employers won't hire human programmers? or that your pride is hurt? or you want your hobby to also be a well-paid job? or something else?


I mean that this "tooling" becomes inaccessible to people - at least the tooling that is relevant for jobs. Employers will eventually stop hiring humans based on their programming competence. It'll translate into a low-pay career for people who like to orchestrate agents.


I doubt it will, because there will always be a need for understanding the code, especially when it comes to things like security, certification, etc.

I mean COBOL has not been a relevant programming language for anyone coming into the field in the past 20-40 years because it's been superseded, yet there's still a significant demand for COBOL developers, because the newer generation can't or doesn't want to work with it.

Not to completely dismiss your claim, of course; I'm sure a segment of software engineering will be agent based now or in the near future. But I don't think it'll take over as comprehensively, since the previous existential crisis - outsourcing - also hasn't decimated the software engineering market.


I think it's just another floor in the creaky old tower of abstraction.

Machine code > ASM > 3GLs > 4GLs > visual programming > LLMs

etc etc etc. Thing is, the moment you go off-piste, the LLMs get a lot less useful. I think, if you want to stay closer to the metal, you've got to aim for a niche that has a small internet footprint. So... domain knowledge or esoteric programming knowledge.

One way to incorporate domain knowledge might be to become a hybrid product owner/programmer.

(This is all just opinion - I'm sure a well-argued rebuttal is possible).


I need to read through this some more, but there has been another agentic coding paradigm referred to as spec-driven development.

I'll find the link in the morning, but I kinda joke that it's vibe coding for people who know how to define a problem and iterate on it.

I’ve got a project reimplementing a service I want to make more uniform. Claude has produced a lot of stuff that would have taken me weeks to do.


GitHub's SpecKit is an example: https://github.com/github/spec-kit

Spec-Driven Development treats the spec as the source of truth and the code as an artifact. As you develop, you modify/add to the spec and the codebase gets updated to reflect it.

Personally I'm doubtful it can compete with traditional artisanal software engineering, as it (IMHO) boils down to "if only you can spec it precisely enough, it'll work", and we've tried this with 5GLs and (to some extent) BDD, and it doesn't get you to 100%.

I do think it's interesting enough to explore, and most of us could use a bit more details in our Jira tickets.
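
To make the idea concrete, a spec fragment in this style might read something like the following (purely illustrative plain text, not SpecKit's actual format):

    Feature: password reset via email

    Requirements:
    - A user can request a reset link from the login page.
    - Links expire after 30 minutes and are single-use.
    - Requests for unknown email addresses return the same response
      as known ones (no account enumeration).

    Acceptance:
    - POST /auth/reset-request always returns 202.
    - POST /auth/reset with an expired or reused token returns 410.

    (Code, tests and docs are then generated/updated from this file as it
    changes, rather than the other way around.)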


That was exactly what UML wanted to do, and it almost never worked out in practice.

It seems to be just a rehashing of the same idea, but instead of XML and diagrams it's now free text to be interpreted by LLMs - so it's much less deterministic and will probably fail just like UML did.

People also tend to forget about Peter Naur's take in "Programming as Theory Building" [0]: the program is, in itself, the theory of what's implemented. A spec cannot replace that.

[0] https://pages.cs.wisc.edu/~remzi/Naur.pdf


Theory building is the secret sauce, and all variants of "this is how to use AI effectively" I've seen are inferior to the epistemologically sound theory Naur outlines in his paper.


We already invented languages for succinctly describing what the computer should do. They're called programming languages.

“The code is the documentation” is not a joke. Logic that’s useful in the real world is complex and messy. You need additional documentation (why did the code end up like it is, etc) but code is the most expressive way we’ve got for describing how a computer should work.


Hey! No J word tolerated in here!



