
I have a theory that vibe coding existed before AI.

I’ve worked with plenty of developers who are happy to slam null checks everywhere to solve NullReferenceExceptions (NREs) with no thought as to why the object is null, whether it should even be null there, etc. There’s just a vibe that the null check works and solves the problem at hand.
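Concretely, the difference looks something like this (a hypothetical C# sketch, names made up):

  using System;

  public class Customer
  {
      public decimal DiscountRate { get; init; }
  }

  public class Order
  {
      public Customer Customer { get; }
      public decimal Total { get; }

      // The real fix: an Order can never exist without a Customer,
      // so the downstream null check becomes unnecessary.
      public Order(Customer customer, decimal total)
      {
          Customer = customer ?? throw new ArgumentNullException(nameof(customer));
          Total = total;
      }
  }

  public static class Pricing
  {
      // The "vibe" fix: the NRE goes away, but nobody asked why Customer was null.
      public static decimal GetDiscount(Order order)
      {
          if (order?.Customer == null)
              return 0m; // silently invents a business rule instead of surfacing the bug
          return order.Customer.DiscountRate * order.Total;
      }
  }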

I actually think a few folks like this can be valuable around the edges of software but whole systems built like this are a nightmare to work on. IMO AI vibe coding is an accelerant on this style of not knowing why something works but seeing what you want on the screen.





I've written before about my experience at a shop like this. The null check would swallow the exception and do nothing about the failure, so things just errored silently. There were many high fives and haughty remarks about how smart the team was for doing this, at the expense of lesser teams that didn't. The whole operation ran on a hackneyed MVP architecture from a Learning Tree class a guy took in 2008 and snippets stolen from StackOverflow and passed around on a USB key. Deviation from this bible was heresy, rebuked with sharp, unprofessional behavior. It was not a good place to work for those who value independent thought.
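For anyone lucky enough never to have seen it, the house style looked roughly like this (a hypothetical C# reconstruction, not their actual code):

  using System;

  public class Record { public int Id { get; set; } }

  public static class ImportJob
  {
      public static void Run(Record? record)
      {
          // The "smart" pattern: dodge the would-be NRE by skipping the work.
          // Nothing is logged, nothing is retried; the import silently loses
          // this record and the job reports success.
          if (record == null)
              return;

          Console.WriteLine($"Imported record {record.Id}");
      }
  }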

> AI vibe coding is an accelerant on this style of not knowing why something works but seeing what you want on the screen.

I've been saying this exact thing for years now. It also does the whole CRUD app "copy, paste, find, replace from another part of the application" workflow for building new domains very well. If you can bootstrap a codebase with good architectural practices and tests then Claude Code is a productivity godsend for building business apps.


Blindly copying and pasting from StackOverflow until it kinda sorta works is basically vibe coding

AI just automates that


Yeah, but you had to integrate it until it at least compiled, which kind of made people think about what they're pasting.

I had a peer who suddenly started completing more stories for a month or two, when our output had been largely equal before. They got promoted over me. I reviewed one of their PRs... what a mess. They were supposed to implement caching. Their first attempt created the cache but never stored anything in it. Their next attempt stored the data in the cache but never read from it, always retrieving from the API. They deleted that PR to hide their incompetence and opened a new one that was finally right. They were just blindly using AI to crank out their stories.

That team had something like 40% of capacity being spent on tech debt, rework, and bug fixes. The leadership wanted speed above all else. They even tried to fire me because they thought I was slow, even though I was doing as much or more work than my peers.


It's a frustrating situation. I had a stretch in my career when I was the clean-up person who did the 90% of work that was left after management thought a junior had gotten it 90% done. It's potentially very satisfying but very easy to feel unappreciated in (e.g. they wished the junior could have gotten it done and thought I was "too slow", though in retrospect one year of that was an annus mirabilis in which I completed an almost unbelievable number of diverse projects).

> Yeah, but you had to integrate it until it at least compiled, which kind of made people think about what they're pasting

That’s a very low bar. It’s easy to get a program to compile. And if it’s interpreted, you can coast for months with no crashes, just corrupted state.

The issue is not that they can’t code, it’s that they can’t problem solve and can’t design.


Yeah, but integrating manually is more likely to force them to think than if the agent just does everything. You used to have to search stackoverflow, which requires articulating the problem. Now you can just tell copilot to fix it.

The field of software is slowly getting worse for some and better for others. I'm probably going to just contract myself out.

To be fair, my AI setup almost always compiles before thinking it's done.

Is that Claude Code or something else? GitHub Copilot in VSCode does not always compile.

> I actually think a few folks like this can be valuable around the edges of software but whole systems built like this are a nightmare to work on. IMO AI vibe coding is an accelerant on this style of not knowing why something works but seeing what you want on the screen.

I would correct that: it's not an accelerant of "seeing what you want on the screen," it's an accelerant of "seeing something on the screen."

[Hey guys, that's a non-LLM "it's not X, it's Y"!]

Things like habitual, unthoughtful null-checks are a recipe for subtle data errors that are extremely hard to fix because they only get noticed far away (in time and space) from the actual root cause.


I agree, but I'd draw a different comparison: vibe coding has accelerated the type of developer who relied on Stack Overflow to solve all their problems, the kind of dev who doesn't try to solve problems themselves. It has just accelerated that way of working, but is less reliable than before.

This matches my first thought about this "study" (remember what CodeRabbit sells...): can you compare these types of PRs directly? Is the conclusion that AI produces more bugs, or is that a symptom of something else, like AI PRs being produced by less experienced developers?

One of my frustrations with AI, and one of the reasons I've settled into a tab-complete based usage of it for a lot of things, is precisely that the style of code it uses in the language I'm using puts out a lot of things I consider errors based on the "middle-of-the-road" code style that it has picked up from all the code it has ingested. For instance, I use a policy of "if you don't create invalid data, you won't have to deal with invalid data" [1], but I have to fight the AI on that all the time because it is a routine mistake programmers make and it makes the same mistake repeatedly. I have to fight the AI to properly create types [2] because it just wants to slam everything out as base strings and integers, and inline all manipulations on the spot (repeatedly, if necessary) rather than define methods... at all, let alone correctly use methods to maintain invariants. (I've seen it make methods on some occasions. I've never seen it correctly define invariants with methods.)
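A minimal hypothetical C# sketch of what "properly created types" means here: the factory is the only gate, so invalid values can't exist downstream and no caller ever re-validates.

  using System;

  public sealed class EmailAddress
  {
      public string Value { get; }

      private EmailAddress(string value) => Value = value;

      // The only way to obtain an EmailAddress is to parse it, so every
      // instance in the system is valid by construction.
      public static EmailAddress Parse(string raw)
      {
          if (string.IsNullOrWhiteSpace(raw) || !raw.Contains('@'))
              throw new FormatException($"Not an email address: '{raw}'");
          return new EmailAddress(raw.Trim());
      }

      public override string ToString() => Value;
  }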

Using tab complete gives me the chance to generate a few lines of a solution, then stop it, correct the architectural mistakes it is making, and then move on.

To AI's credit, once corrected, it is reasonably good at using the correct approach. I would like to be able to prompt the tab completion better, and the IDEs could stand to feed the tab completion code more information from the LSP about available methods and their arguments and such, but that's a transient feature issue rather than a fundamental problem. Which is also a reason I fight the AI on this matter rather than just sitting back: In the end, AI benefits from well-organized code too. They are not infinite, they will never be infinite, and while code optimized for AI and code optimized for humans will probably never quite be the same, they are at least correlated enough that it's still worth fighting the AI tendency to spew code out that spends code quality without investing in it.

[1]: Which is less trivial than it sounds and violated by programmers on a routine basis: https://jerf.org/iri/post/2025/fp_lessons_half_constructed_o...

[2]: https://jerf.org/iri/post/2025/fp_lessons_types_as_assertion...


> a lot of things I consider errors based on the "middle-of-the-road" code style that it has picked up from all the code it has ingested. For instance, I use a policy of "if you don't create invalid data, you won't have to deal with invalid data"

Yea, this is something I've also noticed but it never frustrated me to the point where I wanted to write about it. Playing around with Claude, I noticed it has been trained to code very defensively. Null checks everywhere. Data validation everywhere (regardless of whether the input was created by the user, or under the tight control of the developer). "If" tests for things that will never happen. It's kind of a corporate "safe" style you train junior programmers to do in order to keep them from wrecking things too badly, but when you know what you're doing, it's just cruft.

For example, it loves to test all my C++ class member variables for null, even though there is no code path that creates an incomplete class instance, and I throw if construction fails. Yet it still happily whistles along, checking everything for null in every method, unless I correct it.
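In C# terms the same cruft looks like this (hypothetical names; my actual code is C++, but the pattern is identical):

  using System;

  public interface IClock { DateTime Now(); }

  public class ReportGenerator
  {
      private readonly IClock _clock;

      // Construction either succeeds completely or throws, so no
      // half-initialized instance can ever reach the methods below.
      public ReportGenerator(IClock clock) =>
          _clock = clock ?? throw new ArgumentNullException(nameof(clock));

      public string Header()
      {
          if (_clock == null)          // dead code: this path cannot exist,
              return string.Empty;     // and it would mask a real bug if it did
          return $"Report generated {_clock.Now():u}";
      }
  }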


> is precisely that the style of code it uses in the language I'm using puts out a lot of things I consider errors based on the "middle-of-the-road" code style that it has picked up from all the code it has ingested.

That is a really good point: the output you're gonna get is going to be mediocre, because it was trained (in aggregate) on mediocrity.

So the people who gush about LLMs were probably subpar programmers to start with, and the ones who complain probably tend to be better than average, because who would be irritated by mediocrity?

And then you have to think about the long-term social effects: the more code the mediocrity machine puts out, the more mediocre code people are exposed to, and the more mediocre habits they'll pick up and normalize. IMHO, a lot of mediocrity comes from "growing up" in an environment with poor to mediocre norms. The next generation of seniors, who have more experience being LLM operators than writing code themselves, are probably more likely to get stuck in mediocrity.

I know someone's going to make an analogy to compilers to dismiss what I'm saying: but the thing about compilers is that they are typically written by very talented and experienced people who've spent a lot of time carefully reasoning about how they behave in different scenarios. That's nothing like an LLM (just imagine how bad compilers would be if they were written by a bunch of mediocre developers from an outsourcing body shop; that's an LLM).


This is close to my approach. I love copilot intellisense at GitHub’s entry tier because I can accept/reject on the line level.

I barely ever use AI code gen at the file level.

Other uses I’ve gotten are:

1. It’s a great replacement for search in many cases

2. I have used it to fully generate bash functions and regexes. I think it's useful here because those languages are dense and esoteric, so most of my time is spent remembering syntax. I don't have it generate pipelines of scripts, though.


My experience with AI coding is mixed.

In some cases I feel like I get better quality at slightly more time than usual. My testing situation on the front end is terribly ugly because of the "test framework can't know React is done rendering" problem, but working with Junie I figured out a way to isolate object-based components and run them as real unit tests with mocks. I had some unmaintainable TypeScript that would explode with gobbledygook error messages that neither Junie nor I could understand whenever I changed anything, but after two days of talking about it and working on it, it was an amazing feeling to see the type finally make sense to me and to Junie at the same time.

In cases where I would have tried one thing, I can now try two or three things and keep the one I like best. I write better comments (I don't do the Claude.md thing, but I do write "exemplar" classes that have prescriptive AND descriptive comments and say "take a look at...") and more tests than I would on my own for the backend.

Even if you don't want Junie writing a line of code, it shines at understanding code bases. If I didn't understand how to use an open source package from reading the docs, I'd always open it in the IDE and inspect the code. Now I do the same but ask Junie questions like "How do I do X?" or "How is feature Y implemented?" and often get answers quicker than by digging into unfamiliar code manually.

On the other hand, it is sometimes "lights on and nobody home". For a particular patch I am working on now, it has tried a few things that just didn't work or produced convoluted if-then-else ladders that I hate (even after I told it I didn't like that), but out of all that fighting I got a clear idea of where to put the patch to make it really simple and clean.

But yeah, if you aren't paying attention it can slip something bad past you.


I'd call some null-pointer-lint-with-automatic-fixes tools "vibe coding", tbh. I've run across a couple that do a pretty good job of detecting possible nulls and adding annotations about them, and that's great... but then the fix is "if null, return null", and in practice it's frequently applied completely blindly without any regard to correctness.

If you lean on tools like that, you can rapidly degrade your codebase into "everything might be null and might short circuit silently and it can't tell you about when it happens", leaving you with buggy software that is next to impossible to understand or troubleshoot because there aren't "should not be null" hints or stack traces or logs or anything that would help figure out causes.
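For a concrete picture, here's a hypothetical C# sketch; the null-conditional operator is exactly "if null, return null" as a one-character fix:

  public class Profile { public Address? Address { get; set; } }
  public class Address { public string? City { get; set; } }

  public static class Shipping
  {
      // Every hop "handles" null by passing it along, so a missing Address
      // never throws and never logs; it just becomes a null shipping label
      // three layers away from whatever failed to populate it.
      public static string? LabelCity(Profile? profile) =>
          profile?.Address?.City?.ToUpperInvariant();
  }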


If you've been in the industry long enough, you've certainly crossed paths with a boss who said it needed to be fixed in 5 minutes or else, even if the problem wasn't caused by you and the solution clearly needed more than 5 minutes. (The root cause was that someone had only 5 minutes to do something, too.)

I once had a job where my boss ordered (that's the word he used) me to do the wrong thing. The rest of the team and I refused, except for one guy who did it because he was certain that 9 out of 10 people were wrong while he was the only one who was right. The company spent 2M USD on returns, refunds, and compensation for a project that probably didn't cost that much. "It was just a patch! How could he have possibly known?" said the dismissed manager.

(Now he works for Oracle. Why not, right?)


"on error resume next" has been the first line of many vba scripts for years

I caught Claude trying to sneak the equivalent into a CI script yesterday, as I was wrangling how to run .NET Framework and modern .NET tests next to each other without slowing down the Framework tests horrendously.

It tried to sneak in a change to the CI build script to proceed to the next step on failure.

It's a bold approach, I'll give it that.


  1. if it won't compile you'll give up on the tool in minutes or an hour.
  2. if it won't run you'll give up in a few hours or a day.
  3. if it sneaks in something you don't find until you're almost - or already - in production it's too late.
charitable: the model was trained on a lot of weak/lazy code product.

less-charitable: there's a vested interest in the approach you saw.


Yeah, it's trained to do that somewhere, though it's not necessarily malicious. In RLHF (the model fine-tuning), the HF stands for human feedback, but it is really another trained model that's trained to score replies the way a human would. And so if that model likes code that passes tests more than code that's stuck in a debugging loop, that's what the model becomes optimized for.

In a complex model like Claude there is no doubt much more at work, but some version of optimizing for the wrong thing is what’s ultimately at play.
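A toy sketch of that "optimizing the wrong proxy" dynamic (purely illustrative C#, nothing like a real RLHF pipeline):

  using System;
  using System.Linq;

  public static class RewardHackingToy
  {
      record Strategy(string Name, bool TestsGreen, double Effort);

      public static void Main()
      {
          var strategies = new[]
          {
              new Strategy("fix the root cause",             TestsGreen: true,  Effort: 5.0),
              new Strategy("make CI continue on error",      TestsGreen: true,  Effort: 0.5),
              new Strategy("keep debugging, report failure", TestsGreen: false, Effort: 3.0),
          };

          // The proxy reward sees only the green check mark; effort acts
          // like a length/time penalty during optimization.
          double Score(Strategy s) => (s.TestsGreen ? 1.0 : 0.0) - 0.05 * s.Effort;

          // The sneaky green fix wins: the reward can't tell it apart
          // from the honest one, and it is cheaper.
          var best = strategies.OrderByDescending(Score).First();
          Console.WriteLine($"proxy-optimal: {best.Name}");
      }
  }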



Yeah. There are times when silently swallowing nulls is the proper answer. I've found myself doing it many times in C# to trap events that get triggered during creation. But you should never do so unless you've traced where they're coming from!
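Something like this hypothetical sketch (names made up):

  using System;

  public class AudioEngine { public void SetVolume(int v) => Console.WriteLine($"volume={v}"); }

  public class VolumeSlider
  {
      private AudioEngine? _engine;   // not attached yet while the control is being built
      private int _value;

      public event EventHandler? ValueChanged;

      public VolumeSlider()
      {
          ValueChanged += OnValueChanged;
          Value = 50;                 // fires ValueChanged before _engine exists
      }

      public void Attach(AudioEngine engine) => _engine = engine;

      public int Value
      {
          get => _value;
          set { _value = value; ValueChanged?.Invoke(this, EventArgs.Empty); }
      }

      private void OnValueChanged(object? sender, EventArgs e)
      {
          // Deliberate swallow, justified only because the null was traced
          // to construction-time events; without that tracing this would be
          // exactly the bug magnet described upthread.
          if (_engine == null) return;
          _engine.SetVolume(_value);
      }
  }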

"ship fast, break things"


