
The thing I can't wrap my head around is that I work on extremely complex AI agents every day and I know how far they are from actually replacing anyone. But then I step away from my work and I'm constantly bombarded with “agents will replace us”.

I wasted a few days trying to incorporate aider and other tools into my workflow. I had a simple screen I was working on for configuring an AI Agent. I gave screenshots of the expected output. Gave a detailed description of how it should work. Hours later I was trying to tweak the code it came up with. I scrapped everything and did it all myself in an hour.

I just don't know what to believe.



It kind of reminds me of the Y2K scare. Leading up to that, there were a lot of people in groups like comp.software.year-2000 who claimed to be doing Y2K fixes at places like the IRS and big corporations. They said they were just doing triage on the most critical systems, and that most things wouldn't get fixed, so there would be all sorts of failures. The "experts" who were closest to the situation, working on it in person, turned out to be completely wrong.

I try to keep that in mind when I hear people who work with LLMs, who usually have an emotional investment in AI and often a financial one, speak about them in glowing terms that just don't match up with my own small experiments.


I used to believe that until, over a decade later, I read stories from those "experts" who were closest to the situation, and it turns out Y2K was serious and it was a close call.


I just want to pile on here. The Y2K disaster was avoided due to a Herculean effort across the world to update systems. It was not an imaginary problem. You'll see it again in the lead-up to 2038 [0].

[0]: https://en.wikipedia.org/wiki/Year_2038_problem
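
For concreteness: systems that store Unix time in a signed 32-bit integer run out of range at 03:14:07 UTC on 2038-01-19. A minimal Python sketch of the overflow (illustrative only, not any particular system's code):

    import datetime
    import struct

    # Largest value a signed 32-bit time_t can hold.
    MAX_INT32 = 2**31 - 1

    # The last representable second: 2038-01-19 03:14:07 UTC.
    print(datetime.datetime.fromtimestamp(MAX_INT32, tz=datetime.timezone.utc))

    # One second later no longer fits in a signed 32-bit field.
    try:
        struct.pack("<i", MAX_INT32 + 1)
    except struct.error as exc:
        print("overflow:", exc)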


You’re biased because if you’re here, you’re likely an A-tier player used to working with other A-tier players.

But the vast majority of the world is not A players. They're B and C players.

I don't think the people evaluating AI tools have ever worked in wholly mediocre organizations - or even know how many mediocre organizations exist.


I wish this didn't resonate with me so much. I'm far from a 10x developer, and I'm in an organization that feels like a giant, half-dead whale. Sometimes people here seem like they work on a different planet.


> But then I step away from my work and I'm constantly bombarded with “agents will replace us”.

An assembly language programmer might have said the same about C programming at one point. I think the point is that once you depend on a more abstract interface that lets you ignore certain details, the backend beneath it can improve for decades without you having to do anything. People are still experimenting with what this abstract interface is and how it will work with AI, but they've already come leaps and bounds from where they were only a couple of years ago, and it's only going to get better.


There are some fields, though, where they can replace humans in a significant capacity. Software development is probably one of the least likely for anything more than entry level, but A LOT of engineering faces a very, very real existential threat. Think about designing buildings. You basically just need to know a lot of rules / tables and how things interact to know what's possible and the best practices. A purpose-built AI could develop many systems and back-test them to complete the design. A lot of this is already handled or aided by software, but a main role of the engineer is to interface with the non-technical persons or other engineers. This is something where an agent could truly interface with the non-engineer to figure out what they want, then develop it and interact with the design software quite autonomously.

I think there is a lot of focus on AI agents in software development, though, because that's just an early adopter market, just like how it's always been possible to find a lot of information on web development on the web!


Good freaking luck! The inconsistencies of the software world pale in comparison to what you run into trying to construct any real-world building: http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...


   > "you basically just need to know a lot of rules..."
This comment commits one of the most common fallacies that I see really often in technical people, which is to assume that any subject you don't know anything about must be really simple.

I have no idea where this comment comes from, but my father was a chemical engineer and his father was a mechanical engineer. A family friend is a structural engineer. I don't have a perspective on AI replacing people's jobs in general that is any more valuable than anyone else's, but I can say with a great deal of confidence that in those three engineering disciplines specifically, literally none of their jobs are about knowing a bunch of rules and best practices.

Don't make the mistake of thinking that just because you don't know what someone does, that their job is easy and/or unnecessary or you could pick it up quickly. It may or may not be true but assuming it to be the case is unlikely to take you anywhere good.


It's not simple at all; that's a drastic reduction of the underlying premise. The complexity is the reason that AI is a threat. That complexity revolves around a tremendous amount of data and how that data interacts. The very nature of the field makes it non-experimental but ripe for advanced automation based on machine learning. The science of engineering from a practical standpoint, where most demand for employees comes from, is very much algorithmic.


> The science of engineering from a practical standpoint, where most demand for employees comes from, is very much algorithmic.

You should read up on Gödel's and Turing's work on the limits of formal systems and computability.

You are basically presuming that P=NP.
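
To make that concrete, here is a minimal Python sketch of Turing's halting-problem diagonalization; `halts` is hypothetical, and the whole point is that no total, correct version of it can exist:

    def halts(program) -> bool:
        """Hypothetical oracle: True iff program() eventually halts."""
        raise NotImplementedError("no total, correct halting oracle exists")

    def contrarian():
        # Ask the oracle about ourselves and do the opposite.
        if halts(contrarian):
            while True:
                pass  # loop forever if the oracle predicted we halt
        # ...and halt if it predicted we loop. Either answer is wrong,
        # so the assumed oracle cannot exist.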


> just

In my experience this word means you don't know whatever you're speaking about. "Just" almost always hides a ton of unknown unknowns. After being burned enough times, nowadays when I'm about to use it I try to stop and ask more questions.


It's a trick of human psychology. Asking "why don't you just..." provokes one reaction, while asking "what are the roadblocks to completing..." provokes a different reaction to what amounts to the same question. But thinking "just" is fine when you see it as a learning opportunity.


I mean, perhaps, but in this case "just" isn't offering any cover. It's only in the sentence for emphasis; you could "just" remove it and the meaning would remain.


> a main role of the engineer is to interface with the non-technical persons or other engineers

The main role of the engineer is being responsible for the building not collapsing.


I keep coming back to this point. Lots of jobs are fundamentally about taking responsibility. Even if AI were to replace most of the work involved, only a human can meaningfully take responsibility for the outcome.


If there is profit in taking that risk, someone will do it. Corporations don't think in terms of the real outcome of problems; they think in terms of the cost to litigate or underwrite.


Indeed. I sometimes bring this up in terms of "cybersecurity" - in the real world, "cybersecurity" is only tangentially about the tech and hacking; it's mostly about shifting and diffusing liability. That's why certifications and standards like SOC 2 exist ("I followed the State Of The Art Industry Standard Practices, therefore It's Not My Fault"), that's what external auditors get paid for ("and this external audit confirmed I Followed The Best Practices, therefore It's Not My Fault"), that's why endpoint security exists, and why cybersec is denominated not in algorithms but in the third-party vendors you integrate, etc. It all works out into a form of distributed insurance, where the blame flows around via contractual agreements, some parties pay out damages to other parties (and recoup it from actual insurance), and all is fine.


I think about this a lot when it comes to self-driving cars. Unless a manufacturer assumes liability, why would anyone purchase one and subject themselves to potential liability for something they by definition did not do? This issue will be a big sticking point for adoption.


Consumers will tend to do what they are told, and the manufacturers will lobby the government to create liability protections for consumers. Insurance companies will rate human drivers as the greater risk and underwrite accordingly.


At a high level yes, but there are multiple levels of teams below that. There are many cases where senior engineers spend all their time reviewing plans from outsourced engineers.


ChatGPT will probably take more responsibility than Boeing for their airplane software.


Most engineering fields are de jure professional, which means they can and probably will enforce limitations on the use of GenAI or its successor tech before giving up that kind of job security. Same goes for the legal profession.

Software development does not have that kind of protection.


Sure, and people thought taxi medallions were one of the strongest appreciating asset classes. I'm certain they will try, but market inefficiencies typically only last if they are the most profitable scenario. Private equity is already buying up professional and trade businesses at a record pace to exploit inefficiencies caused by licensing: dentists, vets, urgent care, HVAC, plumbing, pest control, etc. Engineering firms are no exception. Can a licensed engineer stamp one million AI-generated plans a day? That's the person PE will find, and they'll run with it.

My neighbor was a licensed HVAC contractor for 18 years with a 4-5 person crew. He got bought out and now has 200+ techs operating under his license. Buy some vans, make some shirts, throw up a billboard, advertise during the local news. They can hire anyone as an apprentice; 90% of the calls are change the filter, flip the breaker, check the refrigerant, recommend a new unit.


For ~3 decades IT could pretend it didn't need unions because wages and opportunities were good. Now the pendulum is swinging back -- maybe they do need those kinds of protections.

And professional orgs are more than just union-ish cartels; they exist to ensure standards and enforce responsibility on their members. You do shitty unethical stuff as a lawyer and you get disbarred; doctors lose medical licenses, etc.


I promise the amount of time, experiments, and novel approaches you've tested is .0001% of what others have running in stealth projects. I've spent an average of 10 hours per day constantly since 2022 working on LLMs, and I know that even what I've built pales in comparison to other labs. (And I'm well beyond agents at this point.) Agentic AI is what's popular in the mainstream, but it's going to be trounced by at least 2 new paradigms this year.


Say more.


Seems like OP ran out of tokens.


So what is your prediction?



