Hacker News | blindhippo's comments

If anything, the AI bubble is reinforcing to me (and hopefully many more people) that the "markets" are anything but rational. None of the investments going on have followed any semblance of fundamentals - it's all pure instinct and chasing hype. I just hope it doesn't tear down the world for the 99% of us unable to actually reap any benefits from it.

AI is basically a toy for 99% of us. It's a long, long way from the productivity boost people love to claim to justify the sky-high valuations. I suspect it will fade into being a background tech employed strategically - similar to other machine learning applications - and that's exactly where it belongs.

I'm forced to use it (literally, AI usage is now used as a talent review metric...) and frankly, it's maybe helped speed me up... 5-10%? I spend more time trying to get the tools to be useful than I would just doing the task myself. The only true benefit I've gotten has been unit test generation. Ask it to do any meaningful work on a mature code base and you're in for a wild ride. So there's my anecdotal "sentiment".


I multitask much more now that I can farm off small coding assignments to agents. I pay hundreds per month in tokens. For my role, personally, it's been a massive paradigm shift.

Might work for you, but if I multitask too much, the quality of my output drops significantly. Where I work, that does not fly. I cannot trust any agent to handle anything without babysitting it to keep it from going off the rails - but perhaps the tools I have access to just aren't good (the underlying model is Claude 4.5, so the model isn't the cause).

I've said this in the past and I'll continue to say it: until the tools get far better at managing context, their value will stay hard-capped in most use cases. The moment I see "summarizing conversation" I know I'm about to waste 20 minutes fixing code.


I think it depends on the project and the context, but I developed my own task management system specifically because of this challenge. I'm starting to extend it with verification gates as well.
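
For what it's worth, a verification gate for me is just a hard check the agent's output must pass before I accept the task. A minimal sketch in Python - the commands are illustrative, so swap in whatever linter/test runner your project actually uses:

    # Verification gate: refuse an agent's change set unless the
    # project's own checks pass. Commands are examples, not a prescription.
    import subprocess

    CHECKS = [
        ["ruff", "check", "."],  # lint
        ["pytest", "-q"],        # unit tests
    ]

    def gate() -> bool:
        """Run every check; reject the agent's work if any of them fail."""
        for cmd in CHECKS:
            if subprocess.run(cmd).returncode != 0:
                print("gate failed at:", " ".join(cmd))
                return False
        return True

    if __name__ == "__main__":
        raise SystemExit(0 if gate() else 1)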

If I worked on different types of systems with different types of tasks, I might feel the same way as you. I think AI works well in specific, targeted use cases where some amount of hallucination can be tolerated and addressed.

What models are you using? I use Opus 4.5, which can one-shot a surprising share of tasks.


If you can predict that hitting “summarize conversation” equals rework, what can you change upstream so you avoid triggering it? Are you relying on the agent to carry state instead of dumping it into .MD files? What happens if your computer crashes?

> so the model isn't the cause

Thing is, the prompts - those stupid little bits of English that can't possibly matter all that much? It turns out they affect the model's performance a ton.


There are absolutely folks like you out there, and I don't doubt the productivity increase. The challenge is that you are not the norm, and the hundreds per month from you and others like you are a drop in the bucket of what's needed to pay for all this.

To each his own, but multi-tasking feels bad to me. I want to spend my life pursuing mastery of a craft, not lazily delegating. Not that everyone should have the same goals, but the mastery route feels like it's dying off. It makes me sad.

I get that some people just want to see the thing on the screen, or that your priority is to be a high-status person with a loving family, etc., etc. All noble goals. I just don't feel a sense of fulfillment from a life not in pursuit of something deeper. The AI can do it better than me, but I don't really care at the end of the day. Maybe super-corp wants the AI to do it then, but it's a shame.


I lazily delegate things that can be automated, which frees me up to do actual feature development.

> I want to spend my life pursuing mastery of a craft, not lazily delegating.

And yet, the Renaissance "grand masters" became known as masters through systematizing delegation:

https://smarthistory.org/workshop-italian-renaissance-art/


I have wondered about that actually. Thanks, I'll read that, looks interesting.

Surely Donald Knuth and John Carmack are genuine masters though? There's the Elon Musk theory of mastery where everyone says you're great, but you hire a guy to do it, and there's the <nobody knows this guy but he's having a blast and is really good> theory where you make average income but live a life fulfilled. On my deathbed I want to be the second. (Sorry this is getting off topic.)


Masters of what though?

Steve Jobs wrote code early on, but he was never a great programmer. That didn’t diminish his impact at all. Same with plenty of people we label as "masters" in hindsight. The mastery isn’t always in the craft itself.

What actually seems risky is anchoring your identity to being the best at a specific thing in a specific era. If you're the town’s horse whisperer, life is great right up until cars show up. Then what? If your value is "I'm the horse guy," you're toast. If your value is taste, judgment, curiosity, or building good things with other people, you adapt.

So I’m not convinced mastery is about skill depth alone. It's about what survives the tool shift.


I won't insult the man, but I never liked Steve Jobs. I'd rather be Wozniak in that story.

"taste, judgment, curiosity, or building good things with other people"

Taste is susceptible to turning into a vibes/popularity thing. I think success is mostly about first doing the basics (like going to work on time and not being a dick), then ego, personality, presentation, etc. These seem like unfulfilling preoccupations - not that I'm immune to them like anyone else - so in my best life I wouldn't be so concerned about "success". I just want to master a craft and be satisfied in that pursuit.

I'd love to build good things with other people, but for whatever reason I've never found other people to build things with. So maybe I suck, that's a possibility. I think all I can do is settle on being the horse guy.

(I'm also not incurious about AI. I use AI to learn things. I just don't want to give everything away and become only a delegator.)

Edit: I'm genuinely terrified that AI is going to do ALL of the things, so there's not going to be a "survives the shift" except for having a likable / respectable / fearsome personality


> Steve Jobs wrote code early on, but he was never a great programmer. That didn’t diminish his impact at all.

I doubt Jobs would have classified himself as a great programmer, so what's the point?

> So I’m not convinced mastery is about skill depth alone. It's about what survives the tool shift.

That's like saying karate masters should drop the training and just focus on the gun. At that point it does lose its meaning.


It seems you are a bit obsessed with the Renaissance? Are you building a "vibeart" platform?

I like how you compare people to Renaissance painters to inflate their egos.

The other surprising skill from this whole AI craze: it turns out that being able to social-engineer an LLM transfers to getting humans to do what you want.

One of the funniest things to see nowadays is the opposite, though: some people expect similar responses from humans and get thrashed, since we are not LLMs programmed to make them feel good.

Inflate whose ego? Mine? It seemed more like a swipe than ego-inflation, but I was happy to see the article anyway.

> the "markets" are anything but rational

No, they are rational - at least the players with a lot of money are.

> None of the investments going on have followed any semblance of fundamentals - it's all pure instinct and chasing hype

That's not what investments are about. The fundamentals are whether they can get a good return on their money. As long as the odds exist that the next sucker will buy them up, it is a good investment.

> AI is basically a toy for 99% of us.

You do pay for toys, right? Toy shops aren't irrational?


> AI is basically a toy for 99% of us.

So you're at the "first they laugh at us" stage then.


OK, but not everything that gets to that stage moves on to the next, let alone the stage after that.

But I will give you this: the "first they ignore us" stage is over, at least for many people.


Thing is, context management is NOT obvious to most users of these tools. I use agentic coding tools on a daily basis now and still struggle with keeping context focused and useful, usually relying on patterns such as memory banks and task-tracking documents to try to keep a log of things as I pop in and out of different agent contexts. Yet still, one false move and I've blown the window, leading to a "compression" which is utterly useless.

The tools need to figure out how to manage context for us. This isn't something we have to deal with when working with other humans - we reliably trust that other humans (for the most part) retain what they are told. Agentic use right now is like training a teammate to do one thing, then taking them out back to shoot them in the head before starting to train another one. It's inefficient and taxing on the user.
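
To be concrete, the memory-bank pattern is nothing fancy - an append-only log the agent is told to re-read at the start of each session and write to before the window fills. A minimal sketch in Python; the path and entry format are just my own convention, nothing standard:

    # Memory bank: persist task state outside the agent's context window.
    # The file path and entry layout are illustrative conventions.
    from datetime import datetime, timezone
    from pathlib import Path

    BANK = Path("docs/memory-bank.md")

    def log_entry(task: str, status: str, notes: str) -> None:
        """Append a timestamped progress note the next session can re-read."""
        BANK.parent.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        with BANK.open("a", encoding="utf-8") as f:
            f.write(f"## {stamp} - {task} [{status}]\n\n{notes}\n\n")

    log_entry("migrate-client", "in-progress",
              "SDKv2 builder pattern done; retry config still TODO.")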


I don't care what kind or style of job - if the balance of power in any labour relationship is overwhelmingly on the employer side, collective action is the only way labour can regain a modicum of negotiating power. To think that the style of job has any bearing on this relationship is naive.


I do agree that collective action specifically can help, but not via organizing with a modern American union.


The laws under which unions are organized have a huge influence on their effectiveness, and American unions are consequently... not that great.

The United Auto Workers partially funded the Port Huron Statement authored by Students for a Democratic Society, a generally socialist group. Now, it's entirely plausible that the UAW leadership wanted to have some modicum of influence, and that's why they loaned them an entire union retreat on Lake Huron. But I doubt that the average UAW factory worker was excited to see their union dues used to provide elite college students with a mostly-free vacation for political organizing.

I am not a labor law expert by any means, but my understanding of, say, German labor law is that it's much better at actually representing the workers in a given factory, in part because a union that doesn't do that loses its members to ones that will (since there's no requirement that everyone in a given job class has to join the same union).


>collective action is the only way labour can regain a modicum of negotiating power.

Does collective action mean everyone gets paid the same? If not, how does that work exactly?


No it doesn't mean that.

The way it works in the movie industry is actors or writers can sign a contract with minimum union terms. Or, if they're a big name, their agent negotiates a contract on their behalf.

From time to time the union membership will want improvements or changes to the minimum terms. If they don't get these terms then the union - stars and everyone else - goes on strike.

These strikes are well publicized. I'm surprised you haven't heard of them.


I've only heard of one during covid, and mostly people didn't care (IMO).


> mostly people didn't care

I don't know what that means.


You asked if I had heard of [the] Hollywood strike; I have. I didn't care, because most of what Hollywood puts out is not worth consuming.

The writers could go on strike for years, so what?


You don't have to care. None of the parties involved care if you care or not. On the other hand, if you had an open mind about this topic, you'd see from this evidence that strikes work.


The reports of time saved are so cooked it's not funny. It's just part of the overall AI grift going on - the actual productivity gains will shake out in the next couple of years; we just have to live through the current "game changer" and "paradigm-shifting event" nonsense the upper-management types and VCs are pushing.

When I see stuff like "Amazon saved 4,500 dev-years of effort by using AI", I know it's for work we would have used automation on anyway, so it's not really THAT big of a difference over what we've done in the past. But it sounds better if we just pretend we can compare AI solutions to literally having thousands of developers write Java SDK upgrades manually.


Humans (accountants) are non-deterministic too, so I'm unsure whether an LLM would be better or worse if we threw more effort at the problem.

But in general, I tend to side with "let's leave the math to purpose-built models/applications" instead of generalized LLMs. LLMs are great if you are just aiming for "good enough to get through next quarter" results. If you need 100% accuracy, an LLM isn't going to cut it.
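
To illustrate the purpose-built point: if the books have to balance, you want boring, deterministic code doing the arithmetic, not token sampling. A trivial Python sketch (the numbers are made up):

    # Deterministic ledger math: exact decimal arithmetic, same answer
    # on every run - the opposite of sampling from a language model.
    from decimal import Decimal

    entries = [Decimal("19.99"), Decimal("-4.50"), Decimal("100.00")]

    balance = sum(entries, start=Decimal("0"))
    assert balance == Decimal("115.49")
    print(balance)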


Human accountants also have a very important property: liability.

If a certified accountant tells me to do X, I'm covered (at least to the point that they would assist in recovering, or I can get compensation through their insurance). If an LLM tells me, I have a much bigger problem.


Most small businesses cannot afford CPAs for everyday tasks. At best, a CPA signs off on the annual summaries. Most day-to-day work is done by bookkeepers who are not CPAs.

In my area (Vermont) the going rate for a good CPA is $200/hr. Bookkeepers are $20-30/hr.


Most small businesses also can't afford the risk of current LLMs putting garbage in their books that, in the best case, has to be cleaned up or redone or, in the worst case, gets the IRS up your ass.


Tuned LLMs will become more accurate than bookkeepers for most day-to-day small business transactions. I think you underestimate the number of errors that normal bookkeepers tend to make.


There is "LLM misinformation" insurance, a very new branch of cyber insurance.


"Let's knock down social media's walled gardens..."

Links to a paywall trying to get me to pay for some subscription service I've never heard of and would never sign up for sight unseen.

mmmhmmm


The Financial Times is well known, and subscriptions are journalism's only viable model today.


But it's not necessarily good Hacker News material. We get links that do not work, and publishers get free promotion without providing anything. We will say something about viable models, and then somebody will post an archive.org link, bypassing the paywall and the viable model.

I flag links that do not work, not because I'm opposed to subscriptions (I subscribe to some online publications), but because I think Hacker News should only link to articles that are actually on the internet.


Journalism, sure. But most people are happy with entertainment news.

I'm an FT subscriber. I just know 99% of people aren't, and lots aren't even aware it exists, let alone of its reputation.


If you have to tell people that it's well known, it's not well known.


Just an internationally known, 137-year-old news outlet that is ranked #3 for finance news by web traffic... Not knowing about the FT says a lot more about the poster than about the FT.


His thoughts and opinions are not related to the FT's, so while the combination of his article's title and a paywall might appear ironic, in reality it's just happenstance, with no deeper meaning or hypocrisy.

The "walled gardens" that he and others speak about in this context do not refer to sites/apps that cost money, they have a massively different meaning. But perhaps it wouldn't be fair to expect you to know this if the paywall prevented you from reading the article. Fear not, now you can: https://archive.ph/4Vvms


FT is one of the most prestigious journals going. This article is designed to be read by the elite, especially in Europe.


So no walled gardens outside the castles, essentially?


It could be argued that the emergence of the web, and search engines in particular, established this as a common pattern long before AI was around. I'm not convinced that AI represents a dramatic change to this behavior, though the point about anthropomorphizing AI likely acts as a magnifier.


I think the main difference is the degree of anthropomorphizing that happens with new chatbots. I mean, most kids in the 2000s didn't believe that they were literally asking Jeeves a question, but a lot of users today actually think of AI as an anthropomorphic being.


> but a lot of users today actually think of AI as an anthropomorphic being.

You think more than 10% of users?


It's easily closer to 90% than 10%.


I'm sure we could extend this even further back, to the movement of people out of villages and into cities and the rise of transactional capitalism.


Same things I use it for as well - crap like "update this class to use JDK21" or "re-implement this client to use AWS SDKv2" or whatever.

And it works maybe... 80% of the way, and I spend all my time fixing the remaining 20%. Anecdotally, I don't "feel" like this really accelerates me or saves time over just implementing the translation manually myself.


Amazon is publicly claiming that they have saved hundreds of millions on JVM upgrades using AI, so while it feels trivial - because before, that work would have ended up in the "just don't do it" pile - it's a relevant use case.


I think this is overestimating the impact of LLMs.

Fact is, even if they are capable of fully replicating or even replacing actual human thought, at best they regurgitate what has come before. They are, effectively, a tutor (as another commenter pointed out).

A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...

I personally still don't see the actual value of LLMs being realized vs their cost to build anytime soon. I'll be shocked if any of this AI investment pays off beyond some minor curiosities - in ten years we're going to look back at this period in the same way we look at cryptocurrency now - a waste of resources.


> A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...

What changes is the educational history of those humans. It's like how the world is getting obese: on average, there are areas where we empirically don't choose our own long term over our short term. Apparently homework is one of those things, according to teachers like the one in TFA. Instead of doing their own homework, they're having their "tutor" do their homework.

Hopefully the impact of this will be like the impact of calculators, but I also fear it will be like having tutors do your homework and take your tests until you hit a certain grade - and then suddenly the tools you're reliant on stop working, and you have no practice doing things any other way.


I appreciate your faith in humanity. However, you would be surprised at the lengths people will go to avoid thinking for themselves. For example: a person I sit next to in class types every single group discussion question into ChatGPT. When the teacher calls on him, he reads the answer word for word. When the teacher follows up with another question, you hear "erh, uhm, I don't know" as he fumbles an answer out. Especially in the context of learning, people who have self-control and use AI deliberately will benefit, but those who use AI as a crutch to keep up with everyone else are ill-prepared. The difference now is that shoddy work/understanding from AI is passable enough that somebody who doesn't put in the effort to understand can get a degree like everybody else.


I'd suggest this is a sign that most "education" or "work" is basically pointless busy work with no recognizable value.

Perpetuating a broken system isn't an argument about the threat of AI. It just highlights a system that needs revitalization (and AI/LLMs are not that tool).


>at best they regurgitate what has come before

I keep seeing this repeated, but it seems people either take it as being self evident or have a false assumption about how transformers work.


Lazy management - the kind that focuses on "metrics" and "numbers" rather than actually engaging with their teams/business lines.

I'm only 20% joking here...

