Hacker News | throwaway0123_5's comments

The account is 47 minutes old, and given the writing style plus the hefty dose of em dashes, I think it's an LLM.

> Job loss is likely to have statistics more comparable to the Black Plague.

Maybe this is overly optimistic, but if AI starts to have negative impacts on average people comparable to the plague, it seems like there's a lot more that people can do. In medieval Europe, nobody knew what was causing the plague and nobody knew how to stop it.

On the other hand, if AI quickly replaces half of all jobs, it will be very obvious what and who caused the job loss and associated decrease in living standards. Everybody will have someone they care about affected. AI job loss would quickly eclipse all other political concerns. And at the end of the day, AI can be unplugged (barring robot armies or Elon's space-based data centers I suppose).


It is very obvious what and who caused the low living standards in North Korea, and yet here we are decades later with no end in sight.


Is it obvious? I suspect there are at least two sets of popular answers depending on what propaganda you consume.


> And at the end of the day, AI can be unplugged

We can't stop OpenClaw, because humans are curious. It just takes one unleashed model with a crypto account and some way to make money for the first independent AIs to start bleeding into cyberspace.

We can't opt out of AI competition, because other individuals, organizations, and nation states are not going to stop, and are certainly going to leverage their AI if they get ahead of us.

> AI job loss would quickly eclipse all other political concerns.

True. I think this is one of only a few certainties.


> because other individuals, organizations, and nation states are not going to stop, and are certainly going to leverage their AI if they get ahead of us.

I don't think it is likely AT ALL, but it would probably only be necessary for China and the US to agree to stop, not all organizations and nation states. It is at least possible, if leadership in both countries comes to see AI as an existential threat.

The hardware needed to run and train SOTA AI can only be made by a very small handful of companies in a small handful of countries that either the US or China have significant influence over. Making AI R&D illegal would stop 99% of it overnight; most researchers are in it for money rather than some ideological commitment, and there are plenty of other well-paid jobs they could take. Doing local inference in secret with existing models and GPUs would be possible, but training new SOTA models probably wouldn't be.


> LLM's are better at keeping consistency at details (but not at big picture stuff, interestingly.)

I think it makes sense? Unlike small details, which are certain to be explicitly part of the training data, "big picture stuff" feels like it would mostly be captured only indirectly.


> 3d graphics

Seems like the G in GPU is very obsolete now:

https://www.tomshardware.com/news/nvidia-h100-benchmarkedin-...

> As it turns out Nvidia's H100, a card that costs over $30,000 performs worse than integrated GPUs in such benchmarks as 3DMark and Red Dead Redemption 2


> you'll also end up with scores of people who "correctly" followed the signals right up until the signals went away.

I think this is where we're headed, very quickly, and I'm worried about it from a social stability perspective (as well as personal financial security of course). There's probably not a single white-collar job that I'd feel comfortable spending 4+ years training for right now (even assuming I don't have to pay or take out debt for the training). Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.

Lots and lots of people who did or will do "all the right things," with no benefit earned from it. Even if hypothetically there is something new you can reskill into every five years, how is that sustainable? If you're young and without children, maybe it is possible. Certainly doesn't sound fun, and I say this as someone who joined tech in part because of how fast-paced it was.


> Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.

I'd like to see real examples of this, beyond trivial ones like low-quality copywriting (i.e. the "slop" before there was slop) that just turns into copyediting. Current AIs are a huge force multiplier for most white-collar skills, including software development.


LLMs and AI more broadly certainly seem to have upended (or have the potential to upend) a lot of white-collar work outside of technology and art. Translators are one obvious example. Lawyers might be on the chopping block if they don't ban the use of AI for practicing law. Both seem about as far as you can get from "careers in technology," and in fact writing has pretty much always been framed as being on the opposite end of the spectrum from tech jobs, but is clearly vulnerable to technological progress.

Right now I can think of very few white-collar jobs that I would feel comfortable training 4+ years for (let alone spending money or taking on debt to do so). It is far from guaranteed that any 4-year degree you enroll in today will still have value in four years. That has basically never before been true, even in tech. Blue-collar jobs are clearly safer, but I wouldn't say safe. Robotics is moving fast too.

I really can't imagine the social effects of this reality being positive, absent massive and unprecedented redistribution of the wealth that the productivity of AI enables.


Yeah, I notice a lot of the optimism is from people who have been in the field for decades. I'm newish to the field, half a decade out of undergrad. It definitely feels like almost all of what I learned has been (or will soon be) completely devalued. I'm sure this stuff feels a lot less threatening if you've had decades to earn a great salary and save a bunch of money. If money wasn't a concern I'd be thrilled about it too.


No, don't trust the supposed "staff engineer" types. Many had forgotten how to write code, and now they can finally live out the fantasy of being architects, so for them it's like winning a jackpot. For people who could always write good code, the basics are still the same: a good dev is still a good dev, and it's even more important to be able to read and critique code.


On iOS Safari it loads and works decently for me, but on iOS Firefox and Firefox Focus it doesn't even load.


> But I think we're still a long way off non-technical people being able to develop applications.

I'm surprised I haven't seen anyone do a case study of truly non-technical people building apps with these tools. Take a few moderately tech-savvy people who work white-collar jobs (can use MS Office up to doing basic stuff in Excel, understand a filesystem). Give them a one- or two-day crash course on how Claude Code works. See what the most complicated app is that they can develop that is reasonably stable and secure.


> Claude Opus 4.5 in a casual Claude Code session, approximately matching the best human performance in 2 hours

Is this saying that Claude matched the best human performance, where the human had two hours? I think that is the correct reading, but I'm not certain they don't mean that Claude had two hours and matched the best human performance where the human had an arbitrary amount of time. The former is impressive, but the latter would be even more so.

