Altman on AI energy: it also takes 20 years of eating food to train a human (reddit.com)
55 points by puttycat 18 days ago | hide | past | favorite | 118 comments


The comparison only starts to make sense in a post-work society where there is no working class whose existence depends on working.

Unfortunately these companies are working to eliminate jobs, but not in any way making a path for a transition to a post-work society.


They are not eliminating jobs; you still have jobs in 1984, which is where we are heading. You still need to hire someone to do the mass surveillance and policing, and to enforce laws that are getting more draconian by the day. And you still need people to instigate-cough-motivate hatred of something in order to keep society's momentum and shift its focus. Those things still took labor; AI just makes them easier.

We are indeed entering an era of job scarcity, though. You've seen lots of ghost postings and non-responses for years now: 6 out of 10 applications are ghosted, 2 out of 10 say no, and only a few remain. Jobs are getting rarer and are going to become more of a status symbol than a means of breadwinning.


It really sucks that everyone's go-to dystopia is 1984, especially in this case, given that 1984 required the active participation of millions of citizens, whereas Brave New World maps better: control is enforced through comfort and irrelevance instead of force.

The tech dystopia doesn’t even try to flatter us by assuming we’re important enough to oppress individually.


If we're closer to BNW, then where are the comforts?

Everything that was a comfort before COVID is far too expensive now.

Even the bread and circuses are gradually being taken away.


>If we're closer to BNW, then where are the comforts?

Netflix/sports/reality TV + OnlyFans/PH + DoorDash Taco Bell/Chick-fil-A


The comforts, you mean soma? Well, besides the fentanyl and tranq that turn people into almost-literal zombies (since they can't get high from OxyContin and codeine anymore), we now have all the free dopamine hits: gacha-game lootboxes and endless attention-seeking short videos, i.e., TikTok, YouTube Shorts, and Instagram Reels. Sometimes you get your highs from the ups and downs of the stock market and crypto bros doing rug pulls, too. I have to admit I fell into some of those myself lately, mostly gacha games and opening weapon cases in CS:GO, but for some reason I got rid of Instagram, which is a good thing, I guess?

After all, "attention is all you need". It's nothing but the title of the paper that introduced us to "transformers" and enabled all this AI slop lately (note: I'm not against LLM "AI", but I am against using it irresponsibly; vibe coding without domain knowledge is one example), but it's dark humor to me that the title literally describes so many real-world phenomena.

All of that makes you feel numb in the rat race to the bottom. I don't think your argument that we are closer to BNW than 1984 is wrong; it's just that antifa, ICE, and the whole political fiasco make it feel more like 1984 than BNW. Or maybe we have the super deluxe package: both.


The comforts are digital. TikTok, YouTube, Twitch, Instagram, video games.


> You still need to hire someone to do the mass surveillance and policing

Someone, yes, but not millions, maybe not even thousands anymore.

The Stasi needed 1 in 40 of the working population as informers (https://www.amnesty.org/en/latest/news/2015/03/lessons-from-...), but large parts of surveillance can be automated now.

For policing, you may not need that much, either. To put out smaller fires, you only need local superiority in numbers. That can be achieved by having a small force that can be rapidly deployed.

For large uprisings, you can use drones. You'll want to avoid that, though, because it isn't guaranteed to keep you in power.


This!

AI is taking jobs faster than making new ones!

No field is safe, and trying to switch careers over 40 is almost impossible. Even flipping burgers is nearly impossible (very hard to get hired without prior experience at that age).


Over 40 you're at least only about halfway. You can definitely switch; if you're a half-decent thinker, most things are possible. I'm sure you could become an electrician, and that, at least where I live, pays very well. You can't find a plumber here even if you're knee-deep in your own poop. In my home country, my old classmates (I have an electrician degree but became a software engineer) make more as electricians than most programmers do. Not sure what's wrong with someone who can't flip burgers after 40.


Have you tried flipping burgers after 20 years in IT and zero manual labor? A forced downshift is BRUTAL.

Youngsters are 2x faster than you, and precise. I'm not from the USA, though.

I was SHOCKED that I was the worst at flipping burgers at McDonald's. The pace is CRAZY, and by comparison the stress of, say, dropping a production database or shipping 2 months late at a startup is NOTHING.

The work just looks easy, but the noise, speed, heat, and pressure (a constant KPI over your head) are CRAAAAAZY.

Got kicked out after 3 months :| Got a new job in tech support.


The elimination of jobs necessarily 'makes a path' to a post-work society; post-work couldn't exist without it. Beyond that, it isn't in AI companies' power to shape economies and societies for post-work (which is what I assume you're really getting at here). All Altman, Amodei, Hassabis and the others can do is alert policymakers to what's coming, and they're trying pretty hard to do that, aren't they? Often in the teeth of the skepticism we see so much of on this site. Really, if policymakers won't look ahead, the AI companies can't be blamed for the bumps we're going to hit.


>they're trying pretty hard to do that, aren't they

How so? Throwing out the term "UBI" every once in a while doesn't miraculously make it economically viable.


Do you really pay so little attention to the space that you think this is all they do? Almost every public discussion or interview involving these figures turns at some point to society's unpreparedness for what's coming, for instance Amodei's interview last week.

https://www.dwarkesh.com/p/dario-amodei-2


How do these interviews magically make the hard economics of UBI viable? Read up on UBI a little bit, and you'll quickly realize that it's far more expensive than universal healthcare, and we can't even get our politicians onboard with that.


That's uncertain in a post-work economy or even for the transition. Some mechanism will need to exist for the abundance resulting from automation to be distributed fairly - in both the post-work era and during the transition to it. Also measures to ensure production of essential goods that might otherwise disappear with deflation. This is all out of scope for AI companies, unless you fancy putting off a response until full automation, and anointing them as (fingers crossed) benign dictators for life?


Yes, these people are publicly warning about the risks of AI. Altman is promoting regulation that clearly favors OpenAI. This is called regulatory capture. It aims to strengthen one's own position. Furthermore, the claim that these companies cannot shape economies is simply false. They decide how quickly they deploy, which industries they automate, whether they cooperate with unions, etc. These are all decisions that shape the economy.

Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill. You would have to be from another planet (or a sociopath) not to understand that this violates boundary conditions that we implicitly want to leave intact.


> They decide how quickly they deploy, which industries they automate, whether they cooperate with unions, etc. These are all decisions that shape the economy.

They control how quickly they deploy, but I don't see how they have any control over the rest: "which industries they automate" is a function of how well the model has generalised. All the medical information, laws and case histories, all the source code, they're still only "ok"; and how are they, as a model provider in the US, supposed to cooperate (or not) with a trade union in e.g. Brandenburg whose bosses are using their services?

> Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill.

Certainly what I fear.

Any given UBI is only meaningful if it is connected to the source of economic productivity; if a government is offering it, it must control that source; if the source is AI (and robotics), that government must control the AI/robots.

If governments wait until the AI is ready, the companies will have the power to simply say "make me"; if the governments step in before the AI is ready, they may simply find themselves out-competed by businesses in jurisdictions whose governments are less interested in intervention.

And even if a government pulls it off, how does that government remain, long-term, friendly to its own people? Even democracies do not last forever.


> Widespread job losses as a path to post-work

who exactly is paying for you to live and why would they be so kind?


I want to live. And if you threaten my life, I will defend myself with whatever means I have at my disposal. It makes no difference whether you threaten me by taking away my livelihood or by withholding it from me. You therefore have a choice. Either you value my life as you value your own, or there will be war between us. And that is a war you will not win, because you are not only waging it against me, but against all people whose right to life you wish to deny.


Notwithstanding that I do not believe he is competent, Musk is currently talking about turning the entire moon into a space data center factory, specifically with a capacity so large that the resulting products of said factory could freeze the tropics just by blocking out the sun.

It is fortunate for him that those of us who understand the implications of this, do not believe he can do it.

Do you believe he could do it? Would you act against him now, when most people think his success in this endeavour is implausible? Or wait until he demonstrates all the parts necessary, at which point action against him is impossible? Or do you believe his claim that doing this will render work unnecessary rather than, as I fear, making it impossible without also making it unnecessary?

What about everyone else that you think would be on your side? If you need everyone on-side, timing matters too.


Sorry, man, but I can't follow the plot. Why exactly do data centers from the moon block out the sun and freeze the tropics and make work unnecessary? Serious question: Are you okay? I hope you're just making fun of my last answer a little.


> Why exactly do data centers from the moon block out the sun

Musk wants to make a data center *factory* on the Moon, with an output of 1000 TW of satellites per year, which are (supposedly) going to be launched from the moon.

I have done the maths on this, and I suspect Musk used Grok for this plan; those numbers are at the edge of what's plausible given the thermodynamic limits of rearranging atoms, even with engineering that nobody has actually designed yet. But let's disregard my mere opinion that this is beyond him and say he solves all those technical difficulties:

If you built that much each year, then given how long each satellite lasts, the physical size of that many watts of PV-powered satellites is enough to block enough sunlight to lower the average temperature of planet Earth by 33°C immediately, without accounting for any additional effects from ice reflecting more light than unfrozen land and water. Those feedback mechanisms could plausibly make it more like 48°C of cooling.

> and make work unnecessary

Note: I am not making that claim, Musk is. Musk doesn't have a good answer to this, just vague platitudes about how AI can do all the work, not why his AI and his robots are going to give everyone (and not just his fans) luxury.

> Serious question: Are you okay?

No. I see the world's richest man sowing chaos and demanding the removal of all checks on his plan to gain even more power, both through political campaigning and through phrases like "robot army" within his own companies. And when his AI calls itself "Mecha Hitler", the military of the world's largest economy decides to pay for its use, then goes on to threaten competing AI companies that don't want to be involved with the military.


We are living through a time that seems like a completely crazy sci-fi plot. I don't understand why Musk is currently the richest person in the world. I don't understand what is going on politically, especially in the US and around the world, geopolitically, economically, socially, and in terms of information technology. It's as if the world I've known for the first two-thirds of my life has completely drifted away into an absurd alternate reality. It takes a bit of effort for me to keep a clear head. What I can say with some certainty is that someone who actually intends to do what Musk is announcing would behave differently in many ways than Musk does. Musk is ultimately a (rather successful) impostor. I assume that his communication is aimed at eliciting certain reactions from the public and is less about negotiating plausible realities on a factual level. That's why I'm not so interested in playing out scenarios based on the content of his grandiose announcements. I am more concerned about the destabilizing effect and about a third world war, which we may already be in the midst of.


Same.

Just to re-emphasise: I also don't believe Musk.

The earlier question is: when do you decide to believe someone like him? When do you act against someone like him who you do believe? Waiting until he is credible is waiting too late, acting before then makes you look like the villain and you don't get much support.


> act against someone like him

What do you mean? I have the day off today. I'm sitting here in my underwear listening to my washing machine in the background. The sun is shining outside. I went for a walk in the park next door earlier. In an hour (Germany time), I'll cook something for lunch and then go to the garage to put a new rear tire on my motorcycle. Tomorrow, sauna; Sunday, bike ride; and on Monday, back to the office. What I'm trying to say is: I'm not the protagonist whose decision determines whether Musk f*ks up the world or not. And that's not a question of my priorities, but of a realistic assessment of the real scope for action.

If you want to have a real chance of putting someone like Musk in his place, you need to join the largest possible political collective with the right agenda. But looking at the course of the conversation, my respectful recommendation (assuming you're not trolling) would be to focus on your own well-being first.


I mean your own words up-thread:

> I want to live. And if you threaten my life, I will defend myself with whatever means I have at my disposal. It makes no difference whether you threaten me by taking away my livelihood or by withholding it from me. You therefore have a choice. Either you value my life as you value your own, or there will be war between us. And that is a war you will not win, because you are not only waging it against me, but against all people whose right to life you wish to deny.

Like, OK, is that just you blowing off steam or do you have a specific threshold where you'll do anything?


Okay, I understand. The person who wrote the parent post seems to believe that people do not fundamentally have a right to survive, but must assert and maintain this claim transactionally in a market context.

I think that every person has an intrinsic and incommensurable right to survive, and that this right also includes the right to defend oneself when the right to life is questioned or even endangered by others, not only through actions but also through omissions. For example: I must help you in an emergency, and you must help me in an emergency. I must not let you starve, and you must not let me starve. In a good society, these things are regulated institutionally. In this way, individuals are not burdened with the corresponding moral dilemma.

The question of who pays for me to live and why they should do so points in the opposite direction: it suggests that this question needs to be clarified and that I (or any other person) should simply die if I cannot afford to live. I wanted to express that there is an ideological conflict here that could well take on the character of a war, and that my side does not consist of peace-loving hippies, but of people who are prepared to defend themselves very effectively against such a misanthropic ideology.

> do you have a specific threshold where you'll do anything?

This conflict is not fought only once a certain threshold has been reached, but from the outset and continuously, in political struggles, in the struggle for social values and prevailing ethics, etc. Only when there is really no other option is it fought with fists and weapons. If you ask me specifically when the masses will storm the palaces of people like Musk with pitchforks, I can't answer that. For myself, I can say that I still see a lot of scope for political action within the legal frameworks that have been established (at least here in Europe). After World War II, there was a comprehensive redistribution policy throughout the Western world (especially in the US) that we could certainly repeat: top tax rates above 90%, enormous power for trade unions, a rapidly growing middle class, and historically low income concentration. The constraints are different today than they were then, but the only thing that is really necessary is the willingness to put things that are currently upside down back on their feet.


That's a summary of my point, yes.

If I phrase it that succinctly, people tend to reply "democracy!" without considering who has the power and how they behave.


Post-work? Is this from the same lot who can't work from the office because they'd have a nervous breakdown? Who exactly pays for my existence in this world where I don't have to work?


They ARE, it's just that the post-work society is limited to the people who own the AIs.


Post-work in that sense is as old as civilization. There is no post-work without some kind of domination of your fellow man.


Big fumble to be unaware of how this offhand comment would be taken out of context.

He’s clearly saying “lots of important things consume energy” not “let’s replace humans with GPUs” or “humans are wasteful too”.

If Altman is to blame for anything, it’s that AI is a scissor-generator extraordinaire.


I haven't watched the whole interview. In the clip, a couple of things jump out:

1. He was speaking to a receptive audience. Note the head nods when he starts to make the comparison between the energy for bringing a human up to speed and that for training an AI.

2. He is trying to rebut a _specific_ argument against his product, that it takes even more energy to do a task than a human does, once its training is priced in. He thinks that this is a fair comparison. The _fact_ that he thinks that this is a fair comparison is why I think it is too generous to say that this is just an offhand comment. Putting an LLM on an equal footing with a human, as if an LLM should have the same rights to the Earth as we do, is anti-human.

It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.


> 2. He is trying to rebut a _specific_ argument against his product, that it takes even more energy to do a task than a human does, once its training is priced in. He thinks that this is a fair comparison. The _fact_ that he thinks that this is a fair comparison is why I think it is too generous to say that this is just an offhand comment. Putting an LLM on an equal footing with a human, as if an LLM should have the same rights to the Earth as we do, is anti-human.

> It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.

Exactly. Perhaps in Altman's world, a human exists specifically to do tasks for him. But in reality, that human was always going to exist and was going to use those 20 years of energy anyway; they only happened to be employed by his rich ass when he wanted them to do a task. It's not equivalent to burning energy on training an LLM to do that task.


Is Altman a scientist? I trust scientists to make fine-grained arguments!

AFAIK a CEO's job includes setting vision.

This example sets a post-human / less-valuable-human paradigm.


> as if an LLM should have the same rights to the Earth as we do,

I don't see him calling for an LLM to have rights. I don't think this is part of how OpenAI considers its work at all. Anthropic is open-minded about the possibility, but OpenAI is basically "this is a thing, not a person, do not mistake it for a person".

> It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.

His point is flawed in other ways, like the limited competence of the AI, and the fact that even an adult human eating food for 20 years has an energy cost on the low end of the estimated cost of training a very small, very rubbish LLM, nowhere near the cost of training one anyone would care about. And even the fancy models are only OK, not great, and there are lots of models being trained rather than this being a one-time thing. Or, in the other direction: each human needs to be trained separately, and there are 8 billion of us. What he says in the video doesn't help much either; it's vibes rather than analysis.

But your point here is the wrong thing to call a flaw.

The human is here anyway? First, no: *some* humans are here anyway, but various governments are currently increasing pension ages due to the insufficient number of new humans available to economically support people who are claiming pensions.

Second: so what if it was yes? That argument didn't stop us substituting combustion engines and hydraulics for human muscle.


The problem is that he is now beginning to make comparisons of AI versus Humans, as in it's a competition more than an augmentation.


> He’s clearly saying “lots of important things consume energy” not “let’s replace humans with GPUs” or “humans are wasteful too”.

When people have to interpret what you are saying, assuming you are too intelligent and empathetic to mean what you actually said, I think that says a lot.

"What he said is wrong, illogical and dangerous, but you have to forget it and consider that he probably meant this completely different thing that I will expose to you. Because he cannot be rich and powerful AND capable of expressing basic ideas on his own, what did you expect?"


And for that matter, why jump to giving him the benefit of the doubt? I'd just assume the worst intentions until I have any evidence otherwise.


For people rich enough to have dedicated PR staff talking in their field of expertise, there’s no such thing as an offhand comment.


Great to talk about choices in terms of comparison, but this was a really stupid delivery.


I didn't read/hear it as reducing human life to 'training energy', but I don't like the comparison at the technical level.

Firstly, the math isn't even close. A human being consumes maybe 15 MWh of food energy from years 0 to 20; modern frontier models take on the order of 100,000 MWh to train. That's roughly a 10,000x difference. Furthermore, the human is actively doing 'inference' (living, acting, producing) during those 20 years of training, and is also doing lots of non-brain stuff.

Beyond the energy math, it's comparing apples to oranges. A human brain doesn't start out as a blank slate; it has billions of years of evolutionary priors for language and spatial reasoning that LLMs have to learn from scratch, which could explain why a human can do some things more cheaply. Also, the learning material available to a human is inherently created to be easily ingested by a human brain, whereas a blank LLM first needs to build the capacity to process that data.

Altman seems to hint at a comparison with the whole of human evolution, but that seems unfair in the other direction, because humans and human evolution had to make discoveries from scratch by trial and error, whereas LLMs get to ingest the final "good stuff". Either way you slice it, it's just not a good comparison, though not an 'inhuman' or immoral one.
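The 15 MWh food-energy figure can be sanity-checked in a few lines (a sketch in Python; the 2,000 kcal/day average and the 100,000 MWh training figure are the comment's assumptions, not measurements):

```python
# Napkin math for the human "training energy" comparison above.
kcal_per_day = 2000                # assumed lifetime-average food intake, ages 0-20
kwh_per_kcal = 4184 / 3.6e6        # 1 kcal = 4184 J; 1 kWh = 3.6e6 J
days = 20 * 365.25

human_mwh = kcal_per_day * kwh_per_kcal * days / 1000
frontier_training_mwh = 100_000    # assumed order-of-magnitude frontier figure

print(f"Human food energy, ages 0-20: ~{human_mwh:.0f} MWh")
print(f"Training / human ratio: ~{frontier_training_mwh / human_mwh:,.0f}x")
```

This lands around 17 MWh for the human, so the "maybe 15 MWh" and "~10,000x" figures hold up as order-of-magnitude claims.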


A US resident consumes 76 MWh per year [0], so 1.52 GWh over 20 years. A single model can be trained once and used by millions. Therefore LLMs are ~10000x more energy efficient than humans.

https://ourworldindata.org/energy-production-consumption#per...


Your numbers include transport etc. Sam's numbers were about what the human body itself uses for training, hence why I used caloric consumption.


Post-human thinking by the CEO is not helping me feel comfortable with the vision-setting going on at OpenAI.

Edit: Or perhaps more correctly, "less valuable human" thinking. Which is more appropriate?


Good question. It sounds like post-humanism, which even in left art circles was only considered 'interesting' ten years ago (like 'post-Anthropocene'). These aren't very useful terms, so I appreciate the nuance of 'less valuable human'. It's not so catchy, though; maybe we need to dig deeper. I'm sure this has been discussed before.


I see some folks here defending Altman because it was an off-the-cuff remark in front of a receptive audience. But why does that make the comment acceptable? Would you forgive me if I talked about eating babies but defended myself by saying I was speaking to a receptive audience?

Most charitably, it's a dumb thing to say. It compares two unrelated things if you see the value of human life to be more than just answering prompts. Less charitably, the argument is evil: if he was trying to make a sincere apples-to-apples comparison, it implies that he doesn't value human life beyond the labor his company can automate.

I can understand edgy teenagers making arguments like that on LessWrong forums, but Altman ought to know better. He either doesn't, or he sincerely believes what the comment implies.


The problem I see is that in our society, CEOs are chosen for their ability to convince that they can increase productivity. Not for their ability to improve the life of people.

Just like the paperclip AI issue, CEOs are optimising for arbitrary metrics, and they are really good at that (because we select them precisely for that).

So obviously, as soon as you start wondering about how competent a CEO is at talking about life, you're in for a treat. He obviously has no idea about life. He is just a successful paperclip production machine.

What scares me is that we select those people for their ability to convince that they will generate money, in the hope that they will actually do that, and then we value their opinion about completely unrelated topics.

You may as well ask a professional curling athlete what they think about the problem of AI and energy. Not that they'd necessarily say something as dumb as Altman did, of course, but you wouldn't treat them as an expert in the field of... you know... the impact of energy on humanity and life in general.


How many people is he willing to let starve for the sake of his ego, power, and wealth?


It's okay, we can just eat cake instead!


One could feed several hundred thousand kids to adulthood for the cost of training OpenAI's biggest models.


What a depressing view of life. I don't expect him to take on some religious or philosophical view, but come on, how could you grow up somewhere wonderful, start a successful company with a lot of people you probably like and enjoy working with, have enough money to buy an island and still summarize life like that.

I prefer Richard Branson's worldview. He's rich, but seeing the way he talks about his late wife and her memory warms my heart. I envy him for the human parts of his life, not just the success.


Power just unequivocally screws up most people. This past year has really crystallized how few good leaders there are.


CEOs are a mix of scary, funny, innovative, and naive. This is the first time an LLM has been compared with a human in terms of energy. I will not comment on the foolishness and superficiality of the quote. I would just add that a human can meet another human and together they can make another human.


People dismiss this as a meme too quickly, but I think it's a good thought experiment, not only for comparing energy consumption but also learning efficiency. AI is often criticized for its low learning efficiency, but compared to a human it doesn't look too bad. Say a human becomes an AGI-level learner by the time they are 14. Human vision is approx. 500 megapixels, which works out to approx. 1.7 GB per second of vision data. That means it takes approx. 800 PETABYTES of data to 'pre-train' a human into a good-enough generalist learner. Compare Llama 4 from Meta, whose training data set consisted of 30 trillion tokens: roughly 120 TB, a mere 0.12 petabytes.

I am well aware this is flimsy napkin math at best, but I find that comparing LLMs to humans in a more serious tone is a fun and useful thought exercise.
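That napkin math can be sketched in Python (every input is the comment's assumption, not a measurement; a strict replication lands nearer ~750 PB than 800 PB, same ballpark):

```python
# Reproducing the data-volume comparison above. Assumptions from the comment:
# 500 MP vision ~ 1.7 GB/s, 14 years of it, 30T training tokens at ~4 bytes each.
PB, TB, GB = 1e15, 1e12, 1e9

vision_bytes_per_s = 1.7 * GB
seconds_in_14_years = 14 * 365.25 * 24 * 3600
human_bytes = vision_bytes_per_s * seconds_in_14_years   # human 'pre-training' data

llama4_bytes = 30e12 * 4                                 # 30T tokens * ~4 B/token

print(f"Human visual data by age 14: ~{human_bytes / PB:.0f} PB")
print(f"Llama 4 training data:       ~{llama4_bytes / TB:.0f} TB")
print(f"Ratio: ~{human_bytes / llama4_bytes:,.0f}x more raw data for the human")
```

Counting every second of 14 years (no sleep discount) is itself a generous assumption in the human's "favor"; discounting sleep would cut the ratio by roughly a third.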


To me, the whole OpenClaw situation is proof enough of how desperate OpenAI must be for fresh (real, non-circular) cash.

In that light, Altman saying things like this is not really surprising. On the contrary, it only reinforces their desperation to me.


Sam Altman (and everyone else in the field) complains that estimates of AI's water and power consumption are wrong, but instead of just publishing the data they come up with this crap.


What data do you want to see published about water consumption? Here are Google's tiny, tiny estimates[1], for example. AFAIK AI water usage has always been a made-up issue, spread by people who never realized how much water humanity routinely uses.

[1] https://cloud.google.com/blog/products/infrastructure/measur...


From the post you linked "These findings do not represent the specific environmental impact for all Gemini App text-generation prompts nor are they indicative of future performance."

I would like to see independent agencies having access to the various companies to provide reliable estimates. Just because humanity as a whole consumes a lot of water does not justify the extra consumption of water AND energy AND land AND monetary resources that is wasted on AI crap.


An AI model takes about 100 to 150 MWh to train.

A human at rest runs at ~100 W, up to ~400 W for an elite athlete under effort.

So 20 years at 200 W (I'm being generous here) ends up being ~35 MWh: still cheaper, and that already covers "inference", which also runs at under 200 W!


The reductionism, the comparison of a human life to a corporate product, is disgusting, but it's valuable to see how they truly view the world they are creating.

Their idea of a person's value seems to be less than that of the Soviet communists at this point: nothing but work units.


It was meant tongue-in-cheek; if we're doing wild comparisons, I might as well do one of my own.


How much energy does it take to feed, clothe, house, entertain, and transport that human to 18? Probably $500K worth.


How much does it take to build the data centers that house the inference, plus the involved logistics, infrastructure setup, bribery, marketing, and organisational structure behind it? Easily in the hundreds of billions.


> Probably $500K worth.

What life standards do you have!?


I think this reveals a great deal about the thinking of the ruling elites.

The K-shaped recovery demonstrated that the economy can continue to thrive when consumption by the lowest earners is replaced by, and concentrated among, earners at the top. This showed the elites that we don't actually need as many consumers to grow the economy, and that it's possible to redistribute wealth upward without losing growth.

These public comments show that the elites are increasingly comfortable making it explicit that, in their opinion, there are too many "useless eaters". The shift is that the "useless eaters" used to be confined to the Third World while an imperial core was preserved; now everyone who isn't them, First World or Third, counts as one.

Very dangerous thinking, but at least it's out in the open now.

They want to capture the entire value of everyone's labor and hoard it for themselves, and discard the people that produced it.


This is a profound category error. What Altman reduces to a 20-year 'training' cycle fueled by 'energy' is what we, in the actual world, call life. It is a stunningly hollow perspective that uses the language of industrial output to describe the human experience. While he is likely being provocative to keep his product at the center of the cultural conversation, it probably exposes something about him.


Exactly why we need to rid ourselves (by taxes) of billionaires. Those people have way too much power, and are often stupid dumbasses who just got rich randomly (right place at the right time, or because their parents were rich in the first place), but mostly spew stupid lunacies.


This is a super disingenuous take. He was very obviously making a specific point, not trying to express a perspective on the value of humanity.


I understand he’s making a technical point about efficiency, but language isn't neutral and I think it betrays something deeper. It's such a glib and shallow point too that I think it should be called out since he has a track record of saying some incredibly shallow things about AI, people, politics, and everything really.


The meaning of a message is what has been understood.


Can you please make your substantive points without being snarky or condescending? Your comment would be fine without that last bit.

https://news.ycombinator.com/newsguidelines.html


The meaning of a message is what is intended + communicated, assuming those intentions were communicated clearly.

Willfully interpreting otherwise (especially uncharitably so) is the very definition of being disingenuous, which is pretending to not know what was really meant.


I disagree: if a message is open to such disingenuous interpretations, then its meaning has not been formulated clearly enough. I use the rule: (1) say what you will communicate, (2) communicate, (3) say what you have communicated; also the six W's...


No one communicates that way. It's not practical. Almost all expressions can be uncharitably interpreted by a listener who doesn't like you, and thus has a motive to quote your sentences and disingenuously pretend you're saying something much more dastardly than you clearly intended.


Altman is not a speciesist, I guess.

Context:

Elon Musk is perhaps the world’s most famous doom-monger and has repeatedly sounded the alarm about the possibility of super-smart machines wiping out humanity.

But Google founder Larry Page allegedly dismissed these fears as ‘speciesist’ during an argument at a Napa Valley party in 2015.

A top professor at the Massachusetts Institute of Technology (MIT) has claimed the two tech moguls clashed in a ‘long and spirited debate’ in the early hours of the morning.

In his book Life 3.0: Being Human In The Age of Artificial Intelligence, Max Tegmark wrote: ‘[Page’s] main concerns were that AI paranoia would delay the digital utopia and/or cause a military takeover of AI that would fall foul of Google’s “don’t be evil” slogan.

‘Elon kept pushing back and asked Larry to clarify details of his arguments, such as why he was so confident that digital life wouldn’t destroy everything we care about.

‘At times, Larry accused Elon of being “speciesist”: treating certain life forms as inferior just because they were silicon-based rather than carbon-based.’

(https://metro.co.uk/2018/05/02/elon-musks-fears-artificial-i...)


Just more word salad from Altman.


To add some math to the discussion:

- A human uses between 100 W (a naked human eating 2,000 kcal/day) and 10 kW (first-world per-capita energy consumption).

- Frontier models need something like 1-10 MW-years to train.

- Inference requires 0.1-1 kW of compute.

So it takes thousands of human-years to train a single model, but they run at around the same wall-clock power consumption as a human. Depending on your personal opinion, they are also 0.1-1000x as productive as the median human in how much useful work (or slop) they can produce per unit time.
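Using the parent's own ranges (100 W to 10 kW per human, 1-10 MW-years per training run), the human-year equivalents work out as follows; a sketch, all figures are the parent's assumptions:

```python
# Bound the "human-years per training run" ratio using the parent's ranges.
human_power_w = (100, 10_000)      # 100 W metabolic up to 10 kW per-capita
training_mw_years = (1, 10)        # assumed frontier training energy

lo = training_mw_years[0] * 1e6 / human_power_w[1]  # cheapest run / hungriest human
hi = training_mw_years[1] * 1e6 / human_power_w[0]  # costliest run / leanest human
print(f"{lo:.0f} to {hi:.0f} human-years per training run")  # 100 to 100000
```

"Thousands of human-years" sits comfortably inside that 100-100,000 range.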


The math is simpler: one human is irreplaceable by AI.

Therefore their value is infinite. Therefore Altman's hypothesis is toilet-paper thin.


I remember when toilet paper was like ddr5


The human brain is also the product of billions of years of evolution; we branched off from our common ancestor 7-9 million years ago. We encode quite a lot of structure and information that is essential for intelligence, so counting only a single lifetime of training is incomplete.

If you calculate 100 W sustained over 7 million years, that works out to roughly 6,100 GWh (about 6.1 TWh) of "training".
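Redoing that arithmetic explicitly (a toy model: a single 100 W lineage sustained for 7 million years):

```python
# Energy of "evolutionary pretraining": one 100 W metabolism
# running continuously for 7 million years.
HOURS_PER_YEAR = 24 * 365.25       # 8766 hours
watts = 100
years = 7_000_000

twh = watts * years * HOURS_PER_YEAR / 1e12   # Wh -> TWh
print(f"~{twh:.1f} TWh")                      # ~6.1 TWh
```

Several orders of magnitude above any single training run, though of course evolution wasn't optimizing one brain.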


If you really want to go down that path, then AIs are the product of human ingenuity and labor, so you have to amortize all of that into AI training too. The numbers become pretty meaningless very quickly. That sand didn't up and start thinking on its own, you know.


That's the NRE of getting to where we are and having these LLMs.


He really is a total piece of shit isn't he.


He has proved it over and over and over again.


So he's comparing a human being to AI, finally showing what our AI overlords think of humanity: we're just wasteful resources to be replaced by more efficient tools.


It has become clear to me that Altman considers humans superfluous and a nuisance.

That a human does not have value in and of itself.


I’m not sure it’s possible to conclude what he actually believes from public statements. I do not trust him to tell the truth about anything related to AI.


To be fair, it is not just him. There is an entire caste of people across organizations who see employees as a problem. It is absolutely fascinating to watch, because those people tend to sit somewhere in the management class and appear to derive a fair amount of happiness from said managing (and we can argue whether those skills are any good).


This goes beyond just employees.

His comparison devalues the basic value of a human life.


You would need empathy for that.


Ethics would suffice. Or a basic humanistic education. Unfortunately, that is precisely what these people seem to lack.


Poison Ivy.


Well, if you consider the theoretical goal of a machine that has all the answers, then you'd understand why he thinks that way.


Is it possible to become wealthy like this AND value human life?

Why does it turn out that every single billionaire is also some combination of narcissist, pedophile, petty tyrant, or just utter freakazoid?


Top philanthropists include Jamsetji Tata (donated $102.4 billion), Bill and Melinda Gates ($75.8 billion), and Warren Buffett (has pledged to donate 99% of his wealth). Andrew Carnegie gave away 85% of his wealth, including the construction of over 2,500 public libraries.


Gates took more than he gave, for example:

https://www.folklore.org/MacBasic.html


Carnegie did that to whitewash his public image while he worked his laborers nonstop, to the point of mutilation or death. When are you going to the library when you work 996 or more?


Bill Gates is not a great example given the recent revelations surrounding him and the nature of his divorce in the latest batch of the Epstein files.


Yes. Colors his philanthropy.

While I hope Warren Buffett isn't cut from the same cloth, the odds are looking quite bad. It would be nice to know there are some out there who can just be smart, get rich, and then NOT damn your immortal soul. But it's looking grim.


Experience would point to extreme wealth changing almost everyone who gains it, for the worse.


Only one billionaire has ever given away enough money while alive to stop being a billionaire. Ever. Pledges don't count. Also, Warren Buffett giving away 99% of his wealth would still leave him a billionaire.


Chouinard?


Chuck Feeney

Also I stand corrected, Chouinard is the other instance.


Still very rare


Because most people who are not some combination of the above tap out somewhere around the $100m-$500m mark or earlier, because they don't have any reason to get more.


Power corrupts the mind. They live in a different world


He may well be as you say, but nothing in this video is evidence of that. To the extent he's a slimy sociopath, he's not openly twirling his metaphorical moustache here, and he's a lot better at hiding villainy than most of the better-known slimy sociopaths in the world today (for comparison, Musk actually tweeted "If this works, I’m treating myself to a volcano lair. It’s time."; this isn't even at that level).

He's responding to all the people very upset about how much energy AI takes to train.

That said, a quick over-estimate of human "training" cost is 2500 kcal/day * 20 years = 21.21 MWh[0], which is on the low end of the estimates I've seen for even one single 8 billion parameter model.
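For anyone who wants to reproduce that figure without WolframAlpha, the same calculation as a quick sketch:

```python
# 2500 kcal/day of food for 20 years, converted to MWh.
KCAL_TO_J = 4184                    # 1 kcal = 4184 J
daily_j = 2500 * KCAL_TO_J          # ~10.5 MJ per day
total_j = daily_j * 365.25 * 20     # 20 years of eating
mwh = total_j / 3.6e9               # 1 MWh = 3.6e9 J
print(f"~{mwh:.1f} MWh")            # ~21.2 MWh
```
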

[0] https://www.wolframalpha.com/input?i=2500+kcal%2Fday+*+20+ye...


The AI "movement" is hermetic magick. The goal is to bring about God in silico, because if you're not involved in so doing, God may punish you for eternity when he emerges:

https://en.wikipedia.org/wiki/Roko's_basilisk

Next to the might and terror of the machine God, mere humans are, individually, indeed as nothing...


Most of the people working on AI, including those in the specific sub-domain where Roko's basilisk was coined (which isn't the majority of the field by a long shot), have been rolling their eyes at it since the moment it was coined.

Even a brief moment of thought should reveal that, even if you think the scenario likely, there are an infinite number of potential equivalent basilisks and you'd need to pick the correct one.

I'm less worried about Roko's basilisk*, and rather more worried about the people who say this:

  I think you have said in fact, and I'm gonna quote, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. End quote. You may have had in mind the effect on, on jobs, which is really my biggest nightmare in the long term.
- https://www.techpolicy.press/transcript-senate-judiciary-sub...

Because this is clearly not taking the words themselves at face value; either you should dig in and say "so why should we allow it at all then?" or you should dismiss it as "I think you're making stuff up, why should we believe you about anything?", but not misread such a blunt statement.

(If you follow the link, Altman's response is… not one I find satisfying).

* despite the people who do take it seriously, as such personalities have always been around and seldom cause big issues by themselves; only if AI gets competent enough to help them do this do they become a problem, but by that point hopefully it's also competent enough to help everyone stop them


>only if AI gets competent enough to help them do this do they become a problem, but by that point hopefully it's also competent enough to help everyone stop them

Tell me something; have you ever built something you later regret having built? Like you look back at it, accept you did, but realize that if you'd just been a bit wiser/knowledgeable about the world you wouldn't have done it? In the moment you're doing the thing you'll regret, you don't know in that moment anything better to do until the unpleasant consequences manifest, granting you experience.

If you haven't experienced that yet; fine, but we shouldn't be betting on existential problems with "hopefully" if we can at all avoid it. Especially when that hopefully clause involves something we're making the decision to craft, with means and methods we don't fully understand/aren't predictively ahead of, and knowing that the way these methods work have a tendency to generate/provide the basis to generate a thoroughly sycophantic construct.


Sure.

To your point, my P(doom) is 0.1, but the reason it's that low is that I expect a lot of people to use sub-threshold AI to do very dangerous things which render us either (1) unwilling or (2) unable to develop post-threshold AI.

The (1) case includes people actually taking this all seriously enough, which as per your final paragraph, I agree with you that people are currently not.

Things like Roko's basilisk are a strict subset of that 0.1; there's a lot of other dooms besides that one.


Sci-fi mumbo jumbo.


Sci-fi mumbo jumbo that Dario, Sam, and the rest have read and were profoundly influenced by. Yudkowskianism is marked by a strong apocalyptic "evil AI will kill us all" streak, and a big part of the AI industry is a race to build "good" AI before "evil" AI gets a fighting chance. Of course that all goes out the window just like Google's "don't be evil" once advertisers and the Pentagon start wafting the scent of money into the air...

"Why don't they eat cake?"


Real "This must hit so hard if you're stupid" moment.


Who cares about humans, it's 2026.

We only care about pelicans riding bicycles



