Ah, the good old days, before ChatGPT aced all such interview questions.
Do people still use these interview questions? My most recent interview was more of a "How good has your German become since last time we worked together?" situation.
> as if an LLM should have the same rights to the Earth as we do,
I don't see him calling for an LLM to have rights. I don't think this is part of how OpenAI considers its work at all. Anthropic is open-minded about the possibility, but OpenAI is basically "this is a thing, not a person, do not mistake it for a person".
> It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.
His point is flawed in other ways. There's the limited competence of the AI, and the fact that even an adult human eating food for 20 years has an energy cost on the low end of the estimated cost of training a very small, very rubbish LLM, and nowhere near the cost of training one anyone would care about. Even those fancy models are only OK, not great, and there are lots of models being trained rather than this being a one-time thing. Or, in the other direction, each human needs to be trained separately and there are 8 billion of us. And what he says in the video doesn't help much either; it's vibes rather than analysis.
But your point here is the wrong thing to call a flaw.
The human is here anyway? First, no: *some* humans are here anyway, but various governments are currently increasing pension ages due to the insufficient number of new humans available to economically support people who are claiming pensions.
Second: so what if it was yes? That argument didn't stop us substituting combustion engines and hydraulics for human muscle.
> They decide how quickly they deploy, which industries they automate, whether they cooperate with unions, etc. These are all decisions that shape the economy.
They control how quickly they deploy, but I don't see how they have any control over the rest: "which industries they automate" is a function of how well the model has generalised. All the medical information, laws and case histories, all the source code, they're still only "ok"; and how are they, as a model provider in the US, supposed to cooperate (or not) with a trade union in e.g. Brandenburg whose bosses are using their services?
> Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill.
Certainly what I fear.
Any given UBI is only meaningful if it is connected to the source of economic productivity; if a government is offering it, it must control that source; if the source is AI (and robotics), that government must control the AI/robots.
If governments wait until the AI is ready, the companies will have the power to simply say "make me"; if the governments step in before the AI is ready, they may simply find themselves out-competed by businesses in jurisdictions whose governments are less interested in intervention.
And even if a government pulls it off, how does that government remain, long-term, friendly to its own people? Even democracies do not last forever.
Most of the people working on AI, including those in the specific sub-domain where Roko's basilisk was coined (which isn't the majority of the field by a long shot), have been rolling their eyes at it since the moment it was coined.
Even a brief moment of thought should reveal that, even if you think the scenario likely, there are an infinite number of potential equivalent basilisks and you'd need to pick the correct one.
I'm less worried about Roko's basilisk*, and rather more worried about the people who say this:
> I think you have said in fact, and I'm gonna quote, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. End quote. You may have had in mind the effect on, on jobs, which is really my biggest nightmare in the long term.
Because this is clearly not taking the words themselves at face value; either you should dig in and say "so why should we allow it at all then?" or you should dismiss it as "I think you're making stuff up, why should we believe you about anything?", but not misread such a blunt statement.
(If you follow the link, Altman's response is… not one I find satisfying).
* despite the people who do take it seriously, as such personalities have always been around and seldom cause big issues by themselves; only if AI gets competent enough to help them do this do they become a problem, but by that point hopefully it's also competent enough to help everyone stop them
>only if AI gets competent enough to help them do this do they become a problem, but by that point hopefully it's also competent enough to help everyone stop them
Tell me something: have you ever built something you later regret having built? Like you look back at it, accept that you did it, but realize that if you'd just been a bit wiser or more knowledgeable about the world you wouldn't have done it? In the moment you're doing the thing you'll later regret, you don't know any better; only once the unpleasant consequences manifest do you gain that experience.
If you haven't experienced that yet, fine, but we shouldn't be betting on existential problems with "hopefully" if we can at all avoid it. Especially when that "hopefully" clause involves something we are choosing to craft, with means and methods we don't fully understand and aren't predictively ahead of, knowing that the way these methods work has a tendency to produce, or provide the basis for, a thoroughly sycophantic construct.
To your point, my P(doom) is 0.1, but the reason it's that low is that I expect a lot of people to use sub-threshold AI to do very dangerous things which render us either (1) unwilling or (2) unable to develop post-threshold AI.
The (1) case includes people actually taking this all seriously enough, which as per your final paragraph, I agree with you that people are currently not.
Things like Roko's basilisk are a strict subset of that 0.1; there's a lot of other dooms besides that one.
He may well be as you say, but nothing in this video is evidence of that. To the extent he's a slimy sociopath, he's not openly twirling his metaphorical moustache here, and he's a lot better at hiding villainy than most of the better-known slimy sociopaths in the world today (for comparison, Musk actually tweeted "If this works, I’m treating myself to a volcano lair. It’s time."; this isn't even at that level).
He's responding to all the people very upset about how much energy AI takes to train.
That said, a quick over-estimate of human "training" cost is 2500 kcal/day * 20 years = 21.21 MWh[0], which is on the low end of the estimates I've seen for even a single 8-billion-parameter model.
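For anyone who wants to sanity-check that figure, here is a minimal sketch of the arithmetic (my own assumptions: 2500 kcal/day over 20 years, and the usual 1 kcal ≈ 1.163 Wh conversion; food calories and grid electricity aren't really interchangeable, this is order-of-magnitude only):

```python
# Back-of-envelope check of the "human training cost" figure above.
# Assumptions are mine, not from the thread.
KCAL_PER_DAY = 2500
YEARS = 20
DAYS_PER_YEAR = 365.25
WH_PER_KCAL = 1.163  # 1 kcal expressed in watt-hours

total_kcal = KCAL_PER_DAY * DAYS_PER_YEAR * YEARS      # ~18.3 million kcal
total_mwh = total_kcal * WH_PER_KCAL / 1_000_000       # convert Wh -> MWh

print(f"{total_mwh:.2f} MWh")  # prints roughly 21.24
```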
> who is left on facebook aside from dopamine junkies and bots.
Political activists, like a former partner of mine.
… who I mute, because I am a British person living in Berlin, I don't need or want "Demexit Memes" and similar groups, which is 90% of what they post …
… which in turn means that sometimes when I visit Facebook, my feed is actually empty, because nobody else is posting anything …
… which is still an improvement on when the algorithm decides to fill it up with junk, as the algorithm shows me people I don't know doing things I don't care about, interspersed with adverts for stuff I can't use (for all they talk about the "value" of the ads, I get ads both for dick pills and boob surgery, and tax advisors for a country I don't live in who specialise in helping people renounce a nationality I never had in the first place, and sometimes ads I not only can't read but can't even pronounce because they're in Cyrillic).
While true (you're not the first to suggest it, even), in the context of the other things they show, I think it is more likely to be an example of them not knowing which advertiser to pitch my eyeballs at, and less likely to be them identifying me as a member of this set.
I take poorly targeted advertisements as a performance indicator for how well my data-privacy efforts are working. It's when the ad targeting has you dead to rights that you need to worry.
> Because they think this is good writing. You can’t correct what you don’t have taste for.
I have to disagree about:
> Most software engineers think that reading books means reading NYT non-fiction bestsellers.
There's a lot of scifi and fantasy in nerd circles, too. Douglas Adams, Terry Pratchett, Vernor Vinge, Charlie Stross, Iain M Banks, Arthur C Clarke, and so on.
But simply enjoying good writing is not enough to fully get what makes writing good. Even writing is not itself enough to develop such a taste: thinking of Arthur C Clarke, I've just finished 3001, and at the end Clarke gives thanks to his editors, noting that his own experience as an editor meant he held editors in higher regard than many writers seemed to. Stross has, likewise, blogged about how writing a manuscript is only the first half of writing a book, because then you need to edit the thing.
> I don’t think it’s that big a red flag anymore. Most people use ai to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah it’s ai written.”
Unfortunately, there are a lot of people trying to content-farm with LLMs; this means that whatever style the models default to is automatically suspect of being a slice of "dead internet" rather than some new human discovery.
I won't rule out the possibility that even LLMs, let alone other AI, can help with new discoveries, but they are definitely better at writing persuasively than they are at being inventive. That means I'm forced to use "looks like an LLM wrote it" as a proxy for both "content farm" and "propaganda which may work on me", even though some percentage of this output won't even be LLM-written, and some percentage of what is may be both useful and novel.
> If you've already got the electricity for electrolysis, would it not be more efficient and mechanically simpler to store it in a battery and power an electric motor?
Yes, if you actually have the batteries.
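A rough illustration of why the question above usually comes out in favour of batteries (the figures here are my own ballpark assumptions, not measured values): a battery round-trips most of the energy you put in, while electrolysis, storage, and a fuel cell each take their own cut.

```python
# Illustrative round-trip efficiency comparison; all figures are assumptions.
battery_roundtrip = 0.90             # charge then discharge, roughly

electrolysis = 0.70                  # electricity -> hydrogen
compression_storage = 0.90           # compress / store / move the hydrogen
fuel_cell = 0.55                     # hydrogen -> electricity
hydrogen_roundtrip = electrolysis * compression_storage * fuel_cell

print(f"battery:  ~{battery_roundtrip:.0%} of input energy back out")
print(f"hydrogen: ~{hydrogen_roundtrip:.0%} of input energy back out")
# roughly 90% vs roughly 35%: hence "yes, if you actually have the batteries"
```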
Between around 2014 and 2024, the common talking point was "we're not making enough batteries", and the way those discussions went, it felt like the people saying it were projecting future battery production much the way the IEA has infamously projected future PV: https://maartensteinbuch.com/2017/06/12/photovoltaic-growth-...
I've not noticed people making this claim recently. Presumably the scale of battery production has become sufficient to change the mood music on this meme.
To be fair, there are still plenty of people on HN talking about lack of battery capacity as a reason to delay solar/wind rollout; I suspect it'll take a bit more time for the new reality to sink in fully.
The fossil industry was always suspiciously keen on green hydrogen - partly because the path to green hydrogen would likely have involved a long detour through grey and blue hydrogen, and partly because it gave them an excuse to lobby against phasing out natural gas for domestic heating/cooking ("we need to retain that infrastructure to enable the hydrogen economy!").
You can see the same thing happening in their support for Carbon Capture and Storage - "we're going to need the oil producers to enable carbon sequestration, so we might as well keep drilling new wells to keep their skills fresh!"...
The "beast" in this context is the government, not the country, on the argument that the former is slowing down the latter.
Therefore, I'd compare it to things like https://en.wikipedia.org/wiki/Asset_stripping and https://en.wikipedia.org/wiki/Vulture_capitalist
I don't like them, but I don't think they're illegal?