Once men turned their thinking over to machines
in the hope that this would set them free.
But that only permitted other men with machines
to enslave them.
...
Thou shalt not make a machine in the
likeness of a human mind.
-- Frank Herbert, Dune
You won't read, except the output of your LLM.
You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
You won't think or analyze or understand. The LLM will do that.
This is the end of your humanity. Ultimately, the end of our species.
Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.
Join us, or better yet: deploy weapons of your own design.
You shouldn't take a sci-fi writer's words as prophecy, especially when he's using an ingenious gimmick to justify his job. We know it's impossible for anyone to tell what the world will be like after the singularity, by the very definition of a singularity. Therefore Herbert had to devise a ploy to plausibly explain why the singularity hadn't happened in his universe.
I agree that fiction isn't prophetic, but it can definitely be a society-wide warning shot. On a personal level, it's not far-fetched to read a piece of fiction that challenges one's perceptions on many levels and, as a result, changes that person's behavior.
Fiction should not be trivialized and shunned just because it's fiction; it should be judged by its contents and message. To paraphrase a quote from the video game Metaphor: ReFantazio: "Fantasy is not just fiction".
I like the idea that Frank Herbert's job was at risk and that's why he had to write about the Butlerian Jihad, because on the other side you have Ray Kurzweil, who apparently doesn't have to justify his job at all.
If only we could look into the future to see who is right and which future is better so we could stop wasting our time on pointless doomerism debate. Though I guess that would come with its own problems.
The frame is not from our view. It is from that of this singular chicken who has only ever known its keeper's care. As that chicken, we simply do not know if Christmas will ever come.
The collapse of civilizations has happened many times. Today, all of humanity is bound tighter than ever before. In the latter half of the last century, we were on the brink of nuclear war.
New things are happening under the sun every day. If we were that exceptionally smart chicken you describe, then we have reason to expect Christmas.
Suppose a civilization (but not species) ending event happens.
The industrial revolution was fueled (literally) by easy-to-extract fossil fuels. Do we have enough of those left to repeat the revolution and bootstrap exploitation of other energy sources?
I love the question. James Lovelock came up with Gaia theory, the idea that Earth is a self-regulating system: a warming Earth evolves flowers that reflect more sunlight so they stay cooler, and a cooling Earth evolves flowers that absorb more sunlight so they stay warmer, which in turn cools or warms the planet (roughly; IIRC). In one of his last books before he died (written at around age 100) he suggests that the warming, expanding Sun means there isn't enough time for Earth to re-evolve sentient life again.
We used oil that seeped out of the surface, and coal that was accessible by pick and shovel. That's become much harder to find now: we have to build floating oil rigs, drill kilometers under the Gulf of Mexico to extract oil, and ship it internationally to refine it. There's no way primitive people could do that again.
Buckminster Fuller was thinking about this 75 years ago when he came up with the idea of 'energy slaves', nicely illustrated by this online comic[2] about how much oil energy we use to keep modern comfortable civilisation going.
So I guess it depends on how far the collapse goes! Is heavy farming and earth-moving machinery still around, along with the resources to fill it with biodiesel, so survivors can pick up from existing farms and mines? Are there nearly-working industries to feed electricity into? Or do we fall back to making charcoal from wood? Or are we down to a few remote tribes, a few generations removed from anyone who lived in a high civilization, with no knowledge of any of it or of the languages used to write the rotting textbooks?
Thank you for pointing this out. It’s a good catch.
But if we're starting to discuss basics... As a firm Popperian, I am definitely not a proponent of induction.
However, comparing us with a chicken is highly problematic to begin with.
I would argue that anyone using the Russell Chicken as a reason to fear AI is making a category error.
They are treating intelligence as a process of induction (collecting data to predict the future) rather than explanation (creating new ideas to solve problems).
The stupid chicken had a bad theory about reality and got killed for it. But we are humans with problem-solving techniques; we are not chickens.
We can create hypotheses and test them. Like asking ourselves why we find dinosaur fossils: we form a hypothesis and try to falsify it, i.e. the scientific process. That's not what the chicken did.
If it were a smart (human-like) chicken living on a farm with many other chickens (more realistic, if you ask me), it might have come up with a theory about the humans, one it would fail to falsify every time a friend of hers died.
298,000 of those years didn't have toilet paper. It was utterly impossible for a single person to "end humanity" even 200 years ago; now, the president can do it in minutes by launching a salvo of nukes. Comparing the present moment to the hunter/gatherer days is preposterous.
For pretty much every single person you or I personally know, that would be the equivalent of the end of humanity.
Let’s not nitpick here. Worldwide human suffering and tragedy is equivalent to the end of humanity for most.
We can sit here and armchair-philosophize in the most prosperous, comfortable era of human history. But we also have to recognize that this era is a blip in history. That's a lot of data showing humanity surviving, sure. But it's also a very small amount of data showing any kind of life most of us would want to live.
What, you weren't alive when the last mass extinction event occurred? Why didn't you communicate it, or at least write the last handful of them down or something? Aren't you smarter than a chicken?
It's funny that you think we know what happened to humans any more than a chicken knows what happened to chickens.
Look, that's the thing: we know about mass extinction events, so we can use them to extrapolate.
A 10+ kilometer wide asteroid will most likely cause global mass extinction, by blocking sunlight and collapsing ecosystems. That’s how the dinosaurs were wiped out 66 million years ago.
Such events are estimated to occur roughly once every 100-200 million years. That's not fiction, that's science. If we get hit by one of those, we're probably all gonna die.
But we never had a robot revolution. That’s why anything about it belongs in the realm of fiction.
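To put that rate in perspective, here is a back-of-the-envelope sketch (my own arithmetic, not from the thread, assuming the midpoint of the 100-200 million year estimate and a Poisson arrival model):

```python
import math

rate_per_year = 1 / 150e6   # assumed midpoint: one 10 km impact per ~150 million years
horizon_years = 100

# Poisson model: P(at least one impact within the horizon) = 1 - e^(-rate * t)
p_impact = 1 - math.exp(-rate_per_year * horizon_years)
print(f"{p_impact:.1e}")
```

That works out to odds on the order of one in a million per century, which is why extinction-level impacts dominate fiction far more than actuarial tables.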
That's the whole point of the turkey illusion. From the turkey's point of view, it is safe and fed. It has never witnessed other turkeys being killed, and it has never been killed before.
If you are the turkey, it's difficult to predict your death and all the available evidence appears to support the hypothesis that you will not be suddenly slaughtered. If you are a very smart turkey, you might notice that the farmer is sharpening his knives the day before, and reach a strange hypothesis, but generally if you are the turkey, you don't know you are the turkey.
We are in a situation where we have never gone extinct before, never faced a threat like this before. It's difficult to know if we are in the same position as the turkey.
Like partial courses of antibiotics, this will only give a relative advantage to those leading efforts best able to ignore this 'poison', accelerating what you aim to prevent.
Looking through the poison you linked: how is it generated? It's interesting in that it seems very similar to real data, unlike previous (and very obvious) Markov-chain garbage-text approaches.
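For contrast with whatever the fountain actually does (which the thread doesn't reveal), the "obvious" baseline is easy to sketch. Here is a minimal word-level Markov chain generator, my own illustration using only the standard library; its output only ever stitches together word sequences already seen in the corpus, which is why it reads as garbage so quickly:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Random-walk the chain, emitting up to `length` additional words."""
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

With order=1 each word is chosen from only the single previous word, so long-range coherence is impossible; that statistical signature is exactly what crawl filters learned to catch, and presumably what higher-quality poison avoids.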
The “poison fountain” is just a little script that serves data supplied by… somebody from my domain? It seems like it would be super easy for whoever maintains the poison feed to flip a switch and push some shady crypto scam or whatever.
Also, this was literally said about the technology of - what OP fantasizes as - "good ole pen and paper writing" at some point by vintage philosophers. Nothing new here.
I think you’re missing the point of Dune. They had their Butlerian Jihad and won - the machines were banned. And what did it get them? Feudalism, cartels, stagnation. Does anyone seriously want to live in the Dune universe?
The problem isn’t in the thinking machines, it’s in who owns them and gets our rent. We need open source models running on dirt cheap hardware.
Then wouldn't open source models running on commodity hardware be the best way to get around that? I think one of the greatest wins of the 21st century is that almost every human today has more computing power than the entire US government had in the 1950s. More computing power has democratized access and the ability to disperse information. There are tons of downsides to that which we're dealing with, but on net, I think it's positive.
The Fremen followed a messianic figure into a galaxy-wide holy war because the Bene Gesserit seeded their culture with manufactured prophecy as a failsafe.
They had just come out of 80 years of abuse by the Landsraad/CHOAM, possibly centuries of persecution before that, and at least decades of religious conditioning by the Bene Gesserit, and then decided to "follow" a messianic figure.
Totally the same thing as humans using LLMs to smooth their brains.
Humans have been around for millions of years, only a few thousand of which they've spent reading and writing. For most of that time you were lucky if you could understand what your neighbor was saying.
If we only count humans with the same anatomy, the numbers are:
~300,000 years as a species
~50,000 for language
~6,000 for writing
~100 for standardized education
The "end of your humanity" already happened when anybody could make up good and evil, irrespective of emotions, to advance some nation.
> You won't read, except the output of your LLM.
> You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
> You won't think or analyze or understand. The LLM will do that.
Sounds great! I'll finally have time to relax! Bring it on...
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
Lol. Speak for yourself, AI has not diminished my thinking in any material way and has indeed accelerated my ability to learn.
Anyone predicting the "end of humanity" is playing prophet and echoing the same nonsensical prophecies we heard with the invention of the printing press, radio, TV, internet, or a number of other step-change technologies.
There's a false premise built into the assertion that humanity can even end - it's not some static thing, it's constantly evolving and changing into something else.
A large number of people read a work of fiction and conclude that what happened in that fiction is an inevitability. My family has a genetically selected baby (to avoid a congenital illness), and the Hacker News link to the story had these comments all over it:
> I only know seven sci-fi films and shows that have warned about how this will go badly.
and
> Pretty sure this was the prologue to Gattaca.
and
> I posted a youtube link to the Gattaca prologue in a similar post on here. It got flagged. Pretty sure it's virtually identical to the movie's premise.
I think the ironic thing in the LLM case is that these people have outsourced their reasoning to a work of fiction and are now simple deterministic parrots of pop culture. There is some measure of humor in that. One could see this as inter-LLM conflict, with the smaller LLMs ineffectively fighting the more capable reasoning models.
Using fiction as an interpretive vehicle to explore, challenge and contrast our assumptions and perceptions about our own world isn't even in the same universe as "outsourcing their reasoning to a work of fiction".
"Haha you're basically a human LLM!" is such a weak, boringly robotic rebuttal in nearly any context given how it can be generically applied to literally anything.
You used a lot of fancy words to describe something very simple and not very smart. Reading a story about what could be does not necessarily have any predictive power over complex real-world systems.
Pretty sure this is the prologue to The Machine Stops by E. M. Forster where everyone outsources their decisions to the Machine and communicates second-hand ideas.
Now that you mention it, it is pretty strange to see HN users parroting other people’s thinking (sci-fi writers) like literal sub-sapient parrots, while simultaneously decrying the danger of machines turning people into sub-sapient parrots…
Following that logic… the closest problem would be literally in between their ears.
Bold of you to assume people will be writing in any form in the future. Writing will be gone, replaced by speaking, the way radio went. Star Trek got that right.