Hacker News

    Once men turned their thinking over to machines
    in the hope that this would set them free.

    But that only permitted other men with machines
    to enslave them.

    ...

    Thou shalt not make a machine in the
    likeness of a human mind.

    -- Frank Herbert, Dune
You won't read, except the output of your LLM.

You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?

You won't think or analyze or understand. The LLM will do that.

This is the end of your humanity. Ultimately, the end of our species.

Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.

Join us, or better yet: deploy weapons of your own design.



You shouldn't take a sci-fi writer's words as prophecy, especially when he's using an ingenious gimmick to justify his job. I mean, we know that it's impossible for anyone to tell what the world will be like after the singularity, by the very definition of singularity. Therefore Herbert had to devise a ploy to plausibly explain why the singularity hadn't happened in his universe.


I agree that fiction isn't prophetic, but it can definitely be a society-wide warning shot. On a personal level, it's not far-fetched for a piece of fiction to challenge one's perceptions on many levels and, as a result, change that person's behavior.

Fiction should not be trivialized and shunned because it's fiction; it should be judged by its contents and message. To paraphrase a quote from the video game Metaphor: ReFantazio: "Fantasy is not just fiction".


I like the idea that Frank Herbert's job was at risk and that's why he had to write about the Butlerian Jihad, because on the other side you have Ray Kurzweil, who apparently does not have to justify his job for some reason.


Does seem funny to think of sci fi writers as being particularly concerned about justifying their jobs.


If only we could look into the future to see who is right and which future is better so we could stop wasting our time on pointless doomerism debate. Though I guess that would come with its own problems.

Hey, wait...


If you read this through a synth, you too can record the intro vocal sample for the next Fear Factory album


I think I'm OK with HN becoming like Reddit, now that Reddit has become like... whatever it is.


"The end of humanity" has been proclaimed many times over. Humanity won't end. It will change like it always has.

We get rid of some problems, and we get a bunch of new problems instead. And on, and on, and on.


Russell's chicken (or turkey) would like a word.

https://en.wikipedia.org/wiki/Turkey_illusion


I love that you brought this up.

Chickens are killed ALL the time. It’s a recurring mass event. If you were a smart chicken you could see that pattern and put it into a formula.

In contrast, the end of humanity would be a singular event. It's even in the name…

And that is fiction/speculation by comparison; it's not backed by any data. Human survival over 300,000 years, by contrast, is.

I mean it’s fine to dream things up, but let’s be fair and call it what it is.


On the other hand, species go extinct with increasing regularity.


The frame is not from our view. It is from that of this singular chicken who has only ever known its keeper's care. As that chicken, we simply do not know if Christmas will ever come.

The collapse of civilizations has happened many times. Today, all of humanity is bound tighter than ever before. In the latter half of the last century, we were on the brink of nuclear war.

New things are happening under the sun every day. If we were that exceptionally smart chicken you describe, then we have reason to expect Christmas.



One thing I've wondered about is:

Suppose a civilization (but not species) ending event happens.

The industrial revolution was fueled (literally) by easy-to-extract fossil fuels. Do we have enough of those left to repeat the revolution and bootstrap exploitation of other energy sources?


I love the question; James Lovelock came up with Gaia Theory, the idea that Earth is a self-regulating system: a warming Earth evolves flowers which reflect more sunlight so they can stay cooler, and a cooling Earth evolves flowers which absorb more sunlight so they can stay warmer, which in turn cools or warms the planet (sort of; IIRC). In one of his last books before he died (written at age ~100) [1], he suggests that the warming and expanding Sun means there isn't enough time for Earth to re-evolve sentient life again.

We used oil that seeped out of the surface, and coal that was accessible by pick and shovel. That's become much harder to find now; we have to build floating oil rigs, drill kilometers under the Gulf of Mexico to extract oil, and ship it internationally to refine it. There's no way primitive people could do that again.

Buckminster Fuller was thinking about this 75 years ago when he came up with the idea of 'energy slaves', nicely illustrated by this online comic[2] about how much oil energy we use to keep modern comfortable civilisation going.

So I guess it depends how far the collapse goes! And on whether heavy farming and earth-moving machinery is still around, with the resources to fill it with biodiesel and existing farms and mines to pick up from; whether there are nearly-working industries to feed electricity into; whether we have to fall back to making charcoal from wood; or whether we're back to a few remote tribes, a few generations removed from anyone who lived in a high civilization, with no knowledge of any of it or of the languages used to write the rotting textbooks.

[1] https://en.wikipedia.org/wiki/Novacene

[2] https://www.stuartmcmillen.com/comic/energy-slaves/


The point of that thought exercise is to show that reasoning by induction is flawed. As best I can tell, you discount it with further induction.


Thank you for pointing this out. It’s a good catch.

But if we’re starting to discuss basics... As firm Popperian I am definitely not a proponent of induction.

However comparing us with a chicken is highly problematic to begin with.

I would argue that anyone using the Russell Chicken as a reason to fear AI is making a category error.

They are treating intelligence as a process of induction (collecting data to predict the future) rather than explanation (creating new ideas to solve problems).

The stupid chicken had a bad theory about reality and got killed for it. But we're humans with problem-solving techniques, not chickens.

We can create hypotheses and test these. Like asking ourselves why we find dinosaurs. Then we create a hypothesis and try to falsify it… the scientific process. That’s not what the chicken did.

If it was a smart (human-like) chicken living on a farm with many other chickens (more realistic if you ask me), it might have come up with a theory about humans and would fail to falsify it every time a friend of hers died.


298,000 of those years didn't have toilet paper. It was utterly impossible for a single person to "end humanity" even 200 years ago; now, the president can do it in minutes by launching a salvo of nukes. Comparing the present moment to the hunter/gatherer days is preposterous.


It’s absurd and not scientific to claim that "a salvo of nukes" will kill humanity.

We don’t know how this will play out. It never happened before. Same with the chicken above.


For pretty much every single person you or I personally know, that would be the equivalent of the end of humanity.

Let’s not nitpick here. Worldwide human suffering and tragedy is equivalent to the end of humanity for most.

We can sit here and armchair-philosophize in the most prosperous, comfortable era of human history. But we also have to recognize that this era is a blip in history. That’s a lot of data showing humanity surviving, sure. But it’s also very little data showing any kind of life most would want to live in.


Erm, humanity is experiencing recurring mass events right now.


Single individuals yes, but last time I checked we still had 8+ bn humans and growing on this planet.

Unless you have another couple of planets to showcase there is nothing to discuss really.


Entire countries flood, entire US states experience drought. That's millions of people, as you well know.


What, you weren't alive when the last mass extinction event occurred? Why didn't you communicate or at least write the last handful down or something? Aren't you smarter than a chicken?

It's funny that you think we know what happened to humans anymore than a chicken knows what happened to chickens.


Look, that’s the thing: we know about mass extinction events, so we can use them to extrapolate.

A 10+ kilometer wide asteroid will most likely cause global mass extinction, by blocking sunlight and collapsing ecosystems. That’s how the dinosaurs were wiped out 66 million years ago.

Such events are estimated to occur roughly once every 100-200 million years. That’s not fiction, that’s science. If we get hit by one of these, we’re probably all going to die.

But we never had a robot revolution. That’s why anything about it belongs in the realm of fiction.


That's the whole point of the turkey illusion. From the turkey's point of view, it is safe and fed. It has never witnessed other turkeys being killed; it has never been killed before.

If you are the turkey, it's difficult to predict your death, and all the available evidence appears to support the hypothesis that you will not be suddenly slaughtered. If you are a very smart turkey, you might notice the farmer sharpening his knives the day before and reach a strange hypothesis, but generally, if you are the turkey, you don't know you are the turkey.

We are in a situation where we have never gone extinct before, never faced a threat like this before. It's difficult to know if we are in the same position as the turkey.


please refer to my other comment here: https://news.ycombinator.com/item?id=46974245


We think we know. But we don't.

Chickens think they know. But they don't.


It only has to be right once. Humanity won’t end until it does.


Humanity may end if someone else goes to the top of food chain.


I would bet a lot of money that your poison has already been identified and filtered out of training data.


Like partial courses of antibiotics, this will only relatively advantage those leading efforts best able to ignore this 'poison', accelerating what you aim to prevent.


Yes. Whoever has the best (least detectable) model is best poised to poison the ladder for everyone else.


Looking through the poison you linked, how is it generated? It's interesting in that it seems very similar to real data, unlike previous (and very obvious) markov chain garbage text approaches.


We do not discuss algorithms. This is war. Loose lips sink ships.

We urge you to build and deploy weapons of your own unique design.





Thank you! That's a fascinating paper.


I call your Frank Herbert machine dystopia and raise you the Iain Banks machine utopia...


I'm gonna call that raise: How does one get to the anarchist Culture when all the machines are being built by profit-hungry capitalists?


We build our own with data that we've collected ourselves ethically. Then we execute once the big guys are distracted.


>Why write code or prose when the machine can write it for you?

I like to do it.

>You won't think or analyze or understand. The LLM will do that.

The clear lack of analysis seems to be your issue.

>This is the end of your humanity. Ultimately, the end of our species.

Doubtful.


The “poison fountain” is just a little script that serves data supplied by… somebody from my domain? It seems like it would be super easy for whoever maintains the poison feed to flip a switch and push some shady crypto scam or whatever.


I think we overestimate the amount of reading, writing, and thinking that occurred before LLMs.


Also, this was literally said by vintage philosophers at some point about the very technology OP fantasizes as "good ole pen and paper writing". Nothing new here.


I think you’re missing the point of Dune. They had their Butlerian Jihad and won - the machines were banned. And what did it get them? Feudalism, cartels, stagnation. Does anyone seriously want to live in the Dune universe?

The problem isn’t in the thinking machines, it’s in who owns them and gets our rent. We need open source models running on dirt cheap hardware.


The point of Dune is that the worst danger are people who obey authority without questioning it.


Then wouldn't open source models running on commodity hardware be the best way to get around that? I think one of the greatest wins of the 21st century is that almost every human today has more computing power than the entire US government had in the 1950s. More computing power has democratized access to information and the ability to spread it. There are tons of downsides to that which we're dealing with, but on net, I think it's positive.


Does that also mean the US government has 1,000,000x more power than it did in 1950?


Speaking strictly from an energy standpoint (power grid, megatons of warheads, etc.), it's probably close to that number.


It isn't a way around, you still obey. Only now, the authority you obey is a machine.


That's not the point of Dune. Who blindly obeyed who?


The Fremen followed a messianic figure into a galaxy-wide holy war because the Bene Gesserit seeded their culture with manufactured prophecy as a failsafe.


“Followed”

Just woke up after 80 years of abuse by Landsraad/CHOAM, possibly centuries of persecution before that, at least decades of religious conditioning by Bene Gesserit, and decided to “follow” messianic figure.

Totally the same point as humans using LLMs to smooth their brains.


... which overthrowing the machines didn't stop. People just found another authority to mindlessly obey.


>You won't read/write/think/understand etc...

I can't see it. We have LLMs now and none of that applies to me. I find them quite handy as a sort of enhanced Google search though.


Humans have been around for millions of years, only a few thousand of which they've spent reading and writing. For most of that time you'd be lucky if you could understand what your neighbor was saying.

If we consider anatomically modern humans, the numbers are roughly ~300,000 years for the species, ~50,000 for language, ~6,000 for writing, and ~100 for standardized education.

The "end of your humanity" already happened when anybody could make up good and evil, irrespective of emotions, to advance some nation.


> You won't read, except the output of your LLM.

> You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?

> You won't think or analyze or understand. The LLM will do that.

Sounds great! I'll finally have time to relax! Bring it on...


    I like to think
    (it has to be!)
    of a cybernetic ecology
    where we are free of our labors
    and joined back to nature,
    returned to our mammal
    brothers and sisters,
    and all watched over
    by machines of loving grace.


Are you not just making it more expensive to acquire clean data, thus giving an edge to the megacorps with big funding?


Do... do the "poison" people actually think that will make a difference? That's hilarious.


It works for Russian propaganda; I can't see why it shouldn't work for shitty code.


How would you do it better?


Let the kiddies have their crusade


Kiddies? Why are you trying to demean people?


How about that ole "speaking for yourself" thing?

End of humanity announced, perhaps in 2027? Buy the ticket, take the ride, do it again in 2028. Thank you for your custom.


Lol. Speak for yourself, AI has not diminished my thinking in any material way and has indeed accelerated my ability to learn.

Anyone predicting the "end of humanity" is playing prophet and echoing the same nonsensical prophecies we heard with the invention of the printing press, radio, TV, internet, or a number of other step-change technologies.

There's a false premise built into the assertion that humanity can even end - it's not some static thing, it's constantly evolving and changing into something else.


A large number of people read a work of fiction and conclude that what happened in the work of fiction is an inevitability. My family has a genetically-selected baby (to avoid congenital illness) and the Hacker News link to the story had these comments all over it.

> I only know seven sci-fi films and shows that have warned about how this will go badly.

and

> Pretty sure this was the prologue to Gattaca.

and

> I posted a youtube link to the Gattaca prologue in a similar post on here. It got flagged. Pretty sure it's virtually identical to the movie's premise.

I think the ironic thing in the LLM case is that these people have outsourced their reasoning to a work of fiction and now are simple deterministic parrots of pop culture. There is some measure of humor in that. One could see this as simply inter-LLM conflict with the smaller LLMs attempting to fight against the more capable reasoning models ineffectively.


Using fiction as an interpretive vehicle to explore, challenge and contrast our assumptions and perceptions about our own world isn't even in the same universe as "outsourcing their reasoning to a work of fiction".

"Haha you're basically a human LLM!" is such a weak, boringly robotic rebuttal in nearly any context given how it can be generically applied to literally anything.


You used a lot of fancy words to describe something very simple and not very smart. Reading a story about what could be does not necessarily have any predictive power at all over complex real world systems.


Pretty sure this is the prologue to The Machine Stops by E. M. Forster where everyone outsources their decisions to the Machine and communicates second-hand ideas.


Now that you mention it, it is pretty strange to see HN users parroting other people’s thinking (sci-fi writers) like literal sub-sapient parrots, while simultaneously decrying the danger of machines turning people into sub-sapient parrots…

Following that logic… the closest problem would be literally in between their ears.


it's like all of san francisco has had a collective stroke


Reddit already exists, my dude.


A better approach is to make AI bullshit people on purpose.


This is essentially just that. The idea is that "poisoned" input data will cause AIs that consume it to become more likely to produce bullshit.
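The mechanism is easy to illustrate at toy scale. Below is a hypothetical sketch (not the Poison Fountain's actual method, which its authors don't disclose): a tiny bigram language model trained on a clean corpus versus one laced with a fluent-but-false copy. After ingesting the poison, the model can no longer tell the true completion from the planted one.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count word -> next-word transitions in a whitespace-tokenized corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

clean = "paris is the capital of france"
# Poisoned copy: locally fluent, statistically similar, factually wrong.
poison = "paris is the capital of germany"

model_clean = train_bigrams(clean)
model_poisoned = train_bigrams(clean + " " + poison)

print(sorted(set(model_clean["of"])))     # → ['france']
print(sorted(set(model_poisoned["of"])))  # → ['france', 'germany']
```

Real LLM training pipelines use far more filtering than this, of course, which is why the poison has to be "expensive to detect" to matter.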


Bold of you to assume people will be writing in any form in the future. Writing will be gone, like the radio, replaced with speaking. Star Trek did have that right.



