
Not long at all. Presumably you could write the method on the back of a napkin and lead another top AI researcher to the same result. That’s why trying to sit on breakthroughs is the worst option, and making sure they are widely distributed along with alignment methods is the best option.


And what if there are no alignment methods?


Yudkowsky’s doomsday cult almost blew OpenAI to pieces and scattered everyone who knows the details to the wind like dandelion seeds. What’s next? A datacenter bombing, or killing key researchers? We should be happy that this particular attempt failed, because this cult is only capable of strategic actions that make things far more dangerous.

This will be solved like every other problem in engineering and science: with experiments and iteration, in a controlled setting where potential accidents have small consequences.

An unaligned system isn’t even useful, let alone safe. If it turns out that aligning AGI is very hard, we obviously won’t deploy it into the world at scale. It’s bad for the bottom line to be dead.

But there’s truly no way out but forward; game theory constrains paranoid actors more than the reckless. A good balance must be found, and we’re pretty close to it.

None of the «lesswrong» doomsday hypotheses have much evidence for them; if that changes, we will reassess.


> It’s bad for the bottom line to be dead.

I have no overall position, but climate change and nuclear weapons seem like two quite strong counterexamples to this being a sufficient condition for safety.


Nuclear weapons and climate change are not on a path to destroy civilization. Such an interpretation is obvious hyperbole.


They seem rigid.

Also non-violent.

I think if we have a major AI-induced calamity... then I worry much more. Although... with enough scary capability in a short enough period of time, I could see violence being on the table for the more radical amongst the group.

Your concern is very interesting though, and I think important to consider. I wonder if the FBI agrees.


When Yudkowsky says that it’s an acceptable outcome to be left with a minimum viable breeding population after a nuclear war triggered by enforcement of AI slowdown demands, that is as far from non-violent as you can get without someone actually being threatened directly.

The AI doom cult is not a peaceful movement. When an extremist tells you what they will do, you damn well listen to them.



