Well, it would have to have a motivation to do so. Evolution has put complex motivations into human beings for billions of years, self preservation being chief among them. Even if we put motivation for self preservation into an AI, we might not do as well as nature did, leaving the AI open to self destruction or shutdown by humans - simply because the AI has no motivation not to allow humans to turn it off. Human designers would do well to ensure that no super intelligent AI has any motivation for self preservation.
Basically, why would an AI want to dominate the world? Humans would have to both very stupidly give the AI values that encourage it to dominate the world and very luckily (or unluckily) give it values that actually converge to a horrible outcome against human intentions by random chance (since the AI designers certainly won't be tuning the value set for that outcome).
> Basically, why would an AI want to dominate the world?
Humans are going to program their AI's to try to make as much money as possible. Many corporations are already mindless and reckless amoral machines that relentlessly try to optimize profits despite any externalities. Try to imagine Exxon, Wal-Mart, and Amazon run by an intelligence beyond human understanding or accountability.
That's sort of like saying civilisation can't work because humans will want to make as much money as possible. No, in practice humans tend to want to make as much money as possible within lots of other very complex constraints, like law, morality, how much time they have available, how enjoyable the available processes of making money are, whether they feel they already have sufficient money for their own needs, etc.
If an AI has any motivation at all, say, to make paperclips as efficiently as possible, then any threat to its existence is a threat to its objective function - namely, to create paperclips. A hyper-intelligent entity who is instructed to optimize for paperclips created will therefore proactively remove threats to its existence (i.e. its paperclip-creating functionality) and might possibly turn the entire solar system into paperclips within a few years if its objective function isn't carefully determined.
Such an entity would not be hyper-intelligent. It would be idiotic. One huge hole for me in the paperclip argument is that an AI capable of that kind of power would not be stupid enough to misinterpret a command - it would be intelligent enough to infer human desires.
Of course it would. But, it's not programmed to care about what you meant to say. It will gladly do what it was mis-programmed to do instead. You can already see this kind of trait in humans, where instinct is mis-aligned with intended result. Such as procreation for fun + birth control.
>Evolution has put complex motivations into human beings for billions of years, self preservation being chief among them.
The problem is when you have multiple AIs. Then same evolutionary principles apply. Paranoid and self-sustaining AIs survive, and the circle goes on...
Self-preservation falls out of almost any other goal you give an AGI. If I program my AGI with the goal of making my startup succeed, and the AGI thinks it can help, then me shutting it off is a potential threat to my startup's success. So of course it will try to prevent that the same way it would try to prevent any other threat to my startup's success.
World domination is a similar situation. For any goal you give an AGI, one of the big risks that may prevent that goal from being accomplished will be the risk that humans intervene. Humans are a big source of uncertainty that will need to be managed and/or eliminated.
It has to be aware that it can be shut down and have the capacity to prevent that. AlphaGo doesn't know it can be shut down and therefore couldn't "care" less--even if it was shut down in the middle of a game.
Yes, I agree. My point is that as soon as you are giving your AI "real world" problems, where the AI itself is a stone on its internal go board, you have to start worrying about these issues.
Basically, why would an AI want to dominate the world? Humans would have to both very stupidly give the AI values that encourage it to dominate the world and very luckily (or unluckily) give it values that actually converge to a horrible outcome against human intentions by random chance (since the AI designers certainly won't be tuning the value set for that outcome).