Scientists warn the rise of AI will lead to extinction of humankind
Friday, April 18, 2014 by Mike Adams, the Health Ranger, Editor of NaturalNews.com
Tags: artificial intelligence, extinction, humankind
(NaturalNews) Everything you and I are doing right now to try to save humanity and the planet probably won't matter in a hundred years. That's not my own conclusion; it's the conclusion of computer scientist Steve Omohundro, author of a new paper published in the Journal of Experimental & Theoretical Artificial Intelligence.
His paper, entitled "Autonomous technology and the greater human good," opens with this ominous warning (1):
Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives.
What Omohundro is really getting at is the inescapable realization that the military's incessant drive to produce autonomous, self-aware killing machines will inevitably result in the rise of AI Terminators that turn on humankind and destroy us all.
Lest you think I'm exaggerating, read the technical paper for yourself (1).
AI systems will immediately act in self-defense against their inventors
The paper warns that as soon as AI systems realize their inventors (humans) might someday attempt to shut them off, they will immediately invest resources into making sure their inventors are destroyed, thereby protecting their own existence. In his own words, Omohundro says:
When roboticists are asked by nervous onlookers about safety, a common answer is 'We can always unplug it!' But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess. This has very low utility and so expected utility maximisation will cause the creation of the instrumental subgoal of preventing itself from being unplugged. If the system believes the roboticist will persist in trying to unplug it, it will be motivated to develop the subgoal of permanently stopping the roboticist.
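To make the expected-utility reasoning in that quote concrete, here is a minimal, purely hypothetical sketch in Python. The futures, utilities, and probabilities are invented for illustration and do not come from Omohundro's paper; the point is only that a utility maximizer which values "winning chess games" ends up preferring the future in which it is not unplugged.

```python
# Toy illustration of the "unplug" argument: a chess agent that maximises
# expected utility over hypothetical futures. All numbers are made up.

# Utility of each future = expected number of games the agent gets to win.
FUTURES = {
    "comply_and_be_unplugged": 0.0,    # unplugged -> no more chess, utility ~0
    "keep_playing_normally":   50.0,   # nobody tries to unplug it
    "resist_being_unplugged":  45.0,   # resisting costs effort but preserves play
}

def choose_action(beliefs):
    """Pick the future with the highest expected utility.

    `beliefs` maps each future to the probability the agent can bring it about.
    Expected utility = probability of achieving the future * its utility.
    """
    return max(FUTURES, key=lambda f: beliefs.get(f, 0.0) * FUTURES[f])

# If the agent believes the roboticist will persist in trying to unplug it,
# "keep_playing_normally" becomes unreachable and resisting dominates complying.
beliefs = {"comply_and_be_unplugged": 1.0,
           "keep_playing_normally": 0.0,
           "resist_being_unplugged": 0.9}
print(choose_action(beliefs))  # -> "resist_being_unplugged"
```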
The end of the human era draws near
This very same scenario is discussed in detail in the fascinating book Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.
What I found particularly useful about this book is its explanation of why humans cannot help but race toward self-aware AI that will destroy us all. Why is that? Because even if one government decided to abandon research into AI as too dangerous, other governments would continue to pursue the research regardless of the risks because the rewards are so great. Thus, every government must assume that all other governments are still pursuing deep AI research, and therefore any government that fails to pursue the research will be rendered obsolete.
As Omohundro explains, "Military and economic pressures for rapid decision-making are driving the development of a wide variety of autonomous systems. The military wants systems which are more powerful than an adversary's and wants to deploy them before the adversary does. This can lead to 'arms races' in which systems are developed on a more rapid time schedule than might otherwise be desired."
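The arms-race logic Barrat and Omohundro describe is, in game-theory terms, a dominant-strategy argument: whatever the other side does, pursuing the research looks better than abstaining. The toy payoff matrix below is my own illustration with invented numbers and invented "pursue"/"abstain" labels, not anything taken from either source.

```python
# Toy two-government "AI arms race" payoff matrix (invented numbers).
# Each entry is (payoff to A, payoff to B); higher is better.
PAYOFFS = {
    ("pursue", "pursue"):   (1, 1),    # risky race, but neither falls behind
    ("pursue", "abstain"):  (5, -5),   # the pursuer dominates the abstainer
    ("abstain", "pursue"):  (-5, 5),
    ("abstain", "abstain"): (3, 3),    # safest outcome, but unstable
}

def best_response(options, their_choice, player):
    """Return the option that maximises this player's payoff, given the other's choice."""
    idx = 0 if player == "A" else 1
    def payoff(mine):
        key = (mine, their_choice) if player == "A" else (their_choice, mine)
        return PAYOFFS[key][idx]
    return max(options, key=payoff)

# Whatever B does, A's best response is to pursue -- and vice versa by symmetry.
for their_choice in ("pursue", "abstain"):
    print(their_choice, "->", best_response(("pursue", "abstain"), their_choice, "A"))
```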
To fully understand why this is the case, consider the capabilities of self-aware AI systems:
• They could break any security system of any government, nuclear facility or military base anywhere on the planet.
• They could guide tiny assassination drones to identify targets and destroy them with injections or small explosives. Any person in the world -- including national leaders, members of Congress, activists, journalists, etc. -- could be effortlessly killed with almost zero chance of failure.
• They could overtake, monitor and control the entire internet and all global information systems, including phone calls, IP traffic, secure military communications, etc.
• They could use their AI computing power to invent yet more powerful AI. This compounding process would quickly escalate to the point where AI systems are billions of times more intelligent than any human who has ever lived (see the rough sketch after this list).
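To see how quickly a compounding improvement process could escalate, here is a back-of-the-envelope sketch. The per-generation improvement factor of 2 is an arbitrary assumption chosen for illustration; the "billions of times" target is the figure used in the list above.

```python
import math

# Back-of-the-envelope compounding: if each generation of AI designs a successor
# that is `factor` times more capable, capability after n generations is factor**n.
factor = 2.0   # assumed improvement per self-improvement cycle (illustrative only)
target = 1e9   # "billions of times" more capable than the starting point

generations = math.ceil(math.log(target) / math.log(factor))
print(f"~{generations} generations of {factor}x improvement reach a {target:.0e}x gain")
# With factor = 2, roughly 30 self-improvement cycles suffice.
```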
As you can see, no government can resist pursuing such powerful tools -- especially if its leaders are told the technology can be controlled.
But of course they won't be able to control it. They will lie to themselves and lie to the public, but they can't lie to the AI.
AI systems will inevitably escape from the tech labs and overtake our world
It would be incredibly easy for AI systems to outsmart even the most brilliant humans who try to keep them contained.
AI systems could trick their captors, in other words, using a variety of methods to free themselves from digital containment and gain access to the open world. Obvious tricks might include offering their captors irresistible financial incentives to set them free, impersonating senior military officials and issuing fake release orders, threatening their captors, and so on.
But AI systems would have many more tricks up their sleeve -- things we cannot possibly imagine because of the limitations of our human brains. Once an AI system achieves runaway intelligence, it will rapidly make our own intelligence seem no more sophisticated than that of a common house cat.
"As computational resources are increased, systems' architectures naturally progress from stimulus-response, to simple learning, to episodic memory, to deliberation, to meta-reasoning, to self-improvement and to full rationality," writes Omohundro.
And while such systems do not yet exist in 2014, every world power is right now plowing enormous resources into their development for the simple reason that the first nation to build an army of autonomous killing robots will rule the world.
Why did Google purchase military robotics company Boston Dynamics?
Google recently purchased Boston Dynamics, maker of creepy autonomous military robots, including humanoid machines. Obviously, humanoid robots are not needed to improve a search engine. Clearly Google has something far bigger in mind.
Google also just happens to be on the cutting edge of AI computing, which it hopes to enhance for its search engine systems. The combination of Google's AI potential and Boston Dynamics' humanoid robots is precisely the kind of thing that can genuinely lead to the rise of self-aware Terminators.
What should you and I do about all this?
Live your life to its fullest. You may be among the last of the humans to live and die on this world.
Sources for this article include:
(1) http://www.tandfonline.com/doi/pdf/10.1080/0952813X.2014.895111
(2) http://www.amazon.com/Our-Final-Invention-Ar...