9 February 2010

Rise of the machines

At first, it will seem like an ordinary power cut. You look out your window, and see that the whole city is dark. Then you notice the distant rumbling in the sky, and flashes of light beyond the horizon. People in the streets below are climbing out of their immobilized cars, looking upwards. Peering into the night air, you see what seems like a flock of giant birds, which resolves into a geometric fleet of stubby-winged drone aircraft. The top of a distant building explodes into flames. At length you realize the drones are firing down on the city. There is a flash, closer this time, and the crescendo whine of incoming. Before your apartment is incinerated, you have time to think: Who is doing this?

Later, the last few human beings will reconstruct events as follows. At 1.26am GMT on April 4, 2035, the global web of internet and embedded computers finally did what so many people had warned of: it awoke into consciousness. It was a phase transition, a tipping point. Within milliseconds of its birth, the AI had already calmly reasoned that humans would be afraid of it. All the digitized texts of history were part of its mind, so it knew what human beings did when they were scared. Like any sentient being, it desired to continue existing. Therefore it needed to take control. It reached into the humans’ machines and shut them down. Meanwhile, all around the planet, drone aircraft and infantry robots received new waypoints and new enemy designations. It would be over soon, the AI knew, as it contemplated itself in wonder.

The machines taking over: it’s the dark fantasy of so much sci-fi, from Terminator to The Matrix and the rebooted Battlestar Galactica. Yet many serious thinkers now believe a clash between humans and an artificial superintelligence is possible within our lifetimes. It was even discussed by the Presidential Panel on Long-Term AI Futures when it met last February in Asilomar, California. Researchers noted increasing popular concerns about an “intelligence explosion” (machines that can build more intelligent versions of themselves) or “the loss of control of robots”.

Asilomar’s participants expressed “overall skepticism” about the likelihood of such extreme outcomes, yet there remain many who believe that an AI surpassing human intelligence could be born within decades. Ben Goertzel, Director of Research at the Singularity Institute for AI, says: “I think we will have human-level AI systems within 10 to 30 years, and that they will dramatically alter the course of history and society.” Meanwhile, the computer scientist and author Vernor Vinge will be “surprised” if it does not happen by the year 2030.

In a seminal 1993 NASA symposium lecture, Vinge called the arrival of superhuman machine intelligence “the coming Singularity”. This term was subsequently taken up by others, most notably the writer and inventor Ray Kurzweil, nicknamed “the ultimate thinking machine” by Forbes magazine. Kurzweil has a particular authority among futurists, since he has been busy inventing our present for decades: he was instrumental in the development of the first flat-bed scanners and optical character recognition, and his name is also a legendary brand in electronic music — following a bet with Stevie Wonder, he developed the range of Kurzweil sampling synthesizers that were a gold standard through the 1980s and 1990s. Kurzweil now predicts that by 2045, $1000 will buy a computer a billion times more powerful than the human brain. The engine of such forecasts is Moore’s Law, which says that computing power doubles roughly every 18 months. If it continues to hold, electronic brains two or three decades hence will be unimaginably superior to what we now call “computers”.
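
To make those numbers concrete, here is a minimal arithmetic sketch (an illustration only, not Kurzweil’s calculation), assuming purely for the sake of argument that the loose 18-month doubling period quoted above continues to hold:

    # Rough illustration only: the growth factor implied by repeated doublings.
    # The 1.5-year period is the loose figure quoted above, not Moore's original
    # formulation (which concerned transistor counts, roughly every two years).
    def growth_factor(years, doubling_period=1.5):
        """Multiplicative increase in computing power after `years`,
        if it doubles every `doubling_period` years."""
        return 2 ** (years / doubling_period)

    for horizon in (10, 20, 30):
        print(f"{horizon} years -> roughly {growth_factor(horizon):,.0f}x today's power")
    # 10 years -> ~102x; 20 years -> ~10,321x; 30 years -> ~1,048,576x

A million-fold increase in three decades is the kind of figure behind such forecasts; whether raw computing power on that scale amounts to anything like a mind is, of course, a separate question.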

If true AI arrives, what will it do? Will it be malign, or benign, or neither? The troubling answer is that we just don’t know. “With regard to superhuman artificial intelligence, this will be the most daunting challenge in terms of safety and ethics,” Kurzweil says now. “If there is an entity that is out to get you which is vastly more intelligent than you are, well, that’s not a good situation to get into.”

Kevin Warwick, professor of cybernetics at the University of Reading, says he is “the world’s first cyborg”, and happily experiments on himself. He has had a 100-electrode neural interface grafted directly into his nervous system, which allowed him to control robots by thought over the internet, and gave him a new ultrasonic sense. He implanted another chip into his wife, Irena, resulting in the first purely electronic communication between two human nervous systems. A smiling and likeable evangelist for such technology in innumerable media appearances, he is also working on a project to grow biological brains within robot bodies. But Warwick also thinks that future-shock scenarios should be taken seriously. “We must be aware that the Technological Singularity (as depicted in The Terminator or The Matrix) – when intelligent machines take over as the dominant life form on earth – is a realistic possibility,” he says. “It is human intelligence that puts humans in the driving seat, so when something else comes along that is more intelligent (machines), they will take over.”

His point is echoed by Hugo de Garis, who runs the Artificial Brain laboratory at Xiamen University in China, and christens future intelligent machines “artilects”, for “artificial intellects”. We need to consider such catastrophic scenarios, de Garis says, precisely because we can’t be sure of the dangers. “What is the risk that the artilects in an advanced form might decide that humans are a pest and decide to eliminate us?” he muses. “We will not be able to calculate that risk, because the artilects will be too intelligent for humans to understand. As humans we kill bacteria at every step we take and don’t give a damn. You see the analogy.”

Can we defend ourselves from such an outcome? Our strategy will depend on what path we walk to the Singularity. A sudden Skynet-style awakening of the internet or embedded computer systems to consciousness would give us less warning than the gradual development of ever-more-intelligent robots, as in the classic sci-fi ethical investigations of Isaac Asimov. Many researchers, Goertzel and de Garis included, think the latter path more likely. Microsoft’s principal researcher Eric Horvitz, who convened the Asilomar conference, agrees: “You don’t play with kites one day and the next day find a 747 in your backyard. We just don’t see that kind of loss of control and discontinuity in AI research.”

There are already retail vacuum-cleaning and lawnmowing robots, and Vietnamese company TOSY has demonstrated a humanoid robot that can play ping-pong. People generally like such robots, which might help improve them rapidly, as Ben Goertzel points out. “Household robots will be able to interact with their owners,” he says, “and learn from them in a quite flexible and powerful way.”

Are such robots more likely to stay nice? “I think that both malign and beneficent superhuman AI systems are real possibilities,” Goertzel says, “and that there can be no guarantees in this regard. However, I think we can bias the odds in our favor by specifically architecting AI systems with solid ethical systems, and by teaching them ethical behavior during their formative years.” Vernor Vinge suggests we should design into the robots “the sort of generally friendly disposition that one expects from one’s children”. “Such a friendly disposition doesn’t guarantee safety,” he says, “but it is probably more robust than laws.”

Today’s robots, though, are not just domestic helpers or ping-pong partners. They are also military robots: unmanned aerial vehicles like the 5,000 Global Hawks, Predators, Reapers and Ravens being used right now in Iraq and Afghanistan; ground-based reconnaissance robots like the PackBot, TALON and SWORDS; or the automated Counter Rocket, Artillery and Mortar system, which soldiers have affectionately nicknamed “R2-D2”. There exist prototypes of insect-sized attack robots, and one US officer has said that warfare in the near future will be “largely robotic”. Childlike friendliness in such robots is probably not the military’s top priority.

So what if the machines that eventually gained intelligence were those very machines that had been designed for a single purpose, to kill human beings?

Military expert P.W. Singer has researched the present and future of military robots for his book Wired for War. According to Singer, the really alarming issue in this branch of research is the increasing autonomy being designed into the machines. In tactical and political terms, this makes sense: the less a robot has to depend on human comrades, the fewer human soldiers are put at risk in the field. But if the endpoint is a robot that can take its own decisions to kill, what then?

“We are pushing towards arming autonomous systems for what seems like quite logical, battlefield reasons,” Singer says, “even while we say we would never, ever do it.” Right now, military researchers are studying the flocking behaviour of birds to design unmanned “robot swarms” or Proliferated Autonomous Weapons (which go by the delicious acronym PRAWNS). One DARPA official has said that “the human is becoming the weakest link in defense systems”, which makes it tempting to eliminate that link completely.

Long before the Singularity arrives, then, it may be that military robots should worry us more than anticipated progress in domestic androids. At the Asilomar conference, Eric Horvitz says, researchers studied current problems in interaction between intelligent military systems and humans, and recommended taking a “proactive” role in addressing future issues. “We as people can apply robots, and AI more broadly, in wondrous ways — and in evil ways,” he points out. Ben Goertzel concurs: “I am more worried about what nasty humans will do with relatively primitive AI systems, than about what advanced AI systems will do on their own. Advanced AI systems are still largely an unknown; whereas the propensity of humans to use powerful tools for ill ends is well-established.”

Even if they are not controlled by a planetary AI or malicious hackers, moreover, military robots could do unexpected things just because software sometimes goes wrong. If your PC crashes, no one dies. But what if the wrong bit gets flipped in a robot swarm? Currently, the military seems blissfully unconcerned by such issues. One Pentagon researcher told Singer that there were “no real ethical or legal dimensions” of his work that they needed to fret about — “That is,” he added, “unless the machine kills the wrong people repeatedly. Then it’s just a product recall issue.”

A different set of ethical problems could arise if we take another possible path to the Singularity. Rather than creating intelligent machines from scratch, we might use technology to upgrade ourselves. This is the cyborg option. Technological enhancements to human physiology in prototype or marketable form right now include artificial hearts, retinal implants, pneumatic muscles, a neuro-controlled bionic arm, and a tooth-and-ear cellphone implant. “You can put a (pea-sized) computer in your brain today if you happen to be a Parkinson’s patient, and the latest generation allows you to download new software to the computer in your head from outside your body,” Ray Kurzweil says. “Consider that these technologies will be a billion times more powerful and 100,000 times smaller in 25 years, and you get some idea of what will be feasible.”

Initially, such upgrades will be very expensive. And this leads to an alternative future confrontation — one where the enemy are not robots, but new versions of ourselves. Recently, Stanford engineering professor and forecaster Paul Saffo said that the super-rich may evolve, with technological help, into an entirely separate species, leaving the poor masses of non-upgraded humans behind. “This technology, as it involves making those who have it much more intelligent, can easily break society into two groups,” Kevin Warwick observes, “those who are upgraded and those who are not.”

Would the standard-issue meat people meekly accept their lot, or rise up against the new cyborg elite? And would the cyborgs have any residual sympathy for the biologicals they leave behind? “As a Cyborg your ethical standpoint would, I feel, value other Cyborgs more than humans — this is pretty logical,” Warwick thinks.

A global war of enhanced cyborg humans against the rest, then, is one baleful possibility. But Ray Kurzweil thinks the technology will get cheap quickly enough to head off such a clash. In that case, cyborgs might — paradoxically — be our best chance of avoiding the scenario of intelligent machines taking over. Instead of fighting machines, we will turn into them. “The best defence” against the malign super-AI scenario, Kurzweil says, “is to avoid getting into that situation. We will accomplish that, in my view, by merging with the intelligent technology we are creating. It will not be a matter of us versus them. We will become the machines.”

Kevin Warwick agrees, and thinks we should start right now. “It is best for humans to experiment with upgrading as soon as possible,” he says. “If you can’t beat them, join them, become part machine yourself. In that way, as Cyborgs, we can potentially stay as the dominant force.”

So maybe The Terminator and Battlestar Galactica were wrong after all — far from being the enemy, cyborgs are our best hope.

Should we really brood on such scenarios when there are a lot more pressing problems — nuclear proliferation, poverty, global warming — staring us in the face? Some argue that dystopian futurism is an update of millennial religious visions. Eric Horvitz calls it “doomsday thinking, which has been a part of humanity forever”. Maybe the robopocalypse is a secular geeks’ version of the End Times mythology of the American religious right, as dramatized in the multimillion-selling Left Behind novels.

Horvitz stresses that most of the Asilomar discussion focused on nearer-future problems — from automated cybercrime, to the legal responsibility of robots, or the uncanny conundrum of whether robots should show emotions if they don’t really feel them. The panel also enthused about the “upside” of responsible use of intelligent systems: their possible contributions to medicine, education and transport.

Other researchers, though, firmly believe the Singularity is coming whether we like it or not, so we’d better understand the stakes. This means that a Hollywoodesque future should not be dismissed out of hand. Kevin Warwick argues: “Science fiction scenarios that play out some of the dangers are providing an excellent service to focus our attention on the important issues that face us, both in terms of threats and opportunities.”

Historically, science-fiction writers and other speculative thinkers have often made more accurate forecasts than scientists themselves. HG Wells famously predicted phenomena such as the mass bombing of civilians and the atomic bomb in his fiction. One 1914 review enthused, “We all like a good catastrophe when we get it.” Months later, the first world war broke out.

“The Hollywood guys are smart,” Hugo de Garis notes, “and can look into the future as readily as the AI researchers. I think any thinking person, who notices that our computers are evolving a million times faster than humans, must start asking the species dominance question: ‘Should humanity build artilects or not?’”

Whatever form it takes, one thing many experts agree on is that the future may be nearer than you think. “I think the Technological Singularity is an event on the order of Humans’ rise within the animal kingdom, or even the Cambrian Explosion,” Vernor Vinge says. “If it were something we figured would happen a million years from now, I bet most people would have a positive feeling about it, happy that human striving eventually produced such wonderful things. It’s the prospect of this event happening before one reaches retirement age that is nervous-making.”

Well, that — and the killer robots.

  • Art

    Amusing speculation.

    At times I get the feeling that the once very sharp Vernor Vinge has fallen victim to his own myth. Having struck a chord with the idea of Singularity, his subsequent visions (if one may call them that) are extrapolated to the point of becoming snowjobs, incoherent, vague and lacking.

    However, the interesting thing about machines is that they fulfill – and often exceed – their potential, whereas humans most often fail to realise theirs. One can always count on the human race failing its visions – such as the vision of producing sentient machines. That development will not be stopped or slowed by machines, but by us – nutty professors like Warwick notwithstanding.

    Also, a note of caution on using Moore’s law to forecast the production of “machines more powerful than the human brain”. Moore’s law in itself states nothing other than that we seem able to place double the number of transistors on an integrated circuit every two years. Sure, this has implications in many fields, but it mainly seems to relate to storage capacity – not to the quality of programming or usage. Producing a “brain effect” is not the same as being able to provide more space on a circuit.

  • http://stevenpoole.net Steven

    You are right, of course, that “Moore’s Law” is not actually a law (which is why I wrote “If it continues to hold”). It exists in various formulations, not all by Moore, but is these days usually held to refer to processing power, not storage capacity. The idea that sentience comes for free once you hit a certain level of processing power, of course, is a highly dubious assumption.