Voice

Could Killer Robots Bring World Peace?

We're breaking Isaac Asimov's First Law -- and it could be good for humanity.

A few weeks ago, the United Nations affirmed Isaac Asimov's First Law of Robotics: "A robot may not injure a human being." Christof Heyns, the U.N. special rapporteur on extrajudicial, summary, or arbitrary executions, said as much in a May 29 speech to the Human Rights Council in Geneva calling for a moratorium on the development of lethal robots. His argument followed two thoughtful paths, expressing concern that such machines cannot be as discriminating in their judgments as humans and that their very existence might make war too easy to contemplate. As he summed up the grim prospect of robot soldiers, "War without reflection is mechanical slaughter."

Asimov's First Law -- there are a couple more that evolved later to reinforce the principle of doing no harm to individuals, or humanity overall -- was introduced in his 1941 short story "Liar!" and expanded upon in his 1950 collection I, Robot. In these tales, it is clear that robots are not only hard-wired to refrain from violence against humans; they will even lie to avoid causing harm. But just two years after the introduction of the fictional First Law, actual robots led missions in history's most destructive strategic bombardment campaign: the massive air war against Nazi Germany. In this instance, the action was controlled by the Norden bombsight, a robot in the strictest sense -- not humanoid, but rather "a machine able to carry out complex actions without human control." As bombers neared their targets, the Norden's computer took control of the plane through its autopilot, adjusting course for wind and other factors. Then it calculated the optimum moment for dropping bomb loads. It was not a very accurate system, being off-target by about a quarter-mile, on average -- but robots did a lot of killing in World War II.

Even before the robot-caused carnage inflicted by the Norden, and putting Asimov's First Law aside momentarily, the general literary line of thinking about robots was that they were going to prove very dangerous to humankind. Ambrose Bierce, the great curmudgeon, may have started it all in his 1909 short story, "Moxon's Master." In it, Moxon, an inventor, creates a mechanical man who can do any number of things, including play chess. But this was long before IBM's Deep Blue came along and defeated world chess champion Garry Kasparov; when Moxon beats his robot in a game, it flies into a rage and kills him.

A decade later, but still 20 years before Asimov's First Law, Karel Čapek's play R.U.R. premiered (the acronym refers to "Rossum's Universal Robots"). The robots of the play are not machines, but biologically engineered entities, as in Philip K. Dick's Do Androids Dream of Electric Sheep? (better known under the film title, Blade Runner). As in Dick's novel, they are exploited and rebel, but in Čapek's play they do so on a large scale, finally supplanting humanity. It is a trope that has taken hold ever since in movie franchises like The Terminator and The Matrix, and in the brilliant reboot of the Battlestar Galactica television series. In more recent sci-fi literature, John Ringo's Von Neumann's War digs quite deeply into the way that alien robots would think about strategy and tactics in a war of conquest against humanity. So it seems that, in literary terms, Isaac Asimov stood against a tide of thinking about the coming lethality of robots.

Lethal robots have been making progress in the real world as well. One of the principal weapons of modern warfare, the Tomahawk missile, is a robot. To be sure, its target is chosen by humans, but the missile guides itself to its destination -- totally unlike human-controlled Predators, Reapers, and other so-called drones -- working around terrain features and dealing with all other factors on its own as well. Tomahawks have done much killing in our two wars with Iraq -- and in a few other spots as well. Israel's Harpy is another fully autonomous robot attack system; while it aims to take out radar emitters rather than people, if enemy soldiers are on site.... The British Taranis is a robot aircraft capable of engaging enemy fighter jets. On the Korean Peninsula, Samsung Techwin's sentry robot, usually remote-controlled but capable of operating autonomously, helps guard the demilitarized zone between the North and South -- that narrow patch of green foliage surrounded by the most militarized turf on the planet.

Clearly, 21st century military affairs are already being driven by the quest to blend human soldiers with intelligent machines in the most artful fashion. For example, in urban battles, where casualties have always been high, it will be better to send a robot into the rubble first to scout out a building before the human troops advance. In future naval engagements, where the risk of killing civilians will be close to nil out at sea, robot attack craft might be the smartest weapon to use, particularly in an emerging era of supersonic anti-ship missiles that will imperil aircraft carriers and other large vessels. In the air, robots will pilot advanced jets built to perform at extreme G-forces that the human body could never tolerate. As Peter Singer has observed in his book Wired for War, robots -- at least in the United States military -- are now implementing the swarming concept that my partner David Ronfeldt and I developed over a decade ago: the notion of attacking from several directions at the same time.

All this means that the moratorium Christof Heyns called for is likely to be dead on arrival if it ever gets to the U.N. Security Council -- some veto-wielding members have no intention of backing away from intelligent-machine warfare. Also, those who keep the high watch in many other countries are no doubt going to seek the diffusion, rather than the banning, of armed robots. However, the concerns that Heyns expressed are important ones. Yes, we should take care to protect noncombatants, but I think the case can be made that robots will do no worse, and perhaps will do better than humans, in the area of collateral damage. They don't tire, seek revenge, or strive to humiliate their enemies. They will make occasional mistakes -- just as humans always have and always will.

As to Heyns's worry that war will become too attractive if it can be waged by robots, I can only reaffirm Gen. William Tecumseh Sherman's assessment: "War is hell." He was right during the Civil War, and the carnage of the nearly 150 years since -- perhaps the very bloodiest century-and-a-half in human history -- has done nothing at all to disprove his point. So the coming of lethal robots, as with other technological advances, will likely make war ever deadlier. The only glimmer of hope is that on balance, and contrary to Heyns's concern, the cool, lethal effectiveness of robots properly used might, just might, give potential aggressors pause, keeping them from going to war in the first place. For if invading human armies, navies, and air forces can be decimated by defending robots, the cost of aggression will be seen as too high. Indeed, the country, or group of countries, that can gain and sustain an edge in military robots might have the ultimate peacekeeping capability.

Think of Gort and his fellow alien robots from the original Day the Earth Stood Still movie. As Klaatu, his humanoid partner, makes clear to the people of Earth, his alliance of planets had placed their security in the hands of robots programmed to annihilate any among them who would break the peace. A good use of lethal robots for a greater humane purpose.


National Security

How to Protect Yourself from the Online Axis of Evil

What has happened to the notion of cyberdefense?

North Korea and Iran are viewed as threats to the world because of their potential to field weapons of mass destruction, but they are far more likely to focus their malfeasance on "mass disruption" via cyber attacks. Should either state ever step out of nuclear line, overwhelming retaliation would follow. But in cyberspace, both Tehran and Pyongyang are credible powers capable of and apparently quite willing to make considerable mischief. Iran appears to have mounted a serious attack on the Saudi oil industry recently, wiping out critical data on tens of thousands of machines with the so-called Shamoon virus. North Korea is thought to have just attacked its southern neighbor's banking sector -- the latest in a steady stream of cyber strikes spanning several years.

Yet there has been no response-in-kind, which suggests that cyber attackers will press on with a growing sense of impunity, making the task of deterring them quite difficult. Indeed, instead of posing retaliatory threats -- the key to successful deterrence during the Cold War -- there appears to be a willingness to live under cyber siege, relying instead on improving defenses. Over the past few days, while all eyes have been riveted on the Snowden leaks, word has also gotten out, more quietly, about ongoing American efforts to craft cyber defensive coalitions with countries in the Persian Gulf region and in Northeast Asia. Information about these alliances remains closely held, but it would be hard to imagine them arising without Saudi Arabia and Qatar in response to the perceived threat from Iran, or without Japan and Taiwan when it comes to dealing with North Korea.

It is a very good thing that these alliances are forming. That they may rely on American cybersecurity strategies is a bit more problematic. The United States rates quite low in terms of its defensive capabilities. Last summer at the Aspen Security Forum, General Keith Alexander, head of both Cyber Command and the National Security Agency, publicly rated American cybersecurity a "3" on a scale of 1-10. Former government cyber czar Richard Clarke was a tougher grader, giving Washington a "1." The point is, it is one thing to build cyber defensive alliances, quite another to actually mount robust defenses. And ambiguous American threats either to "pre-empt" imminent cyber attacks or to respond with physical force are simply not very credible. It is extremely difficult to catch enemy electrons while they are massing -- or whatever they do before being launched -- and highly unlikely that the U.S. military will be authorized to go off and break things, and possibly kill people, in response to even costly cyber disruptions.

So the defensive alliances forming up should perhaps start, not so much by taking American direction as by opening up a spirited discourse on alternative cybersecurity paradigms. This would be good both for them and for the United States, as it is clear that American reliance on anti-virals and firewalls will not get the job done. One master hacker of my acquaintance likes to put it this way: "There are no firewalls, because they only recognize what they already know." This does not mean throwing these defenses out completely, as they do have some value. But it does mean shifting emphasis to more effective means.

For reasons that still baffle me, the ubiquitous use of very strong encryption has been neglected, sometimes resisted. Indeed, under American law there was a time not too long ago when it was illegal for the average citizen to have and use the strongest code-making capabilities. This silliness stopped some years ago. Yet even with our first cyber president in office -- he is very attached to his personal information technology suite -- the bully pulpit is hardly being used to tell Americans to encrypt, encrypt, encrypt.

There are additional strategies that the emerging cyber defensive alliances should consider, perhaps the best among them being the resort to concealment in "the Cloud," an airy place in cyberspace outside one's own system where information can be encrypted, broken into several pieces, stored with much improved security, and called back home with a click. Closer in, the unused capacity of a friendly network -- "the fog" -- offers another way to move information around and keep it concealed. Both these approaches deal with another of the problems that my hacker friend describes: "If data just sits in your system, someone will get at it. Data at rest is data at risk. Keep it moving."
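The scatter-and-recall idea -- encrypt, break into pieces, store the pieces separately, reassemble on demand -- can be sketched in a few lines. Everything below is illustrative only: the XOR-with-SHA-256 "cipher" is a stand-in for a vetted algorithm such as AES, and the named stores are placeholders for real, separately hosted cloud locations.

```python
import hashlib
from itertools import count

def keystream(key: bytes, n: int) -> bytes:
    # Illustrative keystream: SHA-256 in counter mode.
    # A sketch only -- real deployments should use a vetted cipher.
    out = bytearray()
    for i in count():
        if len(out) >= n:
            break
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return bytes(out[:n])

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: the same call decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def scatter(blob: bytes, n_pieces: int) -> list:
    # Break the ciphertext into pieces for storage in separate places.
    size = -(-len(blob) // n_pieces)  # ceiling division
    return [blob[i:i + size] for i in range(0, len(blob), size)]

# Hypothetical "stores" stand in for separate cloud locations.
key = b"shared-secret"
secret = b"Data at rest is data at risk. Keep it moving."
pieces = scatter(encrypt(key, secret), 3)
stores = {f"store-{i}": p for i, p in enumerate(pieces)}

# Recall: fetch the pieces, reassemble, decrypt.
recovered = encrypt(key, b"".join(stores[k] for k in sorted(stores)))
assert recovered == secret
```

No single store holds the whole secret, and even a thief who collects every piece still faces the encryption -- which is the point of combining the two measures rather than relying on either alone.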

Not only will consideration of these alternative strategies improve security against the threats posed by Iran and North Korea, but adopting them would go a long way toward dealing with the nettlesome intrusions that are believed to emanate from China. President Obama has made very little progress with President Xi on cyber matters; in addition to jawboning Beijing, Washington should develop a sense of urgency about getting better at cyberdefense. After all, when the head of Cyber Command and a long-time senior official with a cyber portfolio both give failing marks to our cyberdefenses, it is high time to do something in addition to talking. If there is ever to be an effective behavior-based agreement to refrain from cyber attacks on, say, civilian infrastructure, I guarantee it will only happen when all parties have strong defenses in place as well.

So let me suggest that, for all the attention that will no doubt be devoted to the PRISM debate -- so relevant to the matter of dealing with terrorist networks -- equal time should be given to the matter of developing defenses as strong as the alliances that are being forged against the looming threat of cyberspace-based weapons of mass disruption. For it is possible, in the course of what may become a protracted, divisive domestic debate about big-data intelligence gathering methods, that the crucial need to improve our and our allies' cyberdefenses will be neglected. The anguish over possibly undue intrusions into our privacy will pale in comparison to the economic, social, and strategic costs that will be inflicted on the world -- not just the United States -- if we fail to act now to improve cyberdefenses.
