
“Three Laws Safe?” Autonomous Robots and Warfare by Dr. Paul T. Mitchell


The X-47B looks like something out of a sci-fi space opera: more Cylon raider than fighter jet.  The analogy is not so far-fetched.  It recently passed a series of airworthiness evaluations, and the next step will be testing autonomous landings aboard an aircraft carrier sometime next year.  The significance may be lost on some.  Among all the tasks that confront pilots, none, including air-to-air combat, is more stressful than landing an aircraft on a moving surface smaller than most parking lots and pitching in all three dimensions.  Further, the complex mix of men and machines on crowded carrier decks makes them among the most dangerous workspaces in existence.  An aircraft that can manage these tasks with very limited human guidance is a major technological accomplishment.


Feats such as these have sparked the imagination of many and raise questions about the progress of autonomy in weapons.  Lately, the blogosphere has come alive with speculation about the development of autonomous robots on the battlefield.  David Betz of the King’s College War Studies department has suggested that technology, policy, and military practice are all leading towards a future of autonomous robots in the conduct of warfare.  Steve Metz of the US Army War College has drawn similar conclusions, noting the growing challenge of recruiting (and affording) sufficient numbers of troops to deal with the many challenges confronting the United States.  LSE professor Christopher Coker’s Waging War Without Warriors discusses so-called “Transhuman Warfare” in terms reminiscent of James Cameron’s Terminator franchise.

We have to be clear: there is an enormous difference between today’s Predator B drone aircraft and autonomous robots.  UAVs, while capable of taking off and landing by themselves and flying unassisted to patrol areas, are not true autonomous robots.  In flight they are under the full control of humans, and they can neither identify a target nor launch a weapon on their own.  Indeed, while the issue of targeted killings by drones is worthy of debate, there is no functional difference between an airstrike conducted by a manned F-16 and one conducted by a Predator B: a human being pulls the trigger in both instances.  It just so happens that with a UAV, that human being is located thousands of miles away rather than on the scene.  In both cases, the result is the same.

Further, we should acknowledge that modern militaries have a legitimate interest in robotics.  The economic and social costs of warfare are spiraling ever higher, and while we may decry the use of force, our governments continue to see great utility in employing it for a growing variety of purposes.  Even now, many who are normally opposed to war in general are demanding that the international community “do something” about the situation in Syria.  Soldiering itself is increasingly expensive, however.  Recent reports estimate that it costs between US$850,000 and US$1.4 million per soldier per year to support Afghan operations, and the separate bill for each soldier’s training and social benefits is equally large.  Rising costs have led to smaller forces.  As Metz argues, robots may be a way of dealing with this problem.  (As an aside, John Ellis’ classic, The Social History of the Machine Gun, makes a similar argument about the introduction of automatic weapons in the late nineteenth century: higher rates of fire enabled smaller forces to take on larger enemies in colonial conflicts.)

Last, to a certain degree, we already have autonomous “defence systems”, at least at sea.  The Aegis Combat System, a combination of missiles, guns, radars, and command and control software, can be placed on full automatic for ship defence, requiring no human input.  Apparently, this has never been done in an actual operational setting.  The shooting down of Iran Air Flight 655 in 1988 by the USS Vincennes was an example of how the complex Aegis system can lead to a so-called “normal accident”.  While the system was under human direction, operators became confused as to the identity of the aircraft it was tracking, mistaking the Iranian Airbus for a taxiing Iranian Air Force F-14.

Target identification is central to achieving military missions as well as to keeping friendly forces safe.  In the air, at sea, and for perimeter defence, this problem is considerably simpler than the one confronted by combat soldiers, for whom the identification of the enemy is frequently impossible before combat begins because of camouflage, concealment, and deceptive tactics.  Armed robots were introduced into Iraq but were never used in action, for reasons that appear to be linked to the targeting of friendly forces.  In the sci-fi literature, Isaac Asimov introduced his famous “Three Laws of Robotics”, developed to keep humans safe from much stronger, intelligent machines:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

But in the case of militarized robots, unless every nation conducted warfare exclusively with them, these rules clearly would not apply (indeed, they would be dangerous to apply, for entire robot armies could be stopped in their tracks by human shields and subsequently destroyed in place).  Military robots need to know when to kill and when not to; a Terminator is simply a system for programmatic genocide, which is not within the policy demands of any Western military.
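
To see why, consider a minimal sketch of the Three Laws as a strictly ordered veto.  The code below is purely illustrative; the function and its inputs are hypothetical and drawn from no real system, but it shows how an Asimov-compliant machine is paralyzed the moment an adversary shelters behind the First Law.

```python
# Purely illustrative: Asimov's Three Laws as a strictly ordered veto.
# Names, inputs, and structure here are hypothetical, not from any fielded system.

def asimov_permits(harms_human: bool, ordered_by_human: bool, self_destructive: bool) -> bool:
    """Return True only if an action survives the Three Laws, checked in priority order."""
    if harms_human:           # First Law: never injure a human being
        return False
    if not ordered_by_human:  # Second Law: obey humans, unless that breaks the First Law
        return False
    if self_destructive:      # Third Law: self-preservation, unless it breaks Laws One or Two
        return False
    return True

# An engagement that risks harming any human, such as an enemy position screened by
# human shields, is vetoed outright; the robot army halts and can be destroyed in place.
print(asimov_permits(harms_human=True, ordered_by_human=True, self_destructive=False))  # False
```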

Ronald Arkin at the Georgia Institute of Technology has raised the notion of an “ethical governor”, which would enable military robots to engage autonomously in acts of force in the same manner as a human soldier.  He goes so far as to assert both that robots can be made “ethically superior” to human soldiers and that “there is no fundamental scientific limitation to achieving the goal”.

Far be it from me to place boundaries on the possibilities of artificial intelligence (AI).  Indeed, a succession of computers, from Deep Blue’s victory over chess champion Garry Kasparov to Watson’s victory in Jeopardy! against both Brad Rutter and Ken Jennings, indicates the amazing advances of AI.  The Defense Advanced Research Projects Agency (DARPA) sponsors a series of contest challenges that have resulted in cars that can navigate themselves across vast distances in the deserts of the southwestern United States.  Yet all of these examples, including that of the Aegis system, are, to my mind, highly limited problem sets, different from the challenges presented by the complexity of land combat.

In fact, humans have their own “laws of robotics” for warfare: the principles of ius in bello, “justice in war” or “just means”, which date back to St. Augustine and sit alongside the related principles of ius ad bellum (the justice of war, or just cause).  Ius in bello comprises five principles:

  1. Distinction (knowing the difference between combatants and non-combatants);
  2. Proportionality (balancing military objectives against the damage operations will cause);
  3. Military necessity (keeping the employment of force at the lowest levels possible);
  4. Fair treatment of Prisoners of War; and
  5. “Just” weapons (rape as a weapon is evil, for example).

Now, these principles are often acknowledged more in the breach than in their actual application, but Western military forces have been paying increasing attention to them in the last twenty years.  The point is that military robots would have to be programmed with these principles in their ethical governors.  This is not simply a matter of some “wishy-washy” goal of conducting “humane warfare”.  As the pre-eminent philosopher of war, Carl von Clausewitz, argues, war has its own grammar but not its own logic.  It serves a purpose in the form of policy and is thus a controlled force serving specific ends, not simply wanton destruction.  Choices must be continuously made about the type and level of force used in war, all of which imply ethical judgments in a very subjective environment.
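
To see what “programming these principles” would actually demand, here is a minimal, purely hypothetical sketch of an engagement gate built on the in bello principles above.  This is not Arkin’s design; every name, input, and threshold is invented for illustration (fair treatment of prisoners is omitted because it is not an engagement-time decision).

```python
# Hypothetical sketch only: an "ethical governor" gate over a proposed engagement.
# This is not Arkin's architecture; every name and threshold here is invented.

from dataclasses import dataclass

@dataclass
class Engagement:
    combatant_confidence: float      # distinction: how sure are we the target is a combatant?
    expected_collateral_harm: float  # proportionality: projected harm to non-combatants
    expected_military_value: float   # proportionality: value of the military objective
    lesser_force_available: bool     # military necessity: would less force achieve the aim?
    weapon_is_lawful: bool           # "just" weapons

def governor_permits(e: Engagement,
                     min_confidence: float = 0.95,
                     max_harm_ratio: float = 1.0) -> bool:
    """Permit an engagement only if every in bello gate passes."""
    if not e.weapon_is_lawful:
        return False
    if e.combatant_confidence < min_confidence:   # distinction
        return False
    if e.lesser_force_available:                  # military necessity
        return False
    # proportionality: projected harm must not exceed the objective's value, scaled by a threshold
    if e.expected_collateral_harm > max_harm_ratio * e.expected_military_value:
        return False
    return True
```

The trouble, of course, is that every input and threshold in such a sketch (the confidence score, the “value” of an objective, the acceptable harm ratio) is precisely the subjective judgment discussed in the next paragraph; the code does not remove the judgment, it only relocates it.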

It strikes me that the first three principles pose specific problems for those seeking to devise an ethical governor, as they all involve intangibles.  While the combatant/non-combatant distinction may seem the easiest, in contemporary urban “hybrid” conflicts the ability to identify the enemy is the most difficult problem confronting soldiers.  Moreover, any ethical algorithm could be “hacked” by opponents acting outside its programmed parameters, much as modern-day insurgents take advantage of the contemporary laws of war.  Both proportionality and necessity require the soldier to weigh immediate advantages against future contingencies.  Because this judgment entails weighing present circumstances against a multitude of uncertain future possibilities, the process is inherently subjective.  Neither laws of probability nor regression curves tell us what the right course of action is.  Arkin maintains that a hypothetical ethical governor would do this job better, not being subject to fatigue and the vagaries of emotion.  Yet emotion is the most important factor informing both proportionality and necessity: it is the sympathetic impulse, our ability to put ourselves in the shoes of the “other”, that makes the judgment possible at all.  As AI researcher Douglas Hofstadter noted of the battle between Stanford’s “Stanley” car and Carnegie Mellon’s “H1” entry in the DARPA challenge:

At one point, Stanley’s video camera picked up another robot vehicle ahead of it … and eventually pulled around H1 and left it in the dust. … At this crucial moment, did Stanley recognize the other vehicle as being “like me”?  Did Stanley think as it gaily whipped by H1 “There but for the grace of God go I?” or perhaps “Aha, gotcha!”  Come to think of it, why did I write that Stanley “gaily whipped by” H1?  What would it take for a robot vehicle to think such thoughts or have such feelings?

Many assume that the technological progress of systems like the X-47B points to the inevitable development of autonomous robots.  This technological determinism, however, is belied by the human challenges that lie at the heart of war.  The idea of autonomous military robots seeks to solve some of those human problems.  For the moment at least, however, AI is not up to the challenge of the complex problems posed by target identification and by the ethical judgment necessary to engage in an act of force.  As Captain Kirk reminds us in the classic Star Trek episode “A Taste of Armageddon”, “Death, destruction, disease, horror. That’s what war is all about. That’s what makes it a thing to be avoided.”  Because war is so “unsafe”, it is best practiced by humans.

The views expressed here are those of the author alone and do not represent those of the Canadian Forces College or the Department of National Defence.

Dr. Paul T. Mitchell is a Professor of Defence Studies at the Canadian Forces College, an alumnus of Wilfrid Laurier University, and a Research Associate of the Laurier Centre for Military Strategic and Disarmament Studies.  This is the second instalment of his monthly blog, The Battle Space.



