Foust’s case for killer robots engaged: Autonomous weapons are no phantom menace

Posted on 21 June 2013 by Mark Gubrud

ForeignPolicy.com blogger Joshua Foust announced on May 14 that he had identified a “liberal” case for killer robots, including the seemingly incompatible assessments that they could “do it better”, where “it” means making the decision to kill, and that they are but a “phantom” (and therefore do not demand a serious response, such as discussion of a treaty). Foreign Policy has not accepted a response; this post is the response that was offered.

By now we’ve all become accustomed to the idea of somebody sitting in a trailer, gazing at a screen with a joystick in hand, and pushing a button to blow up a person, or usually more than one, somewhere on the other side of the world. Indeed, when we first heard of this, it was just another bit of disorienting news in the brave new post-9/11 world. George W. Bush was president then, and like it or not, he was going to do it. Then came Barack Obama, the avatar of liberal hopes, to double Pentagon procurement spending for drones in his first year in office, and execute six times as many CIA drone strikes in Pakistan as Bush had, with the approval of a majority of Democrats as well as Republicans. So it seems odd for Joshua Foust to be announcing now that he’s found “a liberal case for drones,” as if liberals (as a political demographic) weren’t already solidly on board with the president who seems to have all but defined himself as the man with the drone cojones.

However, the drones that Foust is out to build a case for are drones not as we know them. Autonomous drones, or autonomous weapons in general, are the new shock of the new, and not yet fastened in the hearts of liberals, conservatives, or even military professionals. In fact, not many people are yet aware that in November the United States became the first nation to have an openly declared policy for the development, acquisition and use of autonomous weapon systems (AWS), weapons which “once activated, can select and engage targets without further intervention by a human operator.”

Many of those who have heard of this are under the impression that the policy is a moratorium on systems that kill without a “human in the loop.” It’s not. According to Pentagon spokesman James Gregory, the policy “mandates a review by senior defense leaders [two under-secretaries plus the chairman of the Joint Chiefs] before entering formal development and again before fielding” of systems intended to kill people by autonomous machine decision. “This review process,” says LTC Gregory, “doesn’t really impose a moratorium on anything.”

In fact, the new policy clears the way for aggressive development of a technology that the military has been reluctant to embrace, and has actually backed away from in recent years.

Older systems such as robotic antisubmarine mines have been phased out, and systems in development such as loitering autonomous hunter-killer missiles have been canceled or redirected to include communications links to human operators, while still retaining capabilities for autonomous operation. In 2005, the US Air Force ended work on LOCAAS, a small air-launched missile that was to hunt for tanks and missile launchers, using its own sensors, and attack them autonomously. It was superseded by SMACM, a larger missile with a two-way link to an operator. The Army canceled its autonomous Loitering Attack Missile, but has recently been experimenting with Switchblade, a small ground-launched missile that can loiter and has both remote-control and semi-autonomous modes.

At the same time, a growing contingent, both inside and outside the Pentagon, has declared the inevitability of AWS and called for “policies that introduce a higher degree of autonomy to reduce the manpower burden and reliance on full-time high-speed communications links while also reducing decision loop cycle time.” The November 2012 Directive, which “Establishes DoD policy and assigns responsibilities for the … design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems,” appears to resolve this debate in favor of a commitment to autonomous technology and AWS development, integration of AWS into “operational mission planning,” and use in war.

Foust quotes only one line from the Directive, the one that probably does best summarize its approach:

“Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

The problem is that nowhere does the document explain what the “appropriate levels” are; the strongly implied assumption is that human judgment is not always required at the level of the decision to take a human life—that this is something which can appropriately be delegated to a machine.

While opening the door for delegation of lethal authority to electronic decision makers, the Directive places the burden of responsibility for their decisions on human commanders and operators. If the systems themselves are not able to ensure that they operate “in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE)” it is up to “Persons who authorize the use of, direct the use of, or operate” AWS to ensure this, despite not being able to know exactly what situations the systems will encounter, let alone how they will behave. The same can be said of a commander sending soldiers into a fight, but in most cases the soldiers themselves are supposed to be responsible for not shooting civilians.

The fact that soldiers often fail in this regard is held by Foust, and many other apologists for killer robots, to deflate the basic moral objection to machines that kill autonomously. After all, going by the record, how moral are human beings, especially in war? Perhaps emotionless robots can be programmed to be more scrupulously faithful to international humanitarian law (IHL) and military ROEs, going one better by exposing themselves to greater risk in, for example, checking whether a wounded opponent is still armed and dangerous, instead of just shooting him again to make sure.

While one may imagine such happy scenarios, in practice, for the immediate future, artificial intelligence (particularly in mobile robots) can’t come close to human performance in the cognition, reasoning and judgment required to distinguish combatants from noncombatants, to understand human behavior in combat situations, or to weigh military gains against harm or risk to civilians before deciding on the use of weapons. As long as this is so, giving machines increasing latitude to decide between one action and another, once the action starts, is only loosening the degree of control by responsible human judgment.

Will emotional humans, under stress and in danger, be as responsible in their use of AWS as the Directive demands, or will using AWS become a way to evade responsibility? Foust suggests that determining why an AWS made a bad decision could be “as simple as plugging in a black box,” and that programmers could be held responsible for war crimes caused by bugs in code. But in reality, what investigation would likely reveal is that the technology encountered a condition that had not been anticipated—the kind of thing that leads to millions of software failures every working day, most of which at least do not have fatal consequences.

Foust has it entirely backwards when he says that the Pentagon’s declaration about “appropriate levels of human judgment” implies that “the U.S. government isn’t looking to develop complex behaviors in drones.” What would we be talking about, if that were true? Increasing levels of autonomy implies increasing levels of complexity and capability of weapon systems to classify objects, interpret situations, predict the next move, and to decide how to act. Saying that it is the commanders’ and operators’ responsibility to understand the limitations of these systems, so that they can be used as they are, is clearing the way for further development of these systems without setting any ultimate limits.

This becomes clear in the Directive’s definition of “semi-autonomous weapon systems” (SAWS). These are fully green-lighted for development, acquisition and use without needing any special signatures or high-level oversight. If something is a semi-autonomous weapon system, it can be developed, funded, purchased, integrated into operational planning and used in combat today, or ASAP.

Two kinds of SAWS are defined. The first are systems with capabilities for

“…acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets, provided that human control is retained over the decision to select individual targets and specific target groups for engagement.”

Got that? For example, you might find an operator sitting at a console, and on the screen he sees some blurry figures through a FLIR camera. The system places a cursor over the figures and says “TARGET GROUP IDENTIFIED.” The operator says “Engage,” and the system does the rest. Obviously, it takes nothing more than a trivial software modification, or the throw of a switch, to make such a system fully autonomous. At least three companies already produce sentry systems which have capabilities of this sort. Nor is there any reason it could not be a mobile robot sent out to patrol an urban environment.

The second kind of SAWS, also green-lighted by the Directive, comprises “homing munitions” that are sent out to seek a target rather than being locked onto a particular target before launch. These SAWS clearly include missiles, cruise missiles and robotic torpedoes, and there is no clear exclusion of autonomous drones and ground robots (even if you think self-destruction would be required to meet the definition of a “munition”). Here, the operator is supposed to be responsible for following “tactics, techniques and procedures” which “maximize the probability that the only targets within the seeker’s acquisition basket” are the intended targets. However, most seekers have at least some ability to distinguish intended targets from “clutter,” and as this technology develops, the sophistication of target identification and discrimination is expected to increase. First we rely on the technology to seek out missile launchers within some limited search area, and then it gets good enough to distinguish missile launchers from gasoline trucks or the piping in a water treatment plant, so we can expand the search area. This process can be continued without limit, and eventually we are dispatching highly intelligent combat robots on search-and-destroy missions over a wide area as if they were human commandos.

In effect, the second definition of SAWS has erased any meaningful distinction between this kind of weapon and the fully autonomous ones that are supposed to require senior review (the so-called moratorium), since as soon as the missile is launched it becomes fully autonomous. There is thus no red line left to cross between the weapons that are fully approved under the new US policy, set by this administration, and, to put it baldly, the Terminator—or anything in between. It becomes a matter of technology and time.

It is curious that so many who write on both sides of this question jeer at science fiction, as if its warnings could be dismissed just because they were delivered through expressionistic art and scenarios that don’t pass analytical muster. How can we talk about the future, and about how technology may reshape or destabilize our world, without being subject to this particular form of ad hominem ridicule? Perhaps it may help to draw parallels with the indisputable past.

Obama’s assumption and expansion of the presidential prerogative to drone is eerily reminiscent of Harry Truman’s assumption of the authority to use nuclear weapons and to largely set policy for their further development, production and deployment. Liberals mostly went along with that, too, at least until realizing that the power to initiate nuclear war was the power to end human history. By the time popular (and mostly liberal) dissent against an unbounded nuclear arms race rose to the level of a potent political force, it was almost too late. We were very lucky—perhaps owing largely to the moderating role of human judgment in crises and accidents at the brink of apocalypse—to survive the Cold War, as well as two decades (and counting) of the new world disorder.

Killer robots are not the only element of the global technological arms race, but they are currently the most salient, rapidly-advancing and fateful. If we continue to allow global security policies to be driven by advancing technology, then the arms race will continue, and it may even reheat to Cold War levels, with multiple players this time. Robotic armed forces controlled by AI systems too complex for anyone to understand will be set in confrontation with each other, and sooner or later, our luck will run out.

We can stop this. The line to draw is clear: No autonomous initiation of violence, no machines deciding on their own to kill human beings or to start or escalate conflicts in which people will be killed. It is a bright red line that everyone can understand, and it defines a strong moral principle that we instinctively know is right. To enshrine this principle in a global ban on autonomous weapons is a necessary step in our unending effort to secure the human future.
