Smart Robots? Perhaps not smart enough to be called stupid.

Posted on 18 March 2013 by nsharkey

The New York Times has entered the discussion about the Campaign to Stop Killer Robots. Columnist Bill Keller has produced a well-balanced article that looks at the pros and cons of a ban.

For the ban, he notes that

The arguments against developing fully autonomous weapons, as they are called, range from moral (“they are evil”) to technical (“they will never be that smart”) to visceral (“they are creepy”).

“This is something people seem to feel at a very gut level is wrong,” says Stephen Goose, director of the arms division of Human Rights Watch, which has assumed a leading role in challenging the dehumanizing of warfare. “The ugh factor comes through really strong.”

He then discusses the three International Humanitarian Law issues with autonomous robot weapons: (i) the inability to conform to the principle of distinction; (ii) the inability to conform to the principle of proportionality; and (iii) difficulties with accountability for mishaps or war crimes.

He brings out the usual suspect, Ron Arkin, to argue against a ban. Arkin still believes that robots could do better than humans because they don’t have emotional responses. Others argue that this is one of the main problems. The funniest comment on Keller’s article was a response to Ron Arkin:

“Professor Arkin argues that automation can also make war more humane.” This guy has obviously been a civilian all his life. Only a civilian would believe there is a humane way to kill another human being. Does he get out of the house on a regular basis?

But Arkin’s position in other respects does not now seem that far removed from those calling for a ban. “He advocates a moratorium on deployment and a full-blown discussion of ways to keep humans in charge.” Keeping humans in charge is a subtle shift in Arkin’s position that is greatly appreciated. It moves us some way toward the discussions that should be had.

However, without a ban on the development of and research into these weapons systems, they are going to end up in the US arsenal. Other countries have not said that they will observe a moratorium, and so we can expect an arms race that the US will not be able to resist.

In fact, in terms of a moratorium, Keller appears to have made an error of interpretation with regard to the recent Department of Defense directive (21 November 2012): “Last November the Defense Department issued what amounts to a 10-year moratorium on developing them while it discusses the ethical implications and possible safeguards.”

ICRAC member Mark Gubrud picks up on this error in a comment after Keller’s piece:

The DoD Directive (3000.09) does not impose any moratorium. It says that the United States will develop and use autonomous weapons.

Although it draws a line at AW that kill humans autonomously, it does not forbid crossing the line; rather, it sets forth the procedure for doing so. Four sub-cabinet level signatures are required. Other than that, the rules for AW that kill humans are essentially the same as for AW that target materiel, which the Directive approves already.

The directive also approves for immediate development and use “semi-autonomous weapons” which may automatically acquire, track, identify and prioritize potential targets, cue a human operator to their presence, and upon approval, engage them, automatically determining the timing of when to fire.

So, a semi-autonomous weapon system might detect a group of persons, highlight their dim outlines on a screen, and say to the operator “target group identified.” The operator says “engage” and the machine kills them.

Such a system already has every capability needed for full lethal autonomy. It has only been programmed to request approval. One trivial software modification will fix that, if the system doesn’t already have a switch to throw it into full autonomous mode.

DoDD 3000.09 approves such systems for immediate development, acquisition and use.

There is no moratorium; it is a full-speed charge into the unknown.

Nonetheless, Keller is clearly on the right side of the issues and shows a clear understanding: “It’s a squishy directive, likely to be cast aside in a minute if we learn that China has sold autonomous weapons to Iran.”

Although Keller is not optimistic about the chances of getting a ban on killer robots, he supports one, and ICRAC appreciates him for that:

I don’t hold out a lot of hope for an enforceable ban on death-dealing robots, but I’d love to be proved wrong. If war is made to seem impersonal and safe, about as morally consequential as a video game, I worry that autonomous weapons deplete our humanity. As unsettling as the idea of robots’ becoming more like humans is the prospect that, in the process, we become more like robots.

It is well worth reading Bill Keller’s full story and the comments that come afterwards – Smart Robots.

Noel Sharkey, PhD, DSc, FIET, FBCS, CITP, FRIN, FRSA, is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and was an EPSRC Senior Media Fellow (2004–2010).