Moral Machines Guru takes moral high ground on autonomous weapons

Posted on 09 March 2013 by nsharkey

Wendell Wallach is well known for co-authoring, with Colin Allen, the important book ‘Moral Machines’, which presents the best state-of-the-art description and discussion of how computers could, or might one day, make moral decisions about humans.

Now Wendell has come out strongly against the whole notion of autonomous weapons. He sees them as evil in themselves, ‘mala in se’:

“Research on artificial intelligence over the past 50 years has arguably been a contemporary Tower of Babel. While AI continues to be a rich field of study and innovation, much of its edifice is built upon hype, speculation, and promises that cannot be fulfilled. The U.S. military and other government agencies have been the leaders in bankrolling new computer innovations and the AI tower of babble, and they have wasted countless billions of dollars in the process. Buying into hype and promises that cannot be fulfilled is wasteful. Failure to adequately assess the dangers posed by new weapons systems, however, places us all at risk.

The long-term consequences of building autonomous weapons systems may well exceed the short-term tactical and strategic advantages they provide. Yet the logic of maintaining technological superiority demands that we acquire new weapons systems before our potential adversaries—even if in doing so we become the lead driver propelling the arms race forward. There is, however, an alternative to a totally open-ended competition for superiority in autonomous weapons.

A longstanding concept in just war theory and international humanitarian law is that certain activities such as rape and the use of biological weapons are evil in and of themselves—what Roman philosophers called “mala in se.” I contend that machines picking targets and initiating lethal and nonlethal force are not just a bad idea, but also mala in se. Machines lack discrimination, empathy, and the capacity to make the proportional judgments necessary for weighing civilian casualties against achieving military objectives. Furthermore, delegating life and death decisions to machines is immoral because machines cannot be held responsible for their actions.

So let us establish an international principle that machines should not be making decisions that are harmful to humans.”

Read the full article at Science Progress – Terminating the Terminator: What to do About Autonomous Weapons

Noel Sharkey, PhD DSc FIET FBCS CITP FRIN FRSA, is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield, and was an EPSRC Senior Media Fellow (2004-2010).

