Icelandic research institute shuns autonomous weapons

Posted on 02 September 2015 by nsharkey


The Icelandic Institute for Intelligent Machines.

The Icelandic Institute for Intelligent Machines (IIIM) has issued an ethical policy that makes it the first Artificial Intelligence research and development group to reject the development of technologies intended for military operations. IIIM is an independent research institute affiliated with Reykjavik University in Reykjavik, Iceland.

“It is only fitting that a research center in Iceland should field such a policy,” says Kristinn R. Thórisson, Managing Director of IIIM. “A nation without a standing army and virtually no history of war in its 1100 years.”

Thórisson believes that Artificial Intelligence has great potential for the immediate benefit of society. He asks, “Why should the taxpayers’ money fund autonomous weapons meant for killing humans when they could be funding applications for civilian uses, with enormous immediate benefits to society? Why not spend the large sums of money poured into weapons development instead for a peaceful, civilian agenda?”

He adds a strong note of caution: “When people think of a war waged with machines — autonomous killing machines — they may imagine armies of robots fighting other armies of robots. And no one gets hurt, right? But evidence so far tells us that the reality may in fact be very different — much more likely is the scenario of armies of killer robots set against individuals, groups and even nation states who are not in a good position to defend themselves, or worse, largely at a loss to do so.”

It is certainly heartwarming to see an AI R&D institute showing such a great sense of social responsibility and it would be great to see others following their lead.

This comes after Canada’s Clearpath Robotics became the first robotics company to issue a policy against autonomous weapons systems. The cards are stacking up against autonomous weapons, with support for a ban this year from Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates and Noam Chomsky. The Future of Life Institute issued an open letter against AI weapons last month with thousands of signatures from prominent figures in the field.

The full ethics policy of IIIM is given below and can also be found at http://www.iiim.is/about-iiim/ethics-policy/. For more information about the IIIM, visit http://iiim.is or email info@iiim.is.

The Board of Directors of IIIM believes that the freedom of researchers to explore and uncover the principles of intelligence, automation, and autonomy, and to recast these as the mechanized runtime principles of man-made computing machinery, is a promising approach for producing advanced software with commercial and public applications, for solving numerous difficult challenges facing humanity, and for answering important questions about the nature of human thought.

A significant part of all past artificial intelligence (AI) research in the world is and has been funded by military authorities, or by funds assigned various militaristic purposes, indicating its importance and application to military operations. A large portion of the world’s most advanced AI research is still supported by such funding, as opposed to projects directly and exclusively targeting peaceful civilian purposes. As a result, a large and disconcerting imbalance exists between AI research with a focus on hostile applications and AI research with an explicitly peaceful agenda. Increased funding for military research has a built-in potential to fuel a continual arms race; reducing this imbalance may lessen chances of conflict due to international tension, distrust, unfriendly espionage, terrorism, undue use of military force, and unjust use of power.

Just as AI has the potential to enhance military operations, the utility of AI technology for enabling perpetration of unlawful or generally undemocratic acts is unquestioned. While less obvious at present than the military use of AI and other advanced technologies, the falling cost of computers is likely to make highly advanced automation technology increasingly accessible to anyone who wants it. The potential for all technology of this kind to do harm is therefore increasing.

For these reasons, and as a result of IIIM’s sincere goal to focus its research towards topics and challenges of obvious benefit to the general public, and for the betterment of society, human livelihood and life on Earth, IIIM’s Board of Directors hereby states the Institute’s stance on such matters clearly and concisely, by establishing the following Ethical Policy for all current and future activities of IIIM:

1 – IIIM’s aim is to advance scientific understanding of the world, and to enable the application of this knowledge for the benefit and betterment of humankind.

2 – IIIM will not undertake any project or activity intended to (2a) cause bodily injury or severe emotional distress to any person, (2b) invade the personal privacy or violate the human rights of any person, as defined by the United Nations Declaration of Human Rights, (2c) be applied to unlawful activities, or (2d) commit or prepare for any act of violence or war.

2.1 – IIIM will not participate in projects for which there exists any reasonable evidence of activities 2a, 2b, 2c, or 2d listed above, whether alone or in collaboration with governments, institutions, companies, organizations, individuals, or groups.

2.2 – IIIM will not accept military funding for its activities. ‘Military funding’ is defined as any and all funds designated to support the activities of governments, institutions, companies, organizations, and groups, explicitly intended for furthering a military agenda, or to prepare for or commit to any act of war.

2.3 – IIIM will not collaborate with any institution, company, group, or organization whose existence or operation is explicitly, whether in part or in whole, sponsored by military funding as described in 2.2 or controlled by military authorities. For civilian institutions with a history of undertaking military-funded projects a 5-15 rule will be applied: if for the past 5 years 15% or more of their projects were sponsored by such funds, they will not be considered as IIIM collaborators.

Noel Sharkey, PhD, DSc, FIET, FBCS, CITP, FRIN, FRSA, is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and was an EPSRC Senior Media Fellow (2004–2010).
