<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>News &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/category/icracnews/news/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Mon, 23 Jun 2025 12:49:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>Statement on Ethical Considerations in Open Informal Meeting at UNGA 1st Committee</title>
		<link>https://www.icrac.net/statement-on-ethical-considerations-in-open-informal-meeting-at-unga-1st-committee/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:45:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Peter Asaro]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19944</guid>

					<description><![CDATA[UNGA Informals on LAWS ICRAC Statement on Ethical Considerations Delivered by Prof. Peter Asaro on May 13, 2025 Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and [&#8230;]]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img data-recalc-dims="1" decoding="async" width="1024" height="800" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799-1024x800.jpg?resize=1024%2C800&#038;ssl=1" alt="Peter Asaro delivering ICRAC Statement on Ethics" class="wp-image-19940" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=1024%2C800&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=300%2C234&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=768%2C600&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?w=1536&amp;ssl=1 1536w" sizes="(max-width: 1000px) 100vw, 1000px" /></figure>



<p><strong>UNGA Informals on LAWS <br>ICRAC Statement on Ethical Considerations <br>Delivered by Prof. Peter Asaro on May 13, 2025</strong> </p>



<p>Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and ethics. ICRAC is a co-founding member of the Stop Killer Robots Campaign.</p>



<p>We appreciate the organizers of this Informal Meeting including a Session on Ethical Considerations. It has been many years since ethics was the primary focus of substantive discussion within the CCW GGE meetings. Yet ethics and morality have provided a valuable basis for international law in the past, and they are precisely where we must ground new laws to prohibit and regulate AWS in the near future: in our common shared humanity, and in principles which transcend human laws, particularly human dignity in a deep sense as discussed by Prof. Chengeta, and ethical decisions as discussed by the Representative of the Holy See.</p>



<p>Whenever violent force is used, there are risks involved. But merely managing those risks is not sufficient to meet the requirements for morally justifiable killing. Understanding the reasons and the potential consequences for the use of force is required for its justification. It has been argued that AWS may be highly accurate and precise in their use of force, but accuracy and precision are not sufficient to meet the requirements for the ethically discriminate use of force, and do not begin to address the requirements of the proportionate use of force.</p>



<p>Following the outlines of the two-tiered approach advanced by the ICRC, regulated AWS would be permitted to target autonomously only in limited cases, specifically where the target is a military object by nature, such as military vehicles and installations. Even in these cases, automated targeting must still be carefully regulated to ensure that humans can safely supervise those systems.</p>



<p>But as soon as we start considering civilian objects, even those which might be used for military purposes and might be lawfully targeted under IHL, we must not permit their targeting by automated processes. The moral argument that leads to this conclusion is clear. It may be tempting to think that we can automate proportionality decisions–how much force is needed, or how much risk is acceptable, or how much collateral harm to civilians might be acceptable relative to a military objective. But the nature of proportionality judgments is fundamentally moral.</p>



<p>These decisions are inherently about values–the value of a target to a military objective, the value of a military objective to an operation and an overall strategy; the value of civilian infrastructure to a family, a community, a country; the value of a natural environment; and above all the value of human lives and the cost of taking those lives. They are also about duties, our duties to protect, our duties to each other.</p>



<p>These values are not intrinsically numerical or quantitative in nature, and assigning them such values in a computer program is arbitrary at best. Computers do not “understand” in any meaningful sense. They represent the world through mathematical abstractions that we design and understand, and from which we assign and seek meaning. Worse, training an algorithm to “learn” these values from a dataset is to abdicate any human responsibility in establishing the values represented in the systems, including the value of human life and the necessary conditions of human flourishing.</p>



<p>These are moral values, only understood through the lived experience of human life, moral reflection, and ethical development. In those limited cases where the decision to end a human life can be morally justified, it must be made by a moral agent who truly understands these values. Any life lost by the decision of an algorithm is, by definition, taken arbitrarily. ICRAC appreciates the work of the CCW GGE and this section of the latest draft of the Chair’s Rolling Text:</p>



<p><em>States should ensure context-appropriate human judgement and control in the use of LAWS, through the following measures &#8230; [which] &#8230; includes ensuring assessment of legal obligations and ethical considerations by a human, in particular, with regard to the effects of the selection and engagement functions.</em></p>



<p>The ethical considerations of the use of force must remain a matter of human judgement. We must not eliminate ethical considerations altogether by delegating them to machines wholly incapable of grasping such considerations. Human dignity requires that we consider a human as human–no machine can do this for us.</p>



<p>Similarly, for anti-personnel AWS, designing systems to autonomously target people would require creating digital representations of people, or target profiles. The same moral logic applies here.</p>



<p>From a legal perspective, it could be argued that unmounted infantry are military objects by nature, and can pose a threat just as a tank does. But there is an important moral difference between targeting people directly and targeting a tank while accepting that the people inside it may be killed. People are not to be treated as objects, but always as moral subjects.</p>



<p>The aim of war, and the moral justification of killing in war, depends critically on using force to diminish the ability of your adversary to use force against you. The ultimate aim is not to harm or kill the enemy directly; this is only a means to an end, namely the end of hostilities. Targeting a human directly is to make the destruction of a human a goal in itself, rather than the true goal of eliminating the threat they pose. This might sound like a minor distinction, but by making the targeting and killing of humans the goal of a machine, rather than the elimination of military threats, we stand to vastly undermine human dignity.</p>



<p>By designing systems to target people directly, we effectively “pre-authorize” the moral judgement to take their lives. By pre-authorizing the killing of humans, and making personnel the targets of autonomous weapons, we would fundamentally violate and diminish human dignity. If we accept that a soldier on the battlefield can be directly targeted, without a human moral judgement or moral justification, then we make it more acceptable to do so in other contexts as well.</p>



<p>When we violate human dignity, it is not just the immediate victim who loses their dignity. All of humanity suffers from this loss. This is why we feel such moral disgust at the injustices of slavery, and torture, and the dropping of bombs on children–these atrocities undermine our collective dignity as human beings and offend our moral sensibility.</p>



<p>While the use of violent force against unjust aggression is sometimes necessary, it is our moral responsibility to ensure that force is used justly. The only way to ensure that force is used justly is through moral judgement, and this requires a moral agent. Machines and automated algorithms, however sophisticated they may appear, are not moral agents, and are not capable of moral judgements–only thin and arbitrary approximations. We must not delegate our morality to machines, as doing so threatens the very essence of our human dignity.</p>



<p>To quote the wise words of Christof Heyns, “War without reflection is mechanical slaughter.”</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19944</post-id>	</item>
		<item>
		<title>Statement on Security in Open informal consultations at UN GA</title>
		<link>https://www.icrac.net/statement-on-security-in-open-informal-consultations-at-un-ga/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:11:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19933</guid>

					<description><![CDATA[Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”. Thank you Chair, Presenters, Delegates, My name is Dr. Matthew Breay Bolton, I am Co-Director of Pace University’s International Disarmament Institute and a member of the International Committee for Robot Arms Control [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="576" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&#038;ssl=1" alt="" class="wp-image-19934" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1536%2C864&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></figure>



<p>Thank you, Chair, Presenters, Delegates,</p>



<p>My name is Dr. Matthew Breay Bolton; I am Co-Director of Pace University’s <a href="https://www.pace.edu/dyson/faculty-and-research/research-centers-and-initiatives/international-disarmament-institute">International Disarmament Institute</a> and a member of the International Committee for Robot Arms Control (ICRAC).</p>



<p>I would like to raise the importance of thinking about <em>human</em> security and protecting the integrity of the natural environment, considerations beyond traditional interpretations of security as strategic stability.</p>



<p>In this regard, I would like to highlight a report recently published by Pace’s International Disarmament Institute, “<a href="https://bpb-us-w2.wpmucdn.com/blogs.pace.edu/dist/0/195/files/2025/05/Considerations-for-a-Victim-Assistance-Provision-in-a-Treaty-Banning-Killer-Robots-Submission-Draft-26-March-2025.pdf">Considering Victim Assistance and Remediation Provisions for a Treaty on Killer Robots</a>.”</p>



<p>International diplomatic and advocacy discussions surrounding a possible treaty on autonomous weapons systems – “killer robots” – have neglected consideration of provisions on victim assistance and remediation. This departs from an almost three-decade trend in treaties banning and regulating weapons, which have included “positive obligations” to assist affected communities and remediate contaminated environments.</p>



<p>Autonomous weapons systems have not yet been widely deployed and thus there are few who might be considered victims. Moreover, one hopes that a treaty will stymie widespread use of killer robots. Nevertheless, it is possible that some states will remain outside any eventual treaty and some non-state actors may remain outside the norm and may use autonomous weapons, whether in armed conflict, policing or terrorism. Therefore, it is important for diplomats and advocates to discuss whether positive obligations to address harms from killer robots belong in a treaty regulating and/or banning them. If so, further consideration should be given to the scope and shape of such provisions on victim assistance and remediation in advance of any negotiations.</p>



<p>To phrase this as a set of questions for the panelists:</p>



<ul class="wp-block-list">
<li>If an autonomous weapon sinks a ship, who would be responsible for addressing the resulting pollution, environmental injustices and insecurities? </li>
</ul>



<ul class="wp-block-list">
<li>If civilians are harmed or disabled by the use of an autonomous armed drone, how might we secure their medical care and rehabilitation, as well as prosecution of those responsible? How would we give them satisfaction that justice is secured?</li>
</ul>



<p>The specificity of autonomous weapons systems means that diplomats and activists should not simply “copy and paste” the victim assistance and remediation provisions from other instruments into a killer robots treaty. In particular, care should be taken to ensure that provisions fill legal gaps and/or strengthen rather than undermine existing obligations.</p>



<ul class="wp-block-list">
<li>What complementarities are relevant, not only in International Humanitarian Law and weapons treaties, but also in the UN Voluntary Trust Fund on Torture or the Convention on the Rights of Persons with Disabilities?</li>
</ul>



<p>Diplomats, civil society advocates, humanitarian workers and activists engaged in discussions of a potential treaty on autonomous weapons systems should consider:</p>



<ul class="wp-block-list">
<li>Whether to include positive obligations addressing possible harms resulting from the use of killer robots, such as victim assistance and remediation of contaminated environments;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of precedent offered by recent international treaties and norms on weapons, which have included provisions on victim assistance and remediation of contaminated land;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of other normative frameworks for redress and remediation, such as from human rights and environmental law;</li>
</ul>



<ul class="wp-block-list">
<li>How to ensure that possible provisions fill legal gaps and strengthen rather than undermine existing obligations.</li>
</ul>



<p>We would be interested to hear from panelists, as well as states here today, their thoughts on the human and environmental security implications of autonomous weapons systems, particularly how to remedy the harms resulting from their use, such as through practices of victim assistance and environmental remediation.</p>



<p>This is among several dimensions of autonomous weapons that have not yet been discussed in the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS) mandated by the Convention on Certain Conventional Weapons (CCW). Discussion of these issues here demonstrates the potential value of this forum.</p>



<p>Thank you for the opportunity to address this meeting!</p>



]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19933</post-id>	</item>
		<item>
		<title>ICRAC submission to the United Nations Secretary General on &#8220;AI in the Military Domain and its Implications for International Peace and Security&#8221;</title>
		<link>https://www.icrac.net/icrac-submission-to-the-united-nations-secretary-general-on-ai-in-the-military-domain-and-its-implications-for-international-peace-and-security/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Fri, 11 Apr 2025 13:36:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19898</guid>

					<description><![CDATA[11 April 2025 The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit our views to the United Nations Secretary-General in response to Resolution A/RES/79/239 “Artificial intelligence in the military domain and its implications for international peace and security.” Founded in 2009, ICRAC is a civil society organization of experts in artificial [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>11 April 2025</em></p>



<p>The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit our views to the United Nations Secretary-General in response to Resolution A/RES/79/239 “Artificial intelligence in the military domain and its implications for international peace and security.”</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="683" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1024%2C683&#038;ssl=1" alt="" class="wp-image-19896" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1024%2C683&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=768%2C512&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1536%2C1024&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=2048%2C1365&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></figure>



<p>Founded in 2009, ICRAC is a civil society organization of experts in artificial intelligence, robotics, philosophy, international relations, human security, arms control, and international law. We are deeply concerned about the pressing dangers posed by AI in the military domain. As a member of the Stop Killer Robots Campaign, ICRAC fully endorses the Campaign’s submission to this report, and wishes to provide further detail regarding the concerns raised by AI-enabled targeting.</p>



<p>Increasing investments in AI-based systems for military applications, specifically AI-enabled targeting, present new threats to peace and security and underscore the urgent need for effective governance. ICRAC identifies the following concerns in the case of AI-enabled targeting:</p>



<ol class="wp-block-list">
<li>AI-enabled targeting systems are only as valid as the data and models that inform them. ‘Training’ data for targeting requires the classification of persons and associated objects (buildings, vehicles) or ‘patterns of life’ (activities) based on digital traces coded according to vaguely specified categories of threat, e.g. ‘operatives’ or ‘affiliates’ of groups designated as combatants. Often the boundary of the target group is itself poorly defined. Although this casts into question the validity of input data and associated models, there is little accountability and no transparency regarding the bases for target nominations or for target identification. AI-enabled systems thus threaten to undermine the Principle of Distinction, <a href="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/" data-type="link" data-id="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/">even as they claim to provide greater accuracy</a>.</li>



<li><a href="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza#_What_are_some" data-type="link" data-id="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza#_What_are_some">Human Rights Watch research</a> indicates that in the case of IDF operations in Gaza, AI-enabled targeting tools rely on ongoing and systematic Israeli surveillance of all Palestinian residents of Gaza, including with data collected prior to the current hostilities in a manner that is incompatible with international human rights law.</li>



<li>The increasing reliance on profiling required by AI-enabled targeting furthers a shift from the recognition of persons and objects identified as legitimate targets by their observable disposition as an imminent military threat, to the ‘discovery’ of threats through mass surveillance, based on statistical speculation, suspicion and guilt by association.</li>



<li>The questionable reliability of prediction based on historical data when applied to dynamically unfolding situations in conflict raises further questions regarding the validity and legality of AI-enabled targeting.</li>



<li>The use of AI-enabled targeting to accelerate the scale and speed of target generation further undermines processes for validation of the output of targeting systems by humans, while greatly amplifying the potential for direct and collateral civilian harm, as well as diminishing the possibilities for de-escalation of conflict through means other than military action.</li>
</ol>



<p>Justification for the adoption of AI-enabled targeting is based on the premise that acceleration of target generation is necessary for ‘decision-advantage’, but the relation between speed of targeting and effectiveness in overall military success, or longer-term political outcomes, is questionable at best. The ‘<a href="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/" data-type="link" data-id="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/">need’ for speed</a> that justifies AI-enabled targeting is based on a circular logic, which perpetuates what has become an arms race to accelerate the automation of warfighting. <em>Accelerating the speed and scale of target generation effectively renders human judgment impossible or, de facto, meaningless.</em> The risks to peace and security &#8211; especially to human life and dignity &#8211; are greatest for operations outside of conventional or clearly defined battlespaces. Insofar as the use of AI-enabled targeting is shown to be contrary to international law, the mandate must be to <em>not</em> use AI in targeting.</p>



<p>In this regard, ICRAC notes that the above systems present challenges to compliance with various branches of international law such as international humanitarian law (IHL), <em>jus ad bellum</em> (<a href="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf" data-type="link" data-id="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf">UN law on prohibition of use of force</a>), international human rights law (IHRL) and international environmental law. In the context of military AI’s implications for peace and security, <em>jus ad bellum</em>, a framework that prohibits aggressive military actions and regulates the conditions under which states may lawfully resort to the use of force, is the most relevant. In the same manner, IHRL is important in this context because it is designed to uphold human dignity, equality, and justice—values that form the foundation of peaceful and secure societies.</p>



<p><strong>Citations</strong></p>



<p>Alvarez, Jimena Sofia Viveros. September 4, 2024. The risks and inefficacies of AI systems in military targeting support. <em>Humanitarian Law and Policy.</em> <a href="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/">https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/</a></p>



<p>Bo, Marta and Dorsey, Jessica. April 4, 2024. Symposium on Military AI and the Law of Armed Conflict: The ‘Need’ for Speed – The Cost of Unregulated AI Decision-Support Systems to Civilians. <em>OpinioJuris</em>. <a href="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/">https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/</a></p>



<p>Chengeta, Thompson. May 2024. African Commission for Human and Peoples’ Rights submission to the UN Secretary General Report on Lethal Autonomous Weapons, ASSEMBLY RESOLUTION 78/241, Commissioner Ayele Dersso Focal Point on the ACHPR Study on AI and Other Technologies. <a href="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf" data-type="link" data-id="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf">78-241-African_Commission-EN.pdf</a></p>



<p>Human Rights Watch. September 10, 2024. Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza. <a href="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza">https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza</a></p>



<p>ICRC. 6 June 2019. Artificial intelligence and machine learning in armed conflict: A human-centred approach.<br><a href="https://www.icrc.org/sites/default/files/document_new/file_list/ai_and_machine_learning_in_armed_conflict-icrc.pdf">https://www.icrc.org/sites/default/files/document_new/file_list/ai_and_machine_learning_in_armed_conflict-icrc.pdf</a>; published version at <em>International Review of the Red Cross: Digital technologies and war</em> (2020), 102 (913), 463–479.</p>



<p>Schwarz, Elke. December 12, 2024. The (im)possibility of responsible military AI governance. <em>Humanitarian Law and Policy</em>. <a href="https://blogs.icrc.org/law-and-policy/2024/12/12/the-im-possibility-of-responsible-military-ai-governance/">https://blogs.icrc.org/law-and-policy/2024/12/12/the-im-possibility-of-responsible-military-ai-governance/</a></p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19898</post-id>	</item>
		<item>
		<title>ICRAC Statement at Informal Consultations of the August 2019 CCW GGE on LAWS</title>
		<link>https://www.icrac.net/icrac-statement-at-informal-consultations-of-the-ccw-gge-on-laws/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 20 Aug 2019 08:29:57 +0000</pubDate>
				<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=6289</guid>

					<description><![CDATA[Statement delivered by ICRAC Vice-chair Peter Asaro to the CCW GGE Informal Session on the Chair&#8217;s Non-Paper, August 19, 2019. &#8220;The International Committee for Robot Arms Control, which is a member of the Campaign to Stop Killer Robots, would like to thank the Chair for this Draft, and make the following comments and requests. First [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><img data-recalc-dims="1" loading="lazy" decoding="async" width="600" height="450" class="wp-image-6290" style="width: 600px;" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=600%2C450&#038;ssl=1" alt="" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?w=4032&amp;ssl=1 4032w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?w=2000&amp;ssl=1 2000w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?w=3000&amp;ssl=1 3000w" sizes="auto, (max-width: 600px) 100vw, 600px" /></p>



<p><strong>Statement delivered by ICRAC Vice-chair Peter Asaro to the CCW GGE Informal Session on the <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/E7600EE67661D5B0C125845B00569CED/$file/CCW_GGE.1_2019_CRP.1_Draft+Report.pdf">Chair&#8217;s Non-Paper</a>, August 19, 2019.</strong></p>



<p>&#8220;The International Committee for Robot Arms Control, which is a member of the Campaign to Stop Killer Robots, would like to thank the Chair for this Draft, and make the following comments and requests.</p>



<p>First and most importantly, we would urge the Chair to set a higher bar for the goals of this GGE and the discussions of the next two years. In particular, we would like to see the goal set as a legally binding instrument, and not merely a “Normative Framework” of unknown or unstated legal status. This GGE can and should begin discussing what a legally binding instrument that could effectively regulate autonomy in weapons systems might look like. Normativity could also imply ethical and moral norms, and we would welcome a broader discussion of the ethical and moral issues raised by autonomous weapons, particularly with respect to human dignity.</p>



<p>Further, we would like to remind the Chair that the “Guiding Principles” were developed to guide discussions of this body over the past few years, and were never meant to be a goal or outcome of those discussions. We would like to see a more substantive outcome of the current GGE.</p>



<p>Finally, we are concerned that the current draft does not mention “human control”, much less “meaningful human control” or its other variants. This is despite the fact that many States, as well as civil society, have repeatedly expressed the view that human control is central to both understanding and regulating autonomy in weapons systems. Towards this end, ICRAC has produced a new white paper entitled <a href="https://www.icrac.net/wp-content/uploads/2019/08/Amoroso-Tamburrini_Human-Control_ICRAC-WP4.pdf">“What makes human control over weapons ‘Meaningful’?”</a> You will find copies of this new report in the back of the room tomorrow. In it you will find a rigorous analysis of the requirements for human control of weapons, which could provide useful concepts for the elements of a treaty, including the positive obligation on states to ensure that weapons have the elements of control necessary for accountable and responsible use under international law. And we hope the Chair will stand by <a href="https://twitter.com/Jivan_Gj/status/1163379176780587008">his recent tweet</a>, and allow this document to inform discussions of the Legal, Technical and Military work streams, as well as a much needed ethical discussion that cuts across all three.</p>



<p>We hope that tomorrow’s formal discussions are productive, and will continue to urge this body to work on the substantive concepts necessary to build a legally binding instrument.&#8221;</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6289</post-id>	</item>
		<item>
		<title>ICRAC statement on the human control of weapons systems at the April 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-on-the-human-control-of-weapons-systems-at-the-april-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[nsharkey]]></dc:creator>
		<pubDate>Wed, 11 Apr 2018 13:06:43 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=4006</guid>

					<description><![CDATA[International Committee for Robot Arms Control Statement to the UN GGE Meeting 2018 Delivered by Prof. Noel Sharkey, on 11 April 2018 Mr Chairperson, We have been very pleased with this morning&#8217;s session as states begin to contemplate a move towards policies on the human control of weapons systems. On a pedantic note: we cannot [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-4007" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/IMG_5870-e1523451941135-300x300.jpg?resize=300%2C300&#038;ssl=1" alt="" width="300" height="300" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/IMG_5870-e1523451941135.jpg?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/IMG_5870-e1523451941135.jpg?w=640&amp;ssl=1 640w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>International Committee for Robot Arms Control<br />
Statement to the UN GGE Meeting 2018<br />
Delivered by Prof. Noel Sharkey, on 11 April 2018</p>
<p>Mr Chairperson,</p>
<p>We have been very pleased with this morning&#8217;s session as states begin to contemplate a move towards policies on the human control of weapons systems. On a pedantic note: we cannot talk about the meaningful human control of LAWS, as that would make them no longer autonomous weapons.</p>
<p>In the view of ICRAC, the control of weapons systems is more nuanced than can be captured by terms such as in-the-loop, on-the-loop, the broader loop, looping-the-loop, human oversight, and appropriate human judgement. In this way we agree strongly with the statement made by Brazil and several others in this session who believe that the devil is in the detail.</p>
<p>For human control to be meaningful we need to examine how humans interact with machines and understand the types of human-machine biases that can occur in the selection of legitimate targets. Lessons should be learned from 30 years of research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. This combination of work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principle of humanity.</p>
<p>First, there should be a focus on what the human operator <strong>MUST</strong> do in the targeting cycle. This is control by use, which is governed by targeting rules under International Humanitarian Law and International Human Rights Law. Further, international law rules that apply after the use of weapons – such as those that relate to human responsibility – must be satisfied.</p>
<p>Second, the design of weapon systems must render them <strong>INCAPABLE</strong> of operating without meaningful human control. This is control by design, which is governed by international weapons law. In terms of international weapons law, if the weapon system, by its design, is incapable of being sufficiently controlled in terms of the law, then such a weapon is illegal <em>per se.</em></p>
<p>Ideally, the following three conditions should be met for the control of weapons systems:</p>
<ol>
<li>a human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack, and will be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack;</li>
<li>there will be active cognitive participation in every attack, with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and the likely incidental and possible accidental effects of the attack; and</li>
<li>there will be a means for the rapid suspension or abortion of every attack.</li>
</ol>
<p>Thank you, Mr Chairperson.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4006</post-id>	</item>
		<item>
		<title>ICRAC Working Paper #3 (CCW GGE April 2018): Guidelines for the human control of weapons systems</title>
		<link>https://www.icrac.net/icrac-working-paper-3-ccw-gge-april-2018-guidelines-for-the-human-control-of-weapons-systems/</link>
		
		<dc:creator><![CDATA[nsharkey]]></dc:creator>
		<pubDate>Tue, 10 Apr 2018 11:33:10 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Working Papers]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3998</guid>

					<description><![CDATA[Guidelines for the human control of weapons systems [PDF] Authored by Noel Sharkey, chair of ICRAC[1] Since 2014, high contracting parties to the CCW have expressed interest and concern about the meaningful human control of weapons systems. There is an extensive scientific and engineering literature on the dynamics of human-machine interaction and human supervisory control [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-4001" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/ICRAC-WP3_CCW-GGE-April-2018-1.jpg?resize=300%2C263&#038;ssl=1" alt="" width="300" height="263" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/ICRAC-WP3_CCW-GGE-April-2018-1.jpg?resize=300%2C263&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/ICRAC-WP3_CCW-GGE-April-2018-1.jpg?w=667&amp;ssl=1 667w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p><strong>Guidelines for the human control of weapons systems [<a href="https://www.icrac.net/wp-content/uploads/2018/04/Sharkey_Guideline-for-the-human-control-of-weapons-systems_ICRAC-WP3_GGE-April-2018.pdf">PDF</a>]<br />
</strong></p>
<p>Authored by Noel Sharkey, chair of ICRAC<a href="#_ftn1" name="_ftnref1">[1]</a></p>
<p>Since 2014, high contracting parties to the CCW have expressed interest and concern about the meaningful human control of weapons systems. There is an extensive scientific and engineering literature on the dynamics of human-machine interaction and human supervisory control of machinery. A short guide is presented here consisting of two parts. Part 1 is a simple primer on the psychology of human reasoning. Part 2 outlines different levels for the control of weapons systems, adapted from human-machine interaction research, and discusses them in terms of the properties of human reasoning. This outlines which of the levels can ensure the legality of human control of weapons systems and guarantee that precautionary measures are taken to assess the significance of potential targets, their necessity and appropriateness, as well as the likely incidental and possible accidental effects of the attack.</p>
<ol>
<li><strong> A short primer on human reasoning for the control of weapons</strong></li>
</ol>
<p>A well-established distinction in human psychology, backed by over 100 years of substantial research, divides human reasoning into two types: (i) fast <em>automatic</em> processes needed for routine and/or well-practised tasks like riding a bicycle or playing tennis, and (ii) slower <em>deliberative</em> processes needed for thoughtful reasoning such as making a diplomatic decision.</p>
<p>The drawback of deliberative reasoning is that it requires attention and memory resources, and so it can easily be disrupted by factors such as stress or pressure to make a quick decision.</p>
<p>Automatic processes kick in first, but we can override them if we are operating in novel circumstances or performing tasks that require active control or attention. Automatic processes are essential to our normal functioning, but they have a number of liabilities when it comes to making important decisions such as those required to determine the legitimacy of a target.</p>
<p>Four of the known properties of automatic reasoning<a href="#_ftn2" name="_ftnref2">[2]</a> illustrate why it is problematic for the supervisory control of weapons.</p>
<ul>
<li><strong>neglects ambiguity and suppresses doubt</strong>. Automatic processes jump to conclusions. An unambiguous answer pops up instantly without question. There is no search for alternative interpretations or uncertainty. If something looks like it might be a legitimate target, in ambiguous circumstances, automatic reasoning will be certain that it is legitimate.</li>
<li><strong>infers and invents causes and intentions.</strong> Automatic reasoning rapidly invents coherent causal stories by linking fragments of available information. Events that include people are automatically attributed with intentions that fit a causal story. For example, people loading muckrakes onto a truck could initiate a causal story that they were loading rifles. This is called <em>assimilation bias</em> in the human supervisory control literature.<a href="#_ftn3" name="_ftnref3">[3]</a></li>
<li><strong>is biased to believe and confirm. </strong>Automatic reasoning favours uncritical acceptance of suggestions and maintains a strong bias. If a computer suggests a target to an operator, automatic reasoning alone would make it highly likely to be accepted. This is <em>automation bias</em>.<a href="#_ftn4" name="_ftnref4">[4]</a> <em>Confirmation bias</em><a href="#_ftn5" name="_ftnref5">[5]</a> selects information that confirms a prior belief.</li>
<li><strong>focuses on existing evidence and ignores absent evidence. </strong>Automatic reasoning builds coherent explanatory stories without consideration of evidence or contextual information that might be missing. What You See Is All There Is (WYSIATI)<a href="#_ftn6" name="_ftnref6">[6]</a>. This facilitates the feeling of coherence that makes us confident to accept information as true. For example, under WYSIATI a man firing a rifle may be deemed a hostile target, when a quick look around might reveal that he is shooting at a wolf hunting his goats.</li>
</ul>
<ol start="2">
<li><strong> Levels of human control and how they impact on human decision-making</strong></li>
</ol>
<p>We can look at levels of human control for weapons systems by adapting research from the human supervisory control literature as shown in Table 1.<a href="#_ftn7" name="_ftnref7">[7]</a></p>
<table>
<tbody>
<tr>
<td width="594">A classification for levels of human supervisory control of weapons</td>
</tr>
<tr>
<td width="594">
<ol>
<li><strong>a human deliberates about a target before initiating any attack </strong></li>
<li><strong>program provides a list of targets and a human chooses which to attack</strong></li>
<li><strong>program selects target and a human must approve before attack</strong></li>
<li><strong>program selects target and a human has restricted time to veto </strong></li>
<li><strong>program selects target and initiates attack without human involvement</strong></li>
</ol>
</td>
</tr>
</tbody>
</table>
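<p>To make the classification concrete for technically minded readers, the sketch below renders Table 1 as a small Python data structure. It is purely illustrative: the enum names and the acceptability flags are editorial shorthand for the assessments argued in the remainder of this section, not part of the original classification.</p>
<pre><code># Illustrative sketch only: the five levels of human supervisory control
# from Table 1 as a Python enum. Names are shorthand, not from the paper.
from enum import IntEnum

class ControlLevel(IntEnum):
    HUMAN_DELIBERATES = 1  # human deliberates about a target before initiating any attack
    HUMAN_CHOOSES = 2      # program provides a list of targets, human chooses which to attack
    HUMAN_APPROVES = 3     # program selects target, human must approve before attack
    TIMED_HUMAN_VETO = 4   # program selects target, human has restricted time to veto
    NO_HUMAN = 5           # program selects target and attacks without human involvement

# The assessments argued below (Level 2 only conditionally acceptable):
ACCEPTABLE = {
    ControlLevel.HUMAN_DELIBERATES: True,
    ControlLevel.HUMAN_CHOOSES: True,      # only with sufficient time for deliberation
    ControlLevel.HUMAN_APPROVES: False,    # invites automation bias
    ControlLevel.TIMED_HUMAN_VETO: False,  # time pressure reinforces automation bias
    ControlLevel.NO_HUMAN: False,          # autonomy in the critical functions
}
</code></pre>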
<p><strong>Level 1 control is the ideal</strong>. A human commander (or operator) has full contextual and situational awareness of the target area at the time of a specific attack and is able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack. There is active cognitive participation in the attack and sufficient time for deliberation on the nature of the target, its significance in terms of the necessity and appropriateness, and likely incidental and possible accidental effects. There must also be a means for the rapid suspension or abortion of the attack.</p>
<p><strong>Level 2 control could be acceptable</strong> if it is shown to meet the requirement of deliberating on potential targets. The human operator or commander should deliberatively assess necessity and appropriateness and whether any of the suggested alternatives are permissible objects of attack. Without sufficient time or in a distracting environment the illegitimacy of a target could be overlooked.</p>
<p>A rank ordered list of targets is particularly problematic as automation bias could create a tendency to accept the top ranked target unless sufficient time and attentional space is given for deliberative reasoning.</p>
<p><strong>Level 3 is unacceptable.</strong> This type of control has been experimentally shown to create what is known as <em>automation bias</em>, in which human operators come to trust computer-generated solutions as correct and disregard, or do not search for, contradictory information. Cummings examined automation bias in a study of an interface designed for supervision and resource allocation of in-flight GPS-guided Tomahawk missiles.<a href="#_ftn8" name="_ftnref8">[8]</a> She found that when the computer recommendations were wrong, operators using Level 3 control showed significantly decreased accuracy.</p>
<p><strong>Level 4 is unacceptable</strong> because it does not promote target validation and a short time to veto would reinforce automation bias and leave no room for doubt or deliberation. As the attack will take place <em>unless</em> a human intervenes, this undermines well-established presumptions under international humanitarian law that promote civilian protection.</p>
<p>The time pressure will result in operators neglecting ambiguity and suppressing doubt, inferring and inventing causes and intentions, being biased to believe and confirm, and focusing on existing evidence while ignoring absent but needed evidence. An example of the errors caused by the demands of a fast veto occurred in the 2003 Iraq war, when the U.S. Army&#8217;s Patriot missile system shot down a British Tornado and an American F/A-18, killing three aircrew.</p>
<p><strong>Level 5 control</strong> <strong>is unacceptable</strong> as it describes weapons that are autonomous in the critical functions of target selection and the application of violent force.</p>
<p>It should be clear from the above that there are lessons to be drawn both from the psychology of human reasoning and from the literature on human-machine interaction. An understanding of this research is urgently needed to ensure that human-machine interaction is designed to get the best level of human control needed to comply with the international law in all circumstances.</p>
<p><strong>Conclusion: Necessary conditions for meaningful human control of weapons.</strong></p>
<p>A commander or operator should</p>
<ul>
<li>have full contextual and situational awareness of the target area at the time of initiating a specific attack;</li>
<li>be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack, such as changes in the legitimacy of the targets;</li>
<li>have active cognitive participation in the attack;</li>
<li>have sufficient time for deliberation on the nature of targets, their significance in terms of the necessity and appropriateness of an attack and the likely incidental and possible accidental effects of the attack; and</li>
<li>have a means for the rapid suspension or abortion of the attack.</li>
</ul>
<p>&#8212;</p>
<p><a href="#_ftnref1" name="_ftn1">[1]</a> Special thanks to Lucy Suchman, Frank Sauer and Amanda Sharkey and members of ICRAC for helpful comments.</p>
<p><a href="#_ftnref2" name="_ftn2">[2]</a> D. Kahneman 2011:, Thinking, Fast and Slow, Penguin Books. He refers to the two processes as System 1 and System 2, These are exactly the same as the terms automatic and deliberative used here for clarity and consistency.</p>
<p><a href="#_ftnref3" name="_ftn3">[3]</a> J.M. Carroll and M.B. Rosson, ‘Paradox of the active user’, in J.M. Carroll (eds.), Interfacing Thought: Cognitive Aspects of Human-Computer Interaction (MIT Press, 1987), 80–111.</p>
<p><a href="#_ftnref4" name="_ftn4">[4]</a> K.L. Mosier and L.J. Skitka 1996: Human decision makers and automated decision aids: made for each other?, in: Mouloua, M. (Eds.): Automation and Human Performance: Theory and Applications, Lawrence Erlbaum Associates, 201–220.</p>
<p><a href="#_ftnref5" name="_ftn5">[5]</a> C.G. Lord, L. Ross and M. Lepper 1979: ‘Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence’, Journal of Personality and Social Psychology, 47, 1231–1243.</p>
<p><a href="#_ftnref6" name="_ftn6">[6]</a> Kaheneman ibid.</p>
<p><a href="#_ftnref7" name="_ftn7">[7]</a> For a more in-depth understanding of these analyses and references see N. Sharkey 2016: Staying in the Loop. Human Supervisory Control of Weapons, in: Bhuta, Nehal et al. (Eds.): Autonomous Weapons Systems. Law, Ethics, Policy. Cambridge University Press, 23-38.</p>
<p><a href="#_ftnref8" name="_ftn8">[8]</a> M.L. Cummings 2006: Automation and Accountability in Decision Support System Interface Design, in: Journal of Technology Studies 32: 1, 23–31.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3998</post-id>	</item>
		<item>
		<title>Short ICRAC Statement at the April 2018 CCW GGE</title>
		<link>https://www.icrac.net/short-icrac-statement-at-the-april-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[nsharkey]]></dc:creator>
		<pubDate>Tue, 10 Apr 2018 11:06:41 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3993</guid>

					<description><![CDATA[International Committee for Robot Arms Control Statement to the UN GGE Meeting 2018 Delivered by Prof. Noel Sharkey, on 10 April 2018 Mr. Chairperson, There have been very useful and interesting discussions this morning. I speak here as chair of an academic NGO: the International Committee for Robot Arms Control (ICRAC) and as a member [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-3994" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>International Committee for Robot Arms Control<br />
Statement to the UN GGE Meeting 2018<br />
Delivered by Prof. Noel Sharkey, on 10 April 2018</p>
<p>Mr. Chairperson,</p>
<p>There have been very useful and interesting discussions this morning.</p>
<p>I speak here as chair of an academic NGO: the International Committee for Robot Arms Control (ICRAC) and as a member of the scientific community in the field of Artificial Intelligence and Robotics, with a specialty in Machine Learning.</p>
<p>We stress again that it would be confusing to broaden the discussion of LAWS into issues about Artificial Intelligence or weapons with emerging intelligence. By chasing definitions of LAWS down the rabbit hole of AI, we remove ourselves from the key issues that need to be urgently discussed here. The definition extracted from ICRC, and echoed by a number of states this morning, is concerned with weapons that have autonomy in the critical functions of target selection and the application of force. This is sufficient for our definitional purposes here: decisions about target selection and the application of force are delegated to a machine. <strong>Let me highlight that it does not matter what techniques or computing methods are used to create autonomy in these critical functions.</strong></p>
<p>What is important here are questions about the nature of human control required and acceptable to ensure compliance with international law. <strong>It is key that we get this right.</strong> You can read more about this in ICRAC’s new working paper <strong>Guidelines for the Human Control of Weapons Systems</strong> that will be delivered at Wednesday’s side event. We support those states who have stated that the focus of this meeting should be on human control of weapons systems and human-machine interaction. In this way we can make real progress this week and protect our future no matter what the technological developments.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3993</post-id>	</item>
		<item>
		<title>ICRAC Statement at the April 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-at-the-april-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Mon, 09 Apr 2018 13:56:04 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3975</guid>

					<description><![CDATA[International Committee for Robot Arms Control Statement to the UN GGE Meeting 2018 Delivered by Dr Thompson Chengeta, on 9 April 2018 Mr. Chairperson, I speak on behalf of the International Committee for Robot Arms Control [ICRAC], a founding member of the Campaign to Stop Killer Robots. Ambassador Gill, we thank you for your important [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-3979" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?w=1024&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>International Committee for Robot Arms Control<br />
Statement to the UN GGE Meeting 2018<br />
Delivered by Dr Thompson Chengeta, on 9 April 2018</p>
<p><iframe loading="lazy" title="Dr Thompson Chengeta Statement on behalf of the International Committee for Robot Arms Control" width="500" height="281" src="https://www.youtube.com/embed/ALvbgCAfBW8?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>Mr. Chairperson,</p>
<p>I speak on behalf of the International Committee for Robot Arms Control [ICRAC], a founding member of the Campaign to Stop Killer Robots. Ambassador Gill, we thank you for your important work. Mr Chairperson, we are going to focus here on four points:</p>
<p>FIRST, a ban on LAWS will have no negative impact on the development of socially beneficial uses of autonomy, robotics or artificial intelligence. In fact, such a ban will direct more resources and specialists to work on humanitarian and beneficial applications.</p>
<p>SECOND, human control of weapon systems is a key component of the present discussions. It does not matter what name or term is used to describe human control; what is imperative is that we make sure that human control is consistent with applicable legal, ethical and moral standards.</p>
<p>THIRD, human input in the making of judgements to use violent force is at the centre of legal, ethical and moral standards pertaining to human responsibility for the use of such force. No matter how attractive, if a proposed definition of human control does not resolve the accountability gap challenge, then such a proposal is legally inadequate. To that end, States should ask the question: What is the Legally Required Level of Human Control at each “touch point” in the human-machine interaction chain? At every step in the development, deployment, targeting and use of a weapon system, there is an obligation to ensure that the system is capable of being used in compliance with applicable legal norms.</p>
<p>FOURTH, the emphasis of the Polish and ICRC working papers on ethics, and their reassertion of the Principle of Non-Delegation of the Authority to Kill to non-human mechanisms, is worth noting. Dictates of public conscience must always take precedence over any short-term advantage that might be gained from autonomous technologies. Furthermore, respect for human rights and human dignity, even within armed conflict, is a moral imperative recognized by the UN and the CCW. ICRAC reiterates the spirit of the Martens Clause—that morality can provide a strong basis for new law.</p>
<p>Finally, human control over critical functions of weapon systems and a ban on fully autonomous weapon systems are two sides of the same coin. States are urged to focus on the requirement of human control rather than technical definitions of autonomy. Further, States must move towards negotiation of a legally binding instrument on this issue.</p>
<p>Mr Chairperson, I thank you.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3975</post-id>	</item>
		<item>
		<title>Unpriming the pump: Remystifications of AI at the UN’s Convention on Certain Conventional Weapons</title>
		<link>https://www.icrac.net/unpriming-the-pump-remystifications-of-ai-at-the-uns-convention-on-certain-conventional-weapons/</link>
		
		<dc:creator><![CDATA[Lucy Suchman]]></dc:creator>
		<pubDate>Sun, 08 Apr 2018 22:43:38 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3957</guid>

					<description><![CDATA[*Originally published on the &#8220;Robot Futures Blog&#8221; In the lead up to the next meeting of the CCW’s Group of Governmental Experts at the United Nations April 9-13th in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence. Designated a primer for CCW [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Lucy Suchman' src='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://www.lancaster.ac.uk/sociology/people/lucy-suchman">Lucy Suchman</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Lucy Suchman is a Professor of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up her present post she was a Principal Scientist at Xerox's Palo Alto Research Center (PARC), where she spent twenty years as a researcher. During this period she became widely recognized for her critical engagement with artificial intelligence (AI), as well as her contributions to a deeper understanding of both the essential connections and the profound differences between humans and machines.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-3958" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=1024%2C7681&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" /><br />
*Originally published on the <a href="https://robotfutures.wordpress.com/2018/04/07/unpriming-the-pump-remystifications-of-ai-at-the-uns-convention-on-certain-conventional-weapons/">&#8220;Robot Futures Blog&#8221;</a></p>
<p>In the lead up to the next meeting of the <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">CCW’s Group of Governmental Experts</a> at the United Nations April 9-13th in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled <a href="http://www.unidir.ch/files/publications/pdfs/the-weaponization-of-increasingly-autonomous-technologies-artificial-intelligence-en-700.pdf">The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence</a>. Designated <em>a primer for CCW delegates</em>, the paper lists no authors, but a special acknowledgement to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, suggests that the viewpoints of the Washington, D.C.-based <a href="https://www.cnas.org/">CNAS </a>are well represented.</p>
<p>Surprisingly for a document positioning itself as “an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies” (pp. 1-2), the paper opens with a series of assertions regarding “rapid advances” in the field of AI. The evidence offered is the case of Google/Alphabet affiliate Deep Mind’s AlphaGo Zero, announced in December 2017 (“only a few weeks after the November 2017 GGE”) as having achieved better-than-human competency at (simulations of) the game of Go:</p>
<p style="padding-left: 30px;">Although AlphaGo Zero does not have direct military applications, it suggests that current AI technology can be used to solve narrowly defined problems provided that there is a clear goal, the environment is sufficiently constrained, and interactions can be simulated so that computers can learn over time (p.1).</p>
<p>The requirements listed – a clear (read computationally specifiable) goal, within a constrained environment that can be effectively simulated – might be underscored as cautionary qualifications on claims for AI’s applicability to military operations. The tone of these opening paragraphs suggests, however, that these developments are game-changers for the GGE debate.</p>
<p>The paper’s first section, titled ‘What is artificial intelligence,’ opens with the tautological statement that “Artificial intelligence is the field of study devoted to making machines intelligent” (p. 2). A more demystifying description might say, for example, that AI is the field of study devoted to developing computational technologies that automate aspects of human activity conventionally understood to require intelligence. While the authors observe that as systems become more established they shift from characterizations of “intelligence” to more mundane designations like “automation” or “computation,” they suggest that, rather than the result of demystification, this is itself somehow an effect of the field’s advancement. One implication of this logic is that the ever-receding horizon of machine intelligence should be understood not as a marker of the technology’s limits, but of its success.</p>
<p>We begin to get a more concrete sense of the field in the section titled ‘Machine learning,’ which outlines the latter’s various forms. Even here, however, issues central to the deliberations of the GGE are passed over. For example, in the statement that “[r]ather than follow a proscribed [sic] set of <em>if–then </em>rules for how to behave in a given situation, learning machines are given a goal to optimize – for example, winning at the game of chess” (p. 2) the example is not chosen at random, but rather is illustrative of the unstated requirement that the ‘goal’ be computationally specifiable. The authors do helpfully explain that “[s]upervised learning is a machine learning technique <em>that makes use of labelled training data</em>” (my emphasis, p. 3), but the contrast with “unsupervised learning,” or “learning from unlabelled data based on the identification of patterns” fails to emphasize the role of the human in assessing the relevance and significance of patterns identified. In the case of reinforcement learning “in which an agent learns by interacting with its environment,” the (unmarked) examples are again from strategy games in which, implicitly, the range of agent/environment interactions are sufficiently constrained. And finally, the section on ‘Deep learning’ helpfully emphasizes that so called neural networks rely either on very large data sets and extensive labours of human classification (for example, the labeling of images to enable their ‘recognition’), or on domains amenable to the generation of synthetic ‘data’ through simulation (for example, in the case of strategy games like Go). Progress in AI, in sum, has been tied to growth in the availability of large data sets and associated computational power, along with increasingly sophisticated algorithms within highly constrained domains of application.</p>
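<p>To make the reliance on human labelling concrete, consider a deliberately minimal sketch (an editorial illustration of supervised learning in general, not an example drawn from the primer). The classifier below can only ever “recognize” categories that a human has already supplied as labels; the points and labels are invented:</p>
<pre><code># Minimal illustration (editorial, not from the primer): a 1-nearest-neighbour
# classifier. Its every "recognition" is a lookup into human-supplied labels.
def classify(point, labelled_examples):
    """Return the label of the nearest labelled example (squared Euclidean distance)."""
    def dist(other):
        return (other[0] - point[0]) ** 2 + (other[1] - point[1]) ** 2
    nearest_point, nearest_label = min(labelled_examples, key=lambda ex: dist(ex[0]))
    return nearest_label

# The "training data" is human judgment, frozen into labels.
training = [((0.1, 0.2), "cat"), ((0.9, 0.8), "dog"), ((0.2, 0.1), "cat")]
print(classify((0.15, 0.15), training))  # -> cat
</code></pre>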
<p>Yet in spite of these qualifications, the concluding sections of the paper return to the prospects for increasing machine autonomy:</p>
<p style="padding-left: 30px;">Intelligence is a system’s ability to <em>determine the best course of action </em>to achieve its goals. Autonomy is the <em>freedom </em>a system has in accomplishing its goals. Greater autonomy means more freedom, either in the form of undertaking more tasks, with less supervision, for longer periods in space and time, or in more complex environments … Intelligence is related to autonomy in that more intelligent systems are capable of deciding the best course of action for more difficult tasks in more complex environments. This means that more intelligent systems <em>could </em>be granted more autonomy and would be capable of successfully accomplishing their goals (p. 5, original emphasis).</p>
<p>The logical leap exemplified in this passage’s closing sentence is at the crux of the debate regarding lethal autonomous weapon systems. The authors of the primer concede that “all AI systems in existence today fall under the broad category of “narrow AI”. This means that their intelligence is limited to a single task or domain of knowledge” (p. 5). They acknowledge as well that “many advance [sic] AI and machine learning methods suffer from problems of predictability, explainability, verifiability, and reliability” (p. 8). These are precisely the concerns that have been consistently voiced, over the past five meetings of the CCW, by those states and civil society organizations calling for a ban on autonomous weapon systems. And yet the primer takes us back, once again, to a starting point premised on general claims for the field of AI’s “rapid advance,” rather than careful articulation of its limits. Is it not the latter that are most relevant to the questions that the GGE is convened to consider?</p>
<p>The UNIDIR primer comes at the same time that the United States has issued a new position paper in advance of the CCW titled ‘Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems’ (<a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/7C177AE5BC10B588C125825F004B06BE/$file/CCW_GGE.1_2018_WP.4.pdf">CCW/GGE.1/2018/WP.4</a>). While the US has taken a cautionary position in relation to lethal autonomous weapon systems in past meetings, asserting the efficacy of already-existing weapons reviews to address the concerns raised by other member states and civil society groups, it now appears to be moving in the direction of active promotion of LAWS on the grounds of promised increases in precision and greater accuracy of targeting, with associated limits on unintended civilian casualties – promises that have been extensively critiqued at previous CCW meetings. Taken together, the UNIDIR primer and the US working paper suggest that, rather than moving forward from the debates of the past five years, the 2018 meetings of the CCW will require renewed efforts to articulate the limits of AI, and their relevance to the CCW’s charter to enact Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Lucy Suchman' src='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://www.lancaster.ac.uk/sociology/people/lucy-suchman">Lucy Suchman</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Lucy Suchman is a Professor of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up her present post she was a Principal Scientist at Xerox's Palo Alto Research Center (PARC), where she spent twenty years as a researcher. During this period she became widely recognized for her critical engagement with artificial intelligence (AI), as well as her contributions to a deeper understanding of both the essential connections and the profound differences between humans and machines.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3957</post-id>	</item>
		<item>
		<title>Frequently Asked Questions on LAWS</title>
		<link>https://www.icrac.net/frequently-asked-questions-on-laws/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Sat, 11 Nov 2017 20:05:12 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php56-3.dfw3-2.websitetestlink.com/?p=3344</guid>

					<description><![CDATA[Memorandum for delegates at the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) Meeting on Lethal Autonomous Weapons Systems (LAWS) Geneva, 13-17 November 2017 ICRAC is an international not-for-profit association of scientists, technologists, lawyers and policy experts committed to the peaceful use of robotics and the regulation of robot weapons. Please visit our [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><strong>Memorandum for delegates at the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) Meeting on Lethal Autonomous Weapons Systems (LAWS)</strong></p>
<p><strong>Geneva, 13-17 November 2017<img data-recalc-dims="1" loading="lazy" decoding="async" class="alignright wp-image-3347 size-medium" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2017/12/ICRAC_CCWUN24-300x225-300x225.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" /></strong></p>
<p><strong>ICRAC</strong> is an international not-for-profit association of scientists, technologists, lawyers and policy experts committed to the peaceful use of robotics and the regulation of robot weapons. Please visit our website <a href="http://www.icrac.net/">www.icrac.net</a> and follow us on Twitter <a href="https://twitter.com/icracnet">@icracnet</a></p>
<p><strong>ICRAC</strong> is a founding member of the Campaign to Stop Killer Robots <a href="http://www.stopkillerrobots.org/">www.stopkillerrobots.org</a></p>
<p><strong>What is Artificial Intelligence (AI)?</strong></p>
<p>The term AI tends to evoke science-fiction tropes and even notions of “super intelligence”. But in reality, AI is just an umbrella term given to computational techniques that automate tasks that we would normally consider to require human intelligence. This does not mean that these software programs themselves are intelligent.</p>
<p><strong>How fast is AI progressing?</strong></p>
<p>Enthusiasm about the progress of AI has increased considerably in the last couple of years even while techniques have not improved much since the 1980s. This is largely because of two factors:</p>
<p>(i) the acquisition of big data sets with billions of examples;</p>
<p>(ii) plummeting costs for massive processing power.</p>
<p>Both factors provide an ideal environment for a cluster of computational techniques called Machine Learning (ML). The exploitation of ML has led to the mass commercialization of AI over a wide range of applications by various companies. So current AI progress is best described as spreading sideways rather than moving upwards.</p>
<p><strong>Do civilian and military applications of AI differ?</strong></p>
<p><strong>Yes.</strong> What makes any autonomous system relying on AI computational techniques work is brittle software based on algorithms and statistics. Thanks to the availability of large amounts of training data, we will hopefully soon be able to make these techniques work in applications such as self-driving cars, to name a prominent example from the civilian sector. But this does not translate into military applications. Aside from the fact that cars and weapon systems are designed for completely different purposes, the comparatively structured and regulated environment of road traffic does not compare at all to the adversarial, chaotic environment of the battlefield. The fog of war will only allow for faulty or, at best, noisy data. So beware of false equivalences!</p>
<p><strong>Would LAWS be “precision weapons”?</strong></p>
<p><strong>Possibly (but still illegal to use).</strong> LAWS could take various forms. For instance, a swarm of hobby drones, each fitted with a heat sensor and a small explosive payload, could be programmed to attack everything that emits body temperature. Such a three-dimensional moving minefield of LAWS would be the opposite of a precision weapon.</p>
<p>But let’s assume, for the sake of the argument, LAWS designed with military-grade accuracy in mind. Fitted with better sensing and data processing hard- and software as well as payloads tailored to the system’s mission, those could be more precise than current weapon systems. But the technical potential for accuracy and the application of violent force to a legitimate target are two separate issues. Even the most high-tech precision weapon system has to be used in a manner that is legal under International Humanitarian Law (IHL).</p>
<p>IHL dictates that, when using a weapon system, constant care should be taken to avoid or minimize civilian casualties (principles of distinction and precautions in attack). It also prohibits launching or continuing an attack when the expected civilian losses exceed the military advantage sought (principle of proportionality). These concepts enshrined in IHL are only meaningful in the context of human judgment. Machines are a far cry from the reasoning that a human military commander acting responsibly and in compliance with the law would engage in. Machines will for the foreseeable future not be able to discriminate combatants from civilians, let alone judge which use of force or type of munition is proportionate in light of the military objective. Hence we cannot and must not expect modern weapon systems to free us from these legal obligations. On the contrary, we have to heed these principles in step with our growing technological capabilities.</p>
<p>For example, before launching an attack, and throughout its execution, IHL requires military commanders to take all feasible precautions to spare the civilian population, by making use of all the information from all sources available to them. An autonomous weapon system fitted with various sensors for targeting purposes would thus require a commander to make use of the data that is gathered and the additional information that is generated whilst using the system. A commander cannot choose to treat this new “smart” precision weapon like the “dumb” weapons of the past, that is, as if this information were not being made available by the system or as if it could be ignored. Instead, weapon technology and legal obligations go hand in hand. Consequently, the more sophisticated our weapon systems become, the more <em>feasible</em> meaningful human control becomes regarding the critical functions of identifying (“fixing”), selecting and engaging targets. And hence the more care for ensuring meaningful human control is <em>required</em>.</p>
<p>This is not a particularly new insight, of course; it is why advanced laser-guided munitions are used with tactics, techniques and procedures that differ from those of simple free-falling bombs. So, in sum, fully autonomous weapon systems (=LAWS), that is, systems designed in a way that would require commanders to abdicate meaningful human control, are simply incompatible with the way IHL demands that weapons be used by human military commanders on the battlefield.</p>
<p><strong>Would LAWS make war more humane?</strong></p>
<p><strong>No.</strong> It is sometimes argued that autonomy in weapons systems could make wars more humane by ensuring greater precision in targeting military objectives and by clearing the battlefield of human passions, such as anger, fear and vengefulness. Even assuming – but not conceding (see above: <em>Would LAWS be “precision weapons”?</em>) – that one day LAWS might somehow reach human or even “higher-than-human” performance with respect to adherence to IHL, this would not “humanize” future armed conflicts, for at least three reasons:</p>
<p>(i) delegating the power to take life-or-death decisions to machines blatantly denies the human dignity of the recipients of lethal force and their intrinsic worth as human beings;</p>
<p>(ii) LAWS trivialize the decision to take someone else’s life by relieving war-fighters of the moral burden inevitably associated with it;</p>
<p>(iii) while it is true that machines’ decision-making will never be influenced by negative human emotions, it is equally true that LAWS are also immune to compassion and empathy, which in certain situations could compel a human to refrain from using lethal force even when she or he would legally be entitled to do so.</p>
<p><strong>Would LAWS proliferate?</strong></p>
<p><strong>Yes. </strong>LAWS need not necessarily take the shape of one specific weapon system akin to, for instance, a drone. LAWS also do not require a very specific military technology development path, the way nuclear weapons do, for example. As AI software and robotic hardware mature and continue to pervade the civilian sphere, militaries will feel prompted to increasingly adopt them (however, see above: <em>Do civilian and military applications of AI differ?</em>) in continuation of a dual-use trend that is already observable in, for instance, armed drones.</p>
<p>Research and development for LAWS-related technology is thus already well underway and distributed over countless university laboratories and commercial enterprises, making use of economies of scale and the forces of the free market to spur competition, lower prices and shorten innovation cycles. This renders the military research and development effort in the case of LAWS different from those of past hi-tech conventional weapon systems. So (without even taking exports into account) it is easy to see that LAWS would be comparatively easy to obtain (as well as reverse-engineer) and thus prone to quickly proliferate to a wide range of state and non-state actors.</p>
<p><strong>Would LAWS threaten global stability?</strong></p>
<p><strong>Yes. </strong>LAWS promise a military advantage inter alia because they are expected to perform certain tasks much faster than a human could. We argued above that IHL does not allow for relinquishing meaningful human control. Strategic considerations, too, suggest restraining ourselves and keeping meaningful human control intact. Without it, the actions and reactions of individual LAWS as well as swarms of LAWS would have to be controlled by software alone.</p>
<p>Consider the example of adversarial swarms deployed in close proximity to each other. Their respective control software would have to react to signs of an attack within a split-second timeframe – by evading or, possibly, counter-attacking in a use-them-or-lose-them situation. Indications of an attack – sun glint interpreted as a rocket flame, sudden and unexpected moves by the adversary, or just some malfunction – could trigger escalation. It is in the nature of military conflict that these kinds of interactions between two adversarial systems or swarms would not be tested or trained beforehand. Moreover, it is, technically speaking, impossible to fathom all possible outcomes in advance. In other words, the interaction of LAWS, if granted full autonomy, would be unpredictable and take place at operational speeds far beyond human fail-safe capabilities.</p>
<p>Comparable runaway interactions between algorithms are already observable in financial markets. It is thus a real possibility that LAWS interactions could result in unwanted escalation from crisis to war or, within armed conflict, to unintended higher levels of violence. This would mean an increase in global instability, unpleasantly reminiscent of Cold War scenarios of “accidental war”.</p>
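<p>To make this feedback dynamic tangible, consider the following toy simulation – a deliberately minimal sketch, not a model of any real system; the controllers, thresholds, escalation increments and the timing of the “sun glint” misreading are all invented purely for illustration. Two threshold-based reactive controllers each escalate whenever they perceive the other side above an alert level; a single spurious sensor reading then suffices to drive both into full mutual engagement within a few reaction cycles, with no human in the loop to interrupt:</p>
<pre><code># Toy model of the feedback loop described above: two reactive
# controllers, each escalating whenever it perceives the other side
# above an alert threshold. All names and numbers are hypothetical;
# this illustrates the dynamic only, not any real weapon system.

THRESHOLD = 0.4  # perceived-threat level that triggers a reaction
STEP = 0.5       # escalation increment (0.0 = calm, 1.0 = fully engaged)

def simulate(steps: int = 8) -> None:
    a = b = 0.0
    for t in range(steps):
        # At t == 2, controller A misreads sun glint as a rocket flame:
        # a single spurious sensor reading, as sketched in the text.
        glint = 0.6 if t == 2 else 0.0
        perceived_by_a = b + glint  # A's noisy view of B
        perceived_by_b = a          # B's view of A (before A reacts)
        if perceived_by_a &gt; THRESHOLD:
            a = min(1.0, a + STEP)
        if perceived_by_b &gt; THRESHOLD:
            b = min(1.0, b + STEP)
        print(f"t={t}  A={a:.1f}  B={b:.1f}")

simulate()
</code></pre>
<p>Running the sketch shows both controllers locked at full engagement two cycles after the single false positive, with no further spurious input needed. Even this stripped-down, deterministic toy escalates irreversibly; interacting real-world systems, operating at machine speed and never tested against each other, would be far less predictable, not more.</p>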
<p><strong>Would banning LAWS stifle technology?</strong></p>
<p><strong>No. </strong>On the contrary. Global governance for LAWS would not mean a prohibition or control of specific technologies as such. The widespread diffusion and the dual-use potential of AI software and robotics suggest that this would not only be a completely futile, Luddite endeavor; it would also be severely misguided in light of the various benefits potentially flowing from the maturation of these technologies in civilian applications.</p>
<p>What is more, a number of recent developments suggest that technology companies would in fact welcome a ban on LAWS, since they do not want their products to be associated with “Killer Robots”. Google, for instance, stated years ago that it is not interested in military robotics. The Canadian robot manufacturer Clearpath Robotics even officially joined forces with the Campaign to Stop Killer Robots in 2014, “ask[ing] everyone to consider the many ways in which this technology would change the face of war for the worse” and pledging to create robotic products solely “for the betterment of humankind” instead. And in 2017, 160 high-profile CEOs of companies developing artificial intelligence technologies signed an open letter calling on the CCW to act.</p>
<p>So preventive arms control for LAWS would not mean the regulation or prohibition of specific technologies. Instead, it would give tech entrepreneurs and manufacturers guidance and assurance that their inventions and products cannot be misused. Hence arms control for LAWS is not about listing or counting (stockpiles of) individual weapon systems. Rather, it is about drawing a line regarding the use of autonomy in weapon systems, a line to retain meaningful human control and prohibit the application of autonomy in specific (especially the “critical”) functions of weapon systems.</p>
<p>The CCW has drawn a comparable line and established a strong norm before, with the preventive prohibition of blinding laser weapons in 1995. This prohibition protects soldiers’ eyes on the battlefield; it is, obviously, not a blanket ban on laser technology in all its other uses, be they military or, especially, civilian in nature. In other words, just as we got to keep our CD players and laser pointers back then, we will get to keep our smartphones and self-driving cars this time.</p>
<p><strong>Further reading:</strong></p>
<p>Altmann, Jürgen/Sauer, Frank (2017): <a href="http://www.tandfonline.com/eprint/qnJKjAUPXWPhmyMjZ6cD/full">Autonomous Weapon Systems and Strategic Stability</a>, in: Survival 59: 5, 117–142.</p>
<p>Amoroso, Daniele/Tamburrini, Guglielmo (2017): The Ethical and Legal Case Against Autonomy in Weapons Systems, in: Global Jurist. Online first.</p>
<p>Asaro, Peter (2012): On Banning Autonomous Weapon Systems. Human Rights, Automation, and the Dehumanization of Lethal Decision-Making, in: International Review of the Red Cross 94: 886, 687–709.</p>
<p>Garcia, Denise (2016): Future Arms, Technologies, and International Law: Preventive Security Governance, in: European Journal of International Security 1: 1, 94–111.</p>
<p>Sauer, Frank (2016): <a href="https://www.armscontrol.org/ACT/2016_10/Features/Stopping-Killer-Robots-Why-Now-Is-the-Time-to-Ban-Autonomous-Weapons-Systems">Stopping “Killer Robots”. Why Now Is the Time to Ban Autonomous Weapons Systems</a>, in: Arms Control Today 46: 8, 8–13.</p>
<p>Sharkey, Noel (2012): The Evitability of Autonomous Robot Warfare, in: International Review of the Red Cross 94: 886, 787–799.</p>
<h3>Author information</h3><p><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3344</post-id>	</item>
	</channel>
</rss>
