<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Laura Nolan &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/author/laura-nolan/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Mon, 09 Jun 2025 20:24:40 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>Statement on Security in Open informal consultations at UN GA</title>
		<link>https://www.icrac.net/statement-on-security-in-open-informal-consultations-at-un-ga/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:11:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19933</guid>

					<description><![CDATA[Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”. Thank you Chair, Presenters, Delegates, My name is Dr. Matthew Breay Bolton, I am Co-Director of Pace University’s International Disarmament Institute and a member of the International Committee for Robot Arms Control [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" decoding="async" width="1024" height="576" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&#038;ssl=1" alt="" class="wp-image-19934" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1536%2C864&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?w=2048&amp;ssl=1 2048w" sizes="(max-width: 1000px) 100vw, 1000px" /></figure>



<p>Thank you Chair, Presenters, Delegates,</p>



<p>My name is Dr. Matthew Breay Bolton; I am Co-Director of Pace University’s <a href="https://www.pace.edu/dyson/faculty-and-research/research-centers-and-initiatives/international-disarmament-institute">International Disarmament Institute</a> and a member of the International Committee for Robot Arms Control (ICRAC).</p>



<p>I would like to raise the importance of thinking about <em>human</em> security and protecting the integrity of the natural environment, considerations beyond traditional interpretations of security as strategic stability.</p>



<p>In this regard, I would like to highlight a report recently published by Pace’s International Disarmament Institute “<a href="https://bpb-us-w2.wpmucdn.com/blogs.pace.edu/dist/0/195/files/2025/05/Considerations-for-a-Victim-Assistance-Provision-in-a-Treaty-Banning-Killer-Robots-Submission-Draft-26-March-2025.pdf">Considering Victim Assistance and Remediation Provisions for a Treaty on Killer Robots</a>.”</p>



<p>International diplomatic and advocacy discussions surrounding a possible treaty on autonomous weapons systems – “killer robots” – have neglected consideration of provisions on victim assistance and remediation. This departs from an almost three-decade trend in treaties banning and regulating weapons, which have included “positive obligations” to assist affected communities and remediate contaminated environments.</p>



<p>Autonomous weapons systems have not yet been widely deployed and thus there are few who might be considered victims. Moreover, one hopes that a treaty will stymie widespread use of killer robots. Nevertheless, it is possible that some states will remain outside any eventual treaty and some non-state actors may remain outside the norm and may use autonomous weapons, whether in armed conflict, policing or terrorism. Therefore, it is important for diplomats and advocates to discuss whether positive obligations to address harms from killer robots belong in a treaty regulating and/or banning them. If so, further consideration should be given to the scope and shape of such provisions on victim assistance and remediation in advance of any negotiations.</p>



<p>To phrase this as a set of questions for the panelists:</p>



<ul class="wp-block-list">
<li>If an autonomous weapon sinks a ship, who would be responsible for addressing the resulting pollution, environmental injustices and insecurities? </li>
</ul>



<ul class="wp-block-list">
<li>If civilians are harmed or disabled by the use of an autonomous armed drone, how might we secure their medical care and rehabilitation, as well as prosecution of those responsible? How would we give them satisfaction that justice is secured?</li>
</ul>



<p>The specificity of autonomous weapons systems means that diplomats and activists should not simply “copy and paste” the victim assistance and remediation provisions from other instruments into a killer robots treaty. In particular, care should be taken to ensure that provisions fill legal gaps and/or strengthen rather than undermine existing obligations.</p>



<ul class="wp-block-list">
<li>What complementarities are relevant, not only in International Humanitarian Law and weapons treaties, but also in the UN Voluntary Trust Fund on Torture or the Convention on the Rights of Persons with Disabilities?</li>
</ul>



<p>Diplomats, civil society advocates, humanitarian workers and activists engaged in discussions of a potential treaty on autonomous weapons systems should consider:</p>



<ul class="wp-block-list">
<li>Whether to include positive obligations addressing possible harms resulting from the use of killer robots, such as victim assistance and remediation of contaminated environments;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of precedent offered by recent international treaties and norms on weapons, which have included provisions on victim assistance and remediation of contaminated land;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of other normative frameworks for redress and remediation, such as from human rights and environmental law;</li>
</ul>



<ul class="wp-block-list">
<li>How to ensure that possible provisions fill legal gaps and strengthen rather than undermine existing obligations.</li>
</ul>



<p>We would be interested to hear from panelists, as well as states here today, their thoughts on the human and environmental security implications of autonomous weapons systems, particularly how to remedy the harms resulting from their use, such as through practices of victim assistance and environmental remediation.</p>



<p>This is among several dimensions of autonomous weapons that have not yet been discussed in the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS) mandated by the Convention on Certain Conventional Weapons (CCW). Discussion of these issues here demonstrates the potential value of this forum.</p>



<p>Thank you for the opportunity to address this meeting!</p>



]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19933</post-id>	</item>
		<item>
		<title>ICRAC submission to the United Nations Secretary General on &#8220;AI in the Military Domain and its Implications for International Peace and Security&#8221;</title>
		<link>https://www.icrac.net/icrac-submission-to-the-united-nations-secretary-general-on-ai-in-the-military-domain-and-its-implications-for-international-peace-and-security/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Fri, 11 Apr 2025 13:36:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19898</guid>

					<description><![CDATA[11 April 2025 The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit our views to the United Nations Secretary-General in response to Resolution A/RES/79/239 “Artificial intelligence in the military domain and its implications for international peace and security.” Founded in 2009, ICRAC is a civil society organization of experts in artificial [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>11 April 2025</em></p>



<p>The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit our views to the United Nations Secretary-General in response to Resolution A/RES/79/239 “Artificial intelligence in the military domain and its implications for international peace and security.”</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="683" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1024%2C683&#038;ssl=1" alt="" class="wp-image-19896" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1024%2C683&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=768%2C512&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1536%2C1024&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=2048%2C1365&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></figure>



<p>Founded in 2009, ICRAC is a civil society organization of experts in artificial intelligence, robotics, philosophy, international relations, human security, arms control, and international law. We are deeply concerned about the pressing dangers posed by AI in the military domain. As a member of the Stop Killer Robots Campaign, ICRAC fully endorses its submission to this report, and wishes to provide further detail regarding the concerns raised by AI-enabled targeting.</p>



<p>Increasing investments in AI-based systems for military applications, specifically AI-enabled targeting, present new threats to peace and security and underscore the urgent need for effective governance. ICRAC identifies the following concerns in the case of AI-enabled targeting:</p>



<ol class="wp-block-list">
<li>AI-enabled targeting systems are only as valid as the data and models that inform them. ‘Training’ data for targeting requires the classification of persons and associated objects (buildings, vehicles) or ‘patterns of life’ (activities) based on digital traces coded according to vaguely specified categories of threat, e.g. ‘operatives’ or ‘affiliates’ of groups designated as combatants. Often the boundary of the target group is itself poorly defined. Although this casts into question the validity of input data and associated models, there is little accountability and no transparency regarding the bases for target nominations or for target identification. AI-enabled systems thus threaten to undermine the Principle of Distinction, <a href="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/" data-type="link" data-id="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/">even as they claim to provide greater accuracy</a>.</li>



<li><a href="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza#_What_are_some" data-type="link" data-id="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza#_What_are_some">Human Rights Watch research</a> indicates that in the case of IDF operations in Gaza, AI-enabled targeting tools rely on ongoing and systematic Israeli surveillance of all Palestinian residents of Gaza, including with data collected prior to the current hostilities in a manner that is incompatible with international human rights law.</li>



<li>The increasing reliance on profiling required by AI-enabled targeting furthers a shift from the recognition of persons and objects identified as legitimate targets by their observable disposition as an imminent military threat, to the ‘discovery’ of threats through mass surveillance, based on statistical speculation, suspicion and guilt by association.</li>



<li>The questionable reliability of prediction based on historical data when applied to dynamically unfolding situations in conflict raises further questions regarding the validity and legality of AI-enabled targeting.</li>



<li>The use of AI-enabled targeting to accelerate the scale and speed of target generation further undermines processes for validation of the output of targeting systems by humans, while greatly amplifying the potential for direct and collateral civilian harm, as well as diminishing the possibilities for de-escalation of conflict through means other than military action.</li>
</ol>



<p>Justification for the adoption of AI-enabled targeting is based on the premise that acceleration of target generation is necessary for ‘decision-advantage’, but the relation between speed of targeting and effectiveness in overall military success, or longer-term political outcomes, is questionable at best. The ‘<a href="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/" data-type="link" data-id="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/">need’ for speed</a> that justifies AI-enabled targeting is based on a circular logic, which perpetuates what has become an arms race to accelerate the automation of warfighting. <em>Accelerating the speed and scale of target generation effectively renders human judgment impossible or, de facto, meaningless.</em> The risks to peace and security &#8211; especially to human life and dignity &#8211; are greatest for operations outside of conventional or clearly defined battlespaces. Insofar as the use of AI-enabled targeting is shown to be contrary to international law, the mandate must be to <em>not</em> use AI in targeting.</p>



<p>In this regard, ICRAC notes that the above systems present challenges to compliance with various branches of international law such as international humanitarian law (IHL), <em>jus ad bellum</em> (<a href="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf" data-type="link" data-id="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf">UN law on prohibition of use of force</a>), international human rights law (IHRL) and international environmental law. In the context of military AI’s implications for peace and security, <em>jus ad bellum</em>, a framework that prohibits aggressive military actions and regulates the conditions under which states may lawfully resort to the use of force, is the most relevant. In the same manner, IHRL is important in this context because it is designed to uphold human dignity, equality, and justice—values that form the foundation of peaceful and secure societies.</p>



<p><strong>Citations</strong></p>



<p>Alvarez, Jimena Sofia Viveros. September 4, 2024. The risks and inefficacies of AI systems in military targeting support. <em>Humanitarian Law and Policy.</em> <a href="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/">https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/</a></p>



<p>Bo, Marta and Dorsey, Jessica. April 4, 2024. Symposium on Military AI and the Law of Armed Conflict: The ‘Need’ for Speed – The Cost of Unregulated AI Decision-Support Systems to Civilians. <em>OpinioJuris</em>. <a href="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/">https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/</a></p>



<p>Chengeta, Thompson. May 2024. African Commission for Human and Peoples’ Rights submission to the UN Secretary General Report on Lethal Autonomous Weapons, ASSEMBLY RESOLUTION 78/241, Commissioner Ayele Dersso Focal Point on the ACHPR Study on AI and Other Technologies. <a href="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf" data-type="link" data-id="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf">78-241-African_Commission-EN.pdf</a></p>



<p>Human Rights Watch. September 10, 2024. Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza. <a href="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza">https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza</a></p>



<p>ICRC. 6 June 2019. Artificial intelligence and machine learning in armed conflict: A human-centred approach.<br><a href="https://www.icrc.org/sites/default/files/document_new/file_list/ai_and_machine_learning_in_armed_conflict-icrc.pdf">https://www.icrc.org/sites/default/files/document_new/file_list/ai_and_machine_learning_in_armed_conflict-icrc.pdf</a>; published version at <em>International Review of the Red Cross: Digital technologies and war</em> (2020), 102 (913), 463–479.</p>



<p>Schwarz, Elke. December 12, 2024. The (im)possibility of responsible military AI governance. <em>Humanitarian Law and Policy</em>. <a href="https://blogs.icrc.org/law-and-policy/2024/12/12/the-im-possibility-of-responsible-military-ai-governance/">https://blogs.icrc.org/law-and-policy/2024/12/12/the-im-possibility-of-responsible-military-ai-governance/</a></p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19898</post-id>	</item>
		<item>
		<title>ICRAC Submission to the United Nations Secretary-General on Autonomous Weapon Systems</title>
		<link>https://www.icrac.net/icrac-submission-to-the-united-nations-secretary-general-on-autonomous-weapon-systems/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Mon, 20 May 2024 18:45:00 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19903</guid>

					<description><![CDATA[The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit its perspectives and recommendations to be considered by the United Nations Secretary General with respect to Resolution 78/241 on Lethal Autonomous Weapon Systems (adopted in December 2023). Founded in 2009, ICRAC is an international committee of experts in robotics technology, artificial intelligence, [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit its perspectives and recommendations to be considered by the United Nations Secretary General with respect to Resolution 78/241 on Lethal Autonomous Weapon Systems (adopted in December 2023). Founded in 2009, ICRAC is an international committee of experts in robotics technology, artificial intelligence, robot ethics, international relations, international security, arms control, international humanitarian law, international human rights law and philosophy of technology. We are deeply concerned about the pressing dangers that military robotics and automation pose to international peace, international security and stability, and the rights and safety of civilians in war. Based on our expertise, we are particularly concerned that military robotic systems will lead to more frequent, less restrained, and less accountable armed conflict. In light of these risks, we call for an international treaty to prohibit and restrict autonomous weapon systems.</p>



<p>As has been discussed in detail at the CCW GGE over the past decade, autonomous weapon systems (AWS) raise serious concerns for international humanitarian law in regard to complying with the principles of distinction and proportionality. The risk of triggering the proliferation of arms is another stark reality posed by AWS, as is the accessibility of these types of weapon systems to non-state armed groups, among other actors. The use of AWS may further spill into the arena of national and transnational organized crime in addition to policing at the domestic level. All the while, several operational concerns remain as to the use of AWS from the perspective of accountability, bias and the use of machine-learning algorithms which may develop beyond the capacity of “the human in the loop.” There are also serious risks to regional and global stability posed by replacing human decision making with machine decision making, as it becomes more difficult for political and military leaders to anticipate and interpret the intentions, decisions and actions of their adversaries, and thus find ways to avoid or de-escalate conflicts.</p>



<p>We also note the threat that AWS pose to compliance with international human rights, particularly the right to life, the prohibition against torture, cruel and inhumane treatment, and above all the human right to dignity. We fear that an additional protocol to the CCW would fail to address these human rights concerns. We are concerned that the automated targeting and release of non-conventional weapons, including nuclear weapons, may also fall outside the scope of any legally binding CCW protocol. We thus advocate and support all calls for a legally binding instrument to prohibit and restrict the use of AWS, and urge the Secretary-General to encourage the initiation of a forum within the United Nations General Assembly that can include all States, cover autonomy and automation in the use of all weapons, and address international humanitarian law as well as human rights concerns.</p>



<p>This submission is informed by our comprehensive interdisciplinary expertise. We have published extensively on the ethical, legal, technical and security challenges of autonomous weapon systems, on the question of meaningful human control, and on the challenges of escalation at speed.</p>



<p><strong>Scope</strong></p>



<p>In accordance with the International Committee of the Red Cross, we understand an autonomous weapon system as one that, potentially after initial activation or launch by a human, selects targets based on sensor data and engages the targets without human intervention. We endorse the recommendations of the International Committee of the Red Cross for a two-tiered approach that prohibits unpredictable systems and systems that explicitly target humans, while strictly regulating the use of autonomy in all other systems for the command, control and engagement of lethal force. This includes restrictions on the time, space, scope and scale of operations of such systems, as well as the types of targets and situations in which they may be used. In particular, we strongly agree that the only permissible targets of such systems should be military objects by nature, and never civilian or dual-use targets, which should always require human judgment. More discussion is needed on the appropriate forms and regulation of the human-machine interaction in complex command and control systems. In particular, as computers and artificial intelligence collect and automatically analyze more and more data, greater clarity is needed in what constitutes meaningful human control in the context of automated target generation and identification, and how to ensure respect and responsibility for international law when such systems are used.</p>



<p><strong><span style="text-decoration: underline;">Key Challenges to Global Peace and Security</span></strong></p>



<ul class="wp-block-list">
<li>Uncontrolled Escalation and Missed Opportunities for De-escalation and Diplomacy</li>
</ul>



<p>The technical characteristics of AWS pose a considerable risk of enabling uncontrolled escalation at speed. As the thresholds for applying military force are lowered, the likelihood of conflict will increase. Actions and reactions to the adversary will have to be programmed in advance. Two AWS swarms moving at relatively close distance from each other, in international air space, for example, might interact in ways that could not be mitigated or controlled by a human in an appropriate time window. In case of an enemy attack, even a few seconds’ delay could mean loss of one’s systems, and thus there will be strong pressure for fast counterattacks that preclude human consideration.</p>



<p>Escalation from crisis to war, or of a conflict to a higher level of violence, could come about through erroneous indications of attack or a simple sensor or computer error. Mutual interaction between the control programs could not be tested in advance. The outcome of the interaction of such complex systems would be intrinsically unpredictable, but fast escalation is possible and likely. In a severe crisis with fear of preemption, this could greatly destabilize the military situation between potential enemies.</p>



<p>As political and military leaders become increasingly dependent on systems they cannot explain or predict, traditional means of conflict resolution and de-escalation will become more difficult or impossible. Unpredictable systems will give leaders false impressions of their capabilities, leading to overconfidence or encouraging preemptive attacks. Moreover, automated attacks, responses, and escalations will make it more difficult for leaders to interpret the intentions, decisions and actions of their adversaries, and will also limit their options for response. Systems that automatically react or attack may miss opportunities to find other, less violent, ways to achieve military objectives, or preclude opportunities for diplomatic or political resolutions to a conflict. The overall effect of these systems will be to close off avenues and opportunities to avoid conflicts, to de-escalate conflicts, and to find means to end hostilities.</p>



<ul class="wp-block-list">
<li>Moral responsibility</li>
</ul>



<p>No machine, computer or algorithm is capable of recognizing a human as a human being, nor can it respect humans as inherent bearers of rights and dignity. A machine cannot even understand what it means to be in a state of war, much less what it means to have, or to end, a human life. Decisions to end human life must be made by humans in order to be morally justifiable. These are responsibilities of unavoidable moral weight that cannot be delegated to machines or satisfied by the mere inclusion of humans in the writing of computer programs. While accountability for the deployment of lethal force is a necessary condition for moral responsibility in war, accountability alone is not sufficient for moral responsibility. This also requires the recognition of the human, respect for the human right to life and dignity, and reflection upon the value of life and the justification for the use of violent force.</p>



<ul class="wp-block-list">
<li>Meaningful Human Control</li>
</ul>



<p>Much hinges on the degree to which AWS can be meaningfully controlled by humans. Robust scientific scholarship on human psychology suggests that humans experience cognitive limitations when it comes to technological/computational systems. This condition, known as automation bias, cognitively hinders the human from having sufficient contextual understanding to intervene with systems that are fully autonomous and function at speeds beyond human capabilities. In order to safeguard meaningful human control (not merely functional control) over AI-enabled AWS, those involved in operating or deciding to deploy AWS should have full contextual and situational awareness of the target area at the time of a specific attack. They must also be able to perceive and react to changes or unanticipated situations that arise; ensure active and deliberate participation in the action; have sufficient training and understanding of the system and its likely actions; have adequate time for meaningful control; and have the means and knowledge for a rapid suspension of an action. For many AWS this is not possible. Meaningful human control is fundamental to the edifice of the laws of war and the ethics of war.</p>



<p><strong><span style="text-decoration: underline;">Moving Forward: A Treaty to Prohibit and Regulate the Use of AWS</span></strong></p>



<p>We support calls from States, as well as the UN Secretary-General and the President of the ICRC, for an international legally binding treaty prohibiting and regulating the use of AWS.</p>



<p>What is needed is a legally binding instrument that obligates States to adhere to prohibitions and regulatory limitations for AWS. Codes of conduct and political declarations are not enough for systems that pose such grave risks to global peace and security. This legally binding instrument must apply to the automated control of all weapons, and require meaningful human control in compliance with substantive regulations for the use of force in all cases. Such a treaty should apply to all military uses of AWS and systems that generate or select targets, as well as to all police, border security and other civilian applications that automate the use of force.</p>



<p>The treaty should prohibit autonomous weapons systems that are ethically or legally unacceptable. This includes autonomous weapons systems for which the operation or effects cannot be sufficiently understood, predicted and explained; autonomous weapons systems that cannot be used with meaningful human control; and autonomous weapons systems designed to target human beings.</p>



<p>The treaty should include positive obligations for States to use permitted AWS only within the bounds of clearly stipulated regulations that ensure adherence to international human rights and the key principles of international humanitarian law. We believe that an emerging norm around meaningful human control can be articulated and codified through a treaty negotiation in a process that includes all States, civil society, and industry and technical experts. We urge the Secretary-General to advance the creation of such a forum within the General Assembly, and look forward to offering our expertise to those discussions.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19903</post-id>	</item>
	</channel>
</rss>
