<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ICRAC</title>
	<atom:link href="https://www.icrac.net/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Mon, 23 Jun 2025 12:49:19 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>Statement on Ethical Considerations in Open Informal Meeting at UNGA 1st Committee</title>
		<link>https://www.icrac.net/statement-on-ethical-considerations-in-open-informal-meeting-at-unga-1st-committee/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:45:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Peter Asaro]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19944</guid>

					<description><![CDATA[UNGA Informals on LAWS ICRAC Statement on Ethical Considerations Delivered by Prof. Peter Asaro on May 13, 2025 Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img data-recalc-dims="1" decoding="async" width="1024" height="800" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799-1024x800.jpg?resize=1024%2C800&#038;ssl=1" alt="Peter Asaro delivering ICRAC Statement on Ethics" class="wp-image-19940" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=1024%2C800&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=300%2C234&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=768%2C600&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?w=1536&amp;ssl=1 1536w" sizes="(max-width: 1000px) 100vw, 1000px" /></figure>



<p><strong>UNGA Informals on LAWS <br>ICRAC Statement on Ethical Considerations <br>Delivered by Prof. Peter Asaro on May 13, 2025</strong> </p>



<p><br>Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and ethics. ICRAC is a co-founding member of the Stop Killer Robots Campaign.</p>



<p>We appreciate the organizers of this Informal Meeting including a Session on Ethical Considerations. It has been many years since ethics was the primary focus of substantive discussion within the CCW GGE meetings. Yet ethics and morality have provided a valuable basis for international law in the past, and are precisely where we must ground new laws to prohibit and regulate AWS in the near future: in our common shared humanity, and in principles which transcend human laws, particularly human dignity in the deep sense discussed by Prof. Chengeta, and ethical decisions as discussed by the Representative of the Holy See.</p>



<p>Whenever violent force is used, there are risks involved. But merely managing those risks is not sufficient to meet the requirements for morally justifiable killing. Understanding the reasons for, and the potential consequences of, the use of force is required for its justification. It has been argued that AWS may be highly accurate and precise in their use of force, but accuracy and precision are not sufficient to meet the requirements for the ethically discriminate use of force, and do not begin to address the requirements of the proportionate use of force.</p>



<p>Following the outlines of the two-tiered approach advanced by the ICRC, regulated AWS would be permitted to target autonomously only in limited cases, specifically where the target is a military object by nature, such as military vehicles and installations. Even in these cases, automated targeting must still be carefully regulated to ensure that humans can safely supervise those systems.</p>



<p>But as soon as we start considering civilian objects, even those which might be used for military purposes and might be lawfully targeted under IHL, we must not permit their targeting by automated processes. The moral argument that leads to this conclusion is clear. It may be tempting to think that we can automate proportionality decisions–how much force is needed, or how much risk is acceptable, or how much collateral harm to civilians might be acceptable relative to a military objective. But the nature of proportionality judgments is fundamentally moral.</p>



<p>These decisions are inherently about values–the value of a target to a military objective, the value of a military objective to an operation and an overall strategy; the value of civilian infrastructure to a family, a community, a country; the value of a natural environment; and above all the value of human lives and the cost of taking those lives. They are also about duties, our duties to protect, our duties to each other.</p>



<p>These values are not intrinsically numerical or quantitative in nature, and assigning them such values in a computer program is arbitrary at best. Computers do not “understand” in any meaningful sense. They represent the world through mathematical abstractions that we design and understand, and from which we assign and seek meaning. Worse, training an algorithm to “learn” these values from a dataset is to abdicate any human responsibility in establishing the values represented in the systems, including the value of human life and the necessary conditions of human flourishing.</p>



<p>These are moral values, only understood through the lived experience of human life, moral reflection, and ethical development. In those limited cases where the decision to end a human life can be morally justified, it must be made by a moral agent who truly understands these values. Any life lost by the decision of an algorithm is, by definition, taken arbitrarily. ICRAC appreciates the work of the CCW GGE and this section of the latest draft of the Chair’s Rolling Text:</p>



<p><em>States should ensure context-appropriate human judgement and control in the use of<br>LAWS, through the following measures &#8230; [which] &#8230; includes ensuring assessment of legal<br>obligations and ethical considerations by a human, in particular, with regard to the effects<br>of the selection and engagement functions.</em></p>



<p>The ethical considerations of the use of force must remain a matter of human judgement. We must not eliminate ethical considerations altogether by delegating them to machines wholly incapable of grasping such considerations. Human dignity requires that we consider a human as human–no machine can do this for us.</p>



<p>Similarly for anti-personnel AWS, in order to design systems to autonomously target people, it would be necessary to create digital representations of people, or target profiles. The same moral logic applies here.</p>



<p>From a legal perspective, it could be argued that unmounted infantry are military objects by nature, and can pose a threat just as a tank does. But there is an important moral difference between targeting people directly and targeting a tank while accepting that the people inside it may be killed. People are not to be treated as objects, but always as moral subjects.</p>



<p>The aim of war, and the moral justification of killing in war, depends critically on using force to diminish the ability of your adversary to use force against you. The ultimate aim is not to harm or kill the enemy directly, this is only a means to an end, namely the end of hostilities. Targeting a human directly is to make the destruction of a human a goal in itself, rather than the true goal of eliminating the threat they pose. This might sound like a minor distinction, but by making the targeting and killing of humans the goal of a machine, rather than the elimination of military threats, we stand to vastly undermine human dignity.</p>



<p>By designing systems to target people directly, we essentially and effectively “pre-authorize” the moral judgement to take their lives. By pre-authorizing the killing of humans, and making personnel the targets of autonomous weapons, we would fundamentally violate and diminish human dignity. If we accept that a soldier on the battlefield can be directly targeted, without a human moral judgement or moral justification, then we make it more acceptable to do so in other contexts as well.</p>



<p>When we violate human dignity, it is not just the immediate victim who loses their dignity. All of humanity suffers from this loss. This is why we feel such moral disgust at the injustices of slavery, and torture, and the dropping of bombs on children–these atrocities undermine our collective dignity as human beings and offend our moral sensibility.</p>



<p>While the use of violent force against unjust aggression is sometimes necessary, it is our moral responsibility to ensure that force is used justly. The only way to ensure that force is used justly is through moral judgement, and this requires a moral agent. Machines and automated algorithms, however sophisticated they may appear, are not moral agents, and are not capable of moral judgements–only thin and arbitrary approximations. We must not delegate our morality to machines, as doing so threatens the very essence of our human dignity.</p>



<p>To quote the wise words of Christof Heyns, “War without reflection is mechanical slaughter.”</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19944</post-id>	</item>
		<item>
		<title>Statement on Technical Considerations in Open Informal Meeting at UNGA 1st Committee</title>
		<link>https://www.icrac.net/statement-on-technical-considerations-in-open-informal-meeting-at-unga-1st-committee/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:40:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19938</guid>

					<description><![CDATA[UNGA LAWS Informals ICRAC Statement on Technical Considerations Delivered by Prof. Peter Asaro, 13 May 2025 Thank you Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a co-founding member of the Stop Killer Robots Campaign. ICRAC has many concerns about the development and use of autonomous weapons and [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c"><img data-recalc-dims="1" loading="lazy" decoding="async" width="600" height="450" class="wp-image-19939" style="width: 600px;" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG_20250512_101911990_HDR_AE-scaled.jpg?resize=600%2C450&#038;ssl=1" alt="" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG_20250512_101911990_HDR_AE-scaled.jpg?w=2560&amp;ssl=1 2560w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG_20250512_101911990_HDR_AE-scaled.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG_20250512_101911990_HDR_AE-scaled.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG_20250512_101911990_HDR_AE-scaled.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG_20250512_101911990_HDR_AE-scaled.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG_20250512_101911990_HDR_AE-scaled.jpg?resize=1536%2C1152&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG_20250512_101911990_HDR_AE-scaled.jpg?resize=2048%2C1536&amp;ssl=1 2048w" sizes="auto, (max-width: 600px) 100vw, 600px" /></p>



<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c"><strong>UNGA LAWS Informals <br>ICRAC Statement on Technical Considerations <br>Delivered by Prof. Peter Asaro, 13 May 2025</strong> </p>



<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c"><br>Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a co-founding member of the Stop Killer Robots Campaign.</p>



<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c">ICRAC has many concerns about the development and use of autonomous weapons and the accelerated production and promotion of these systems by private technology companies. Far from being technically inevitable and practically necessary, autonomous weapons pose a considerable risk to global stability and security, and are likely to cause more civilian harm, not less. As a group of scholars with expertise in relevant domains, including robotics, AI, and digital information systems, we strongly urge caution.</p>



<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c">We are concerned that the technology that underpins the functionalities of AWS is dangerously unsuitable for the complex and dynamic contexts of conflict. Specifically, the AI element in AWS poses considerable risks. Testing such systems is difficult and time-consuming, and the tools and methods for the verification and validation of AI systems do not yet exist, if they are possible at all. The questionable reliability of prediction based on historical data when applied to dynamically unfolding situations in conflict raises further questions regarding the validity and legality of using AI supported AWS.</p>



<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c">At best, AI-supported systems are only as good as the data on which they are trained, and appropriate, comprehensive and up-to-date data is hard to come by in contested conflict spaces. AI systems need frequent updates to remain relevant and functional, but with each substantial update, vital aspects of the system may become compromised, requiring further verification and validation.</p>



<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c">As we heard from these presenters, it is well known in technology and industry circles that AI systems remain unproven in terms of reliability in safety-critical and complex situations such as armed conflict. They are known to give inaccurate outputs, and newer generative AI systems, which are likely to find their way into the wider AWS environment, are known to hallucinate: they give false or misleading outputs that are difficult to distinguish from accurate results. In the case of generative AI, this behavior is guaranteed by its technical architecture, and these types of errors can only be managed, not eliminated. When AI experts and those who make the technologies used in AWS raise alarms about the inadequacies of AWS for conflict, we should listen.</p>



<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c">We are concerned that the technical characteristics of AWS pose a considerable risk of enabling uncontrolled escalation and conflict at speed. Escalation from crisis to war, or of a conflict to a higher level of violence, could come about due to erroneous indications of attack or a simple sensor or computer error. Unpredictable systems, and systems which operators cannot understand or explain, will give leaders false impressions of their capabilities, leading to overconfidence or encouraging pre-emptive attacks. This will lead to greater global instability and insecurity.</p>



<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c">Finally, there are operational risks posed by AWS in that they give the illusion that such weapons are more precise and accurate, and will therefore inflict less harm. The extensive use of AI in current conflicts indicates that the contrary may be the case. This is particularly so for database-driven systems that generate targeting lists faster than humans can evaluate and verify the lawfulness of targets. The technical capacity for precision or accuracy is not a warrant for discrimination or proportionality in use. Unless we establish clear, legally binding limitations on AWS, there is no safeguard that systems which prioritize speed and scale will not be used in an indiscriminate and disproportionate manner, either intentionally or because humans have abdicated their judgement to a machine.</p>



<p id="block-5c43bd10-ca39-48af-98ad-e1748d15cd4c">Thank you.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19938</post-id>	</item>
		<item>
		<title>Statement on Security in Open informal consultations at UN GA</title>
		<link>https://www.icrac.net/statement-on-security-in-open-informal-consultations-at-un-ga/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:11:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19933</guid>

					<description><![CDATA[Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”. Thank you Chair, Presenters, Delegates, My name is Dr. Matthew Breay Bolton, I am Co-Director of Pace University’s International Disarmament Institute and a member of the International Committee for Robot Arms Control [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p>Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="576" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&#038;ssl=1" alt="" class="wp-image-19934" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1536%2C864&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></figure>



<p>Thank you, Chair, Presenters, Delegates,</p>



<p>My name is Dr. Matthew Breay Bolton, and I am Co-Director of Pace University’s <a href="https://www.pace.edu/dyson/faculty-and-research/research-centers-and-initiatives/international-disarmament-institute">International Disarmament Institute</a> and a member of the International Committee for Robot Arms Control (ICRAC).</p>



<p>I would like to raise the importance of thinking about <em>human</em> security and protecting the integrity of the natural environment, considerations beyond traditional interpretations of security as strategic stability.</p>



<p>In this regard, I would like to highlight a report recently published by Pace’s International Disarmament Institute “<a href="https://bpb-us-w2.wpmucdn.com/blogs.pace.edu/dist/0/195/files/2025/05/Considerations-for-a-Victim-Assistance-Provision-in-a-Treaty-Banning-Killer-Robots-Submission-Draft-26-March-2025.pdf">Considering Victim Assistance and Remediation Provisions for a Treaty on Killer Robot</a>s.”</p>



<p>International diplomatic and advocacy discussions surrounding a possible treaty on autonomous weapons systems – “killer robots” – have neglected consideration of provisions on victim assistance and remediation. This departs from an almost three-decade trend in treaties banning and regulating weapons, which have included “positive obligations” to assist affected communities and remediate contaminated environments.</p>



<p>Autonomous weapons systems have not yet been widely deployed and thus there are few who might be considered victims. Moreover, one hopes that a treaty will stymie widespread use of killer robots. Nevertheless, it is possible that some states will remain outside any eventual treaty and some non-state actors may remain outside the norm and may use autonomous weapons, whether in armed conflict, policing or terrorism. Therefore, it is important for diplomats and advocates to discuss whether positive obligations to address harms from killer robots belong in a treaty regulating and/or banning them. If so, further consideration should be given to the scope and shape of such provisions on victim assistance and remediation in advance of any negotiations.</p>



<p>To phrase this as a set of questions for the panelists:</p>



<ul class="wp-block-list">
<li>If an autonomous weapon sinks a ship, who would be responsible for addressing the resulting pollution, environmental injustices and insecurities? </li>
</ul>



<ul class="wp-block-list">
<li>If civilians are harmed or disabled by the use of an autonomous armed drone, how might we secure their medical care and rehabilitation, as well as prosecution of those responsible? How would we give them satisfaction that justice is secured?</li>
</ul>



<p>The specificity of autonomous weapons systems means that diplomats and activists should not simply “copy and paste” the victim assistance and remediation provisions from other instruments into a killer robots treaty. In particular, care should be taken to ensure that provisions fill legal gaps and/or strengthen rather than undermine existing obligations.</p>



<ul class="wp-block-list">
<li>What complementarities are relevant not only in International Humanitarian Law and weapons treaties, but also in the UN Voluntary Trust Fund on Torture or the Convention on the Rights of Persons with Disabilities?</li>
</ul>



<p>Diplomats, civil society advocates, humanitarian workers and activists engaged in discussions of a potential treaty on autonomous weapons systems should consider:</p>



<ul class="wp-block-list">
<li>Whether to include positive obligations addressing possible harms resulting from the use of killer robots, such as victim assistance and remediation of contaminated environments;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of precedent offered by recent international treaties and norms on weapons, which have included provisions on victim assistance and remediation of contaminated land;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of other normative frameworks for redress and remediation, such as from human rights and environmental law;</li>
</ul>



<ul class="wp-block-list">
<li>How to ensure that possible provisions fill legal gaps and strengthen rather than undermine existing obligations.</li>
</ul>



<p>We would be interested to hear from panelists, as well as states here today, their thoughts on the human and environmental security implications of autonomous weapons systems, particularly how to remedy the harms resulting from their use, such as through practices of victim assistance and environmental remediation.</p>



<p>This is among several dimensions of autonomous weapons that have not yet been discussed in the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS) mandated by the Convention on Certain Conventional Weapons (CCW). Discussion of these issues here demonstrates the potential value of this forum.</p>



<p>Thank you for the opportunity to address this meeting!</p>



<p></p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19933</post-id>	</item>
		<item>
		<title>ICRAC submission to the United Nations Secretary General on &#8220;AI in the Military Domain and its Implications for International Peace and Security&#8221;</title>
		<link>https://www.icrac.net/icrac-submission-to-the-united-nations-secretary-general-on-ai-in-the-military-domain-and-its-implications-for-international-peace-and-security/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Fri, 11 Apr 2025 13:36:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19898</guid>

					<description><![CDATA[11 April 2025 The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit our views to the United Nations Secretary-General in response to Resolution A/RES/79/239 “Artificial intelligence in the military domain and its implications for international peace and security.” Founded in 2009, ICRAC is a civil society organization of experts in artificial [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p><em>11 April 2025</em></p>



<p>The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit our views to the United Nations Secretary-General in response to Resolution A/RES/79/239 “Artificial intelligence in the military domain and its implications for international peace and security.”</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="683" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1024%2C683&#038;ssl=1" alt="" class="wp-image-19896" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1024%2C683&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=768%2C512&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1536%2C1024&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=2048%2C1365&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></figure>



<p>Founded in 2009, ICRAC is a civil society organization of experts in artificial intelligence, robotics, philosophy, international relations, human security, arms control, and international law. We are deeply concerned about the pressing dangers posed by AI in the military domain. As members of the Stop Killer Robots Campaign, ICRAC fully endorses their submission to this report, and wishes to provide further detail regarding the concerns raised by AI-enabled targeting.</p>



<p>Increasing investments in AI-based systems for military applications, specifically AI-enabled targeting, present new threats to peace and security and underscore the urgent need for effective governance. ICRAC identifies the following concerns in the case of AI-enabled targeting:</p>



<ol class="wp-block-list">
<li>AI-enabled targeting systems are only as valid as the data and models that inform them. ‘Training’ data for targeting requires the classification of persons and associated objects (buildings, vehicles) or ‘patterns of life’ (activities) based on digital traces coded according to vaguely specified categories of threat, e.g. ‘operatives’ or ‘affiliates’ of groups designated as combatants. Often the boundary of the target group is itself poorly defined. Although this casts into question the validity of input data and associated models, there is little accountability and no transparency regarding the bases for target nominations or for target identification. AI-enabled systems thus threaten to undermine the Principle of Distinction, <a href="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/" data-type="link" data-id="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/">even as they claim to provide greater accuracy</a>.</li>



<li><a href="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza#_What_are_some" data-type="link" data-id="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza#_What_are_some">Human Rights Watch research</a> indicates that in the case of IDF operations in Gaza, AI-enabled targeting tools rely on ongoing and systematic Israeli surveillance of all Palestinian residents of Gaza, including with data collected prior to the current hostilities in a manner that is incompatible with international human rights law.</li>



<li>The increasing reliance on profiling required by AI-enabled targeting furthers a shift from the recognition of persons and objects identified as legitimate targets by their observable disposition as an imminent military threat, to the ‘discovery’ of threats through mass surveillance, based on statistical speculation, suspicion and guilt by association.</li>



<li>The questionable reliability of prediction based on historical data when applied to dynamically unfolding situations in conflict raises further questions regarding the validity and legality of AI-enabled targeting.</li>



<li>The use of AI-enabled targeting to accelerate the scale and speed of target generation further undermines processes for validation of the output of targeting systems by humans, while greatly amplifying the potential for direct and collateral civilian harm, as well as diminishing the possibilities for de-escalation of conflict through means other than military action.</li>
</ol>



<p>Justification for the adoption of AI-enabled targeting is based on the premise that acceleration of target generation is necessary for ‘decision-advantage’, but the relation between speed of targeting and effectiveness in overall military success, or longer-term political outcomes, is questionable at best. The ‘<a href="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/" data-type="link" data-id="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/">need’ for speed</a> that justifies AI-enabled targeting is based on a circular logic, which perpetuates what has become an arms race to accelerate the automation of warfighting. <em>Accelerating the speed and scale of target generation effectively renders human judgment impossible or, de facto, meaningless.</em> The risks to peace and security &#8211; especially to human life and dignity &#8211; are greatest for operations outside of conventional or clearly defined battlespaces. Insofar as the use of AI-enabled targeting is shown to be contrary to international law, the mandate must be to <em>not</em> use AI in targeting.</p>



<p>In this regard, ICRAC notes that the above systems present challenges to compliance with various branches of international law such as international humanitarian law (IHL), <em>jus ad bellum</em> (<a href="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf" data-type="link" data-id="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf">UN law on prohibition of use of force</a>), international human rights law (IHRL) and international environmental law. In the context of military AI’s implications for peace and security, <em>jus ad bellum</em>, a framework that prohibits aggressive military actions and regulates the conditions under which states may lawfully resort to the use of force, is the most relevant. In the same manner, IHRL is important in this context because it is designed to uphold human dignity, equality, and justice, values that form the foundation of peaceful and secure societies.</p>



<p><strong>Citations</strong></p>



<p>Alvarez, Jimena Sofia Viveros. September 4, 2024. The risks and inefficacies of AI systems in military targeting support. <em>Humanitarian Law and Policy.</em> <a href="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/">https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/</a></p>



<p>Bo, Marta and Dorsey, Jessica. April 4, 2024. Symposium on Military AI and the Law of Armed Conflict: The ‘Need’ for Speed – The Cost of Unregulated AI Decision-Support Systems to Civilians. <em>OpinioJuris</em>. <a href="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/">https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/</a></p>



<p>Chengeta, Thompson. May, 2024. African Commission for Human and Peoples’ Rights submission to the UN Secretary General Report on Lethal Autonomous Weapons, ASSEMBLY RESOLUTION 78/241, Commissioner Ayele Dersso Focal Point on the ACHPR Study on AI and Other Technologies. <a href="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf" data-type="link" data-id="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf">78-241-African_Commission-EN.pdf</a></p>



<p>Human Rights Watch. September 10, 2024. Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza. <a href="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza">https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza</a></p>



<p>ICRC. 6 June 2019. Artificial intelligence and machine learning in armed conflict: A human-centred approach.<br><a href="https://www.icrc.org/sites/default/files/document_new/file_list/ai_and_machine_learning_in_armed_conflict-icrc.pdf">https://www.icrc.org/sites/default/files/document_new/file_list/ai_and_machine_learning_in_armed_conflict-icrc.pdf</a>; published version at <em>International Review of the Red Cross: Digital technologies and war</em> (2020), 102 (913), 463–479.</p>



<p>Schwarz, Elke. December 12, 2024. The (im)possibility of responsible military AI governance. <em>Humanitarian Law and Policy</em>. <a href="https://blogs.icrc.org/law-and-policy/2024/12/12/the-im-possibility-of-responsible-military-ai-governance/">https://blogs.icrc.org/law-and-policy/2024/12/12/the-im-possibility-of-responsible-military-ai-governance/</a></p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19898</post-id>	</item>
		<item>
		<title>ICRAC Submission to the United Nations Secretary-General on Autonomous Weapon Systems</title>
		<link>https://www.icrac.net/icrac-submission-to-the-united-nations-secretary-general-on-autonomous-weapon-systems/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Mon, 20 May 2024 18:45:00 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19903</guid>

					<description><![CDATA[The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit its perspectives and recommendations to be considered by the United Nations Secretary General with respect to Resolution 78/241 on Lethal Autonomous Weapon Systems (adopted in December 2023). Founded in 2009, ICRAC is an international committee of experts in robotics technology, artificial intelligence, [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p>The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit its perspectives and recommendations to be considered by the United Nations Secretary General with respect to Resolution 78/241 on Lethal Autonomous Weapon Systems (adopted in December 2023). Founded in 2009, ICRAC is an international committee of experts in robotics technology, artificial intelligence, robot ethics, international relations, international security, arms control, international humanitarian law, international human rights law and philosophy of technology. We are deeply concerned about the pressing dangers that military robotics and automation pose to international peace, international security and stability, and the rights and safety of civilians in war. Based on our expertise, we are particularly concerned that military robotic systems will lead to more frequent, less restrained, and less accountable armed conflict. In light of these risks, we call for an international treaty to prohibit and restrict autonomous weapon systems.</p>



<p>As has been discussed in detail at the CCW GGE over the past decade, autonomous weapon systems (AWS) raise serious concerns for international humanitarian law in regard to complying with the principles of distinction and proportionality. The risk of triggering the proliferation of arms is another stark reality posed by AWS, as is the accessibility of these types of weapon systems to non-state armed groups, among other actors. The use of AWS may further spill into the arena of national and transnational organized crime in addition to policing at the domestic level. All the while, several operational concerns remain as to the use of AWS from the perspective of accountability, bias and the use of machine-learning algorithms which may develop beyond the capacity of “the human in the loop.” There are also serious risks to regional and global stability posed by replacing human decision making with machine decision making, as it becomes more difficult for political and military leaders to anticipate and interpret the intentions, decisions and actions of their adversaries, and thus find ways to avoid or de-escalate conflicts.</p>



<p>We also note the threat that AWS pose to compliance with international human rights, particularly the right to life, the prohibition against torture, cruel and inhumane treatment, and above all the human right to dignity. We fear that an additional protocol to the CCW would fail to address these human rights concerns. We are concerned that the automated targeting and release of non-conventional weapons, including nuclear weapons, may also fall outside the scope of any legally binding CCW protocol. We thus advocate and support all calls for a legally binding instrument to prohibit and restrict the use of AWS, and urge the Secretary-General to encourage the initiation of a forum within the United Nations General Assembly that can include all States, cover autonomy and automation in the use of all weapons, and address international humanitarian law as well as human rights concerns.</p>



<p>This submission is informed by our comprehensive interdisciplinary expertise. We have published extensively on the ethical, legal, technical and security challenges of autonomous weapon systems, on the question of meaningful human control, and on the challenges of escalation at speed.</p>



<p><strong>Scope</strong></p>



<p>In accordance with the International Committee of the Red Cross, we understand an autonomous weapon system as one that, potentially after initial activation or launch by a human, selects targets based on sensor data and engages the targets without human intervention. We endorse the recommendations of the International Committee of the Red Cross for a two-tiered approach that prohibits unpredictable systems and systems that explicitly target humans, while strictly regulating the use of autonomy in all other systems for the command, control and engagement of lethal force. This includes restrictions on the time, space, scope and scale of operations of such systems, as well as the types of targets and situations in which they may be used. In particular, we strongly agree that the only permissible targets of such systems should be military objects by nature, and never civilian or dual-use targets, which should always require human judgment. More discussion is needed on the appropriate forms and regulation of the human-machine interaction in complex command and control systems. In particular, as computers and artificial intelligence collect and automatically analyze more and more data, greater clarity is needed in what constitutes meaningful human control in the context of automated target generation and identification, and how to ensure respect and responsibility for international law when such systems are used.</p>



<p><strong><span style="text-decoration: underline;">Key Challenges to Global Peace and Security</span></strong></p>



<ul class="wp-block-list">
<li>Uncontrolled Escalation and Missed Opportunities for De-escalation and Diplomacy</li>
</ul>



<p>The technical characteristics of AWS pose a considerable risk of enabling uncontrolled escalation at speed. As the thresholds for applying military force are lowered, the likelihood of conflict will increase. Actions and reactions to the adversary will have to be programmed in advance. Two AWS swarms moving in relatively close proximity to each other, in international airspace, for example, might interact in ways that could not be mitigated or controlled by a human within an appropriate time window. In the case of an enemy attack, even a few seconds’ delay could mean the loss of one’s systems; thus there will be strong pressure for fast counterattacks that preclude human consideration.</p>



<p>Escalation from crisis to war, or escalating a conflict to a higher level of violence, could come about by erroneous indications of attack or a simple sensor or computer error. Mutual interaction between the control programs could not be tested in advance. The outcome of the interaction of such complex systems would be intrinsically unpredictable, but fast escalation is possible and likely. In a severe crisis with fear of preemption this could greatly destabilize the military situation between potential enemies.</p>



<p>As political and military leaders become increasingly dependent on systems they cannot explain or predict, the traditional means of conflict resolution and de-escalation will become more difficult or impossible. Unpredictable systems will give leaders false impressions of their capabilities, leading to overconfidence or encouraging preemptive attacks. Moreover, automated attacks, responses, and escalations will make it more difficult for leaders to interpret the intentions, decisions and actions of their adversaries, and will also limit their options for response. Systems that automatically react or attack may miss opportunities to find other, less violent, ways to achieve military objectives, or preclude opportunities for diplomatic or political resolutions to a conflict. The overall effect of these systems will be to close off avenues and opportunities to avoid conflicts, to de-escalate conflicts, and to find means to end hostilities.</p>



<ul class="wp-block-list">
<li>Moral responsibility</li>
</ul>



<p>No machine, computer or algorithm is capable of recognizing a human as a human being, nor can it respect humans as inherent bearers of rights and dignity. A machine cannot even understand what it means to be in a state of war, much less what it means to have, or to end, a human life. Decisions to end human life must be made by humans in order to be morally justifiable. These are responsibilities of unavoidable moral weight that cannot be delegated to machines or satisfied by the mere inclusion of humans in the writing of computer programs. While accountability for the deployment of lethal force is a necessary condition for moral responsibility in war, accountability alone is not sufficient for moral responsibility. This also requires the recognition of the human, respect for the human right to life and dignity, and reflection upon the value of life and the justification for the use of violent force.</p>



<ul class="wp-block-list">
<li>Meaningful Human Control</li>
</ul>



<p>Much hinges on the degree to which AWS can be meaningfully controlled by humans. Robust scientific scholarship on human psychology suggests that humans experience cognitive limitations when it comes to technological/computational systems. This condition, known as automation bias, cognitively hinders the human from having sufficient contextual understanding to intervene with systems that are fully autonomous and function at speeds beyond human capabilities. In order to safeguard meaningful human control (not merely functional control) over AI-enabled AWS, those involved in operating or deciding to deploy AWS should have full contextual and situational awareness of the target area at the time of a specific attack. They must also be able to perceive and react to changes or unanticipated situations that arise; ensure active and deliberate participation in the action; have sufficient training and understanding of the system and its likely actions; have adequate time for meaningful control; and have the means and knowledge for a rapid suspension of an action. For many AWS this is not possible. Meaningful human control is fundamental to the edifice of the laws of war and the ethics of war.</p>



<p><strong><span style="text-decoration: underline;">Moving Forward: A Treaty to Prohibit and Regulate the Use of AWS</span></strong></p>



<p>We support calls from States, as well as the UN Secretary-General and the President of the ICRC, for an international legally binding treaty prohibiting and regulating the use of AWS.</p>



<p>What is needed is a legally binding instrument that obligates States to adhere to prohibitions and regulatory limitations for AWS. Codes of conduct and political declarations are not enough for systems that pose such grave risks to global peace and security. This legally binding instrument must apply to the automated control of all weapons, and require meaningful human control in compliance with substantive regulations for the use of force in all cases. Such a treaty should apply to all military uses of AWS and systems that generate or select targets, as well as to all police, border security and other civilian applications that automate the use of force.</p>



<p>The treaty should prohibit autonomous weapons systems that are ethically or legally unacceptable. This includes autonomous weapons systems for which the operation or effects cannot be sufficiently understood, predicted and explained; autonomous weapons systems that cannot be used with meaningful human control; and autonomous weapons systems designed to target human beings.</p>



<p>The treaty should also include positive obligations requiring States to use permitted AWS only within the bounds of clearly stipulated regulations that ensure adherence to international human rights law and the key principles of international humanitarian law. We believe that an emerging norm around meaningful human control can be articulated and codified through a treaty negotiation in a process that includes all States, civil society, and industry and technical experts. We urge the Secretary-General to advance the creation of such a forum within the General Assembly, and we look forward to offering our expertise to those discussions.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19903</post-id>	</item>
		<item>
		<title>New Book Shows Catastrophic Folly of Automating Warfare</title>
		<link>https://www.icrac.net/new-book-shows-catastrophic-folly-of-automating-warfare/</link>
		
		<dc:creator><![CDATA[mbolton]]></dc:creator>
		<pubDate>Sun, 20 Sep 2020 10:40:20 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://www.icrac.net/?p=6475</guid>

					<description><![CDATA[Despite the challenges of COVID-19, 21-25 September the world’s governments will discuss at the UN in Geneva the humanitarian and security threat posed by “lethal autonomous weapons systems” – high-tech killer robots that could target people without meaningful human control over the use of violence. But despite the media clichés that often accompany these discussions [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='mbolton' src='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://matthewbreaybolton.com">mbolton</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Matthew Bolton is professor of political science at Pace University in New York City. He is an expert on global peace and security policy, focusing on multilateral disarmament and arms control policymaking processes. He has a PhD in Government and Master's in Development Studies from the London School of Economics and a Master's from SUNY Environmental Science and Forestry. Since 2014, Bolton has worked on the UN and New York City advocacy of the International Campaign to Abolish Nuclear Weapons (ICAN), recipient of the 2017 Nobel Peace Prize. Bolton has published six books, including Political Minefields (I.B. 
Tauris) and Imagining Disarmament, Enchanting International Relations (Palgrave Pivot).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full"><img data-recalc-dims="1" loading="lazy" decoding="async" width="191" height="300" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2023/04/Political-Minefields-cover.jpg?resize=191%2C300&#038;ssl=1" alt="" class="wp-image-19759"/></figure>



<p>Despite the challenges of COVID-19, on 21-25 September the world’s governments will <a href="https://www.stopkillerrobots.org/2020/09/maintaining-momentum-during-the-pandemic/">discuss</a> at the UN in Geneva the humanitarian and security threat posed by “lethal autonomous weapons systems” – high-tech killer robots that could target people without meaningful human control over the use of violence.</p>



<p>But despite the media clichés that often accompany these discussions – journalists seem unable to resist references to Terminator and Skynet – killer robots are not some sci-fi fantasy. They are the result of a long trend in weapons development of using technology to disembody killing.</p>



<p>In my new <a href="https://www.bloomsburycollections.com/book/political-minefields-the-struggle-against-automated-killing/">book</a>, <em>Political Minefields </em>(I.B. Tauris), I trace the history of efforts to avoid responsibility for violence by using remote and automated weapons like landmines, cluster munitions, armed drones and, now, killer robots.</p>



<p>“Weapons developers are seeking to pervert the power of information and communications technology for deadly ends, taking humans entirely out of the decision to kill,” writes Jody Williams, who was awarded the 1997 Nobel Peace Prize along with the International Campaign to Ban Landmines (<a href="http://www.icbl.org/">ICBL</a>), in her foreword to my book. “In the autonomous, weaponized robot, they are essentially designing mines that actively seek out their targets, that can follow you, that can fly.”</p>



<p>From the WWII minefields of North Africa and the US automated bombing of Laos to armed drones targeting makers of improvised explosive devices, military planners have tried, as US Vietnam War General William Westmoreland put it, to “replace wherever possible the man with the machine.”</p>



<p>But in my research, I’ve learned that the fever dream of algorithmic warfare never delivers on its promise of victory by remote control. People are too messy, unpredictable, clever, and tricky to meet the assumptions programmed into military technology.</p>



<p>Allied troops just marched through the Nazi minefields at El Alamein, taking the casualties and repurposing the mines they found for their own uses. Vietnamese communist soldiers spoofed the various electronic detectors dropped from US warplanes onto their pathways through the jungle. They sent animals down the trail, placed bags of urine next to so-called “people sniffers”, and played tapes of vehicle noises next to microphones – prompting computerized bombers to unload explosives onto phantom guerillas.</p>



<p>As I have travelled in and around the world’s minefields and cluster munition strike zones, I have heard the story over and over again, in Afghanistan, Bosnia, Cambodia, Iraq, Laos and South Sudan. In each place, it is civilians who have borne the consequences of turning warfare over to automated devices. Decades after the soldiers who placed and armed them have gone, landmines in Afghanistan continue to maim children. Lao farmers still risk setting off unexploded cluster bomblets when they plough their rice fields.</p>



<p>The final chapter of my book highlights the far-sighted work of the International Committee for Robot Arms Control (<a href="https://www.icrac.net/">ICRAC</a>) and <a href="https://www.stopkillerrobots.org/">Campaign to Stop Killer Robots</a>, which have sounded the alarm on the emerging militarization of artificial intelligence and robotics. They are not anti-technology Luddites. “It’s OK for a plane to fly itself,” Dr. Peter Asaro, co-founder of ICRAC and New School professor, told me. “It’s not OK for a plane to decide who to shoot at.” We have, he says, “the right not to be killed by a machine.”</p>



<p>For the last six years, governments have gathered in Geneva to consider how to address lethal autonomous weapons systems, under the auspices of the Convention on Certain Conventional Weapons (CCW). Despite pressure from a wide range of countries and civil society, the big military powers have dragged their feet, abusing the rule of consensus decision-making and running down the clock to avoid constraining their high-tech weapons R&amp;D.</p>



<p>But the diplomatic patience of the <a href="https://www.stopkillerrobots.org/2019/03/minority-of-states-delay-effort-to-ban-killer-robots/">majority</a> of nations in the CCW is running out. A global <a href="https://www.stopkillerrobots.org/2019/01/global-poll-61-oppose-killer-robots/">poll</a> found that 61% of people in 26 countries opposed killer robots. And UN Secretary-General António Guterres has directly <a href="https://www.stopkillerrobots.org/2018/11/unban/">called</a> on governments “to ban these weapons, which are politically unacceptable and morally repugnant.”</p>



<p>Just as human reality is more complex than software programs, society is not the passive recipient of technological change. In writing my book, I have been inspired to meet people who have mobilized to clear the minefields, provide support to survivors of cluster munitions and pressure diplomats to negotiate treaties banning inhumane weapons. We must follow their example and demand new international law ensuring that the use of force remains under meaningful human control.</p>



<p><em>Matthew Bolton, ICRAC member and associate professor of political science at Pace University, is author of </em>Political Minefields: The Struggle against Automated Killing<em>. If you can’t find a copy in your local bookstore, it is available with a 35% discount from </em><a href="https://www.bloomsburycollections.com/book/political-minefields-the-struggle-against-automated-killing/"><em>Bloomsbury</em></a><em> with the code GLR TW5</em></p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='mbolton' src='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://matthewbreaybolton.com">mbolton</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Matthew Bolton is professor of political science at Pace University in New York City. He is an expert on global peace and security policy, focusing on multilateral disarmament and arms control policymaking processes. He has a PhD in Government and Master's in Development Studies from the London School of Economics and a Master's from SUNY Environmental Science and Forestry. Since 2014, Bolton has worked on the UN and New York City advocacy of the International Campaign to Abolish Nuclear Weapons (ICAN), recipient of the 2017 Nobel Peace Prize. Bolton has published six books, including Political Minefields (I.B. Tauris) and Imagining Disarmament, Enchanting International Relations (Palgrave Pivot).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6475</post-id>	</item>
		<item>
		<title>ICRAC Releases New Report on Meaningful Human Control</title>
		<link>https://www.icrac.net/icrac-releases-new-report-on-meaningful-human-control/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 20 Aug 2019 08:38:40 +0000</pubDate>
				<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Working Papers]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=6292</guid>

					<description><![CDATA[ICRAC Members Daniele Amoroso and Guglielmo Tamburrini have completed a new ICRAC Working Paper #4 on “What makes human control over weapons ‘Meaningful’?” The paper was prepared for distribution at the August 2019 meeting of the United Nations CCW GGE on Lethal Autonomous Weapons. The paper can be downloaded from our Resources Page, along with [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p>ICRAC Members Daniele Amoroso and Guglielmo Tamburrini have completed a new ICRAC Working Paper #4 on <a href="https://www.icrac.net/wp-content/uploads/2019/08/Amoroso-Tamburrini_Human-Control_ICRAC-WP4.pdf">“What makes human control over weapons ‘Meaningful’?”</a> The paper was prepared for distribution at the August 2019 meeting of the United Nations CCW GGE on Lethal Autonomous Weapons. The paper can be downloaded from our <a href="https://www.icrac.net/research/">Resources Page</a>, along with ICRAC&#8217;s other working papers.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6292</post-id>	</item>
		<item>
		<title>ICRAC Statement at Informal Consultations of the August 2019 CCW GGE on LAWS</title>
		<link>https://www.icrac.net/icrac-statement-at-informal-consultations-of-the-ccw-gge-on-laws/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 20 Aug 2019 08:29:57 +0000</pubDate>
				<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=6289</guid>

					<description><![CDATA[Statement delivered by ICRAC Vice-chair Peter Asaro to the CCW GGE Informal Session on the Chair&#8217;s Non-Paper, August 19, 2019. &#8220;The International Committee for Robot Arms Control, which is a member of the Campaign to Stop Killer Robots, would like to thank the Chair for this Draft, and make the following comments and requests. First [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p><img data-recalc-dims="1" loading="lazy" decoding="async" width="600" height="450" class="wp-image-6290" style="width: 600px;" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=600%2C450&#038;ssl=1" alt="" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?w=4032&amp;ssl=1 4032w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?w=2000&amp;ssl=1 2000w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/08/IMG_20190819_121216.jpg?w=3000&amp;ssl=1 3000w" sizes="auto, (max-width: 600px) 100vw, 600px" /></p>



<p><strong>Statement delivered by ICRAC Vice-chair Peter Asaro to the CCW GGE Informal Session on the <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/E7600EE67661D5B0C125845B00569CED/$file/CCW_GGE.1_2019_CRP.1_Draft+Report.pdf">Chair&#8217;s Non-Paper</a>, August 19, 2019.</strong></p>



<p>&#8220;The International Committee for Robot Arms Control, which is a member of the Campaign to Stop Killer Robots, would like to thank the Chair for this Draft, and make the following comments and requests.</p>



<p>First and most importantly, we would urge the Chair to set a higher bar for the goals of this GGE and the discussions of the next two years. In particular, we would like the set goal to be a legally binding instrument, and not merely a “Normative Framework” of an unknown or unstated legal status. This GGE can and should begin discussing what a legally binding instrument that could effectively regulate autonomy in weapons systems might look like. Normativity could also imply ethical and moral norms, and we would welcome a broader discussion of the ethical and moral issues raised by autonomous weapons, particularly with respect to human dignity.</p>



<p>Further, we would like to remind the Chair that the “Guiding Principles” were developed to guide the discussions of this body over the past few years, and were never meant to be a goal or outcome of those discussions. We would like to see a more substantive outcome from the current GGE.</p>



<p>Finally, we are concerned that the current draft does not mention “human control”, much less “meaningful human control” or its other variants. This is despite the fact that many States, as well as civil society, have repeatedly expressed the view that human control is central to both understanding and regulating autonomy in weapons systems. Towards this end, ICRAC has produced a new white paper entitled <a href="https://www.icrac.net/wp-content/uploads/2019/08/Amoroso-Tamburrini_Human-Control_ICRAC-WP4.pdf">“What makes human control over weapons ‘Meaningful’?”</a> You will find copies of this new report in the back of the room tomorrow. In it you will find a rigorous analysis of the requirements for human control in weapons, which could provide useful concepts for the elements of a treaty, including the positive obligation on states to ensure that weapons have the necessary elements of control to ensure accountable and responsible use of weapons under international law. And we hope the Chair will stand by <a href="https://twitter.com/Jivan_Gj/status/1163379176780587008">his recent tweet</a>, and allow this document to inform discussions of the Legal, Technical and Military work streams, as well as a much needed ethical discussion that cuts across all three.</p>



<p>We hope that tomorrow’s formal discussions are productive, and will continue to urge this body to work on the substantive concepts necessary to build a legally binding instrument.&#8221;</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6289</post-id>	</item>
		<item>
		<title>Campaign Event with Jody Williams in Barcelona</title>
		<link>https://www.icrac.net/acto-de-campana-con-jody-williams-en-barcelona/</link>
		
		<dc:creator><![CDATA[Joaquin Rodriguez]]></dc:creator>
		<pubDate>Mon, 08 Apr 2019 13:59:03 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=6181</guid>

					<description><![CDATA[Next Friday, 12 April, at 4:30 p.m., together with Jody Williams, 1997 Nobel Peace Prize laureate and chair of the Nobel Women&#8217;s Initiative since 2006, we will hold an event as part of the Stop Killer Robots campaign, with the participation of ICRAC members Dr. Roser Martinez Quirante [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Joaquin Rodriguez' src='https://secure.gravatar.com/avatar/06ecb23ad24f0d156d3caac752517f80dd8e7c6b4010b903914036f205332d98?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/06ecb23ad24f0d156d3caac752517f80dd8e7c6b4010b903914036f205332d98?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Joaquin Rodriguez</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image"><img data-recalc-dims="1" loading="lazy" decoding="async" width="930" height="1024" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/04/Asset-18.png?resize=930%2C1024&#038;ssl=1" alt="" class="wp-image-6180" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/04/Asset-18.png?resize=930%2C1024&amp;ssl=1 930w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/04/Asset-18.png?resize=273%2C300&amp;ssl=1 273w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/04/Asset-18.png?resize=768%2C845&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/04/Asset-18.png?w=2000&amp;ssl=1 2000w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/04/Asset-18.png?w=3000&amp;ssl=1 3000w" sizes="auto, (max-width: 930px) 100vw, 930px" /></figure>



<p>Next Friday, April 12, at 4:30 p.m., together with Jody Williams, 1997 Nobel Peace Prize laureate and president of the Nobel Women&#8217;s Initiative since 2006, we will hold an event as part of the Stop Killer Robots campaign. It will also feature ICRAC members Dr. Roser Martinez Quirante and Dr. Joaquín Rodríguez Álvarez, as well as a representative of the Centre Delàs d&#8217;estudis per la Pau, Dr. Pere Brunet.</p>



<p>The event will take place in the Assembly Hall of the Rectorate of the Universidad Autónoma de Barcelona. Its main objective is to lay out the Stop Killer Robots campaign&#8217;s case for a binding international treaty prohibiting autonomous weapons systems.</p>



<p>For more information, contact Joaquín Rodríguez at <a href="mailto:joaquin.rodriguez@uab.cat">joaquin.rodriguez@uab.cat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6181</post-id>	</item>
		<item>
		<title>ICRAC statement at the March 2019 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-at-the-march-2019-ccw-gge/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 26 Mar 2019 15:50:46 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=6170</guid>

					<description><![CDATA[As delivered by Prof. Peter Asaro, March 26, 2019. ICRAC has been pleased to hear states shift their focus away from definitions of the technologies of autonomous weapons systems and move towards discussing restriction of their use with regards to how they should be controlled. Of course, by definition, if states wanted genuine meaningful human [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><img data-recalc-dims="1" loading="lazy" decoding="async" width="600" height="450" class="wp-image-6177" style="width: 600px;" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=600%2C450&#038;ssl=1" alt="" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?w=4608&amp;ssl=1 4608w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?w=2000&amp;ssl=1 2000w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?w=3000&amp;ssl=1 3000w" sizes="auto, (max-width: 600px) 100vw, 600px" /></p>



<p><strong><em>As delivered by Prof. Peter Asaro</em>, March 26, 2019.</strong></p>



<p>ICRAC has been pleased to hear states shift their focus away from definitions of the technologies of autonomous weapons systems and towards discussing restrictions on their use and how they should be controlled. Of course, by definition, if states wanted genuine meaningful human control of weapons systems, they would not be using autonomous weapons systems. And (as an aside) we should not forget the scientifically recognized limitations of the technology or the foreseeable threats to global security that such weapons pose.</p>



<p>We are also pleased with the statements and working papers beginning to examine the requirements for human control and planning in military systems. While this can be multifaceted, we must not let the complexity of military planning throw a smoke screen over the core issues: the meaningful human assessment of all targets, their legitimacy, and the proportionate use of force.</p>



<p>We are glad to see the beginnings of a more nuanced approach to the control of weapons systems, one that cannot be captured by gross terms such as in-the-loop, on-the-loop, the broader loop, human oversight, and appropriate levels of human judgement. However, these terms continue to insinuate themselves into military, political and defence contractors&#8217; narratives outside of the CCW. We welcome the suggestion of the iPRAW report to distinguish control-by-design from control-in-use, acknowledging that ultimate responsibility for the use of force lies in the specific context of its use.</p>



<figure class="wp-block-image"><img data-recalc-dims="1" loading="lazy" decoding="async" width="480" height="640" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_7726-e1553615341514.jpg?resize=480%2C640&#038;ssl=1" alt="" class="wp-image-6174" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_7726-e1553615341514.jpg?w=480&amp;ssl=1 480w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_7726-e1553615341514.jpg?resize=225%2C300&amp;ssl=1 225w" sizes="auto, (max-width: 480px) 100vw, 480px" /></figure>



<p>As a scientific and scholarly group, our focus is on how we can make control effective and ensure that operators, commanders and planners are making clear judgements about the validity of every attack at the time of that attack.</p>



<p>To do this we need to move away from blanket terms and examine in detail how humans interact with automated machinery. As we have pointed out before, there has been more than 30 years of scientific research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. Ignoring the science for the sake of expediency could lead us down a path to a humanitarian disaster.</p>



<p>The scientific approach is not mutually exclusive with an examination of the military control of weapons and the many lessons to be learned from current methods. Indeed, we applaud the UK&#8217;s paper on human control in 2018 and those of the Netherlands and others this year. We may not agree with all of the detail, but it is what we have urged all of the high contracting parties to bring to the table.</p>



<p>This combination of work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principle of humanity.</p>



<p>First, there should be a focus on what the human operator <strong>MUST</strong> do in the targeting cycle. This is control-in-use, which is governed by targeting rules under International Humanitarian Law and International Human Rights Law, rules that were well articulated by the ICRC in their statement this morning. Further, international law rules that apply after the use of weapons, such as those that relate to human responsibility, must be satisfied.</p>



<p>Second, the design of weapon systems must render them <strong>INCAPABLE</strong> of operating without meaningful human control. This is control-by-design, which is governed by international weapons law. Under that law, if a weapon system, by its design, is incapable of being sufficiently controlled, then such a weapon should be prohibited.</p>



<p>We need further discussion of the details of human-machine interfaces, the distribution of responsibility in the targeting cycle, and how their design can ensure IHL and IHRL compliance. Such details need not be the substance of a treaty, and we must resist being caught up in the weeds of process. We support Germany&#8217;s goal of finding a shared understanding of the principles of human control that apply to all weapons systems now and in the future, regardless of context, planning or process. This is no different from the normal processes that operate in science. One of the goals of science is to reduce the complexity of the world to simple theories or principles that capture all of the experimental data. In other words, we create abstractions of the details that are firmly coupled with and informed by the details. As Einstein once said, explanations should be as simple as possible, but no simpler. &#8220;Human in the loop&#8221; and its variants fall into the too-simple category; detailed accounts of every weapon type and how it is controlled in every context are far too complex.</p>



<p>Let me give you an example of an abstraction with three conditions that could make a good starting point for discussions on the control of weapons systems. I have said this before, but clearly there is no prohibition on repeating yourself in this room.</p>



<ol class="wp-block-list">
<li>a human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack, and be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack;</li>



<li>there will be active cognitive participation in every attack, with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and the likely incidental and possible accidental effects of the attack; and</li>



<li>there will be a means for the rapid suspension or aborting of every attack.</li>
</ol>



<p>These are general principles that could provide a starting point for discussion by states in the context of negotiating a legally binding treaty that clearly articulates the legal obligations of human control.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6170</post-id>	</item>
	</channel>
</rss>
