<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Statements &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/category/statements/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Mon, 23 Jun 2025 12:49:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>Statement on Ethical Considerations in Open Informal Meeting at UNGA 1st Committee</title>
		<link>https://www.icrac.net/statement-on-ethical-considerations-in-open-informal-meeting-at-unga-1st-committee/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:45:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Peter Asaro]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19944</guid>

					<description><![CDATA[UNGA Informals on LAWS ICRAC Statement on Ethical Considerations Delivered by Prof. Peter Asaro on May 13, 2025 Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img data-recalc-dims="1" decoding="async" width="1024" height="800" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799-1024x800.jpg?resize=1024%2C800&#038;ssl=1" alt="Peter Asaro delivering ICRAC Statement on Ethics" class="wp-image-19940" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=1024%2C800&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=300%2C234&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=768%2C600&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?w=1536&amp;ssl=1 1536w" sizes="(max-width: 1000px) 100vw, 1000px" /></figure>



<p><strong>UNGA Informals on LAWS <br>ICRAC Statement on Ethical Considerations <br>Delivered by Prof. Peter Asaro on May 13, 2025</strong> </p>



<p><br>Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and ethics. ICRAC is a co-founding member of the Stop Killer Robots Campaign.</p>



<p>We thank the organizers of this Informal Meeting for including a Session on Ethical Considerations. It has been many years since ethics was the primary focus of substantive discussion within the CCW GGE meetings. Yet ethics and morality have provided a valuable basis for international law in the past, and they are precisely where we must ground new laws to prohibit and regulate AWS in the near future: in our common shared humanity, and in principles which transcend human laws, particularly human dignity in a deep sense as discussed by Prof. Chengeta, and ethical decisions as discussed by the Representative of the Holy See.</p>



<p>Whenever violent force is used, there are risks involved. But merely managing those risks is not sufficient to meet the requirements for morally justifiable killing. Understanding the reasons and the potential consequences for the use of force is required for its justification. It has been argued that AWS may be highly accurate and precise in their use of force, but accuracy and precision are not sufficient to meet the requirements for the ethically discriminate use of force, and do not begin to address the requirements of the proportionate use of force.</p>



<p>Following the outlines of the two-tiered approach advanced by the ICRC, regulated AWS would be permitted to target autonomously. In these limited cases, specifically cases where the target is a military object by nature, such as military vehicles and installations, automated targeting must still be carefully regulated to ensure that humans can safely supervise those systems.</p>



<p>But as soon as we start considering civilian objects, even those which might be used for military purposes and might be lawfully targeted under IHL, we must not permit their targeting by automated processes. The moral argument that leads to this conclusion is clear. It may be tempting to think that we can automate proportionality decisions–how much force is needed, or how much risk is acceptable, or how much collateral harm to civilians might be acceptable relative to a military objective. But the nature of proportionality judgments is fundamentally moral.</p>



<p>These decisions are inherently about values–the value of a target to a military objective, the value of a military objective to an operation and an overall strategy; the value of civilian infrastructure to a family, a community, a country; the value of a natural environment; and above all the value of human lives and the cost of taking those lives. They are also about duties, our duties to protect, our duties to each other.</p>



<p>These values are not intrinsically numerical or quantitative in nature, and assigning them such values in a computer program is arbitrary at best. Computers do not “understand” in any meaningful sense. They represent the world through mathematical abstractions that we design and understand, and from which we assign and seek meaning. Worse, training an algorithm to “learn” these values from a dataset is to abdicate any human responsibility in establishing the values represented in the systems, including the value of human life and the necessary conditions of human flourishing.</p>



<p>These are moral values, only understood through the lived experience of human life, moral reflection, and ethical development. In those limited cases where the decision to end a human life can be morally justified, it must be made by a moral agent who truly understands these values. Any life lost by the decision of an algorithm is, by definition, taken arbitrarily. ICRAC appreciates the work of the CCW GGE and this section of the latest draft of the Chair&#8217;s Rolling Text:</p>



<p><em>States should ensure context-appropriate human judgement and control in the use of<br>LAWS, through the following measures &#8230; [which] &#8230; includes ensuring assessment of legal<br>obligations and ethical considerations by a human, in particular, with regard to the effects<br>of the selection and engagement functions.</em></p>



<p>The ethical considerations of the use of force must remain a matter of human judgement. We must not eliminate ethical considerations altogether by delegating them to machines wholly incapable of grasping such considerations. Human dignity requires that we consider a human as human–no machine can do this for us.</p>



<p>Similarly, in order to design anti-personnel AWS to autonomously target people, it would be necessary to create digital representations of people, or target profiles. The same moral logic applies here.</p>



<p>From a legal perspective, it could be argued that unmounted infantry are military objects by nature and can pose a threat just as a tank does. But there is an important moral difference between targeting people directly and targeting a tank while accepting that the people inside it may be killed. People are not to be treated as objects, but always as moral subjects.</p>



<p>The aim of war, and the moral justification of killing in war, depend critically on using force to diminish the ability of your adversary to use force against you. The ultimate aim is not to harm or kill the enemy directly; this is only a means to an end, namely the end of hostilities. Targeting a human directly is to make the destruction of a human a goal in itself, rather than the true goal of eliminating the threat they pose. This might sound like a minor distinction, but by making the targeting and killing of humans the goal of a machine, rather than the elimination of military threats, we stand to vastly undermine human dignity.</p>



<p>By designing systems to target people directly, we essentially and effectively “pre-authorize” the moral judgement to take their lives. By pre-authorizing the killing of humans, and making personnel the targets of autonomous weapons, we would fundamentally violate and diminish human dignity. If we accept that a soldier on the battlefield can be directly targeted, without a human moral judgement or moral justification, then we make it more acceptable to do so in other contexts as well.</p>



<p>When we violate human dignity, it is not just the immediate victim who loses their dignity. All of humanity suffers from this loss. This is why we feel such moral disgust at the injustices of slavery, and torture, and the dropping of bombs on children–these atrocities undermine our collective dignity as human beings and offend our moral sensibility.</p>



<p>While the use of violent force against unjust aggression is sometimes necessary, it is our moral responsibility to ensure that force is used justly. The only way to ensure that force is used justly is through moral judgement, and this requires a moral agent. Machines and automated algorithms, however sophisticated they may appear, are not moral agents, and are not capable of moral judgements–only thin and arbitrary approximations. We must not delegate our morality to machines, as doing so threatens the very essence of our human dignity.</p>



<p>To quote the wise words of Christof Heyns, “War without reflection is mechanical slaughter.”</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19944</post-id>	</item>
		<item>
		<title>Statement on Security in Open informal consultations at UN GA</title>
		<link>https://www.icrac.net/statement-on-security-in-open-informal-consultations-at-un-ga/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:11:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19933</guid>

					<description><![CDATA[Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”. Thank you Chair, Presenters, Delegates, My name is Dr. Matthew Breay Bolton, I am Co-Director of Pace University’s International Disarmament Institute and a member of the International Committee for Robot Arms Control [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p>Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="576" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&#038;ssl=1" alt="" class="wp-image-19934" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1536%2C864&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></figure>



<p><br>Thank you Chair, Presenters, Delegates,</p>



<p>My name is Dr. Matthew Breay Bolton, I am Co-Director of Pace University’s <a href="https://www.pace.edu/dyson/faculty-and-research/research-centers-and-initiatives/international-disarmament-institute">International Disarmament Institute</a> and a member of the International Committee for Robot Arms Control (ICRAC).</p>



<p>I would like to raise the importance of thinking about <em>human</em> security and protecting the integrity of the natural environment, considerations beyond traditional interpretations of security as strategic stability.</p>



<p>In this regard, I would like to highlight a report recently published by Pace’s International Disarmament Institute “<a href="https://bpb-us-w2.wpmucdn.com/blogs.pace.edu/dist/0/195/files/2025/05/Considerations-for-a-Victim-Assistance-Provision-in-a-Treaty-Banning-Killer-Robots-Submission-Draft-26-March-2025.pdf">Considering Victim Assistance and Remediation Provisions for a Treaty on Killer Robot</a>s.”</p>



<p>International diplomatic and advocacy discussions surrounding a possible treaty on autonomous weapons systems – “killer robots” – have neglected consideration of provisions on victim assistance and remediation. This departs from an almost three-decade trend in treaties banning and regulating weapons, which have included “positive obligations” to assist affected communities and remediate contaminated environments.</p>



<p>Autonomous weapons systems have not yet been widely deployed and thus there are few who might be considered victims. Moreover, one hopes that a treaty will stymie widespread use of killer robots. Nevertheless, it is possible that some states will remain outside any eventual treaty and some non-state actors may remain outside the norm and may use autonomous weapons, whether in armed conflict, policing or terrorism. Therefore, it is important for diplomats and advocates to discuss whether positive obligations to address harms from killer robots belong in a treaty regulating and/or banning them. If so, further consideration should be given to the scope and shape of such provisions on victim assistance and remediation in advance of any negotiations.</p>



<p>To phrase this as a set of questions for the panelists:</p>



<ul class="wp-block-list">
<li>If an autonomous weapon sinks a ship, who would be responsible for addressing the resulting pollution, environmental injustices and insecurities? </li>
</ul>



<ul class="wp-block-list">
<li>If civilians are harmed or disabled by the use of an autonomous armed drone, how might we secure their medical care and rehabilitation, as well as prosecution of those responsible? How would we give them satisfaction that justice is secured?</li>
</ul>



<p>The specificity of autonomous weapons systems means that diplomats and activists should not simply “copy and paste” the victim assistance and remediation provisions from other instruments into a killer robots treaty. In particular, care should be taken to ensure that provisions fill legal gaps and/or strengthen rather than undermine existing obligations.</p>



<ul class="wp-block-list">
<li>What complementarities are relevant, not only in International Humanitarian Law and weapons treaties, but also in the UN Voluntary Trust Fund on Torture or the Convention on the Rights of Persons with Disabilities?</li>
</ul>



<p>Diplomats, civil society advocates, humanitarian workers and activists engaged in discussions of a potential treaty on autonomous weapons systems should consider:</p>



<ul class="wp-block-list">
<li>Whether to include positive obligations addressing possible harms resulting from the use of killer robots, such as victim assistance and remediation of contaminated environments;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of precedent offered by recent international treaties and norms on weapons, which have included provisions on victim assistance and remediation of contaminated land;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of other normative frameworks for redress and remediation, such as from human rights and environmental law;</li>
</ul>



<ul class="wp-block-list">
<li>How to ensure that possible provisions fill legal gaps and strengthen rather than undermine existing obligations.</li>
</ul>



<p>We would be interested to hear from panelists, as well as states here today, their thoughts on the human and environmental security implications of autonomous weapons systems, particularly how to remedy the harms resulting from their use, such as through practices of victim assistance and environmental remediation.</p>



<p>This is among several dimensions of autonomous weapons that have not yet been discussed in the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS) mandated by the Convention on Certain Conventional Weapons (CCW). Discussion of these issues here demonstrates the potential value of this forum.</p>



<p>Thank you for the opportunity to address this meeting!</p>



]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19933</post-id>	</item>
		<item>
		<title>ICRAC submission to the United Nations Secretary General on &#8220;AI in the Military Domain and its Implications for International Peace and Security&#8221;</title>
		<link>https://www.icrac.net/icrac-submission-to-the-united-nations-secretary-general-on-ai-in-the-military-domain-and-its-implications-for-international-peace-and-security/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Fri, 11 Apr 2025 13:36:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19898</guid>

					<description><![CDATA[11 April 2025 The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit our views to the United Nations Secretary-General in response to Resolution A/RES/79/239 “Artificial intelligence in the military domain and its implications for international peace and security.” Founded in 2009, ICRAC is a civil society organization of experts in artificial [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p><em>11 April 2025</em></p>



<p>The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit our views to the United Nations Secretary-General in response to Resolution A/RES/79/239 “Artificial intelligence in the military domain and its implications for international peace and security.”</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="683" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1024%2C683&#038;ssl=1" alt="" class="wp-image-19896" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1024%2C683&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=768%2C512&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=1536%2C1024&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/04/un.jpeg?resize=2048%2C1365&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></figure>



<p>Founded in 2009, ICRAC is a civil society organization of experts in artificial intelligence, robotics, philosophy, international relations, human security, arms control, and international law. We are deeply concerned about the pressing dangers posed by AI in the military domain. As members of the Stop Killer Robots Campaign, ICRAC fully endorses their submission to this report, and wishes to provide further detail regarding the concerns raised by AI-enabled targeting.</p>



<p>Increasing investments in AI-based systems for military applications, specifically AI-enabled targeting, present new threats to peace and security and underscore the urgent need for effective governance. ICRAC identifies the following concerns in the case of AI-enabled targeting:</p>



<ol class="wp-block-list">
<li>AI-enabled targeting systems are only as valid as the data and models that inform them. ‘Training’ data for targeting requires the classification of persons and associated objects (buildings, vehicles) or ‘patterns of life’ (activities) based on digital traces coded according to vaguely specified categories of threat, e.g. ‘operatives’ or ‘affiliates’ of groups designated as combatants. Often the boundary of the target group is itself poorly defined. Although this casts into question the validity of input data and associated models, there is little accountability and no transparency regarding the bases for target nominations or for target identification. AI-enabled systems thus threaten to undermine the Principle of Distinction, <a href="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/" data-type="link" data-id="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/">even as they claim to provide greater accuracy</a>.</li>



<li><a href="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza#_What_are_some" data-type="link" data-id="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza#_What_are_some">Human Rights Watch research</a> indicates that in the case of IDF operations in Gaza, AI-enabled targeting tools rely on ongoing and systematic Israeli surveillance of all Palestinian residents of Gaza, including with data collected prior to the current hostilities in a manner that is incompatible with international human rights law.</li>



<li>The increasing reliance on profiling required by AI-enabled targeting furthers a shift from the recognition of persons and objects identified as legitimate targets by their observable disposition as an imminent military threat, to the ‘discovery’ of threats through mass surveillance, based on statistical speculation, suspicion and guilt by association.</li>



<li>The questionable reliability of prediction based on historical data when applied to dynamically unfolding situations in conflict raises further questions regarding the validity and legality of AI-enabled targeting.</li>



<li>The use of AI-enabled targeting to accelerate the scale and speed of target generation further undermines processes for human validation of the output of targeting systems, while greatly amplifying the potential for direct and collateral civilian harm and diminishing the possibilities for de-escalation of conflict through means other than military action.</li>
</ol>



<p>Justification for the adoption of AI-enabled targeting is based on the premise that acceleration of target generation is necessary for ‘decision-advantage’, but the relation between speed of targeting and effectiveness in overall military success, or longer-term political outcomes, is questionable at best. The ‘<a href="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/" data-type="link" data-id="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/">need’ for speed</a> that justifies AI-enabled targeting is based on a circular logic, which perpetuates what has become an arms race to accelerate the automation of warfighting. <em>Accelerating the speed and scale of target generation effectively renders human judgment impossible or, de facto, meaningless.</em> The risks to peace and security &#8211; especially to human life and dignity &#8211; are greatest for operations outside of conventional or clearly defined battlespaces. Insofar as the use of AI-enabled targeting is shown to be contrary to international law, the mandate must be to <em>not</em> use AI in targeting.</p>



<p>In this regard, ICRAC notes that the above systems present challenges to compliance with various branches of international law, such as international humanitarian law (IHL), <em>jus ad bellum</em> (<a href="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf" data-type="link" data-id="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf">UN law on prohibition of use of force</a>), international human rights law (IHRL) and international environmental law. In the context of military AI’s implications for peace and security, <em>jus ad bellum</em>, a framework that prohibits aggressive military actions and regulates the conditions under which states may lawfully resort to the use of force, is the most relevant. Likewise, IHRL is important in this context because it is designed to uphold human dignity, equality, and justice—values that form the foundation of peaceful and secure societies.</p>



<p><strong>Citations</strong></p>



<p>Alvarez, Jimena Sofia Viveros. September 4, 2024. The risks and inefficacies of AI systems in military targeting support. <em>Humanitarian Law and Policy.</em> <a href="https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/">https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/</a></p>



<p>Bo, Marta and Dorsey, Jessica. April 4, 2024. Symposium on Military AI and the Law of Armed Conflict: The ‘Need’ for Speed – The Cost of Unregulated AI Decision-Support Systems to Civilians. <em>OpinioJuris</em>. <a href="https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/">https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/</a></p>



<p>Chengeta, Thompson. May 2024. African Commission for Human and Peoples’ Rights submission to the UN Secretary General Report on Lethal Autonomous Weapons, ASSEMBLY RESOLUTION 78/241, Commissioner Ayele Dersso Focal Point on the ACHPR Study on AI and Other Technologies. <a href="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf" data-type="link" data-id="https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-African_Commission-EN.pdf">78-241-African_Commission-EN.pdf</a></p>



<p>Human Rights Watch. September 10, 2024. Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza. <a href="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza">https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza</a></p>



<p>ICRC. 6 June 2019. Artificial intelligence and machine learning in armed conflict: A human-centred approach.<br><a href="https://www.icrc.org/sites/default/files/document_new/file_list/ai_and_machine_learning_in_armed_conflict-icrc.pdf">https://www.icrc.org/sites/default/files/document_new/file_list/ai_and_machine_learning_in_armed_conflict-icrc.pdf</a>; published version at <em>International Review of the Red Cross: Digital technologies and war</em> (2020), 102 (913), 463–479.</p>



<p>Schwarz, Elke. December 12, 2024. The (im)possibility of responsible military AI governance. <em>Humanitarian Law and Policy</em>. <a href="https://blogs.icrc.org/law-and-policy/2024/12/12/the-im-possibility-of-responsible-military-ai-governance/">https://blogs.icrc.org/law-and-policy/2024/12/12/the-im-possibility-of-responsible-military-ai-governance/</a></p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19898</post-id>	</item>
		<item>
		<title>ICRAC statement at the March 2019 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-at-the-march-2019-ccw-gge/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 26 Mar 2019 15:50:46 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=6170</guid>

					<description><![CDATA[As delivered by Prof. Peter Asaro, March 26, 2019. ICRAC has been pleased to hear states shift their focus away from definitions of the technologies of autonomous weapons systems and move towards discussing restriction of their use with regards to how they should be controlled. Of course, by definition, if states wanted genuine meaningful human [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p><img data-recalc-dims="1" loading="lazy" decoding="async" width="600" height="450" class="wp-image-6177" style="width: 600px;" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=600%2C450&#038;ssl=1" alt="" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?w=4608&amp;ssl=1 4608w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?w=2000&amp;ssl=1 2000w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?w=3000&amp;ssl=1 3000w" sizes="auto, (max-width: 600px) 100vw, 600px" /></p>



<p><strong><em>As delivered by Prof. Peter Asaro</em>, March 26, 2019.</strong></p>



<p>ICRAC has been pleased to hear states shift their focus away from definitions of the technologies of autonomous weapons systems and towards discussing restrictions on their use and how they should be controlled. Of course, by definition, if states wanted genuine meaningful human control of weapons systems, they would not be using autonomous weapons systems. And (as an aside) we should not forget the scientifically recognized limitations of the technology or the foreseeable threats to global security such weapons pose.</p>



<p>We are also pleased with the statements and working papers beginning to examine the requirements for human control and planning in military systems. While this can be multifaceted, we must not let the complexity of military planning throw a smoke screen over the core issues of the meaningful human assessment of all targets, their legitimacy and the proportionate use of force.</p>



<p>We are glad to see the beginnings of a more nuanced approach to the control of weapons systems that cannot be captured by gross terms such as in-the-loop, on-the-loop, the broader loop, human oversight, and appropriate levels of human judgement. However, these terms continue to insinuate themselves into military, political and defence contractors’ narratives outside of the CCW. We welcome the suggestion of the IPRAW report to distinguish control-by-design and control-in-use—acknowledging that ultimate responsibility for the use of force lies in the specific context of its use.</p>



<figure class="wp-block-image"><img data-recalc-dims="1" loading="lazy" decoding="async" width="480" height="640" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_7726-e1553615341514.jpg?resize=480%2C640&#038;ssl=1" alt="" class="wp-image-6174" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_7726-e1553615341514.jpg?w=480&amp;ssl=1 480w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_7726-e1553615341514.jpg?resize=225%2C300&amp;ssl=1 225w" sizes="auto, (max-width: 480px) 100vw, 480px" /></figure>



<p>As a scientific and scholarly group, our focus is on how we can make control effective and ensure that operators, commanders and planners are making clear judgements about the validity of every attack at the time of that attack.</p>



<p>To do this we need to move away from blanket terms and examine in detail how humans interact with automated machinery. As we have pointed out before, there have been more than 30 years of scientific research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. Ignoring the science for the sake of expediency could lead us down a path to a humanitarian disaster.</p>



<p>The scientific approach is not mutually exclusive with an examination of the military control of weapons and the many lessons to be learned from current methods. Indeed, we applaud the UK’s paper on human control in 2018 and those of the Netherlands and others this year. We may not agree with all of the detail, but it is what we have urged all of the high contracting parties to bring to the table.</p>



<p>This combination of work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principle of humanity.</p>



<p>First, there should be a focus on what the human operator <strong>MUST</strong> do in the targeting cycle. This is control by use, which is governed by targeting rules under International Humanitarian Law and International Human Rights Law, which were well articulated by the ICRC in their statement this morning. Further, the rules of international law that apply after the use of weapons – such as those that relate to human responsibility – must be satisfied.</p>



<p>Second, the design of weapon systems must render them <strong>INCAPABLE</strong> of operating without meaningful human control. This is control by design, which is governed by international weapons law. If a weapon system, by its design, is incapable of being sufficiently controlled in terms of that law, then such a weapon should be prohibited.</p>



<p>We need further discussion of the details of human-machine interfaces, the distribution of responsibility in the targeting cycle, and how their design can ensure IHL and IHRL compliance. Such details need not be the substance of a treaty, and we must resist being caught up in the weeds of process. We support Germany’s goal of finding a shared understanding of the principles of human control that apply to all weapons systems now and in the future, regardless of context, planning or process. This is no different from the normal processes that operate in science. One of the goals of science is to reduce the complexity of the world to simple theories or principles that capture all of the experimental data. In other words, we create abstractions of the details that are firmly coupled with and informed by the details. As Einstein reputedly said, explanations should be as simple as possible but no simpler. “Human in the loop” and its variants fall into the too-simple category. Detailed accounts of every weapon type and how it is controlled in every context are far too complex.</p>



<p>Let me give you an example of an abstraction with three conditions that could make a good starting point for discussions on the control of weapons systems. I have said this before, but clearly there is no prohibition on repeating yourself in this room.</p>



<ol class="wp-block-list">
<li>a human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack and be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack;</li>

<li>there will be active cognitive participation in every attack with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and likely incidental and possible accidental effects of the attack; and</li>

<li>there will be a means for the rapid suspension or abortion of every attack.</li>
</ol>



<p>These are general principles that could provide a starting point for discussion by states in the context of negotiating a legally binding treaty that clearly articulates the legal obligations of human control.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.
</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6170</post-id>	</item>
		<item>
		<title>ICRAC statement at the 2018 CCW States Parties Meeting</title>
		<link>https://www.icrac.net/icrac-statement-at-the-2018-ccw-states-parties-meeting/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Fri, 23 Nov 2018 08:34:47 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=4341</guid>

					<description><![CDATA[As delivered by Prof. Roser Martínez Quirante (in Spanish) &#160; Mr. President, representatives of nations, members of civil society, During the past 5 years, at the Convention on Conventional Weapons, we have seen a greater understanding of the problems and challenges posed by autonomous weapon systems. The ICRAC is satisfied with the general consensus on [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p>As delivered by Prof. Roser Martínez Quirante (in Spanish)</p>
<p>Mr. President, representatives of nations, members of civil society,</p>
<p>During the past 5 years at the Convention on Conventional Weapons, we have seen a greater understanding of the problems and challenges posed by autonomous weapon systems. ICRAC is satisfied with the general consensus on the need to retain human control over these systems, in particular over the critical functions of the selection and elimination of targets. We therefore believe that the time has come to establish binding legal mechanisms that restrict the use of autonomous weaponry, underlining the importance of human judgment in critical decisions.</p>
<p>During our participation in this convention, we have produced a large number of scientific articles, books and reports that emphasize three main classes of risk.</p>
<p>First, these weapons cannot guarantee compliance with international humanitarian law. We should not give a blank check to future technology. With the large-scale commercialization of AI, it is true that we are seeing great innovation in areas beneficial to humanity, but at the same time we are witnessing many problems with biases in decision-making and facial-recognition algorithms, which could prove dramatic if applied in a context of war.</p>
<p>If nations invest on the basis of techno-scientific speculation, we believe it will be practically impossible to return to the starting position once the new types of conflict that these weapons herald materialize. We urge states to consider the reality of current technology and its limitations in the critical selection of legitimate targets.</p>
<p>Second, considerable moral values are at risk. No machine, computer or algorithm is capable of recognizing a human being as such, nor can it respect a human being with rights and dignity. It perceives a person only as bits of information. A machine, without intuition, without ethics or morals, cannot even understand what it means to be in a state of war, much less what it means to end a human life.</p>
<p>Decisions to end human life must be made by humans, and in a non-arbitrary way, if they are to be justified. Moreover, we must not confuse the fact that humans develop computer programs with the idea that the computed outputs of those programs constitute human decisions. While responsibility for the deployment of lethal force is a necessary condition for compliance with minimum ethical standards in armed conflict, responsibility alone is not enough; also required are recognition of the human being and their dignity, and reflection on the value of life and the justification of the use of violent force.</p>
<p>Third, autonomous weapons systems represent a great danger to global security. The threshold for the application of military force will be lowered and the likelihood of conflict will increase. We are concerned that the human control mechanisms put in place for double verification and reconsideration may function like safety switches that can easily be disconnected. This, in combination with unpredictable algorithmic interactions and unpredictable results, will increase the instability of conflict. In addition, the development and use of autonomous weapons by some states unilaterally will provide strong incentives for their proliferation, including their use by actors who are not accountable to the legal frameworks governing the use of force. Do we really need this new competitive arms race?</p>
<p>ICRAC, along with other organizations involved in the Campaign to Stop Killer Robots and representing a large part of international civil society, urges the Convention to lay the foundations for an international treaty whose principal objective is the preemptive prohibition of autonomous weapons, in clear application of the precautionary principle.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4341</post-id>	</item>
		<item>
		<title>ICRAC general statement at the August 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-general-statement-at-the-august-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Wed, 29 Aug 2018 15:28:10 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=4263</guid>

					<description><![CDATA[As delivered by Prof. Noel Sharkey Mr Chairman, Over the last 5 years at the CCW we have seen an increased understanding of the issues and challenges posed by autonomous weapons systems. ICRAC is pleased with the general consensus that we must retain human control over weapons systems, in particular, over the critical functions of [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><em>As delivered by Prof. Noel Sharkey</em></p>
<p>Mr Chairman,</p>
<p>Over the last 5 years at the CCW we have seen an increased understanding of the issues and challenges posed by autonomous weapons systems. ICRAC is pleased with the general consensus that we must retain human control over weapons systems, in particular, over the critical functions of selecting and killing targets. During our time at the CCW, we have produced many scientific papers and reports emphasising three major classes of risk.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-medium wp-image-4308 alignright" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?w=2000&amp;ssl=1 2000w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?w=3000&amp;ssl=1 3000w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p><strong>First</strong>, we do not believe that IHL compliance can be guaranteed with autonomous weapons systems. Some argue that the technology will be able to comply with IHL in the future. But there is absolutely no evidence for that. We must not rely on hopeware and speculations about future technology. With the mass-scale commercialisation of AI we are seeing great innovation, but we are also seeing the emergence of many problems with bias in decision algorithms and face recognition (see my new ICRC blog post for more on this). If nations invest heavily on the basis of technical speculations, we believe that it will be difficult to put the toothpaste back in the tube when the humanitarian crises begin to emerge. We urge states to look at the plausibility of the current technology and how it falls short in the critical function of selecting legitimate targets.</p>
<p><strong>Second,</strong> there are considerable moral values at risk. No machine, computer or algorithm is capable of recognizing a human as a human being, nor can it respect the human as a being with human rights and human dignity.  A machine cannot even understand what it means to be in a state of war, much less what it means to have, or to end a human life. Decisions to end human life must be made by humans in order to be justified.  Further, we should not mistake the fact that humans write computer programs to imply that the calculated results of those programs constitute human decisions.  While accountability for the deployment of lethal force is a necessary condition for moral responsibility in war, accountability alone is not sufficient for moral responsibility.  This also requires the recognition of the human, respect for the human right to life and dignity, and reflection upon the value of life and the justification for the use of violent force.</p>
<p><strong>Third</strong>, autonomous weapons systems pose great dangers to global security. The threshold for applying military force will be lowered and the likelihood of conflict will go up. We are concerned that tried and tested human control mechanisms for double-checking and reconsidering, with humans functioning as fail-safes or circuit-breakers, would be discontinued. This, in combination with unforeseeable algorithm interactions and their unpredictable outcomes, increases crisis instability. In addition, the development and use of autonomous weapons by <em>some</em> States will provide strong incentives for their proliferation, including their use by actors not accountable to legal frameworks governing the use of force. Do we really need a new arms race?</p>
<p><strong>Finally,</strong> we urge that nations urgently move towards negotiations for a legally binding instrument in further deliberations next year. I am going off script here but look &#8211; come on – and no offence intended – but I am a scientist and not a diplomat so in plain speech there are those here who have an interest in slowing down the move towards a ban while they quickly continue to develop the weapons. Don’t be fooled or bullied by these tactics or the mudslide of refining definitions. We ask you &#8211; please &#8211; get on with ridding us of these morally reprehensible weapons before it is too late.</p>
<p>Thank you, Mr Chairman</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4263</post-id>	</item>
		<item>
		<title>ICRAC statement on the human control of weapons systems at the August 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-on-the-human-control-of-weapons-systems-at-the-august-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Wed, 29 Aug 2018 08:40:49 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=4246</guid>

					<description><![CDATA[As delivered by Dr. Elke Schwarz Thank you, Mr Chairperson, The International Committee for Robot Arms Control is pleased to see states move away from the use of broad, brush-stroke terms such as in-the-loop, on-the-loop, the wider loop, human oversight, and appropriate human judgement. We agree with the working paper from Estonia and Finland that [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-4248" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 300px) 100vw, 300px" />As delivered by Dr. Elke Schwarz</p>
<p>Thank you, Mr Chairperson,</p>
<p>The International Committee for Robot Arms Control is pleased to see states move away from the use of broad, brush-stroke terms such as in-the-loop, on-the-loop, the wider loop, human oversight, and appropriate human judgement. We agree with the working paper from Estonia and Finland that complex definitions of autonomy and autonomous weapons systems are moving us in the wrong direction. As scientists we believe, following Einstein, that definitions should be as simple as possible but no simpler. In that spirit, we applaud the approach of the ICRC that the focus should be on the <em>critical functions</em> of target selection and the application of violent force. This counters concerns that a prohibition of autonomous weapons systems (AWS) would impact on innovation in other civilian and non-lethal military applications.</p>
<p>ICRAC holds that the way forward is to focus on the meaningful human control of weapons systems. For human control to be <em>meaningful</em> we need to examine how humans interact with machines and understand the types of human-machine biases that can occur in the selection of legitimate targets. Lessons should be learned from 30 years of research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. A combination of this work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principles of humanity.</p>
<p>First, there should be a focus on what the human operator<strong> MUST</strong> <em>do</em> in the targeting cycle. This is <em>control in use</em> which is governed by targeting rules under International Humanitarian Law and International Human Rights Law. Further, international law rules that apply <em>after</em> the use of weapons – such as those that relate to human responsibility – must be satisfied.</p>
<p>Second, the design of weapon systems must render them <strong>INCAPABLE</strong> of operating <em>without</em> meaningful human control.  This is <em>control by design</em>, which is governed by international weapons law. In terms of international weapons law, if the weapon system, by its design, is incapable of being sufficiently controlled, then such a weapon is illegal <em>per se. </em>Systems <strong>MUST</strong> be designed to ensure human responsibility and accountability.</p>
<p>Ideally the following three conditions should be followed for the control of weapons systems:</p>
<ol>
<li>a human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack and will be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack.</li>
<li>there will be active cognitive participation in every attack with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and likely incidental and possible accidental effects of the attack and</li>
<li>there will be a means for the rapid suspension or abortion of every attack.</li>
</ol>
<p>For further details please see our guidelines for the human control of weapons systems from the April meeting this year.</p>
<p>While systems must be designed to ensure safety and responsibility, we should not mistake the review of weapons and good design as itself a form of human control. The responsibility to make decisions of life and death cannot be delegated to machines, nor to the review- or design process of those machines.</p>
<p>Thank you Mr Chairperson</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4246</post-id>	</item>
		<item>
		<title>ICRAC statement on the human control of weapons systems at the April 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-on-the-human-control-of-weapons-systems-at-the-april-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[nsharkey]]></dc:creator>
		<pubDate>Wed, 11 Apr 2018 13:06:43 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=4006</guid>

					<description><![CDATA[International Committee for Robot Arms Control Statement to the UN GGE Meeting 2018 Delivered by Prof. Noel Sharkey, on 11 April 2018 Mr Chairperson, We have been very pleased with this morning&#8217;s session as states begin to contemplate a move towards policies on the human control of weapons systems. On a pedantic note: we cannot [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-4007" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/IMG_5870-e1523451941135-300x300.jpg?resize=300%2C300&#038;ssl=1" alt="" width="300" height="300" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/IMG_5870-e1523451941135.jpg?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/IMG_5870-e1523451941135.jpg?w=640&amp;ssl=1 640w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>International Committee for Robot Arms Control<br />
Statement to the UN GGE Meeting 2018<br />
Delivered by Prof. Noel Sharkey, on 11 April 2018</p>
<p>Mr Chairperson,</p>
<p>We have been very pleased with this morning&#8217;s session as states begin to contemplate a move towards policies on the human control of weapons systems. On a pedantic note: we cannot talk about the meaningful human control of LAWS, as that would make them no longer autonomous weapons.</p>
<p>In the view of ICRAC, the control of weapons systems is more nuanced than can be captured by terms such as in-the-loop, on-the-loop, the broader loop, looping-the-loop, human oversight, and appropriate human judgement. In this way we agree strongly with the statement made by Brazil and several others in this session who believe that the devil is in the detail.</p>
<p>For human control to be meaningful we need to examine how humans interact with machines and understand the types of human-machine biases that can occur in the selection of legitimate targets. Lessons should be learned from 30 years of research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. A combination of this work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principle of humanity.</p>
<p>First, there should be a focus on what the human operator<strong> MUST</strong> do in the targeting cycle. This is control in use, which is governed by targeting rules under International Humanitarian Law and International Human Rights Law. Further, international law rules that apply after the use of weapons – such as those that relate to human responsibility – must be satisfied.</p>
<p>Second, the design of weapon systems must render them <strong>INCAPABLE</strong> of operating without meaningful human control. This is control by design, which is governed by international weapons law. In terms of international weapons law, if the weapon system, by its design, is incapable of being sufficiently controlled, then such a weapon is illegal <em>per se.</em></p>
<p>Ideally the following three conditions should be followed for the control of weapons systems:</p>
<ol>
<li>a human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack and will be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack.</li>
<li>there will be active cognitive participation in every attack with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and likely incidental and possible accidental effects of the attack and</li>
<li>there will be a means for the rapid suspension or abortion of every attack.</li>
</ol>
<p>Thank you Mr Chairperson</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4006</post-id>	</item>
		<item>
		<title>Short ICRAC Statement at the April 2018 CCW GGE</title>
		<link>https://www.icrac.net/short-icrac-statement-at-the-april-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[nsharkey]]></dc:creator>
		<pubDate>Tue, 10 Apr 2018 11:06:41 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3993</guid>

					<description><![CDATA[International Committee for Robot Arms Control Statement to the UN GGE Meeting 2018 Delivered by Prof. Noel Sharkey, on 10 April 2018 Mr. Chairperson, There have been very useful and interesting discussions this morning. I speak here as chair of an academic NGO: the International Committee for Robot Arms Control (ICRAC) and as a member [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-3994" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>International Committee for Robot Arms Control<br />
Statement to the UN GGE Meeting 2018<br />
Delivered by Prof. Noel Sharkey, on 10 April 2018</p>
<p>Mr. Chairperson,</p>
<p>There have been very useful and interesting discussions this morning.</p>
<p>I speak here as chair of an academic NGO, the International Committee for Robot Arms Control (ICRAC), and as a member of the scientific community in the field of Artificial Intelligence and Robotics with a specialty in Machine Learning.</p>
<p>We stress again that it would be confusing to broaden the discussion of LAWS into issues about Artificial Intelligence or weapons with emerging intelligence. By chasing definitions of LAWS down the rabbit hole of AI, we remove ourselves from the key issues that need to be urgently discussed here. The definition drawn from the ICRC, and echoed by a number of states this morning, is concerned with weapons that have autonomy in the critical functions of target selection and the application of force. This is sufficient for our definitional purposes here: decisions about target selection and the application of force are delegated to a machine. <strong>Let me highlight that it does not matter what techniques or computing methods are used to create autonomy in these critical functions.</strong></p>
<p>What is important here are questions about the nature of human control required and acceptable to ensure compliance with international law. <strong>It is key that we get this right.</strong> You can read more about this in ICRAC’s new working paper <strong>Guidelines for the Human Control of Weapons Systems</strong> that will be delivered at Wednesday’s side event. We support those states who have stated that the focus of this meeting should be on human control of weapons systems and human-machine interaction. In this way we can make real progress this week and protect our future no matter what the technological developments.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3993</post-id>	</item>
		<item>
		<title>ICRAC Statement at the April 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-at-the-april-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Mon, 09 Apr 2018 13:56:04 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3975</guid>

					<description><![CDATA[International Committee for Robot Arms Control Statement to the UN GGE Meeting 2018 Delivered by Dr Thompson Chengeta, on 9 April 2018 Mr. Chairperson, I speak on behalf of the International Committee for Robot Arms Control [ICRAC], a founding member of the Campaign to Stop Killer Robots. Ambassador Gill, we thank you for your important [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-3979" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?w=1024&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>International Committee for Robot Arms Control<br />
Statement to the UN GGE Meeting 2018<br />
Delivered by Dr Thompson Chengeta, on 9 April 2018</p>
<p><iframe loading="lazy" title="Dr Thompson Chengeta Statement on behalf of the International Committee for Robot Arms Control" width="500" height="281" src="https://www.youtube.com/embed/ALvbgCAfBW8?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>Mr. Chairperson,</p>
<p>I speak on behalf of the International Committee for Robot Arms Control [ICRAC], a founding member of the Campaign to Stop Killer Robots. Ambassador Gill, we thank you for your important work. Mr. Chairperson, we will focus on four points:</p>
<p>FIRST, a ban on LAWS will have no negative impact on the development of socially beneficial uses of autonomy, robotics or artificial intelligence. In fact, such a ban will direct more resources and specialists to work on humanitarian and beneficial applications.</p>
<p>SECOND, human control of weapon systems is a key component of the present discussions. It does not matter what name or term is used to describe human control; what is imperative is that human control be consistent with applicable legal, ethical and moral standards.</p>
<p>THIRD, human input in the making of judgements to use violent force is at the centre of the legal, ethical and moral standards pertaining to human responsibility for the use of such force. No matter how attractive, if a proposed definition of human control does not resolve the accountability-gap challenge, then that proposal is legally inadequate. To that end, States should ask the question: What is the Legally Required Level of Human Control at each &#8220;touch point&#8221; in the human-machine interaction chain? At every step in the development, deployment, targeting and use of a weapon system, there is an obligation to ensure that the system is capable of being used in compliance with applicable legal norms.</p>
<p>FOURTH, the emphasis on ethics in the Working Papers of Poland and the ICRC, and their reassertion of the Principle of Non-Delegation of the Authority to Kill to non-human mechanisms, is worth noting. The dictates of public conscience must always take precedence over any short-term advantage that might be gained from autonomous technologies. Furthermore, respect for human rights and human dignity, even within armed conflict, is a moral imperative recognized by the UN and the CCW. ICRAC reiterates the spirit of the Martens Clause&#8212;that morality can provide a strong basis for new law.</p>
<p>Finally, human control over critical functions of weapon systems and a ban on fully autonomous weapon systems are two sides of the same coin. States are urged to focus on the requirement of human control rather than technical definitions of autonomy. Further, States must move towards negotiation of a legally binding instrument on this issue.</p>
<p>Mr Chairperson, I thank you.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3975</post-id>	</item>
	</channel>
</rss>
