<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Slider &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/category/slider/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Mon, 23 Jun 2025 12:49:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>Statement on Ethical Considerations in Open Informal Meeting at UNGA 1st Committee</title>
		<link>https://www.icrac.net/statement-on-ethical-considerations-in-open-informal-meeting-at-unga-1st-committee/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:45:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Peter Asaro]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19944</guid>

					<description><![CDATA[UNGA Informals on LAWS ICRAC Statement on Ethical Considerations Delivered by Prof. Peter Asaro on May 13, 2025 Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img data-recalc-dims="1" decoding="async" width="1024" height="800" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799-1024x800.jpg?resize=1024%2C800&#038;ssl=1" alt="Peter Asaro delivering ICRAC Statement on Ethics" class="wp-image-19940" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=1024%2C800&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=300%2C234&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=768%2C600&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?w=1536&amp;ssl=1 1536w" sizes="(max-width: 1000px) 100vw, 1000px" /></figure>



<p><strong>UNGA Informals on LAWS <br>ICRAC Statement on Ethical Considerations <br>Delivered by Prof. Peter Asaro on May 13, 2025</strong> </p>



<p><br>Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and ethics. ICRAC is a co-founding member of the Stop Killer Robots Campaign.</p>



<p>We appreciate the organizers of this Informal Meeting including a Session on Ethical Considerations. It has been many years since ethics has been the primary focus of substantive discussion within the CCW GGE meetings. Yet ethics and morality have provided a valuable basis for international law in the past, and are precisely where we must ground new laws to prohibit and regulate AWS in the near future. That is, in our common shared humanity, and in principles which transcend human laws, particularly human dignity in a deep sense as discussed by Prof. Chengeta, and ethical decisions as discussed by the Representative of the Holy See.</p>



<p>Whenever violent force is used, there are risks involved. But merely managing those risks is not sufficient to meet the requirements for morally justifiable killing. Understanding the reasons and the potential consequences of the use of force is required for its justification. It has been argued that AWS may be highly accurate and precise in their use of force, but accuracy and precision are not sufficient to meet the requirements for the ethically discriminate use of force, and do not begin to address the requirements of the proportionate use of force.</p>



<p>Following the outlines of the two-tiered approach advanced by the ICRC, regulated AWS would be permitted to target autonomously. In these limited cases, more specifically cases where the target is a military object by nature, such as military vehicles and installations, automated targeting must still be carefully regulated to ensure that humans can safely supervise those systems.</p>



<p>But as soon as we start considering civilian objects, even those which might be used for military purposes and might be lawfully targeted under IHL, we must not permit their targeting by automated processes. The moral argument that leads to this conclusion is clear. It may be tempting to think that we can automate proportionality decisions–how much force is needed, or how much risk is acceptable, or how much collateral harm to civilians might be acceptable relative to a military objective. But the nature of proportionality judgments is fundamentally moral.</p>



<p>These decisions are inherently about values–the value of a target to a military objective, the value of a military objective to an operation and an overall strategy; the value of civilian infrastructure to a family, a community, a country; the value of a natural environment; and above all the value of human lives and the cost of taking those lives. They are also about duties, our duties to protect, our duties to each other.</p>



<p>These values are not intrinsically numerical or quantitative in nature, and assigning them numerical values in a computer program is arbitrary at best. Computers do not “understand” in any meaningful sense. They represent the world through mathematical abstractions that we design and understand, and from which we assign and seek meaning. Worse, training an algorithm to “learn” these values from a dataset is to abdicate any human responsibility in establishing the values represented in the systems, including the value of human life and the necessary conditions of human flourishing.</p>



<p>These are moral values, only understood through the lived experience of human life, moral reflection, and ethical development. In those limited cases where the decision to end a human life can be morally justified, it must be made by a moral agent who truly understands these values. Any life lost by the decision of an algorithm is, by definition, taken arbitrarily. ICRAC appreciates the work of the CCW GGE and this section of the latest draft of the Chair’s Rolling Text:</p>



<p><em>States should ensure context-appropriate human judgement and control in the use of<br>LAWS, through the following measures &#8230; [which] &#8230; includes ensuring assessment of legal<br>obligations and ethical considerations by a human, in particular, with regard to the effects<br>of the selection and engagement functions.</em></p>



<p>The ethical considerations of the use of force must remain a matter of human judgement. We must not eliminate ethical considerations altogether by delegating them to machines wholly incapable of grasping such considerations. Human dignity requires that we consider a human as human–no machine can do this for us.</p>



<p>Similar concerns arise for anti-personnel AWS: in order to design systems to autonomously target people, it would be necessary to create digital representations of people, or target profiles. The same moral logic applies here.</p>



<p>From a legal perspective, it could be argued that unmounted infantry are military objects by nature and can pose a threat just as a tank does. But there is an important moral difference between targeting people directly and targeting a tank while accepting that the people inside it may be killed. People are not to be treated as objects, but always as moral subjects.</p>



<p>The aim of war, and the moral justification of killing in war, depends critically on using force to diminish the ability of your adversary to use force against you. The ultimate aim is not to harm or kill the enemy directly; this is only a means to an end, namely the end of hostilities. Targeting a human directly is to make the destruction of a human a goal in itself, rather than the true goal of eliminating the threat they pose. This might sound like a minor distinction, but by making the targeting and killing of humans the goal of a machine, rather than the elimination of military threats, we stand to vastly undermine human dignity.</p>



<p>By designing systems to target people directly, we essentially and effectively “pre-authorize” the moral judgement to take their lives. By pre-authorizing the killing of humans, and making personnel the targets of autonomous weapons, we would fundamentally violate and diminish human dignity. If we accept that a soldier on the battlefield can be directly targeted, without a human moral judgement or moral justification, then we make it more acceptable to do so in other contexts as well.</p>



<p>When we violate human dignity, it is not just the immediate victim who loses their dignity. All of humanity suffers from this loss. This is why we feel such moral disgust at the injustices of slavery, and torture, and the dropping of bombs on children–these atrocities undermine our collective dignity as human beings and offend our moral sensibility.</p>



<p>While the use of violent force against unjust aggression is sometimes necessary, it is our moral responsibility to ensure that force is used justly. The only way to ensure that force is used justly is through moral judgement, and this requires a moral agent. Machines and automated algorithms, however sophisticated they may appear, are not moral agents, and are not capable of moral judgements–only thin and arbitrary approximations. We must not delegate our morality to machines, as doing so threatens the very essence of our human dignity.</p>



<p>To quote the wise words of Christof Heyns, “War without reflection is mechanical slaughter.”</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19944</post-id>	</item>
		<item>
		<title>Statement on Security in Open informal consultations at UN GA</title>
		<link>https://www.icrac.net/statement-on-security-in-open-informal-consultations-at-un-ga/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:11:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19933</guid>

					<description><![CDATA[Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”. Thank you Chair, Presenters, Delegates, My name is Dr. Matthew Breay Bolton, I am Co-Director of Pace University’s International Disarmament Institute and a member of the International Committee for Robot Arms Control [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p>Statement on Security in “Open informal consultations on lethal autonomous weapons systems held in accordance with General Assembly resolution 79/62, 12-13 May 2025”.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="576" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&#038;ssl=1" alt="" class="wp-image-19934" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?resize=1536%2C864&amp;ssl=1 1536w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/2ed3bb4b-6db9-4c6e-8ea9-563ea929406b83.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></figure>



<p><br>Thank you Chair, Presenters, Delegates,</p>



<p>My name is Dr. Matthew Breay Bolton, I am Co-Director of Pace University’s <a href="https://www.pace.edu/dyson/faculty-and-research/research-centers-and-initiatives/international-disarmament-institute">International Disarmament Institute</a> and a member of the International Committee for Robot Arms Control (ICRAC).</p>



<p>I would like to raise the importance of thinking about <em>human</em> security and protecting the integrity of the natural environment, considerations beyond traditional interpretations of security as strategic stability.</p>



<p>In this regard, I would like to highlight a report recently published by Pace’s International Disarmament Institute, “<a href="https://bpb-us-w2.wpmucdn.com/blogs.pace.edu/dist/0/195/files/2025/05/Considerations-for-a-Victim-Assistance-Provision-in-a-Treaty-Banning-Killer-Robots-Submission-Draft-26-March-2025.pdf">Considering Victim Assistance and Remediation Provisions for a Treaty on Killer Robots</a>.”</p>



<p>International diplomatic and advocacy discussions surrounding a possible treaty on autonomous weapons systems – “killer robots” – have neglected consideration of provisions on victim assistance and remediation. This departs from an almost three-decade trend in treaties banning and regulating weapons, which have included “positive obligations” to assist affected communities and remediate contaminated environments.</p>



<p>Autonomous weapons systems have not yet been widely deployed and thus there are few who might be considered victims. Moreover, one hopes that a treaty will stymie widespread use of killer robots. Nevertheless, it is possible that some states will remain outside any eventual treaty and some non-state actors may remain outside the norm and may use autonomous weapons, whether in armed conflict, policing or terrorism. Therefore, it is important for diplomats and advocates to discuss whether positive obligations to address harms from killer robots belong in a treaty regulating and/or banning them. If so, further consideration should be given to the scope and shape of such provisions on victim assistance and remediation in advance of any negotiations.</p>



<p>To phrase this as a set of questions for the panelists:</p>



<ul class="wp-block-list">
<li>If an autonomous weapon sinks a ship, who would be responsible for addressing the resulting pollution, environmental injustices and insecurities? </li>
</ul>



<ul class="wp-block-list">
<li>If civilians are harmed or disabled by the use of an autonomous armed drone, how might we secure their medical care and rehabilitation, as well as prosecution of those responsible? How would we give them satisfaction that justice is secured?</li>
</ul>



<p>The specificity of autonomous weapons systems means that diplomats and activists should not simply “copy and paste” the victim assistance and remediation provisions from other instruments into a killer robots treaty. In particular, care should be taken to ensure that provisions fill legal gaps and/or strengthen rather than undermine existing obligations.</p>



<ul class="wp-block-list">
<li>What complementarities are relevant in International Humanitarian Law, weapons treaties, but also the UN Voluntary Trust Fund on Torture or the Convention on the Rights of Persons with Disabilities?</li>
</ul>



<p>Diplomats, civil society advocates, humanitarian workers and activists engaged in discussions of a potential treaty on autonomous weapons systems should consider:</p>



<ul class="wp-block-list">
<li>Whether to include positive obligations addressing possible harms resulting from the use of killer robots, such as victim assistance and remediation of contaminated environments;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of precedent offered by recent international treaties and norms on weapons, which have included provisions on victim assistance and remediation of contaminated land;</li>
</ul>



<ul class="wp-block-list">
<li>The relevance of other normative frameworks for redress and remediation, such as from human rights and environmental law;</li>
</ul>



<ul class="wp-block-list">
<li>How to ensure that possible provisions fill legal gaps and strengthen rather than undermine existing obligations.</li>
</ul>



<p>We would be interested to hear from panelists, as well as states here today, their thoughts on the human and environmental security implications of autonomous weapons systems, particularly how to remedy the harms resulting from their use, such as through practices of victim assistance and environmental remediation.</p>



<p>This is among several dimensions of autonomous weapons that have not yet been discussed in the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS) mandated by the Convention on Certain Conventional Weapons (CCW). Discussion of these issues here demonstrates the potential value of this forum.</p>



<p>Thank you for the opportunity to address this meeting!</p>



]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19933</post-id>	</item>
		<item>
		<title>Frequently Asked Questions on LAWS</title>
		<link>https://www.icrac.net/frequently-asked-questions-on-laws/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Sat, 11 Nov 2017 20:05:12 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php56-3.dfw3-2.websitetestlink.com/?p=3344</guid>

					<description><![CDATA[Memorandum for delegates at the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) Meeting on Lethal Autonomous Weapons Systems (LAWS) Geneva, 13-17 November 2017 ICRAC is an international not-for-profit association of scientists, technologists, lawyers and policy experts committed to the peaceful use of robotics and the regulation of robot weapons. Please visit our [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><strong>Memorandum for delegates at the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) Meeting on Lethal Autonomous Weapons Systems (LAWS)</strong></p>
<p><strong>Geneva, 13-17 November 2017<img data-recalc-dims="1" loading="lazy" decoding="async" class="alignright wp-image-3347 size-medium" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2017/12/ICRAC_CCWUN24-300x225-300x225.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" /></strong></p>
<p><strong>ICRAC</strong> is an international not-for-profit association of scientists, technologists, lawyers and policy experts committed to the peaceful use of robotics and the regulation of robot weapons. Please visit our website <a href="http://www.icrac.net/">www.icrac.net</a> and follow us on Twitter <a href="https://twitter.com/icracnet">@icracnet</a></p>
<p><strong>ICRAC</strong> is a founding member of the Campaign to Stop Killer Robots <a href="http://www.stopkillerrobots.org/">www.stopkillerrobots.org</a></p>
<p>&nbsp;</p>
<p><strong>What is Artificial Intelligence (AI)?</strong></p>
<p>The term AI tends to evoke science-fiction tropes and even notions of “super intelligence”. But in reality, AI is just an umbrella term given to computational techniques that automate tasks that we would normally consider to require human intelligence. This does not mean that these software programs themselves are intelligent.</p>
<p>&nbsp;</p>
<p><strong>How fast is AI progressing?</strong></p>
<p>Enthusiasm about the progress of AI has increased considerably in the last couple of years, even while techniques have not improved much since the 1980s. This is largely because of two factors:</p>
<p>(i) the acquisition of big data sets with billions of examples;</p>
<p>(ii) plummeting costs for massive processing power.</p>
<p>Both factors provide an ideal environment for a cluster of computational techniques called Machine Learning (ML). The exploitation of ML has led to the mass commercialization of AI over a wide range of applications by various companies. So current AI progress is best described as spreading sideways rather than moving upwards.</p>
<p>&nbsp;</p>
<p><strong>Do civilian and military applications of AI differ?</strong></p>
<p><strong>Yes.</strong> What makes any autonomous system relying on AI computational techniques work is brittle software based on algorithms and statistics. Thanks to the availability of large amounts of training data, we will hopefully soon be able to make these techniques work in applications such as self-driving cars, to name a prominent example from the civilian sector. But this does not translate into military applications. Aside from the fact that cars and weapon systems are designed for completely different purposes, the comparably structured and regulated environment of road traffic does not compare at all to the adversarial, chaotic environment of the battlefield. The fog of war will only allow for faulty or, at best, noisy data. So beware of false equivalences!</p>
<p>&nbsp;</p>
<p><strong>Would LAWS be “precision weapons”?</strong></p>
<p><strong>Possibly (though still unlawful to use). </strong>LAWS could take various forms. For instance, a swarm of hobby drones fitted with heat sensors and small explosive payloads could be programmed to attack everything that emits body temperature. Such a three-dimensional moving minefield of LAWS would be the opposite of a precision weapon.</p>
<p>But let’s assume, for the sake of argument, LAWS designed with military-grade accuracy in mind. Fitted with better sensing and data-processing hardware and software, as well as payloads tailored to the system’s mission, such systems could be more precise than current weapon systems. But the technical potential for accuracy and the application of violent force to a legitimate target are two separate issues. Even the most high-tech precision weapon system has to be used in a manner that is legal under International Humanitarian Law (IHL).</p>
<p>IHL dictates that, when using a weapon system, constant care must be taken to avoid or minimize civilian casualties (principles of distinction and precautions in attack). It also prohibits launching or continuing an attack when the expected civilian losses would exceed the military advantage sought (principle of proportionality). These concepts enshrined in IHL are only meaningful in the context of human judgment. Machines are a far cry from the reasoning that a human military commander acting responsibly and in compliance with the law would engage in. Machines will, for the foreseeable future, not be able to discriminate combatants from civilians, let alone judge which use of force or type of munition is proportionate in light of the military objective. Hence we cannot and must not expect modern weapon systems to free us from these legal obligations. On the contrary, we have to heed these principles in keeping with our growing technological capabilities.</p>
<p>For example, before launching an attack, and throughout its execution, IHL requires military commanders to take all feasible precautions to spare the civilian population, by making use of all the information from all sources available to them. An autonomous weapon system fitted with various sensors for targeting purposes would thus require a commander to make use of the data that is gathered and the additional information that is generated whilst using the system. A commander cannot choose to treat this new “smart” precision weapon like the “dumb” weapons of the past, that is, as if this information were not being made available by the system or as if it could be ignored. Instead, weapon technology and legal obligations go hand in hand. Consequently, the more sophisticated our weapon systems become, the more <em>feasible</em> meaningful human control becomes regarding the critical functions of identifying (“fixing”), selecting and engaging targets. And hence the more care for ensuring meaningful human control is <em>required</em>.</p>
<p>This is not a particularly new insight, of course: it is why advanced laser-guided munitions are used with tactics, techniques and procedures that differ from those of simple free-falling bombs. So, in sum, fully autonomous weapon systems (that is, LAWS: systems designed in a way that would require commanders to abdicate meaningful human control) are simply incompatible with the way IHL demands that weapons be used by human military commanders on the battlefield.</p>
<p>&nbsp;</p>
<p><strong>Would LAWS make war more humane?</strong></p>
<p><strong>No.</strong> It is sometimes argued that autonomy in weapons systems could make wars more humane by ensuring greater precision in targeting military objectives and by clearing the battlefield of human passions such as anger, fear and vengefulness. Even assuming – but not conceding (see above: <em>Would LAWS be “precision weapons”?</em>) – that one day LAWS might somehow reach human or even “higher-than-human” performance with respect to adherence to IHL, this would not “humanize” future armed conflicts, for at least three reasons:</p>
<p>(i) delegating the power to take life-or-death decisions to machines blatantly denies the human dignity of the recipients of lethal force and their intrinsic worth as human beings;</p>
<p>(ii) LAWS trivialize the decision to take someone else’s life by relieving war-fighters of the moral burden inevitably associated with it;</p>
<p>(iii) while it is true that machines’ decision-making will never be influenced by negative human emotions, it is equally true that LAWS are also immune to compassion and empathy, which in certain situations could compel a human to refrain from using lethal force even when she or he would legally be entitled to do so.</p>
<p>&nbsp;</p>
<p><strong>Would LAWS proliferate?</strong></p>
<p><strong>Yes. </strong>LAWS need not take the shape of one specific weapon system akin to, for instance, a drone. Nor do LAWS require a very specific military technology development path, the way nuclear weapons do. As AI software and robotic hardware mature and continue to pervade the civilian sphere, militaries will feel prompted to adopt them increasingly (however, see above: <em>Do civilian and military applications of AI differ?</em>), in continuation of a dual-use trend that is already observable in, for instance, armed drones.</p>
<p>Research and development for LAWS-related technology is thus already well underway and distributed over countless university laboratories and commercial enterprises, making use of economies of scale and the forces of the free market to spur competition, lower prices and shorten innovation cycles. This renders the military research and development effort in the case of LAWS different from that of past hi-tech conventional weapon systems. So (without even taking exports into account) it is easy to see that LAWS would be comparatively easy to obtain (as well as reverse-engineer) and thus prone to proliferate quickly to a wide range of state and non-state actors.</p>
<p>&nbsp;</p>
<p><strong>Would LAWS threaten global stability?</strong></p>
<p><strong>Yes. </strong>LAWS promise a military advantage inter alia because they are expected to perform certain tasks much faster than a human could. We argued above that IHL does not allow for relinquishing meaningful human control. In addition, considerations from a strategic perspective also suggest restraining ourselves and keeping meaningful human control intact. Without it, the actions and reactions of individual LAWS as well as swarms of LAWS would have to be controlled by software alone.</p>
<p>Consider the example of adversarial swarms deployed in close proximity to each other. Their respective control software would have to react to signs of an attack within a split-second timeframe – by evading or, possibly, counter-attacking in a use-them-or-lose-them situation. Indications of an attack – sun glint interpreted as a rocket flame, sudden and unexpected moves of the adversary, or just some malfunction – could trigger escalation. It is in the nature of military conflict that these kinds of interactions between two adversarial systems or swarms would obviously not be tested or trained beforehand. In addition, it is, technically speaking, impossible to fathom all possible outcomes in advance. In other words, the interaction of LAWS, if handed full autonomy, would be unpredictable and take place at operational speeds far beyond human fail-safe capabilities.</p>
<p>Comparable runaway interactions between algorithms are already observable in financial markets. Hence it is a real possibility that LAWS interactions could result in an unwanted escalation from crisis to war, or, within armed conflict, to unintended higher levels of violence. This means an increase in global instability and is unpleasantly reminiscent of Cold War scenarios of “accidental war”.</p>
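<p>The runaway dynamic described above can be made concrete with a deliberately crude toy model (a hypothetical sketch, not a model of any real system): two reactive control loops, each slightly over-matching the noisily observed posture of the other, ratchet each other to maximum alert even though no attack ever takes place. The function name, parameters and numbers below are all invented for illustration.</p>

```python
import random

def escalation_sim(steps=20, noise=0.2, gain=1.1, seed=1):
    """Toy model (purely illustrative): two agents each set their alert
    posture by slightly over-matching the noisily observed posture of
    the other. gain > 1 stands in for the use-them-or-lose-them
    incentive; noise stands in for sensor error (sun glint, malfunction)."""
    random.seed(seed)
    a = b = 0.1               # both sides start at a low alert level
    history = [(a, b)]
    for _ in range(steps):
        seen_b = b + random.uniform(0, noise)  # A's noisy reading of B
        seen_a = a + random.uniform(0, noise)  # B's noisy reading of A
        # each side matches-plus what it believes it observed, capped at 1.0
        a, b = min(gain * seen_b, 1.0), min(gain * seen_a, 1.0)
        history.append((round(a, 3), round(b, 3)))
    return history

hist = escalation_sim()
print(hist[0], hist[-1])  # postures ratchet upward although no attack occurred
```

<p>The point of the sketch is structural: once both sides react faster than any human could intervene, even small sensor noise, amplified by a use-them-or-lose-them over-matching factor, drives escalation that neither side intended.</p>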
<p>&nbsp;</p>
<p><strong>Would banning LAWS stifle technology?</strong></p>
<p><strong>No. </strong>On the contrary. Global governance for LAWS would not mean a prohibition or control of specific technologies as such. The wide diffusion and dual-use potential of AI software and robotics suggest that this would not only be a completely futile, luddite endeavor; it would also be severely misguided in light of the various benefits potentially flowing from the maturation of these technologies in civilian applications.</p>
<p>What is more, a number of recent developments suggest that technology companies would in fact welcome a ban on LAWS, since they do not want their products to be associated with “Killer Robots”. Google, for instance, stated years ago that it is not interested in military robotics. The Canadian robot manufacturer Clearpath Robotics even officially joined forces with the Campaign to Stop Killer Robots in 2014, “ask[ing] everyone to consider the many ways in which this technology would change the face of war for the worse” and pledging to create robotic products solely “for the betterment of humankind” instead. And in 2017, 160 high-profile CEOs of companies developing artificial intelligence technologies signed an open letter calling for the CCW to act.</p>
<p>So preventive arms control for LAWS would not mean the regulation or prohibition of specific technologies. Instead, it would give tech entrepreneurs and manufacturers guidance and assurance that their inventions and products cannot be misused. Hence arms control for LAWS is not about listing or counting (stockpiles of) individual weapon systems. Rather, it is about drawing a line regarding the use of autonomy in weapon systems, a line to retain meaningful human control and prohibit the application of autonomy in specific (especially the “critical”) functions of weapon systems.</p>
<p>The CCW has drawn a comparable line and established a strong norm like that before, with the preventive prohibition of blinding laser weapons in 1995. This prohibition protects soldiers’ eyes on the battlefield; it is, obviously, not a blanket ban on laser technology in all its other uses, be they military or, especially, civilian in nature. In other words, just as we got to keep our CD players and laser pointers back then, we will get to keep our smartphones and self-driving cars this time.</p>
<p>&nbsp;</p>
<p><strong>Further reading:</strong></p>
<p>Altmann, Jürgen/Sauer, Frank (2017): <a href="http://www.tandfonline.com/eprint/qnJKjAUPXWPhmyMjZ6cD/full">Autonomous Weapon Systems and Strategic Stability</a>, in: Survival 59: 5, 117–142.</p>
<p>Amoroso, Daniele/Tamburrini, Guglielmo (2017): The Ethical and Legal Case Against Autonomy in Weapons Systems, in: Global Jurist. Online first.</p>
<p>Asaro, Peter (2012): On Banning Autonomous Weapon Systems. Human Rights, Automation, and the Dehumanization of Lethal Decision-Making, in: International Review of the Red Cross 94: 886, 687–709.</p>
<p>Garcia, Denise (2016): Future Arms, Technologies, and International Law: Preventive Security Governance, in: European Journal of International Security 1: 1, 94-111.</p>
<p>Sauer, Frank (2016): <a href="https://www.armscontrol.org/ACT/2016_10/Features/Stopping-Killer-Robots-Why-Now-Is-the-Time-to-Ban-Autonomous-Weapons-Systems">Stopping “Killer Robots”. Why Now Is the Time to Ban Autonomous Weapons Systems</a>, in: Arms Control Today 46: 8, 8–13.</p>
<p>Sharkey, Noel (2012): The Evitability of Autonomous Robot Warfare, in: International Review of the Red Cross 94: 886, 787–799.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3344</post-id>	</item>
		<item>
		<title>Speed kills! Why we need to hit the brakes on “killer robots”</title>
		<link>https://www.icrac.net/speed-kills-why-we-need-to-hit-the-brakes-on-killer-robots/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Fri, 08 Apr 2016 17:42:09 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Slider]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2875</guid>

					<description><![CDATA[by Juergen Altmann and Frank Sauer This analysis originally appeared as a guest post on duckofminerva.com Autonomous weapon systems: rarely has an issue gained the attention of the international arms control community as quickly as these so-called killer robots. “Once activated, they can select and engage targets without further intervention by a human operator“, according [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<div class="entry">
<p><em>by Juergen Altmann and Frank Sauer</em></p>
<p><em>This analysis originally appeared as a guest post on <a href="http://duckofminerva.com/2016/04/speed-kills-why-we-need-to-hit-the-brakes-on-killer-robots.html">duckofminerva.com</a></em></p>
<p>Autonomous weapon systems: rarely has an issue gained <a href="http://www.cornellpress.cornell.edu/book/?GCOI=80140100234530&amp;fa=author&amp;person_id=4999">the attention of the international arms control community</a> as quickly as these so-called killer robots. “Once activated, they can select and engage targets without further intervention by a human operator“, according to the <a href="http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf">Pentagon</a>. They are, judging from the skepticism prevalent in <a href="http://futureoflife.org/open-letter-autonomous-weapons/">epistemic communities</a> and <a href="http://duckofminerva.com/2016/03/building-social-science-knowledge-on-public-attitudes-and-autonomous-weapons.html">public opinion</a> alike, a controversial development.</p>
<p>Come next Monday, the United Nations in Geneva will begin its third informal <a href="http://www.unog.ch/80256EE600585943/%28httpPages%29/37D51189AC4FB6E1C1257F4D004CAFB2?OpenDocument">experts meeting</a> on this emerging arms technology. For the third year in a row, various technical, legal and ethical questions surrounding autonomous weapons will be discussed at the UN’s Convention on Certain Conventional Weapons (CCW): Where does <a href="http://duckofminerva.com/2015/01/autonomous-or-semi-autonomous-weapons-a-distinction-without-difference.html">autonomy</a> begin, where does <a href="http://www.article36.org/wp-content/uploads/2014/05/A36-CCW-May-2014.pdf">meaningful human control</a> end? Can these systems function in compliance with <a href="https://www.icrc.org/en/document/lethal-autonomous-weapons-systems-LAWS">international humanitarian law</a>? Who is <a href="https://www.hrw.org/news/2015/04/08/killer-robots-accountability-gap">accountable</a> if things go awry? Can “outsourcing” kill-decisions to machines be <a href="https://www.icrc.org/eng/resources/documents/article/review-2012/irrc-886-asaro.htm">morally acceptable</a> in the first place?</p>
<p>Depending on how CCW States Parties answer these questions, the still nascent social taboo that forbids the use of machines autonomously making kill-decisions might spawn a <a href="http://onlinelibrary.wiley.com/doi/10.1111/1468-2346.12186/abstract">human security regime</a> and be codified in a CCW protocol. In short, a ban might be in the cards for killer robots.</p>
<p>And in fact, there is an additional set of compelling reasons for preventive arms control that received comparably less attention so far (with <a href="http://duckofminerva.com/2015/04/the-new-mineshaft-gap-killer-robots-and-the-un.html">notable exceptions</a>, of course): the impact of killer robots on peace and stability.</p>
<p><strong>Stability: not a Cold War relic</strong></p>
<p>Stability became a key notion in Cold War international thought for two reasons. First, the arms race. Arms-competition instability exists if the classic dynamic of one side deploying systems that lead adversaries to respond in kind, and vice versa, goes unchecked, with horizontal and vertical proliferation in tow. Crises were the second reason. Crisis instability exists if there are significant incentives to <em>initiate</em> an attack quickly. Such incentives can also arise when (conventional) war is already underway, hastening the escalation to higher levels of conflict, potentially even across the nuclear threshold in a “use them or lose them” situation.</p>
<p>The vicious cycle of an uncurbed arms race, as well as the dangers of <a href="http://www.palgrave.com/de/book/9781137533739">overboiling crises and deterrence failure</a> – backed up by the <a href="http://press.princeton.edu/titles/5301.html">accidental nuclear war</a> scares caused by early-warning slipups and <a href="http://www.penguinrandomhouse.com/books/303337/command-and-control-by-eric-schlosser/9780143125785/">human error</a> – provided cautionary tales and fueled the drive for stability via arms control during the Cold War, not only in the nuclear realm but also in the conventional one, with the Conventional Armed Forces in Europe (CFE) Treaty. The IR and arms control literature documents these lessons. They carry over to the dawning age of autonomous weapons.</p>
<p><strong>Proliferation and arms race instability</strong></p>
<p>Strictly speaking, autonomous weapons do not exist yet. They are not to be confused with automatic defense systems capable of “firing without a human in the loop”. These are stationary or fixed on ships or trailers and mostly fire at inanimate targets such as incoming munitions. More importantly, they just repeatedly perform pre-programmed actions and operate in a comparably structured and controlled environment.</p>
<p>Autonomous weapon systems, in contrast, would have their own means of propulsion and be able to operate without human control or supervision in dynamic, unstructured, open environments over an extended period of time, potentially learning and adapting their behavior on the go. The military advantages – compared to today’s remotely piloted systems – are obvious. Think of a future autonomous combat drone sent off to seek, identify, track and attack targets on its own, and you’re spot on. They are called killer robots for a reason.</p>
<p>The <a href="http://sdi.sagepub.com/content/43/4/363.abstract">drone sector</a> gives an indication of what to expect. Between 2001 and 2015, the <a href="http://securitydata.newamerica.net/world-drones.html">number of countries with armed drones</a> has increased from two to ten (add Hamas and Hezbollah to that), and at least 11 countries are currently developing them.</p>
<p>Meanwhile, everything points toward weapon autonomy as the next logical step. The US, with its newly stated <a href="http://duckofminerva.com/2015/12/the-self-fulfilling-prophecy-of-high-tech-war.html">third offset strategy</a>, explicitly embraces autonomy to achieve military-technological superiority and is consequently leading the way <a href="http://warisboring.com/articles/the-navys-first-carrier-drone-will-be-a-flying-gas-tank/">in the air</a>, <a href="http://www.defensenews.com/story/defense/land/army/2015/04/08/us-army-readying-unmanned-systems-doctrine/25473749/">on the ground</a>, <a href="https://www.youtube.com/watch?v=ITTvgkO2Xw4">on the sea</a> and <a href="http://www.naval-technology.com/features/feature-new-era-underwater-drones-unmanned-systems/">below it</a>. And while the US is the only country to have introduced a <a href="http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf">doctrine</a> for the deployment and use of autonomous weapon systems, claiming restraint, Deputy Secretary of Defense Bob Work just recently stated that <a href="http://www.defensenews.com/story/defense/land/army/2016/03/30/bob-work-autonomy-flight-ground-systems-robot-ai/82427024/">the delegation of lethal authority will inexorably happen</a>.</p>
<p>Absent an international ban, one would expect others to follow that lead. After all, who would allow a “<a href="http://duckofminerva.com/2015/04/the-new-mineshaft-gap-killer-robots-and-the-un.html">killer robot gap</a>”? Especially considering that implementing autonomy in already existing systems in a vibrant ecosystem of unmanned vehicles in various shapes and sizes is not the equivalent of starting a nuclear program from scratch – it’s a technical challenge, yes, but doable, particularly with significant portions of the hard- and software being dual-use. And we are not even considering technology export yet. In short, an unchecked robotics arms race is in the making – with weapons potentially proliferating to everyone, including <a href="http://duckofminerva.com/2016/03/autonomous-weapons-and-incentives-for-oppression.html">oppressive regimes</a> and <a href="http://duckofminerva.com/2015/04/the-new-mineshaft-gap-killer-robots-and-the-un.html">non-state actors</a>.</p>
<p><strong>Crisis escalation and instability</strong></p>
<p>Autonomous weapons are commonly projected as systems of systems operating in <a href="http://www.cnas.org/the-coming-swarm#.VTVl3Fwo1A8">swarms</a>. With that in mind, imagine a severe crisis, the swarms of adversaries operating in close proximity to each other. A coordinated attack by one could wipe out the other within missile flight time – that is, seconds. The control software would have to react fast in order to use its weapons before they are lost. Sun glint in visual data misinterpreted as a rocket flame, sudden, unforeseen moves of the enemy swarm, or a simple software bug could trigger an erroneous “counter”-attack. And while this could happen on a small scale at first, the sequence of events developing from two autonomous systems of systems interacting at rapid speed could never be trained for, tested, or, really, foreseen. The <a href="http://www.cnas.org/sites/default/files/publications-pdf/CNAS_Autonomous-weapons-operational-risk.pdf">stock market</a> provides cautionary tales of such unforeseeable algorithm interactions. Introducing algorithms into conflict bears an enormous risk of uncontrolled escalation from crisis to war.</p>
<p>In addition, swarms of autonomous weapons would generate new possibilities for disarming surprise attacks. Small, stealthy or extremely low-flying systems are difficult to detect, and the absence of a remote-control radio link makes detection even harder. Russia was not amused when the idea of using <a href="http://thebulletin.org/2010/novemberdecember/how-us-strategic-antimissile-defense-could-be-made-work-1">stealthy drones for missile defense</a> was floated in the US. It’s easy to see why. When nuclear weapons or strategic command-and-control systems are, or are perceived to be, put at risk by undetectable swarms that are hard to defend against, autonomous conventional capabilities end up causing instability at the strategic level.</p>
<p><strong>Hitting the brakes</strong></p>
<p>The case of autonomous weapon systems is not one of “<em>we</em> need them because <em>they</em> have them”. After all, no one has them – yet. We would be well-advised to keep it this way. Preventive arms control is prudent. Not only would a ban curb the looming arms race; it would also prevent the excessive acceleration of battle, which threatens to escape human understanding, and preserve the possibility of staying in control during crises. Sometimes humans make mistakes, and humans are <a href="http://duckofminerva.com/2016/02/strategic-surprise-or-the-foreseeable-future.html">slower than machines</a>. But <a href="http://www.cnas.org/sites/default/files/publications-pdf/CNAS_Autonomous-weapons-operational-risk.pdf">when things threaten to get out of hand, slow is good</a>. That is why we need to hit the brakes now.</p>
<p><em>ICRAC’s Juergen Altmann, PhD, Researcher and Lecturer, Technische Universität Dortmund, </em><em>is a physicist and peace researcher specialized in the assessment of military technology and preventive arms control. He was among the first scholars to study the </em><a href="http://sdi.sagepub.com/content/35/1/61.abstract"><em>military uses of nanotechnology</em></a><em>.</em></p>
<p><em>ICRAC’s Frank Sauer, PhD, Senior Research Fellow and Lecturer, Bundeswehr University Munich, is an International Relations scholar focusing on issues of international security. He is the author of </em><a href="http://www.palgrave.com/de/book/9781137533739"><em>Atomic Anxiety: Deterrence, Taboo and the Non-Use of U.S. Nuclear Weapons</em></a><em>. </em></p>
</div>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2875</post-id>	</item>
		<item>
		<title>ICRAC Video: Peaceful Uses of Robotics and Banning LAWS</title>
		<link>https://www.icrac.net/new-icrac-video-on-peaceful-uses-of-robotics-and-banning-laws/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Thu, 12 Nov 2015 17:36:45 +0000</pubDate>
				<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Media]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<category><![CDATA[YouTube video]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2869</guid>

					<description><![CDATA[Stop the Killer Robots from Kamille Rodriguez on Vimeo. The video explains that a ban on killer robots would not have negative impacts on the development of other robotics applications and research. It was created for ICRAC by digital animation artist Kamille Rodriguez: http://www.kamillerodriguez.com/ &#160;<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><iframe loading="lazy" src="https://player.vimeo.com/video/117102411" width="550" height="300" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><a href="https://vimeo.com/117102411">Stop the Killer Robots</a> from <a href="https://vimeo.com/user21751690">Kamille Rodriguez</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
<p>The video explains that a ban on killer robots would not have negative impacts on the development of other robotics applications and research.</p>
<p>It was created for ICRAC by digital animation artist Kamille Rodriguez:<br />
<a href="http://www.kamillerodriguez.com/">http://www.kamillerodriguez.com/</a></p>
<p>&nbsp;</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2869</post-id>	</item>
	</channel>
</rss>
